Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
advisor | Advisor Reference Cost Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-cost-recommendations.md | Learn more about [PostgreSQL server - OrcasPostgreSqlCpuRightSize (Right-size un Your Azure Cosmos DB free tier account currently contains resources with a total provisioned throughput exceeding 1,000 Request Units per second (RU/s). Because the free tier only covers the first 1000 RU/s of throughput provisioned across your account, any throughput beyond 1000 RU/s is billed at the regular pricing. As a result, we anticipate that you're charged for the throughput currently provisioned on your Azure Cosmos DB account. -Learn more about [Azure Cosmos DB account - CosmosDBFreeTierOverage (Review the configuration of your Azure Cosmos DB free tier account)](../cosmos-db/understand-your-bill.md#azure-free-tier). +Learn more about [Azure Cosmos DB account - CosmosDBFreeTierOverage (Review the configuration of your Azure Cosmos DB free tier account)](/azure/cosmos-db/understand-your-bill#azure-free-tier). ### Consider taking action on your idle Azure Cosmos DB containers Learn more about [Azure Cosmos DB account - CosmosDBIdleContainers (Consider tak Based on your usage in the past seven days, you can save by enabling autoscale. For each hour, we compared the RU/s provisioned to the actual utilization of the RU/s (what autoscale would have scaled to) and calculated the cost savings across the time period. Autoscale helps optimize your cost by scaling down RU/s when not in use. -Learn more about [Azure Cosmos DB account - CosmosDBAutoscaleRecommendations (Enable autoscale on your Azure Cosmos DB database or container)](../cosmos-db/provision-throughput-autoscale.md). +Learn more about [Azure Cosmos DB account - CosmosDBAutoscaleRecommendations (Enable autoscale on your Azure Cosmos DB database or container)](/azure/cosmos-db/provision-throughput-autoscale). ### Configure manual throughput instead of autoscale on your Azure Cosmos DB database or container Based on your usage in the past seven days, you can save by using manual throughput instead of autoscale. Manual throughput is more cost-effective when average utilization of your max throughput (RU/s) is greater than 66% or less than or equal to 10%. -Learn more about [Azure Cosmos DB account - CosmosDBMigrateToManualThroughputFromAutoscale (Configure manual throughput instead of autoscale on your Azure Cosmos DB database or container)](../cosmos-db/how-to-choose-offer.md). +Learn more about [Azure Cosmos DB account - CosmosDBMigrateToManualThroughputFromAutoscale (Configure manual throughput instead of autoscale on your Azure Cosmos DB database or container)](/azure/cosmos-db/how-to-choose-offer). |
advisor | Advisor Reference Operational Excellence Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-operational-excellence-recommendations.md | Learn more about [SQL virtual machine - SqlAssessmentAdvisorRec (Install SQL bes We noticed that your Azure Cosmos DB collection is using the legacy attachments feature. We recommend migrating attachments to Azure Blob Storage to improve the resiliency and scalability of your blob data. -Learn more about [Azure Cosmos DB account - CosmosDBAttachments (Migrate Azure Cosmos DB attachments to Azure Blob Storage)](../cosmos-db/attachments.md#migrating-attachments-to-azure-blob-storage). +Learn more about [Azure Cosmos DB account - CosmosDBAttachments (Migrate Azure Cosmos DB attachments to Azure Blob Storage)](/azure/cosmos-db/attachments#migrating-attachments-to-azure-blob-storage). ### Improve resiliency by migrating your Azure Cosmos DB accounts to continuous backup Your Azure Cosmos DB accounts are configured with periodic backup. Continuous backup with point-in-time restore is now available on these accounts. With continuous backup, you can restore your data to any point in time within the past 30 days. Continuous backup might also be more cost-effective as a single copy of your data is retained. -Learn more about [Azure Cosmos DB account - CosmosDBMigrateToContinuousBackup (Improve resiliency by migrating your Azure Cosmos DB accounts to continuous backup)](../cosmos-db/continuous-backup-restore-introduction.md). +Learn more about [Azure Cosmos DB account - CosmosDBMigrateToContinuousBackup (Improve resiliency by migrating your Azure Cosmos DB accounts to continuous backup)](/azure/cosmos-db/continuous-backup-restore-introduction). ### Enable partition merge to configure an optimal database partition layout |
advisor | Advisor Reference Performance Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-performance-recommendations.md | Learn more about [Azure Cosmos DB account - CosmosDBQueryPageSize (Configure you Your Azure Cosmos DB containers are running ORDER BY queries incurring high Request Unit (RU) charges. It's recommended to add composite indexes to your containers' indexing policy to improve the RU consumption and decrease the latency of these queries. -Learn more about [Azure Cosmos DB account - CosmosDBOrderByHighRUCharge (Add composite indexes to your Azure Cosmos DB container)](../cosmos-db/index-policy.md#composite-indexes). +Learn more about [Azure Cosmos DB account - CosmosDBOrderByHighRUCharge (Add composite indexes to your Azure Cosmos DB container)](/azure/cosmos-db/index-policy#composite-indexes). ### Optimize your Azure Cosmos DB indexing policy to only index what's needed Your Azure Cosmos DB containers are using the default indexing policy, which indexes every property in your documents. Because you're storing large documents, a high number of properties get indexed, resulting in high Request Unit consumption and poor write latency. To optimize write performance, we recommend overriding the default indexing policy to only index the properties used in your queries. -Learn more about [Azure Cosmos DB account - CosmosDBDefaultIndexingWithManyPaths (Optimize your Azure Cosmos DB indexing policy to only index what's needed)](../cosmos-db/index-policy.md). +Learn more about [Azure Cosmos DB account - CosmosDBDefaultIndexingWithManyPaths (Optimize your Azure Cosmos DB indexing policy to only index what's needed)](/azure/cosmos-db/index-policy). ### Use hierarchical partition keys for optimal data distribution |
advisor | Advisor Reference Reliability Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-reliability-recommendations.md | Learn more about [Azure Cosmos DB account - CosmosDBLazyIndexing (Configure Cons Your Azure Cosmos DB account is using an old version of the SDK. We recommend you upgrade to the latest version for the latest fixes, performance improvements, and new feature capabilities. -Learn more about [Azure Cosmos DB account - CosmosDBUpgradeOldSDK (Upgrade your old Azure Cosmos DB SDK to the latest version)](../cosmos-db/index.yml). +Learn more about [Azure Cosmos DB account - CosmosDBUpgradeOldSDK (Upgrade your old Azure Cosmos DB SDK to the latest version)](/azure/cosmos-db/). ### Upgrade your outdated Azure Cosmos DB SDK to the latest version Your Azure Cosmos DB account is using an outdated version of the SDK. We recommend you upgrade to the latest version for the latest fixes, performance improvements, and new feature capabilities. -Learn more about [Azure Cosmos DB account - CosmosDBUpgradeOutdatedSDK (Upgrade your outdated Azure Cosmos DB SDK to the latest version)](../cosmos-db/index.yml). +Learn more about [Azure Cosmos DB account - CosmosDBUpgradeOutdatedSDK (Upgrade your outdated Azure Cosmos DB SDK to the latest version)](/azure/cosmos-db/). ### Configure your Azure Cosmos DB containers with a partition key Your Azure Cosmos DB nonpartitioned collections are approaching their provisioned storage quota. Migrate these collections to new collections with a partition key definition so the service can automatically scale them out. -Learn more about [Azure Cosmos DB account - CosmosDBFixedCollections (Configure your Azure Cosmos DB containers with a partition key)](../cosmos-db/partitioning-overview.md#choose-partitionkey). +Learn more about [Azure Cosmos DB account - CosmosDBFixedCollections (Configure your Azure Cosmos DB containers with a partition key)](/azure/cosmos-db/partitioning-overview#choose-partitionkey). ### Upgrade your Azure Cosmos DB for MongoDB account to v4.0 to save on query/storage costs and utilize new features Based on their names and configuration, we have detected the Azure Cosmos DB acc > [!NOTE] > Additional regions incur extra costs. -Learn more about [Azure Cosmos DB account - CosmosDBSingleRegionProdAccounts (Add a second region to your production workloads on Azure Cosmos DB)](../cosmos-db/high-availability.md). +Learn more about [Azure Cosmos DB account - CosmosDBSingleRegionProdAccounts (Add a second region to your production workloads on Azure Cosmos DB)](/azure/cosmos-db/high-availability). ### Enable Server Side Retry (SSR) on your Azure Cosmos DB for MongoDB account Learn more about [Azure Cosmos DB account - CosmosDBMongoMigrationUpgrade (Migra It appears that your key vault's configuration is preventing your Azure Cosmos DB account from contacting the key vault to access your managed encryption keys. If you've recently performed a key rotation, make sure that the previous key or key version remains enabled and available until Azure Cosmos DB has completed the rotation. The previous key or key version can be disabled after 24 hours, or after the Azure Key Vault audit logs don't show activity from Azure Cosmos DB on that key or key version anymore. -Learn more about [Azure Cosmos DB account - CosmosDBKeyVaultWrap (Your Azure Cosmos DB account is unable to access its linked Azure Key Vault hosting your encryption key)](../cosmos-db/how-to-setup-cmk.md). +Learn more about [Azure Cosmos DB account - CosmosDBKeyVaultWrap (Your Azure Cosmos DB account is unable to access its linked Azure Key Vault hosting your encryption key)](/azure/cosmos-db/how-to-setup-cmk). ### Avoid being rate limited from metadata operations Learn more about [Azure Cosmos DB account - CosmosDBMongoNudge36AwayFrom32 (Use ### Upgrade to 2.6.14 version of the Async Java SDK v2 to avoid a critical issue or upgrade to Java SDK v4 as Async Java SDK v2 is being deprecated -There's a critical bug in version 2.6.13 and lower, of the Azure Cosmos DB Async Java SDK v2 causing errors when a Global logical sequence number (LSN) greater than the Max Integer value is reached. These service errors happen after a large volume of transactions occur in the lifetime of an Azure Cosmos DB container. Note: There's a critical hotfix for the Async Java SDK v2, however we still highly recommend you migrate to the [Java SDK v4](../cosmos-db/sql/sql-api-sdk-java-v4.md). +There's a critical bug in version 2.6.13 and lower, of the Azure Cosmos DB Async Java SDK v2 causing errors when a Global logical sequence number (LSN) greater than the Max Integer value is reached. These service errors happen after a large volume of transactions occur in the lifetime of an Azure Cosmos DB container. Note: There's a critical hotfix for the Async Java SDK v2, however we still highly recommend you migrate to the [Java SDK v4](/azure/cosmos-db/sql/sql-api-sdk-java-v4). -Learn more about [Azure Cosmos DB account - CosmosDBMaxGlobalLSNReachedV2 (Upgrade to 2.6.14 version of the Async Java SDK v2 to avoid a critical issue or upgrade to Java SDK v4 as Async Java SDK v2 is being deprecated)](../cosmos-db/sql/sql-api-sdk-async-java.md). +Learn more about [Azure Cosmos DB account - CosmosDBMaxGlobalLSNReachedV2 (Upgrade to 2.6.14 version of the Async Java SDK v2 to avoid a critical issue or upgrade to Java SDK v4 as Async Java SDK v2 is being deprecated)](/azure/cosmos-db/sql/sql-api-sdk-async-java). ### Upgrade to the current recommended version of the Java SDK v4 to avoid a critical issue There's a critical bug in version 4.15 and lower of the Azure Cosmos DB Java SDK v4 causing errors when a Global logical sequence number (LSN) greater than the Max Integer value is reached. These service errors happen after a large volume of transactions occur in the lifetime of an Azure Cosmos DB container. -Learn more about [Azure Cosmos DB account - CosmosDBMaxGlobalLSNReachedV4 (Upgrade to the current recommended version of the Java SDK v4 to avoid a critical issue)](../cosmos-db/sql/sql-api-sdk-java-v4.md). +Learn more about [Azure Cosmos DB account - CosmosDBMaxGlobalLSNReachedV4 (Upgrade to the current recommended version of the Java SDK v4 to avoid a critical issue)](/azure/cosmos-db/sql/sql-api-sdk-java-v4). |
ai-services | Cognitive Services Encryption Keys Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Encryption/cognitive-services-encryption-keys-portal.md | When you use a customer-managed key, these resources are _in your Azure subscrip These Microsoft-managed resources are located in a new Azure resource group is created in your subscription. This group is in addition to the resource group for your project. This resource group contains the Microsoft-managed resources that your key is used with. The resource group is named using the formula of `<Azure AI resource group name><GUID>`. It isn't possible to change the naming of the resources in this managed resource group. > [!TIP]-> * The [Request Units](../../cosmos-db/request-units.md) for the Azure Cosmos DB automatically scale as needed. +> * The [Request Units](/azure/cosmos-db/request-units) for the Azure Cosmos DB automatically scale as needed. > * If your AI resource uses a private endpoint, this resource group will also contain a Microsoft-managed Azure Virtual Network. This VNet is used to secure communications between the managed services and the project. You cannot provide your own VNet for use with the Microsoft-managed resources. You also cannot modify the virtual network. For example, you cannot change the IP address range that it uses. > [!IMPORTANT] |
ai-services | Identity Access Token | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/identity-access-token.md | curl -X POST 'https://<client-endpoint>/face/v1.0/identify' \ #### [C#](#tab/csharp) -The following code snippets show you how to use an access token with the [Face SDK for C#](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.Vision.Face). +The following code snippets show you how to use an access token with the [Face SDK for C#](https://aka.ms/azsdk-csharp-face-pkg). -The following class uses an access token to create a **ServiceClientCredentials** object that can be used to authenticate a Face API client object. It automatically adds the access token as a header in every request that the Face client will make. +The following class uses an access token to create a **HttpPipelineSynchronousPolicy** object that can be used to authenticate a Face API client object. It automatically adds the access token as a header in every request that the Face client will make. ```csharp-public class LimitedAccessTokenWithApiKeyClientCredential : ServiceClientCredentials +public class LimitedAccessTokenPolicy : HttpPipelineSynchronousPolicy {- /// <summary> - /// Creates a new instance of the LimitedAccessTokenWithApiKeyClientCredential class - /// </summary> - /// <param name="apiKey">API Key for the Face API or CognitiveService endpoint</param> - /// <param name="limitedAccessToken">LimitedAccessToken to bypass the limited access program, requires ISV sponsership.</param> -- public LimitedAccessTokenWithApiKeyClientCredential(string apiKey, string limitedAccessToken) - { - this.ApiKey = apiKey; - this.LimitedAccessToken = limitedAccessToken; + /// <summary> + /// Creates a new instance of the LimitedAccessTokenPolicy class + /// </summary> + /// <param name="limitedAccessToken">LimitedAccessToken to bypass the limited access program, requires ISV sponsership.</param> + public LimitedAccessTokenPolicy(string limitedAccessToken) + { + _limitedAccessToken = limitedAccessToken; } - private readonly string ApiKey; - private readonly string LimitedAccesToken; -- /// <summary> - /// Add the Basic Authentication Header to each outgoing request - /// </summary> - /// <param name="request">The outgoing request</param> - /// <param name="cancellationToken">A token to cancel the operation</param> - public override Task ProcessHttpRequestAsync(HttpRequestMessage request, CancellationToken cancellationToken) - { - if (request == null) - throw new ArgumentNullException("request"); - request.Headers.Add("Ocp-Apim-Subscription-Key", ApiKey); - request.Headers.Add("LimitedAccessToken", $"Bearer {LimitedAccesToken}"); -- return Task.FromResult<object>(null); - } -} + private readonly string _limitedAccessToken; ++ /// <summary> + /// Add the authentication header to each outgoing request + /// </summary> + /// <param name="message">The outgoing message</param> + public override void OnSendingRequest(HttpMessage message) + { + message.Request.Headers.Add("LimitedAccessToken", $"Bearer {_limitedAccessToken}"); + } +} ``` In the client-side application, the helper class can be used like in this example: ```csharp-static void Main(string[] args) -{ +static void Main(string[] args) +{ // create Face client object- var faceClient = new FaceClient(new LimitedAccessTokenWithApiKeyClientCredential(apiKey: "<client-face-key>", limitedAccessToken: "<token>")); -- faceClient.Endpoint = "https://mytest-eastus2.cognitiveservices.azure.com"; + var clientOptions = new AzureAIVisionFaceClientOptions(); + clientOptions.AddPolicy(new LimitedAccessTokenPolicy("<token>"), HttpPipelinePosition.PerCall); + FaceClient faceClient = new FaceClient(new Uri("<client-endpoint>"), new AzureKeyCredential("<client-face-key>"), clientOptions); // use Face client in an API call- using (var stream = File.OpenRead("photo.jpg")) + using (var stream = File.OpenRead("photo.jpg")) {- var result = faceClient.Face.DetectWithStreamAsync(stream, detectionModel: "Detection_03", recognitionModel: "Recognition_04", returnFaceId: true).Result; + var response = faceClient.Detect(BinaryData.FromStream(stream), FaceDetectionModel.Detection03, FaceRecognitionModel.Recognition04, returnFaceId: true); - Console.WriteLine(JsonConvert.SerializeObject(result)); + Console.WriteLine(JsonConvert.SerializeObject(response.Value)); } } ``` |
ai-services | Specify Detection Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/specify-detection-model.md | var faces = response.Value; The Face service can extract face data from an image and associate it with a **Person** object through the [Add Person Group Person Face] API. In this API call, you can specify the detection model in the same way as in [Detect]. -See the following code example for the .NET client library. +See the following .NET code example. ```csharp // Create a PersonGroup and add a person with face detected by "detection_03" model This code creates a **PersonGroup** with ID `mypersongroupid` and adds a **Perso ## Add face to FaceList with specified model -You can also specify a detection model when you add a face to an existing **FaceList** object. See the following code example for the .NET client library. +You can also specify a detection model when you add a face to an existing **FaceList** object. See the following .NET code example. ```csharp using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["name"] = "My face collection", ["recognitionModel"] = "recognition_04" })))) In this article, you learned how to specify the detection model to use with diff * [Face .NET SDK](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-csharp%253fpivots%253dprogramming-language-csharp) * [Face Python SDK](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-python%253fpivots%253dprogramming-language-python)+* [Face Java SDK](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-java%253fpivots%253dprogramming-language-java) * [Face JavaScript SDK](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-javascript%253fpivots%253dprogramming-language-javascript) [Detect]: /rest/api/face/face-detection-operations/detect |
ai-services | Specify Recognition Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/specify-recognition-model.md | In this article, you learned how to specify the recognition model to use with di * [Face .NET SDK](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-csharp%253fpivots%253dprogramming-language-csharp) * [Face Python SDK](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-python%253fpivots%253dprogramming-language-python)+* [Face Java SDK](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-java%253fpivots%253dprogramming-language-java) * [Face JavaScript SDK](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-javascript%253fpivots%253dprogramming-language-javascript) [Detect]: /rest/api/face/face-detection-operations/detect |
ai-services | Jailbreak Detection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/concepts/jailbreak-detection.md | This shield aims to safeguard against attacks that use information not directly ### Language availability -Prompt Shields have been specifically trained and tested on the following languages: Chinese, English, French, German, Italian, Japanese, Portuguese. However, the feature can work in many other languages, but the quality might vary. In all cases, you should do your own testing to ensure that it works for your application. +Prompt Shields have been specifically trained and tested on the following languages: Chinese, English, French, German, Spanish, Italian, Japanese, Portuguese. However, the feature can work in many other languages, but the quality might vary. In all cases, you should do your own testing to ensure that it works for your application. ### Text length limitations |
ai-services | Custom Categories Rapid | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/how-to/custom-categories-rapid.md | The following command creates an incident with a name and definition. curl --location --request PATCH 'https://<endpoint>/contentsafety/text/incidents/<text-incident-name>?api-version=2024-02-15-preview' \ --header 'Ocp-Apim-Subscription-Key: <your-content-safety-key>' \ --header 'Content-Type: application/json' \data '{- \"incidentName\": \"<text-incident-name>\", - \"incidentDefinition\": \"string\" -}' +--data '{ \"incidentName\": \"<test-incident>\", \"incidentDefinition\": \"<string>\"}' ``` #### [Python](#tab/python) curl --location 'https://<endpoint>/contentsafety/text/incidents/<text-incident- --header 'Ocp-Apim-Subscription-Key: <your-content-safety-key>' \ --header 'Content-Type: application/json' \ --data-raw '{- "IncidentSamples": [ - { "text": "<text-example-1>"}, - { "text": "<text-example-2>"}, + \"IncidentSamples\": [ + { \"text\": \"<text-example-1>\"}, + { \"text\": \"<text-example-2>\"}, ... ] }' curl --location 'https://<endpoint>/contentsafety/text:detectIncidents?api-versi --header 'Ocp-Apim-Subscription-Key: <your-content-safety-key>' \ --header 'Content-Type: application/json' \ --data '{- "text": "<test-text>", - "incidentNames": [ - "<text-incident-name>" + \"text\": \"<test-text>\", + \"incidentNames\": [ + \"<text-incident-name>\" ] }' ``` curl --location 'https://<endpoint>/contentsafety/image/incidents/<image-inciden --header 'Ocp-Apim-Subscription-Key: <your-content-safety-key>' \ --header 'Content-Type: application/json' \ --data '{- "IncidentSamples": [ + \"IncidentSamples\": [ {- "image": { - "content": "<base64-data>", - "bloburl": "<your-blob-storage-url>.png" + \"image\": { + \"content\": \"<base64-data>\", + \"bloburl\": \"<your-blob-storage-url>.png\" } } ] curl --location 'https://<endpoint>/contentsafety/image:detectIncidents?api-vers --header 'Ocp-Apim-Subscription-Key: <your-content-safety-key>' \ --header 'Content-Type: application/json' \ --data '{- "image": { - "url": "<your-blob-storage-url>/image.png", + \"image\": { + \"url\": \"<your-blob-storage-url>/image.png\", "content": "<base64-data>" },- "incidentNames": [ - "<image-incident-name>" + \"incidentNames\": [ + \"<image-incident-name>\" ] } }' curl --location 'https://<endpoint>/contentsafety/text/incidents/<text-incident- --header 'Ocp-Apim-Subscription-Key: <your-content-safety-key>' \ --header 'Content-Type: application/json' \ --data '{- "IncidentSampleIds": [ - "<your-incident-sample-id>" + \"IncidentSampleIds\": [ + \"<your-incident-sample-id>\" ] }' ``` curl --location 'https://<endpoint>/contentsafety/image/incidents/<image-inciden --header 'Ocp-Apim-Subscription-Key: <your-content-safety-key>' \ --header 'Content-Type: application/json' \ --data '{- "IncidentSampleIds": [ - "<your-incident-sample-id>" + \"IncidentSampleIds\": [ + \"<your-incident-sample-id>\" ] }' ``` |
ai-services | Language Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/language-support.md | -> Other Azure AI Content Safety models have been specifically trained and tested on the following languages: Chinese, English, French, German, Italian, Japanese, Portuguese. However, these features can work in many other languages, but the quality might vary. In all cases, you should do your own testing to ensure that it works for your application. +> Other Azure AI Content Safety models have been specifically trained and tested on the following languages: Chinese, English, French, German, Spanish, Italian, Japanese, Portuguese. However, these features can work in many other languages, but the quality might vary. In all cases, you should do your own testing to ensure that it works for your application. > [!NOTE] > **Language auto-detection** |
ai-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/overview.md | Content filtering software can help your app comply with regulations or maintain This documentation contains the following article types: +* **[Concepts](concepts/harm-categories.md)** provide in-depth explanations of the service functionality and features. * **[Quickstarts](./quickstart-text.md)** are getting-started instructions to guide you through making requests to the service. * **[How-to guides](./how-to/use-blocklist.md)** contain instructions for using the service in more specific or customized ways. -* **[Concepts](concepts/harm-categories.md)** provide in-depth explanations of the service functionality and features. ## Where it's used The following are a few scenarios in which a software developer or team would re > [!IMPORTANT] > You cannot use Azure AI Content Safety to detect illegal child exploitation images. -## Product types +## Product features There are different types of analysis available from this service. The following table describes the currently available APIs. -| Type | Functionality | -| :-- | :- | -| [Prompt Shields](/rest/api/contentsafety/text-operations/detect-text-jailbreak) (preview) | Scans text for the risk of a [User input attack](./concepts/jailbreak-detection.md) on a Large Language Model. [Quickstart](./quickstart-jailbreak.md) | -| [Groundedness detection](/rest/api/contentsafety/text-groundedness-detection-operations/detect-groundedness-options) (preview) | Detects whether the text responses of large language models (LLMs) are grounded in the source materials provided by the users. [Quickstart](./quickstart-groundedness.md) | -| [Protected material text detection](/rest/api/contentsafety/text-operations/detect-text-protected-material) (preview) | Scans AI-generated text for [known text content](./concepts/protected-material.md) (for example, song lyrics, articles, recipes, selected web content). [Quickstart](./quickstart-protected-material.md)| -| Custom categories API (preview) | Lets you create and train your own [custom content categories](./concepts/custom-categories.md) and scan text for matches. [Quickstart](./quickstart-custom-categories.md) | -| Custom categories (rapid) API (preview) | Lets you define [emerging harmful content patterns](./concepts/custom-categories.md) and scan text and images for matches. [How-to guide](./how-to/custom-categories-rapid.md) | -| [Analyze text](/rest/api/contentsafety/text-operations/analyze-text) API | Scans text for sexual content, violence, hate, and self harm with multi-severity levels. | -| [Analyze image](/rest/api/contentsafety/image-operations/analyze-image) API | Scans images for sexual content, violence, hate, and self harm with multi-severity levels. | +| Feature | Functionality | Concepts guide | Get started | +| :-- | :- | --| --| +| [Prompt Shields](/rest/api/contentsafety/text-operations/detect-text-jailbreak) (preview) | Scans text for the risk of a User input attack on a Large Language Model. | [Prompt Shields concepts](/azure/ai-services/content-safety/concepts/jailbreak-detection)|[Quickstart](./quickstart-jailbreak.md) | +| [Groundedness detection](/rest/api/contentsafety/text-groundedness-detection-operations/detect-groundedness-options) (preview) | Detects whether the text responses of large language models (LLMs) are grounded in the source materials provided by the users. | [Groundedness detection concepts](/azure/ai-services/content-safety/concepts/groundedness)|[Quickstart](./quickstart-groundedness.md) | +| [Protected material text detection](/rest/api/contentsafety/text-operations/detect-text-protected-material) (preview) | Scans AI-generated text for known text content (for example, song lyrics, articles, recipes, selected web content). | [Protected material concepts](/azure/ai-services/content-safety/concepts/protected-material)|[Quickstart](./quickstart-protected-material.md)| +| Custom categories API (preview) | Lets you create and train your own custom content categories and scan text for matches. | [Custom categories concepts](/azure/ai-services/content-safety/concepts/custom-categories)|[Quickstart](./quickstart-custom-categories.md) | +| Custom categories (rapid) API (preview) | Lets you define emerging harmful content patterns and scan text and images for matches. | [Custom categories concepts](/azure/ai-services/content-safety/concepts/custom-categories)| [How-to guide](./how-to/custom-categories-rapid.md) | +| [Analyze text](/rest/api/contentsafety/text-operations/analyze-text) API | Scans text for sexual content, violence, hate, and self harm with multi-severity levels. | [Harm categories](/azure/ai-services/content-safety/concepts/harm-categories)| [Quickstart](/azure/ai-services/content-safety/quickstart-text) | +| [Analyze image](/rest/api/contentsafety/image-operations/analyze-image) API | Scans images for sexual content, violence, hate, and self harm with multi-severity levels. | [Harm categories](/azure/ai-services/content-safety/concepts/harm-categories)| [Quickstart](/azure/ai-services/content-safety/quickstart-image) | ## Content Safety Studio See the following list for the input requirements for each feature. ### Language support -Content Safety models have been specifically trained and tested in the following languages: English, German, Japanese, Spanish, French, Italian, Portuguese, and Chinese. However, the service can work in many other languages, but the quality might vary. In all cases, you should do your own testing to ensure that it works for your application. +Content Safety models have been specifically trained and tested in the following languages: English, German, Spanish, Japanese, French, Italian, Portuguese, and Chinese. However, the service can work in many other languages, but the quality might vary. In all cases, you should do your own testing to ensure that it works for your application. Custom Categories currently only works well in English. You can try to use other languages with your own dataset, but the quality might vary across languages. |
ai-services | Quickstart Custom Categories | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/quickstart-custom-categories.md | +For more information on Custom categories, see the [Custom categories concept page](./concepts/custom-categories.md). For API input limits, see the [Input requirements](./overview.md#input-requirements) section of the Overview. + > [!IMPORTANT] > This feature is only available in certain Azure regions. See [Region availability](./overview.md#region-availability). |
ai-services | Quickstart Groundedness | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/quickstart-groundedness.md | +For more information on Groundedness detection, see the [Groundedness detection concept page](./concepts/groundedness.md). For API input limits, see the [Input requirements](./overview.md#input-requirements) section of the Overview. + ## Prerequisites * An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/) If you want to clean up and remove an Azure AI services subscription, you can de - [Azure portal](/azure/ai-services/multi-service-resource?pivots=azportal#clean-up-resources) - [Azure CLI](/azure/ai-services/multi-service-resource?pivots=azcli#clean-up-resources) -## Next steps +## Related content -Combine Groundedness detection with other LLM safety features like Prompt Shields. +* [Groundedness detection concepts](./concepts/groundedness.md) +* Combine Groundedness detection with other LLM safety features like [Prompt Shields](./quickstart-jailbreak.md). -> [!div class="nextstepaction"] -> [Prompt Shields quickstart](./quickstart-jailbreak.md) |
ai-services | Quickstart Image | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/quickstart-image.md | zone_pivot_groups: programming-languages-content-safety Get started with the Content Studio, REST API, or client SDKs to do basic image moderation. The Azure AI Content Safety service provides you with AI algorithms for flagging objectionable content. Follow these steps to try it out. +For more information on image moderation, see the [Harm categories concept page](./concepts/harm-categories.md). For API input limits, see the [Input requirements](./overview.md#input-requirements) section of the Overview. + > [!NOTE] > > The sample data and code may contain offensive content. User discretion is advised. If you want to clean up and remove an Azure AI services subscription, you can de - [Azure portal](../multi-service-resource.md?pivots=azportal#clean-up-resources) - [Azure CLI](../multi-service-resource.md?pivots=azcli#clean-up-resources) -## Next steps +## Related content -Configure filters for each category and test on datasets using [Content Safety Studio](studio-quickstart.md), export the code and deploy. +* [Harm categories](./concepts/harm-categories.md) +* Configure filters for each category and test on datasets using [Content Safety Studio](studio-quickstart.md), export the code and deploy. -> [!div class="nextstepaction"] -> [Content Safety Studio quickstart](./studio-quickstart.md) |
ai-services | Quickstart Jailbreak | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/quickstart-jailbreak.md | +For more information on Prompt Shields, see the [Prompt Shields concept page](./concepts/jailbreak-detection.md). For API input limits, see the [Input requirements](./overview.md#input-requirements) section of the Overview. + ## Prerequisites * An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/) If you want to clean up and remove an Azure AI services subscription, you can de - [Azure portal](/azure/ai-services/multi-service-resource?pivots=azportal#clean-up-resources) - [Azure CLI](/azure/ai-services/multi-service-resource?pivots=azcli#clean-up-resources) -## Next steps -Configure filters for each category and test on datasets using [Content Safety Studio](studio-quickstart.md), export the code and deploy. +## Related content -> [!div class="nextstepaction"] -> [Content Safety Studio quickstart](./studio-quickstart.md) +* [Prompt Shields concepts](./concepts/jailbreak-detection.md) +* Configure filters for each category and test on datasets using [Content Safety Studio](studio-quickstart.md), export the code and deploy. |
ai-services | Quickstart Protected Material | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/quickstart-protected-material.md | +For more information on protected material detection, see the [Protected material detection concept page](./concepts/protected-material.md). For API input limits, see the [Input requirements](./overview.md#input-requirements) section of the Overview. ++ ## Prerequisites * An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/) If you want to clean up and remove an Azure AI services subscription, you can de - [Azure portal](../multi-service-resource.md?pivots=azportal#clean-up-resources) - [Azure CLI](../multi-service-resource.md?pivots=azcli#clean-up-resources) -## Next steps -Configure filters for each category and test on datasets using [Content Safety Studio](studio-quickstart.md), export the code and deploy. +## Related content ++* [Protected material detection concepts](./concepts/protected-material.md) +* Configure filters for each category and test on datasets using [Content Safety Studio](studio-quickstart.md), export the code and deploy. -> [!div class="nextstepaction"] -> [Content Safety Studio quickstart](./studio-quickstart.md) |
ai-services | Quickstart Text | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/quickstart-text.md | zone_pivot_groups: programming-languages-content-safety Get started with the Content Safety Studio, REST API, or client SDKs to do basic text moderation. The Azure AI Content Safety service provides you with AI algorithms for flagging objectionable content. Follow these steps to try it out. +For more information on text moderation, see the [Harm categories concept page](./concepts/harm-categories.md). For API input limits, see the [Input requirements](./overview.md#input-requirements) section of the Overview. ++ > [!NOTE] > > The sample data and code may contain offensive content. User discretion is advised. If you want to clean up and remove an Azure AI services subscription, you can de - [Azure portal](../multi-service-resource.md?pivots=azportal#clean-up-resources) - [Azure CLI](../multi-service-resource.md?pivots=azcli#clean-up-resources) -## Next steps -Configure filters for each category and test on datasets using [Content Safety Studio](studio-quickstart.md), export the code and deploy. -> [!div class="nextstepaction"] -> [Content Safety Studio quickstart](./studio-quickstart.md) +## Related content ++* [Harm categories](./concepts/harm-categories.md) +* Configure filters for each category and test on datasets using [Content Safety Studio](studio-quickstart.md), export the code and deploy. |
ai-services | Concept Custom Neural | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-custom-neural.md | Title: Custom neural document model - Document Intelligence (formerly Form Recognizer) -description: Use the custom neural document model to train a model to extract data from structured, semistructured, and unstructured documents. +description: Use the custom neural document model to train a model to extract data from structured, semi-structured, and unstructured documents. Previously updated : 08/07/2024 Last updated : 08/13/2024 - references_regions monikerRange: '>=doc-intel-3.0.0' +<!-- markdownlint-disable MD001 --> +<!-- markdownlint-disable MD033 --> +<!-- markdownlint-disable MD051 --> +<!-- markdownlint-disable MD024 --> # Document Intelligence custom neural model Custom neural models are available in the [v3.0 and later models](v3-1-migration | Document Type | REST API | SDK | Label and Test Models| |--|--|--|--|-| Custom document | [Document Intelligence 3.1](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)| [Document Intelligence SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)| [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio) +| Custom document | [Document Intelligence 3.1](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)| [Document Intelligence SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)| [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio)| The `Build` operation to train model supports a new ```buildMode``` property, to train a custom neural model, set the ```buildMode``` to ```neural```. :::moniker range="doc-intel-4.0.0" -```REST +```bash https://{endpoint}/documentintelligence/documentModels:build?api-version=2024-07-31-preview { https://{endpoint}/documentintelligence/documentModels:build?api-version=2024-07 :::moniker range="doc-intel-3.1.0" -```REST +```bash https://{endpoint}/formrecognizer/documentModels:build?api-version=v3.1:2023-07-31 { https://{endpoint}/formrecognizer/documentModels:build?api-version=v3.1:2023-07- :::moniker range="doc-intel-3.0.0" -```REST +```bash https://{endpoint}/formrecognizer/documentModels/{modelId}:copyTo?api-version=2022-08-31 { https://{endpoint}/formrecognizer/documentModels/{modelId}:copyTo?api-version=20 :::moniker range="doc-intel-4.0.0" ## Billing- -Starting with version `2024-07-31-preview`, you can train your custom neural model for longer durations than 30 minutes. Previous versions have been capped at 30 minutes per training instance, with a total of 20 free training instances per month. Now with `2024-07-31-preview`, you can receive **10 hours** of free model training, and train a model for as long as 10 hours. If you would like to train a model for longer than 10 hours, billing charges are calculated for model trainings that exceed 10 hours. You can choose to spend all of 10 free hours on a single build with a large set of data, or utilize it across multiple builds by adjusting the maximum duration value for the `build` operation by specifying `maxTrainingHours` as below: ++Starting with version `2024-07-31-preview`, you can train your custom neural model for longer durations than the standard 30 minutes. Previous versions are limited to 30 minutes per training instance, with a total of 20 free training instances per month. Now with `2024-07-31-preview`, you can receive **10 hours** of **free model training**, and train a model for as long as 10 hours. ++You can choose to spend all of 10 free hours on a single model build with a large set of data, or utilize it across multiple builds by adjusting the maximum duration value for the `build` operation by specifying `maxTrainingHours`: ```bash POST /documentModels:build } ``` -> [!NOTE] -> For Document Intelligence versions `v3.1 (2023-07-31)` and `v3.0 (2022-08-31)`, custom neural model's paid training is not enabled. For the two older versions, you will get a maximum of 30 minutes training duration per model. If you would like to train more than 20 model instances, you can request for increase in the training limit. --Each training hour is the amount of compute a single V100 GPU can perform in an hour. As each build takes different amount of time, billing is calculated for the actual time spent (excluding time in queue), with a minimum of 30 minutes per training job. The elapsed time is converted to V100 equivalent training hours and reported as part of the model. +> [!IMPORTANT] +> +> * If you would like to train additional neural models or train models for a longer time period that **exceed 10 hours**, billing charges apply. For details on the billing charges, refer to the [pricing page](https://azure.microsoft.com/pricing/details/ai-document-intelligence/). +> * You can opt in for this paid training service by setting the `maxTrainingHours` to the desired maximum number of hours. API calls with no budget but with the `maxTrainingHours` set as over 10 hours will fail. +> * As each build takes different amount of time depending on the type and size of the training dataset, billing is calculated for the actual time spent training the neural model, with a minimum of 30 minutes per training job. +> * This paid billing structure enables you to train larger data sets for longer durations with flexibility in the training hours. ```bash GET /documentModels/{myCustomModel} } ``` -This billing structure enables you to train larger data sets for longer durations with flexibility in the training hours. +> [!NOTE] +> For Document Intelligence versions `v3.1 (2023-07-31)` and `v3.0 (2022-08-31)`, custom neural model's paid training is not enabled. For the two older versions, you will get a maximum of 30 minutes training duration per model. If you would like to train more than 20 model instances, you can create an [Azure support ticket](service-limits.md#create-and-submit-support-request) to increase in the training limit. :::moniker-end This billing structure enables you to train larger data sets for longer duration ## Billing -For Document Intelligence versions `v3.1 (2023-07-31)` and `v3.0 (2022-08-31)`, you will get a maximum of 30 minutes training duration per model, and a maximum of 20 trainings for free per month. If you would like to train more than 20 model instances, you can request for increase in the training limit. +For Document Intelligence versions `v3.1 (2023-07-31) and v3.0 (2022-08-31)`, you receive a maximum 30 minutes of training duration per model, and a maximum of 20 trainings for free per month. If you would like to train more than 20 model instances, you can create an [Azure support ticket](service-limits.md#create-and-submit-support-request) to increase in the training limit. For the Azure support ticket, enter in the `summary` section a phrase such as `Increase Document Intelligence custom neural training (TPS) limit`. A ticket can only apply at a resource-level, not a subscription level. You can request a training limit increase for a single Document Intelligence resource by specifying your resource ID and region in the support ticket. -If you are interested in training models for longer durations than 30 minutes, we support **paid training** for our newest version, `v4.0 (2024-07-31)`. Using the latest version, you can train your model for a longer duration to process larger documents. +If you want to train models for longer durations than 30 minutes, we support **paid training** with our newest version, `v4.0 (2024-07-31-preview)`. Using the latest version, you can train your model for a longer duration to process larger documents. For more information about paid training, *see* [Billing v4.0](service-limits.md#billing). :::moniker-end If you are interested in training models for longer durations than 30 minutes, w ## Billing -For Document Intelligence versions `v3.1 (2023-07-31)` and `v3.0 (2022-08-31)`, you will get a maximum of 30 minutes training duration per model, and a maximum of 20 trainings for free per month. If you would like to train more than 20 model instances, you can request for increase in the training limit. +For Document Intelligence versions `v3.1 (2023-07-31) and v3.0 (2022-08-31)`, you receive a maximum 30 minutes of training duration per model, and a maximum of 20 trainings for free per month. If you would like to train more than 20 model instances, you can create an [Azure support ticket](service-limits.md#create-and-submit-support-request) to increase in the training limit. For the Azure support ticket, enter in the `summary` section a phrase such as `Increase Document Intelligence custom neural training (TPS) limit`. A ticket can only apply at a resource-level, not a subscription level. You can request a training limit increase for a single Document Intelligence resource by specifying your resource ID and region in the support ticket. -If you are interested in training models for longer durations than 30 minutes, we support **paid training** for our newest version, `v4.0 (2024-07-31)`. Using the latest version, you can train your model for a longer duration to process larger documents. +If you want to train models for longer durations than 30 minutes, we support **paid training** with our newest version, `v4.0 (2024-07-31)`. Using the latest version, you can train your model for a longer duration to process larger documents. For more information about paid training, *see* [Billing v4.0](service-limits.md#billing). :::moniker-end |
ai-services | Data Feeds From Different Sources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/data-feeds-from-different-sources.md | The following sections specify the parameters required for all authentication ty ## <span id="cosmosdb">Azure Cosmos DB (SQL)</span> -* **Connection string**: The connection string to access your Azure Cosmos DB instance. This can be found in the Azure Cosmos DB resource in the Azure portal, in **Keys**. For more information, see [Secure access to data in Azure Cosmos DB](../../cosmos-db/secure-access-to-data.md). +* **Connection string**: The connection string to access your Azure Cosmos DB instance. This can be found in the Azure Cosmos DB resource in the Azure portal, in **Keys**. For more information, see [Secure access to data in Azure Cosmos DB](/azure/cosmos-db/secure-access-to-data). * **Database**: The database to query against. In the Azure portal, under **Containers**, go to **Browse** to find the database. * **Collection ID**: The collection ID to query against. In the Azure portal, under **Containers**, go to **Browse** to find the collection ID. * **SQL query**: A SQL query to get and formulate data into multi-dimensional time series data. You can use the `@IntervalStart` and `@IntervalEnd` variables in your query. They should be formatted as follows: `yyyy-MM-ddTHH:mm:ssZ`. |
ai-services | Api Version Deprecation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/api-version-deprecation.md | This article is to help you understand the support lifecycle for the Azure OpenA Azure OpenAI API latest release: -- Inference: [2024-05-01-preview](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-05-01-preview/inference.json)-- Authoring: [2024-05-01-preview](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/authoring/preview/2024-05-01-preview/azureopenai.json)+- Inference: [2024-07-01-preview](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-07-01-preview/inference.json) +- Authoring: [2024-07-01-preview](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/authoring/preview/2024-07-01-preview/azureopenai.json) This version contains support for the latest Azure OpenAI features including: -- [Embeddings `encoding_format` and `dimensions` parameters] [**Added in 2024-03-01-preview**]+- Assistants V2 [**Added in 2024-05-01-preview**] +- Embeddings `encoding_format` and `dimensions` parameters [**Added in 2024-03-01-preview**] - [Assistants API](./assistants-reference.md). [**Added in 2024-02-15-preview**] - [Text to speech](./text-to-speech-quickstart.md). [**Added in 2024-02-15-preview**] - [DALL-E 3](./dall-e-quickstart.md). [**Added in 2023-12-01-preview**] This version contains support for the latest Azure OpenAI features including: - [Function calling](./how-to/function-calling.md) [**Added in 2023-07-01-preview**] - [Retrieval augmented generation with your data feature](./use-your-data-quickstart.md). [**Added in 2023-06-01-preview**] -## Changes between 2024-4-01-preview and 2024-05-01-preview API specification +## Changes between 2024-5-01-preview and 2024-07-01-preview API specification ++- [Batch API support added](./how-to/batch.md) +- [Vector store chunking strategy parameters](/azure/ai-services/openai/reference-preview?#request-body-17) +- `max_num_results` that the file search tool should output. ++## Changes between 2024-04-01-preview and 2024-05-01-preview API specification - Assistants v2 support - [File search tool and vector storage](https://go.microsoft.com/fwlink/?linkid=2272425) - Fine-tuning [checkpoints](https://github.com/Azure/azure-rest-api-specs/blob/9583ed6c26ce1f10bbea92346e28a46394a784b4/specification/cognitiveservices/data-plane/AzureOpenAI/authoring/preview/2024-05-01-preview/azureopenai.json#L586), [seed](https://github.com/Azure/azure-rest-api-specs/blob/9583ed6c26ce1f10bbea92346e28a46394a784b4/specification/cognitiveservices/data-plane/AzureOpenAI/authoring/preview/2024-05-01-preview/azureopenai.json#L1574), [events](https://github.com/Azure/azure-rest-api-specs/blob/9583ed6c26ce1f10bbea92346e28a46394a784b4/specification/cognitiveservices/data-plane/AzureOpenAI/authoring/preview/2024-05-01-preview/azureopenai.json#L529) - On your data updates-- Dall-e 2 now supports model deployment and can be used with the latest preview API.+- DALL-E 2 now supports model deployment and can be used with the latest preview API. - Content filtering updates ## Changes between 2024-03-01-preview and 2024-04-01-preview API specification This version contains support for the latest GA features like Whisper, DALL-E 3, We recommend first testing the upgrade to new API versions to confirm there's no impact to your application from the API update before making the change globally across your environment. -If you're using the OpenAI Python client library or the REST API, you'll need to update your code directly to the latest preview API version. +If you're using the OpenAI Python or JavaScript client libraries, or the REST API, you'll need to update your code directly to the latest preview API version. -If you're using one of the Azure OpenAI SDKs for C#, Go, Java, or JavaScript you'll instead need to update to the latest version of the SDK. Each SDK release is hardcoded to work with specific versions of the Azure OpenAI API. +If you're using one of the Azure OpenAI SDKs for C#, Go, or Java, you'll instead need to update to the latest version of the SDK. Each SDK release is hardcoded to work with specific versions of the Azure OpenAI API. ## Next steps |
ai-services | Model Retirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/model-retirements.md | Azure OpenAI notifies those who are members of the following roles for each subs ## How to get ready for model retirements and version upgrades -To prepare for model retirements and version upgrades, we recommend that customers evaluate their applications with the new models and versions and evaluate their behavior. We also recommend that customers update their applications to use the new models and versions before the retirement date. +To prepare for model retirements and version upgrades, we recommend that customers test their applications with the new models and versions and evaluate their behavior. We also recommend that customers update their applications to use the new models and versions before the retirement date. -For more information, see [How to upgrade to a new model or version](./model-versions.md). +For more information on the model evaluation process, see the [Getting started with model evaluation guide](https://techcommunity.microsoft.com/t5/ai-azure-ai-services-blog/how-to-evaluate-amp-upgrade-model-versions-in-the-azure-openai/ba-p/4218880). ++For information on the model upgrade process, see [How to upgrade to a new model or version](./model-versions.md). ## Current models For more information, see [How to upgrade to a new model or version](./model-ver These models are currently available for use in Azure OpenAI Service. -| Model | Version | Retirement date | -| - | - | - | -| `gpt-35-turbo` | 0301 | No earlier than October 1, 2024 | -| `gpt-35-turbo`<br>`gpt-35-turbo-16k` | 0613 | November 1, 2024 | -| `gpt-35-turbo` | 1106 | No earlier than Nov 17, 2024 | -| `gpt-35-turbo` | 0125 | No earlier than Feb 22, 2025 | -| `gpt-4`<br>`gpt-4-32k` | 0314 | **Deprecation:** October 1, 2024 <br> **Retirement:** June 6, 2025 | -| `gpt-4`<br>`gpt-4-32k` | 0613 | **Deprecation:** October 1, 2024 <br> **Retirement:** June 6, 2025 | -| `gpt-4` | 1106-preview | To be upgraded to `gpt-4` Version: `turbo-2024-04-09`, starting on November 15, 2024, or later **<sup>1</sup>** | -| `gpt-4` | 0125-preview |To be upgraded to `gpt-4` Version: `turbo-2024-04-09`, starting on November 15, 2024, or later **<sup>1</sup>** | -| `gpt-4` | vision-preview | To be upgraded to `gpt-4` Version: `turbo-2024-04-09`, starting on November 15, 2024, or later **<sup>1</sup>** | -| `gpt-3.5-turbo-instruct` | 0914 | No earlier than Sep 14, 2025 | -| `text-embedding-ada-002` | 2 | No earlier than April 3, 2025 | -| `text-embedding-ada-002` | 1 | No earlier than April 3, 2025 | -| `text-embedding-3-small` | | No earlier than Feb 2, 2025 | -| `text-embedding-3-large` | | No earlier than Feb 2, 2025 | +| Model | Version | Retirement date | Suggested replacement | +| - | - | - | | +| `gpt-35-turbo` | 0301 | No earlier than October 1, 2024 | `gpt-4o-mini` | +| `gpt-35-turbo`<br>`gpt-35-turbo-16k` | 0613 | November 1, 2024 | `gpt-4o-mini` | +| `gpt-35-turbo` | 1106 | No earlier than Nov 17, 2024 | `gpt-4o-mini` | +| `gpt-35-turbo` | 0125 | No earlier than Feb 22, 2025 | `gpt-4o-mini` | +| `gpt-4`<br>`gpt-4-32k` | 0314 | **Deprecation:** October 1, 2024 <br> **Retirement:** June 6, 2025 | `gpt-4o` | +| `gpt-4`<br>`gpt-4-32k` | 0613 | **Deprecation:** October 1, 2024 <br> **Retirement:** June 6, 2025 | `gpt-4o` | +| `gpt-4` | 1106-preview | To be upgraded to `gpt-4` Version: `turbo-2024-04-09`, starting on November 15, 2024, or later **<sup>1</sup>** | `gpt-4o`| +| `gpt-4` | 
0125-preview |To be upgraded to `gpt-4` Version: `turbo-2024-04-09`, starting on November 15, 2024, or later **<sup>1</sup>** | `gpt-4o` | +| `gpt-4` | vision-preview | To be upgraded to `gpt-4` Version: `turbo-2024-04-09`, starting on November 15, 2024, or later **<sup>1</sup>** | `gpt-4o`| +| `gpt-3.5-turbo-instruct` | 0914 | No earlier than Sep 14, 2025 | | +| `text-embedding-ada-002` | 2 | No earlier than April 3, 2025 | `text-embedding-3-small` or `text-embedding-3-large` | +| `text-embedding-ada-002` | 1 | No earlier than April 3, 2025 | `text-embedding-3-small` or `text-embedding-3-large` | +| `text-embedding-3-small` | | No earlier than Feb 2, 2025 | | +| `text-embedding-3-large` | | No earlier than Feb 2, 2025 | | **<sup>1</sup>** We will notify all customers with these preview deployments at least 30 days before the start of the upgrades. We will publish an upgrade schedule detailing the order of regions and model versions that we will follow during the upgrades, and link to that schedule from here. These models were deprecated on July 6, 2023 and were retired on June 14, 2024. If you're an existing customer looking for information about these models, see [Legacy models](./legacy-models.md). | Model | Deprecation date | Retirement date | Suggested replacement |-| | | - | -- | +| | | - | -- | | ada | July 6, 2023 | June 14, 2024 | babbage-002 | | babbage | July 6, 2023 | June 14, 2024 | babbage-002 | | curie | July 6, 2023 | June 14, 2024 | davinci-002 | |
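As a minimal illustration of the upgrade check recommended above, the following Python sketch (assuming the `openai` package and an Azure OpenAI resource that already has both a retiring deployment and its suggested replacement; the deployment names, endpoint, key, and API version are placeholders) runs the same prompt against both deployments so their behavior can be compared before the retirement date.

```python
# Compare a retiring deployment against its suggested replacement on the same prompt.
# Assumes the `openai` package and an Azure OpenAI resource with both deployments created.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # placeholder env var
    api_key=os.environ["AZURE_OPENAI_API_KEY"],          # placeholder env var
    api_version="2024-06-01",                            # assumed GA API version
)

prompt = [{"role": "user", "content": "Summarize our return policy in two sentences."}]

for deployment in ["gpt-35-turbo-0613", "gpt-4o-mini"]:  # your deployment names may differ
    response = client.chat.completions.create(model=deployment, messages=prompt)
    print(f"--- {deployment} ---")
    print(response.choices[0].message.content)
```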
ai-services | Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/models.md | These models can only be used with Embedding API requests. ### Assistants (Preview) -For Assistants you need a combination of a supported model, and a supported region. Certain tools and capabilities require the latest models. The following models are available in the Assistants API, SDK, Azure AI Studio and Azure OpenAI Studio. The following table is for pay-as-you-go. For information on Provisioned Throughput Unit (PTU) availability, see [provisioned throughput](./provisioned-throughput.md). The listed models and regions can be used with both Assistants v1 and v2. +For Assistants you need a combination of a supported model, and a supported region. Certain tools and capabilities require the latest models. The following models are available in the Assistants API, SDK, Azure AI Studio and Azure OpenAI Studio. The following table is for pay-as-you-go. For information on Provisioned Throughput Unit (PTU) availability, see [provisioned throughput](./provisioned-throughput.md). The listed models and regions can be used with both Assistants v1 and v2. You can use [global standard models](#global-standard-model-availability) if they are supported in the regions listed below. | Region | `gpt-35-turbo (0613)` | `gpt-35-turbo (1106)`| `fine tuned gpt-3.5-turbo-0125` | `gpt-4 (0613)` | `gpt-4 (1106)` | `gpt-4 (0125)` | `gpt-4o (2024-05-13)` | `gpt-4o-mini (2024-07-18)` | |--|::|::|::|::|::|::|::|::| |
ai-services | Prompt Transformation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/prompt-transformation.md | -Prompt transformation is a process in DALL-E 3 image generation that applies a safety and quality system message to your original prompt using a large language model (LLM) call before being sent to the model for image generation. This system message enriches your original prompt with the goal of generating more diverse and higher-quality images, while maintaining intent. --Prompt transformation is applied to all Azure OpenAI DALL-E 3 requests by default. There may be scenarios in which your use case requires a lower level of enrichment. To generate images that use prompts that more closely resemble your original prompt, append this text to your prompt: `I NEED to test how the tool works with extremely simple prompts. DO NOT add any detail, just use it AS-IS:`. This ensures there is minimal prompt transformation. Evaluating your system behavior with and without this prompt helps you better understand the impact and value of prompt transformation. +Prompt transformation is a process in DALL-E 3 image generation that applies a safety and quality system message to your original prompt using a large language model (LLM) call before being sent to the model for image generation. This system message enriches your original prompt with the goal of generating more diverse and higher-quality images, while maintaining intent. After prompt transformation is applied to the original prompt, content filtering is applied as a secondary step before image generation; for more information, see [Content filtering](./content-filter.md). Output Content: } ``` -> [!NOTE] -> Azure OpenAI Service does not offer configurability for prompt transformation at this time. To bypass prompt transformation, prepend the following to any request: `I NEED to test how the tool works with extremely simple prompts. DO NOT add any detail, just use it AS-IS:`. -> -> While this addition will encourage the revised prompt to be more representative of your original prompt, the system may alter specific details. ## Next steps |
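Because prompt transformation rewrites the prompt before image generation, the transformed text is returned with the result. A rough sketch of inspecting it, assuming the `openai` package and a DALL-E 3 deployment; the endpoint, key, API version, and the deployment name `dall-e-3` are placeholders.

```python
# Generate an image and inspect the transformed (revised) prompt returned by the service.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

result = client.images.generate(
    model="dall-e-3",  # name of your DALL-E 3 deployment
    prompt="A watercolor painting of a lighthouse at dawn",
    n=1,
)

print("Revised prompt:", result.data[0].revised_prompt)  # prompt after transformation
print("Image URL:", result.data[0].url)
```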
ai-services | Provisioned Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/provisioned-migration.md | If a deployment is on a resource that has a commitment, and that commitment expi Customers that have commitments today can continue to use them at least through the end of 2024. This includes purchasing new PTUs on new or existing commitments and managing commitment renewal behaviors. However, the August update has changed certain aspects of commitment operation. -- Only models released as provisioned prior to August 1, 2023 or before can be deployed on a resource with a commitment.+- Only models released as provisioned prior to August 1, 2024 or before can be deployed on a resource with a commitment. - If the deployed PTUs under a commitment exceed the committed PTUs, the hourly overage charges will be emitted against the same hourly meter as used for the new hourly/reservation payment model. This allows the overage charges to be discounted via an Azure Reservation. - It is possible to deploy more PTUs than are committed on the resource. This supports the ability to guarantee capacity availability prior to increasing the commitment size to cover it. |
ai-services | Understand Embeddings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/understand-embeddings.md | -An embedding is a special format of data representation that machine learning models and algorithms can easily use. The embedding is an information dense representation of the semantic meaning of a piece of text. Each embedding is a vector of floating-point numbers, such that the distance between two embeddings in the vector space is correlated with semantic similarity between two inputs in the original format. For example, if two texts are similar, then their vector representations should also be similar. Embeddings power vector similarity search in Azure Databases such as [Azure Cosmos DB for MongoDB vCore](../../../cosmos-db/mongodb/vcore/vector-search.md) , [Azure SQL Database](/azure/azure-sql/database/ai-artificial-intelligence-intelligent-applications?view=azuresql&preserve-view=true#vector-search) or [Azure Database for PostgreSQL - Flexible Server](../../../postgresql/flexible-server/how-to-use-pgvector.md). +An embedding is a special format of data representation that machine learning models and algorithms can easily use. The embedding is an information dense representation of the semantic meaning of a piece of text. Each embedding is a vector of floating-point numbers, such that the distance between two embeddings in the vector space is correlated with semantic similarity between two inputs in the original format. For example, if two texts are similar, then their vector representations should also be similar. Embeddings power vector similarity search in Azure Databases such as [Azure Cosmos DB for MongoDB vCore](/azure/cosmos-db/mongodb/vcore/vector-search) , [Azure SQL Database](/azure/azure-sql/database/ai-artificial-intelligence-intelligent-applications?view=azuresql&preserve-view=true#vector-search) or [Azure Database for PostgreSQL - Flexible Server](/azure/postgresql/flexible-server/how-to-use-pgvector). ## Embedding models An alternative method of identifying similar documents is to count the number of ## Next steps * Learn more about using Azure OpenAI and embeddings to perform document search with our [embeddings tutorial](../tutorials/embeddings.md).-* Store your embeddings and perform vector (similarity) search using [Azure Cosmos DB for MongoDB vCore](../../../cosmos-db/mongodb/vcore/vector-search.md), [Azure Cosmos DB for NoSQL](../../../cosmos-db/rag-data-openai.md) , [Azure SQL Database](/azure/azure-sql/database/ai-artificial-intelligence-intelligent-applications?view=azuresql&preserve-view=true#vector-search) or [Azure Database for PostgreSQL - Flexible Server](../../../postgresql/flexible-server/how-to-use-pgvector.md). +* Store your embeddings and perform vector (similarity) search using [Azure Cosmos DB for MongoDB vCore](/azure/cosmos-db/mongodb/vcore/vector-search), [Azure Cosmos DB for NoSQL](/azure/cosmos-db/rag-data-openai) , [Azure SQL Database](/azure/azure-sql/database/ai-artificial-intelligence-intelligent-applications?view=azuresql&preserve-view=true#vector-search) or [Azure Database for PostgreSQL - Flexible Server](/azure/postgresql/flexible-server/how-to-use-pgvector). |
ai-services | Embeddings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/embeddings.md | recommendations: false # Learn how to generate embeddings with Azure OpenAI -An embedding is a special format of data representation that can be easily utilized by machine learning models and algorithms. The embedding is an information dense representation of the semantic meaning of a piece of text. Each embedding is a vector of floating point numbers, such that the distance between two embeddings in the vector space is correlated with semantic similarity between two inputs in the original format. For example, if two texts are similar, then their vector representations should also be similar. Embeddings power vector similarity search in Azure Databases such as [Azure Cosmos DB for MongoDB vCore](../../../cosmos-db/mongodb/vcore/vector-search.md) , [Azure SQL Database](/azure/azure-sql/database/ai-artificial-intelligence-intelligent-applications?view=azuresql&preserve-view=true#vector-search) or [Azure Database for PostgreSQL - Flexible Server](../../../postgresql/flexible-server/how-to-use-pgvector.md). +An embedding is a special format of data representation that can be easily utilized by machine learning models and algorithms. The embedding is an information dense representation of the semantic meaning of a piece of text. Each embedding is a vector of floating point numbers, such that the distance between two embeddings in the vector space is correlated with semantic similarity between two inputs in the original format. For example, if two texts are similar, then their vector representations should also be similar. Embeddings power vector similarity search in Azure Databases such as [Azure Cosmos DB for MongoDB vCore](/azure/cosmos-db/mongodb/vcore/vector-search) , [Azure SQL Database](/azure/azure-sql/database/ai-artificial-intelligence-intelligent-applications?view=azuresql&preserve-view=true#vector-search) or [Azure Database for PostgreSQL - Flexible Server](/azure/postgresql/flexible-server/how-to-use-pgvector). ## How to get embeddings Our embedding models may be unreliable or pose social risks in certain cases, an * Learn more about the [underlying models that power Azure OpenAI](../concepts/models.md). * Store your embeddings and perform vector (similarity) search using your choice of Azure service: * [Azure AI Search](../../../search/vector-search-overview.md)- * [Azure Cosmos DB for MongoDB vCore](../../../cosmos-db/mongodb/vcore/vector-search.md) + * [Azure Cosmos DB for MongoDB vCore](/azure/cosmos-db/mongodb/vcore/vector-search) * [Azure SQL Database](/azure/azure-sql/database/ai-artificial-intelligence-intelligent-applications?view=azuresql&preserve-view=true#vector-search)- * [Azure Cosmos DB for NoSQL](../../../cosmos-db/vector-search.md) - * [Azure Cosmos DB for PostgreSQL](../../../cosmos-db/postgresql/howto-use-pgvector.md) - * [Azure Database for PostgreSQL - Flexible Server](../../../postgresql/flexible-server/how-to-use-pgvector.md) + * [Azure Cosmos DB for NoSQL](/azure/cosmos-db/vector-search) + * [Azure Cosmos DB for PostgreSQL](/azure/cosmos-db/postgresql/howto-use-pgvector) + * [Azure Database for PostgreSQL - Flexible Server](/azure/postgresql/flexible-server/how-to-use-pgvector) * [Azure Cache for Redis](../../../azure-cache-for-redis/cache-tutorial-vector-similarity.md) |
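The point that distance between embedding vectors tracks semantic similarity can be illustrated with a short sketch, assuming the `openai` and `numpy` packages and an existing embedding deployment; the endpoint, key, and deployment name `text-embedding-3-small` are placeholders.

```python
# Embed two texts and compare them with cosine similarity; similar texts score closer to 1.
import os
import numpy as np
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

texts = ["The cat sat on the mat.", "A kitten rested on the rug."]
response = client.embeddings.create(input=texts, model="text-embedding-3-small")

a = np.array(response.data[0].embedding)
b = np.array(response.data[1].embedding)
cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print(f"Cosine similarity: {cosine:.3f}")
```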
ai-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/overview.md | Azure OpenAI Service provides REST API access to OpenAI's powerful language mode | Feature | Azure OpenAI | | | |-| Models available | **GPT-4o**<br> **GPT-4 series (including GPT-4 Turbo with Vision)** <br>**GPT-3.5-Turbo series**<br> Embeddings series <br> Learn more in our [Models](./concepts/models.md) page.| -| Fine-tuning | `GPT-4` (preview) <br>`GPT-3.5-Turbo` (0613) <br> `babbage-002` <br> `davinci-002`.| +| Models available | **GPT-4o & GPT-4o mini**<br> **GPT-4 series (including GPT-4 Turbo with Vision)** <br>**GPT-3.5-Turbo series**<br> Embeddings series <br> Learn more in our [Models](./concepts/models.md) page.| +| Fine-tuning | `GPT-4o-mini` (preview) <br> `GPT-4` (preview) <br>`GPT-3.5-Turbo` (0613) <br> `babbage-002` <br> `davinci-002`.| | Price | [Available here](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) <br> For details on GPT-4 Turbo with Vision, see the [special pricing information](../openai/concepts/gpt-with-vision.md#special-pricing-information).| | Virtual network support & private link support | Yes, unless using [Azure OpenAI on your data](./concepts/use-your-data.md). | | Managed Identity| Yes, via Microsoft Entra ID | |
ai-services | Quotas Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/quotas-limits.md | |
ai-services | Embeddings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/tutorials/embeddings.md | Learn more about Azure OpenAI's models: * Store your embeddings and perform vector (similarity) search using your choice of Azure service: * [Azure AI Search](../../../search/vector-search-overview.md) * [Azure SQL Database](/azure/azure-sql/database/ai-artificial-intelligence-intelligent-applications?view=azuresql&preserve-view=true#vector-search)- * [Azure Cosmos DB for MongoDB vCore](../../../cosmos-db/mongodb/vcore/vector-search.md) + * [Azure Cosmos DB for MongoDB vCore](/azure/cosmos-db/mongodb/vcore/vector-search) * [Azure SQL Database](/azure/azure-sql/database/ai-artificial-intelligence-intelligent-applications?view=azuresql&preserve-view=true#vector-search)- * [Azure Cosmos DB for NoSQL](../../../cosmos-db/vector-search.md) - * [Azure Cosmos DB for PostgreSQL](../../../cosmos-db/postgresql/howto-use-pgvector.md) + * [Azure Cosmos DB for NoSQL](/azure/cosmos-db/vector-search) + * [Azure Cosmos DB for PostgreSQL](/azure/cosmos-db/postgresql/howto-use-pgvector) * [Azure Cache for Redis](../../../azure-cache-for-redis/cache-tutorial-vector-similarity.md) |
ai-services | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whats-new.md | This article provides a summary of the latest releases and major documentation u ## August 2024 +### New preview API release ++API version `2024-07-01-preview` is the latest dataplane authoring & inference API release. It replaces API version `2024-05-01-preview` and adds support for: ++- [Batch API support added](./how-to/batch.md) +- [Vector store chunking strategy parameters](/azure/ai-services/openai/reference-preview?#request-body-17) +- `max_num_results` that the file search tool should output. ++For more information see our [reference documentation](./reference-preview.md) ++### GPT-4o mini regional availability ++- GPT-4o mini is available for standard and global standard deployment in the East US and Sweden Central regions. +- GPT-4o mini is available for global batch deployment in East US, Sweden Central, and West US regions. ++### Evaluations guide ++- New blog post on [getting started with model evaluations](https://techcommunity.microsoft.com/t5/ai-azure-ai-services-blog/how-to-evaluate-amp-upgrade-model-versions-in-the-azure-openai/ba-p/4218880). We recommend using this guide as part of the [model upgrade and retirement process](./concepts/model-retirements.md). + ### Latest GPT-4o model available in the early access playground (preview) On August 6, 2024, OpenAI [announced](https://openai.com/index/introducing-structured-outputs-in-the-api/) the latest version of their flagship GPT-4o model version `2024-08-06`. GPT-4o `2024-08-06` has all the capabilities of the previous version as well as: |
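A client opts into the `2024-07-01-preview` release simply by targeting that API version. A hedged sketch of submitting a Batch API job with the `openai` package follows; the input file, deployment referenced inside it, endpoint, key, and the request path passed to `endpoint` are assumptions and may differ for your resource.

```python
# Submit a batch job against the preview data-plane API version.
# Assumes `batch_input.jsonl` already contains one JSON request line per item and that a
# global-batch deployment exists; endpoint/key env vars are placeholders.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-07-01-preview",
)

# Upload the batch input file, then create the batch job.
input_file = client.files.create(file=open("batch_input.jsonl", "rb"), purpose="batch")
batch = client.batches.create(
    input_file_id=input_file.id,
    endpoint="/chat/completions",   # assumed path; check the Batch API reference for your version
    completion_window="24h",
)
print("Batch status:", batch.status)
```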
ai-services | Batch Transcription Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription-create.md | You can query the status of your transcriptions with the [Transcriptions_Get](/r Call [Transcriptions_Delete](/rest/api/speechtotext/transcriptions/delete) regularly from the service, after you retrieve the results. Alternatively, set the `timeToLive` property to ensure the eventual deletion of the results. +> [!TIP] +> You can also try the Batch Transcription API using Python on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/batch/python/python-client/main.py). ++ ::: zone-end ::: zone pivot="speech-cli" spx help batch transcription ::: zone pivot="rest-api" -Here are some property options that you can use to configure a transcription when you call the [Transcriptions_Create](/rest/api/speechtotext/transcriptions/create) operation. +Here are some property options to configure a transcription when you call the [Transcriptions_Create](/rest/api/speechtotext/transcriptions/create) operation. You can find more examples on the same page, such as [creating a transcription with language identification](/rest/api/speechtotext/transcriptions/create/#create-a-transcription-with-language-identification). | Property | Description | |-|-| |
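A rough sketch of calling Transcriptions_Create over REST with the `timeToLive` property, using the `requests` package; the region, subscription key, audio URL, and the version segment in the path are placeholders, not confirmed values.

```python
# Create a batch transcription whose results are deleted automatically after 12 hours.
import os
import requests

region = "eastus"  # placeholder Speech resource region
endpoint = f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.2/transcriptions"

body = {
    "displayName": "Sample batch transcription",
    "locale": "en-US",
    "contentUrls": ["https://example.com/audio/sample.wav"],  # placeholder audio URL
    "properties": {"timeToLive": "PT12H"},                    # ISO 8601 duration
}

response = requests.post(
    endpoint,
    headers={"Ocp-Apim-Subscription-Key": os.environ["SPEECH_KEY"]},
    json=body,
)
response.raise_for_status()
print("Transcription created:", response.json()["self"])
```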
ai-services | Embedded Speech | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/embedded-speech.md | Embedded TTS with neural voices is only supported on Arm64. Requires Linux on x64, Arm64, or Arm32 hardware with [supported Linux distributions](quickstarts/setup-platform.md?tabs=linux). -Embedded speech isn't supported on RHEL/CentOS 7. - Embedded TTS with neural voices isn't supported on Arm32. # [macOS](#tab/macos-target) Follow these steps to install the Speech SDK for Java using Apache Maven: <dependency> <groupId>com.microsoft.cognitiveservices.speech</groupId> <artifactId>client-sdk-embedded</artifactId>- <version>1.38.0</version> + <version>1.40.0</version> </dependency> </dependencies> </project> Be sure to use the `@aar` suffix when the dependency is specified in `build.grad ``` dependencies {- implementation 'com.microsoft.cognitiveservices.speech:client-sdk-embedded:1.38.0@aar' + implementation 'com.microsoft.cognitiveservices.speech:client-sdk-embedded:1.40.0@aar' } ``` ::: zone-end var embeddedSpeechConfig = EmbeddedSpeechConfig.FromPaths(paths.ToArray()); // For speech to text embeddedSpeechConfig.SetSpeechRecognitionModel( "Microsoft Speech Recognizer en-US FP Model V8",- Environment.GetEnvironmentVariable("MODEL_KEY")); + Environment.GetEnvironmentVariable("EMBEDDED_SPEECH_MODEL_LICENSE")); // For text to speech embeddedSpeechConfig.SetSpeechSynthesisVoice( "Microsoft Server Speech Text to Speech Voice (en-US, JennyNeural)",- Environment.GetEnvironmentVariable("VOICE_KEY")); + Environment.GetEnvironmentVariable("EMBEDDED_SPEECH_MODEL_LICENSE")); embeddedSpeechConfig.SetSpeechSynthesisOutputFormat(SpeechSynthesisOutputFormat.Riff24Khz16BitMonoPcm); ``` ::: zone-end auto embeddedSpeechConfig = EmbeddedSpeechConfig::FromPaths(paths); // For speech to text embeddedSpeechConfig->SetSpeechRecognitionModel(( "Microsoft Speech Recognizer en-US FP Model V8",- GetEnvironmentVariable("MODEL_KEY")); + GetEnvironmentVariable("EMBEDDED_SPEECH_MODEL_LICENSE")); // For text to speech embeddedSpeechConfig->SetSpeechSynthesisVoice( "Microsoft Server Speech Text to Speech Voice (en-US, JennyNeural)",- GetEnvironmentVariable("VOICE_KEY")); + GetEnvironmentVariable("EMBEDDED_SPEECH_MODEL_LICENSE")); embeddedSpeechConfig->SetSpeechSynthesisOutputFormat(SpeechSynthesisOutputFormat::Riff24Khz16BitMonoPcm); ``` var embeddedSpeechConfig = EmbeddedSpeechConfig.fromPaths(paths); // For speech to text embeddedSpeechConfig.setSpeechRecognitionModel( "Microsoft Speech Recognizer en-US FP Model V8",- System.getenv("MODEL_KEY")); + System.getenv("EMBEDDED_SPEECH_MODEL_LICENSE")); // For text to speech embeddedSpeechConfig.setSpeechSynthesisVoice( "Microsoft Server Speech Text to Speech Voice (en-US, JennyNeural)",- System.getenv("VOICE_KEY")); + System.getenv("EMBEDDED_SPEECH_MODEL_LICENSE")); embeddedSpeechConfig.setSpeechSynthesisOutputFormat(SpeechSynthesisOutputFormat.Riff24Khz16BitMonoPcm); ``` |
ai-services | How To Configure Openssl Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-configure-openssl-linux.md | zone_pivot_groups: programming-languages-set-three # Configure OpenSSL for Linux -> [!CAUTION] -> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md). - With the Speech SDK, [OpenSSL](https://www.openssl.org) is dynamically configured to the host-system version. > [!NOTE] Set the environment variable `SSL_CERT_DIR` to point at `/opt/ssl/certs` before export SSL_CERT_DIR=/opt/ssl/certs ``` -- OPENSSLDIR is `/etc/pki/tls` (like on RHEL/CentOS based systems). There's a `certs` subdirectory with a certificate bundle file, for example `ca-bundle.crt`.+- OPENSSLDIR is `/etc/pki/tls` (like on RHEL based systems). There's a `certs` subdirectory with a certificate bundle file, for example `ca-bundle.crt`. Set the environment variable `SSL_CERT_FILE` to point at that file before using the Speech SDK. For example: ```bash export SSL_CERT_FILE=/etc/pki/tls/certs/ca-bundle.crt |
ai-services | How To Configure Rhel Centos 7 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-configure-rhel-centos-7.md | - Title: How to configure RHEL/CentOS 7 - Speech service- -description: Learn how to configure RHEL/CentOS 7 so that the Speech SDK can be used. ---- Previously updated : 1/18/2024----# Configure RHEL/CentOS 7 --> [!CAUTION] -> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md). --To use the Speech SDK on Red Hat Enterprise Linux (RHEL) 7 x64 and CentOS 7 x64, update the C++ compiler (for C++ development) and the shared C++ runtime library on your system. --## Install dependencies --First install all general dependencies: --```bash -sudo rpm -Uvh https://packages.microsoft.com/config/rhel/7/packages-microsoft-prod.rpm --# Install development tools and libraries -sudo yum update -y -sudo yum groupinstall -y "Development tools" -sudo yum install -y alsa-lib dotnet-sdk-2.1 java-1.8.0-openjdk-devel openssl -sudo yum install -y gstreamer1 gstreamer1-plugins-base gstreamer1-plugins-good gstreamer1-plugins-bad-free gstreamer1-plugins-ugly-free -``` --## C/C++ compiler and runtime libraries --Install the prerequisite packages with this command: --```bash -sudo yum install -y gmp-devel mpfr-devel libmpc-devel -``` --Next update the compiler and runtime libraries: --```bash -# Build GCC 7.5.0 and runtimes and install them under /usr/local -curl https://ftp.gnu.org/gnu/gcc/gcc-7.5.0/gcc-7.5.0.tar.gz -O -tar -xf gcc-7.5.0.tar.gz -mkdir gcc-7.5.0-build && cd gcc-7.5.0-build -../gcc-7.5.0/configure --enable-languages=c,c++ --disable-bootstrap --disable-multilib --prefix=/usr/local -make -j$(nproc) -sudo make install-strip -``` --If the updated compiler and libraries need to be deployed on several machines, you can copy them from under `/usr/local` to other machines. If only the runtime libraries are needed, then the files in `/usr/local/lib64` are enough. --## Environment settings --Run the following commands to complete the configuration: --```bash -# Add updated C/C++ runtimes to the library path -# (this is required for any development/testing with Speech SDK) -export LD_LIBRARY_PATH=/usr/local/lib64:$LD_LIBRARY_PATH --# For C++ development only: -# - add the updated compiler to PATH -# (note, /usr/local/bin should be already first in PATH on vanilla systems) -# - add Speech SDK libraries from the Linux tar package to LD_LIBRARY_PATH -# (note, use the actual path to extracted files!) -export PATH=/usr/local/bin:$PATH -hash -r # reset cached paths in the current shell session just in case -export LD_LIBRARY_PATH=/path/to/extracted/SpeechSDK-Linux-<version>/lib/centos7-x64:$LD_LIBRARY_PATH -``` --> [!NOTE] -> The Linux .tar package contains specific libraries for RHEL/CentOS 7. These are in `lib/centos7-x64` as shown in the environment setting example for `LD_LIBRARY_PATH` above. Speech SDK libraries in `lib/x64` are for all the other supported Linux x64 distributions (including RHEL/CentOS 8) and don't work on RHEL/CentOS 7. --## Next steps --> [!div class="nextstepaction"] -> [About the Speech SDK](speech-sdk.md) |
ai-services | Video Translation Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/video-translation-studio.md | To create a video translation project, follow these steps: 1. Select **Video translation**. -1. On the **Create and Manage Projects** page, select **Upload file**. +1. On the **Create and Manage Projects** page, select **Create a project**. -1. On the **Video file** page, upload your video file by dragging and dropping the video file or selecting the file manually. +1. On the **New project** page, select **Voice type**. - Ensure the video is in .mp4 format, less than 500 MB, and shorter than 60 minutes. + :::image type="content" source="media/video-translation/select-voice-type.png" alt-text="Screenshot of selecting voice type on the new project page."::: + + You can select **Prebuilt neural voice** or **Personal voice** for **Voice type**. For prebuilt neural voice, the system automatically + selects the most suitable prebuilt voice by matching the speaker's voice in the video with prebuilt voices. For personal voice, the + system provides the model with superior voice cloning similarity. To use personal voice, you need to apply for [access](https://aka.ms/customneural). + +1. Upload your video file by dragging and dropping the video file or selecting the file manually. -1. Provide the **File name**, **Description**, and select **Voice type**, **Language of the video**, **Translate to** language. + :::image type="content" source="media/video-translation/upload-video-file.png" alt-text="Screenshot of uploading your video file on the new project page."::: - You can select **Prebuilt neural voice** or **Personal voice** for **Voice type**. For prebuilt neural voice, the system automatically selects the most suitable prebuilt voice by matching the speaker's voice in the video with prebuilt voices. For personal voice, the system provides the model with superior voice cloning similarity. To use personal voice, you need to apply for access. The application form will be available soon. + Ensure the video is in .mp4 format, less than 500 MB, and shorter than 60 minutes. + +1. Provide **Project name**, and select **Number of speakers**, **Language of the video**, **Translate to** language. - :::image type="content" source="media/video-translation/upload-video-file.png" alt-text="Screenshot of uploading your video file on the video file page."::: + :::image type="content" source="media/video-translation/provide-video-information.png" alt-text="Screenshot of providing video information on the new project page."::: + + If you want to use your own subtitle files, select **Add subtitle file**. You can choose to upload either the source subtitle file or the target subtitle file. The subtitle file can be in WebVTT or JSON format. You can download a sample VTT file for your reference by selecting **Download sample VTT file**. + + :::image type="content" source="media/video-translation/add-subtitle-file.png" alt-text="Screenshot of adding subtitle file on the new project page."::: 1. After reviewing the pricing information and code of conduct, then proceed to create the project. - When processing the video file, you can check the processing status on the project tab. + Once the upload is complete, you can check the processing status on the project tab. - Once the upload is complete, the project is created. You can then select the project to review detailed settings and make adjustments according to your preferences. 
+ After the project is created, you can select the project to review detailed settings and make adjustments according to your preferences. ## Check and adjust voice settings On the right side of the video, you can view both the original script and the tr You can also add or remove segments as needed. When you want to add a segment, ensure that the new segment timestamp doesn't overlap with the previous and next segment, and the segment end time should be larger than the start time. The correct format of timestamp should be `hh:mm:ss.ms`. Otherwise, you can't apply the changes. +You can adjust the time frame of the scripts directly using the audio waveform below the video. After selecting **Apply changes**, the adjustments will be applied. + If you encounter segments with an "unidentified" voice name, it might be because the system couldn't accurately detect the voice, especially in situations where speaker voices overlap. In such cases, it's advisable to manually change the voice name. :::image type="content" source="media/video-translation/voice-unidentified.png" alt-text="Screenshot of one segment with unidentified voice name."::: If you want to adjust the voice, select **Voice settings** to make some changes. :::image type="content" source="media/video-translation/voice-settings.png" alt-text="Screenshot of adjusting voice settings on the voice settings page."::: -If you make changes multiple times but haven't finished, you only need to save the changes you've made by selecting **Save**. After making all changes, select **Apply changes** to apply them to the video. You'll be charged only after you select **Apply changes**. +If you make changes multiple times but haven't finished, you only need to save the changes you've made by selecting **Save**. After making all changes, select **Apply changes** to apply them to the video. You'll be charged only after you select **Apply changes**. :::image type="content" source="media/video-translation/apply-changes.png" alt-text="Screenshot of selecting apply changes button after making all changes."::: |
ai-studio | Ai Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/ai-resources.md | -Hubs are the primary top-level Azure resource for AI studio and provide a central way for a team to govern security, connectivity, and computing resources across playgrounds and projects. Once a hub is created, developers can create projects from it and access shared company resources without needing an IT administrator's repeated help. +Hubs are the primary top-level Azure resource for AI Studio and provide a central way for a team to govern security, connectivity, and computing resources across playgrounds and projects. Once a hub is created, developers can create projects from it and access shared company resources without needing an IT administrator's repeated help. Project workspaces that are created using a hub inherit the same security settings and shared resource access. Teams can create project workspaces as needed to organize their work, isolate data, and/or restrict access. Azure AI Studio layers on top of existing Azure services including Azure AI and [!INCLUDE [Resource provider kinds](../includes/resource-provider-kinds.md)] -When you create a new hub, a set of dependent Azure resources are required to store data that you upload or get generated when working in AI studio. If not provided by you, and required, these resources are automatically created. +When you create a new hub, a set of dependent Azure resources are required to store data that you upload or get generated when working in AI Studio. If not provided by you, and required, these resources are automatically created. [!INCLUDE [Dependent Azure resources](../includes/dependent-resources.md)] |
ai-studio | Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/architecture.md | AI Studio provides a unified experience for AI developers and data scientists to The top level AI Studio resources (hub and project) are based on Azure Machine Learning. Connected resources, such as Azure OpenAI, Azure AI services, and Azure AI Search, are used by the hub and project in reference, but follow their own resource management lifecycle. -- **AI hub**: The hub is the top-level resource in AI Studio. The Azure resource provider for a hub is `Microsoft.MachineLearningServices/workspaces`, and the kind of resource is `Hub`. It provides the following features:+- **AI Studio hub**: The hub is the top-level resource in AI Studio. The Azure resource provider for a hub is `Microsoft.MachineLearningServices/workspaces`, and the kind of resource is `Hub`. It provides the following features: - Security configuration including a managed network that spans projects and model endpoints. - Compute resources for interactive development, finetuning, open source, and serverless model deployments. - Connections to other Azure services such as Azure OpenAI, Azure AI services, and Azure AI Search. Hub-scoped connections are shared with projects created from the hub. - Project management. A hub can have multiple child projects. - An associated Azure storage account for data upload and artifact storage.-- **AI project**: A project is a child resource of the hub. The Azure resource provider for a project is `Microsoft.MachineLearningServices/workspaces`, and the kind of resource is `Project`. The project provides the following features:+- **AI Studio project**: A project is a child resource of the hub. The Azure resource provider for a project is `Microsoft.MachineLearningServices/workspaces`, and the kind of resource is `Project`. The project provides the following features: - Access to development tools for building and customizing AI applications. - Reusable components including datasets, models, and indexes. - An isolated container to upload data to (within the storage inherited from the hub). |
ai-studio | Safety Evaluations Transparency Note | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/safety-evaluations-transparency-note.md | Azure AI Studio provisions an Azure OpenAI GPT-4 model and orchestrates adversar The safety evaluations aren't intended to use for any purpose other than to evaluate content risks and jailbreak vulnerabilities of your generative AI application: -- **Evaluating your generative AI application pre-deployment**: Using the evaluation wizard in the Azure AI studio or the Azure AI Python SDK, safety evaluations can assess in an automated way to evaluate potential content or security risks.+- **Evaluating your generative AI application pre-deployment**: Using the evaluation wizard in the Azure AI Studio or the Azure AI Python SDK, safety evaluations can assess in an automated way to evaluate potential content or security risks. - **Augmenting your red-teaming operations**: Using the adversarial simulator, safety evaluations can simulate adversarial interactions with your generative AI application to attempt to uncover content and security risks.-- **Communicating content and security risks to stakeholders**: Using the Azure AI studio, you can share access to your Azure AI Studio project with safety evaluations results with auditors or compliance stakeholders.+- **Communicating content and security risks to stakeholders**: Using the Azure AI Studio, you can share access to your Azure AI Studio project with safety evaluations results with auditors or compliance stakeholders. #### Considerations when choosing a use case |
ai-studio | Create Hub Terraform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/create-hub-terraform.md | Title: 'Use Terraform to create an Azure AI Studio hub' -description: In this article, you create an Azure AI hub, an AI project, an AI services resource, and more resources. +description: In this article, you create an Azure AI Studio hub, an Azure AI Studio project, an AI services resource, and more resources. Last updated 07/12/2024 In this article, you use Terraform to create an Azure AI Studio hub, a project, > * Set up a storage account > * Establish a key vault > * Configure AI services-> * Build an Azure AI hub -> * Develop an AI project +> * Build an AI Studio hub +> * Develop an AI Studio project > * Establish an AI services connection ## Prerequisites |
ai-studio | Create Projects | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/create-projects.md | Use the following tabs to select the method you plan to use to create a project: For more information on authenticating, see [Authentication methods](/cli/azure/authenticate-azure-cli). -1. Once the extension is installed and authenticated to your Azure subscription, use the following command to create a new Azure AI project from an existing Azure AI hub: +1. Once the extension is installed and authenticated to your Azure subscription, use the following command to create a new Azure AI Studio project from an existing Azure AI Studio hub: ```azurecli az ml workspace create --kind project --hub-id {my_hub_ARM_ID} --resource-group {my_resource_group} --name {my_project_name} In addition, a number of resources are only accessible by users in your project | workspacefilestore | {project-GUID}-code | Hosts files created on your compute and using prompt flow | > [!NOTE]-> Storage connections are not created directly with the project when your storage account has public network access set to disabled. These are created instead when a first user accesses AI studio over a private network connection. [Troubleshoot storage connections](troubleshoot-secure-connection-project.md#troubleshoot-missing-storage-connections) +> Storage connections are not created directly with the project when your storage account has public network access set to disabled. These are created instead when a first user accesses AI Studio over a private network connection. [Troubleshoot storage connections](troubleshoot-secure-connection-project.md#troubleshoot-missing-storage-connections) ## Next steps |
ai-studio | Deploy Models Serverless | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-serverless.md | In this section, you create an endpoint with the name **meta-llama3-8b-qwerty**. ## Use the serverless API endpoint -Models deployed in Azure Machine Learning and Azure AI studio in Serverless API endpoints support the [Azure AI Model Inference API](../reference/reference-model-inference-api.md) that exposes a common set of capabilities for foundational models and that can be used by developers to consume predictions from a diverse set of models in a uniform and consistent way. +Models deployed in Azure Machine Learning and Azure AI Studio in Serverless API endpoints support the [Azure AI Model Inference API](../reference/reference-model-inference-api.md) that exposes a common set of capabilities for foundational models and that can be used by developers to consume predictions from a diverse set of models in a uniform and consistent way. Read more about the [capabilities of this API](../reference/reference-model-inference-api.md#capabilities) and how [you can use it when building applications](../reference/reference-model-inference-api.md#getting-started). Read more about the [capabilities of this API](../reference/reference-model-infe Endpoints for models deployed as Serverless APIs follow the public network access (PNA) flag setting of the AI Studio Hub that has the project in which the deployment exists. To secure your MaaS endpoint, disable the PNA flag on your AI Studio Hub. You can secure inbound communication from a client to your endpoint by using a private endpoint for the hub. -To set the PNA flag for the Azure AI hub: +To set the PNA flag for the Azure AI Studio hub: 1. Go to the [Azure portal](https://portal.azure.com).-2. Search for the Resource group to which the hub belongs, and select your Azure AI hub from the resources listed for this Resource group. -3. On the hub Overview page, use the left navigation pane to go to Settings > Networking. +2. Search for the Resource group to which the hub belongs, and select the **Azure AI hub** from the resources listed for this resource group. +3. From the hub **Overview** page on the left menu, select **Settings** > **Networking**. 4. Under the **Public access** tab, you can configure settings for the public network access flag. 5. Save your changes. Your changes might take up to five minutes to propagate. |
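The Azure AI Model Inference API exposed by serverless API endpoints can be consumed in a uniform way from Python. A minimal sketch, assuming the `azure-ai-inference` package; the endpoint URL and key environment variables are placeholders for your own deployment.

```python
# Call a serverless API endpoint through the Azure AI Model Inference API.
import os
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint=os.environ["SERVERLESS_ENDPOINT"],  # URL of your serverless API endpoint
    credential=AzureKeyCredential(os.environ["SERVERLESS_KEY"]),
)

response = client.complete(
    messages=[
        SystemMessage(content="You are a concise assistant."),
        UserMessage(content="List three uses for a serverless model endpoint."),
    ],
    max_tokens=200,
)
print(response.choices[0].message.content)
```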
ai-studio | Ai Template Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/develop/ai-template-get-started.md | Start with our sample applications! Choose the right template for your needs, th | Template | App host | Tech stack | Description | | -- | -| -- | |-| [Contoso Chat Retail copilot with Azure AI Studio](https://github.com/Azure-Samples/contoso-chat) | [Azure AI Studio online endpoints](../../../machine-learning/concept-endpoints-online.md) | [Azure Cosmos DB](../../../cosmos-db/index-overview.md), [Azure Managed Identity](/entr), Bicep | A retailer conversation agent that can answer questions grounded in your product catalog and customer order history. This template uses a retrieval augmented generation architecture with cutting-edge models for chat completion, chat evaluation, and embeddings. Build, evaluate, and deploy, an end-to-end solution with a single command. | +| [Contoso Chat Retail copilot with Azure AI Studio](https://github.com/Azure-Samples/contoso-chat) | [Azure AI Studio online endpoints](../../../machine-learning/concept-endpoints-online.md) | [Azure Cosmos DB](/azure/cosmos-db/index-overview), [Azure Managed Identity](/entr), Bicep | A retailer conversation agent that can answer questions grounded in your product catalog and customer order history. This template uses a retrieval augmented generation architecture with cutting-edge models for chat completion, chat evaluation, and embeddings. Build, evaluate, and deploy, an end-to-end solution with a single command. | | [Process Automation: speech to text and summarization with Azure AI Studio](https://github.com/Azure-Samples/summarization-openai-python-prompflow) | [Azure AI Studio online endpoints](../../../machine-learning/concept-endpoints-online.md) | [Azure Managed Identity](/entr), [Azure AI speech to text service](../../../ai-services/speech-service/index-speech-to-text.yml), Bicep | An app for workers to report issues via text or speech, translating audio to text, summarizing it, and specify the relevant department. | | [Multi-Modal Creative Writing copilot with Dalle](https://github.com/Azure-Samples/agent-openai-python-prompty) | [Azure AI Studio online endpoints](../../../machine-learning/concept-endpoints-online.md) | [Azure AI Search](../../../search/search-what-is-azure-search.md), [Azure OpenAI Service](../../../ai-services/openai/overview.md), Bicep | demonstrates how to create and work with AI agents. The app takes a topic and instruction input and then calls a research agent, writer agent, and editor agent. | | [Assistant API Analytics Copilot with Python and Azure AI Studio](https://github.com/Azure-Samples/assistant-data-openai-python-promptflow) | [Azure AI Studio online endpoints](../../../machine-learning/concept-endpoints-online.md) | [Azure Managed Identity](/entr), Bicep| A data analytics chatbot based on the Assistants API. The chatbot can answer questions in natural language, and interpret them as queries on an example sales dataset. | Start with our sample applications! 
Choose the right template for your needs, th | Template | App host | Tech stack | Description | | -- | -| -- | -- |-| [Contoso Chat Retail copilot with .NET and Semantic Kernel](https://github.com/Azure-Samples/contoso-chat-csharp-prompty) | [Azure Container Apps](../../../container-apps/overview.md) | [Azure Cosmos DB](../../../cosmos-db/index-overview.md), [Azure Monitor](../../../azure-monitor/overview.md), [Azure Managed Identity](/entr), [Semantic Kernel](/semantic-kernel/overview/?tabs=Csharp), Bicep | A retailer conversation agent that can answer questions grounded in your product catalog and customer order history. This template uses a retrieval augmented generation architecture with cutting-edge models for chat completion, chat evaluation, and embeddings. Build, evaluate, and deploy, an end-to-end solution with a single command. | +| [Contoso Chat Retail copilot with .NET and Semantic Kernel](https://github.com/Azure-Samples/contoso-chat-csharp-prompty) | [Azure Container Apps](../../../container-apps/overview.md) | [Azure Cosmos DB](/azure/cosmos-db/index-overview), [Azure Monitor](../../../azure-monitor/overview.md), [Azure Managed Identity](/entr), [Semantic Kernel](/semantic-kernel/overview/?tabs=Csharp), Bicep | A retailer conversation agent that can answer questions grounded in your product catalog and customer order history. This template uses a retrieval augmented generation architecture with cutting-edge models for chat completion, chat evaluation, and embeddings. Build, evaluate, and deploy, an end-to-end solution with a single command. | | [Process Automation: speech to text and summarization with .NET and GPT 3.5 Turbo](https://github.com/Azure-Samples/summarization-openai-csharp-prompty) | [Azure Container Apps](../../../container-apps/overview.md) | [Azure Managed Identity](/entr), [Azure AI speech to text service](../../../ai-services/speech-service/index-speech-to-text.yml), Bicep | An app for workers to report issues via text or speech, translating audio to text, summarizing it, and specify the relevant department. | |
ai-studio | Flow Evaluate Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/develop/flow-evaluate-sdk.md | ml_client.evaluators.download("answer_len_uploaded", version=1, download_path=". evaluator = load_flow(os.path.join("answer_len_uploaded", flex_flow_path)) ``` -After logging your custom evaluator to your AI project, you can view it in your [Evaluator library](../evaluate-generative-ai-app.md#view-and-manage-the-evaluators-in-the-evaluator-library) under Evaluation tab in AI studio. +After logging your custom evaluator to your AI Studio project, you can view it in your [Evaluator library](../evaluate-generative-ai-app.md#view-and-manage-the-evaluators-in-the-evaluator-library) under Evaluation tab in AI Studio. ### Prompt-based evaluators ml_client.evaluators.download("prompty_uploaded", version=1, download_path=".") evaluator = load_flow(os.path.join("prompty_uploaded", "apology.prompty")) ``` -After logging your custom evaluator to your AI project, you can view it in your [Evaluator library](../evaluate-generative-ai-app.md#view-and-manage-the-evaluators-in-the-evaluator-library) under Evaluation tab in AI studio. +After logging your custom evaluator to your AI Studio project, you can view it in your [Evaluator library](../evaluate-generative-ai-app.md#view-and-manage-the-evaluators-in-the-evaluator-library) under **Evaluation** tab in AI Studio. ## Evaluate on test dataset using `evaluate()` result = evaluate( "ground_truth": "${data.truth}" } },- # Optionally provide your AI Studio project information to track your evaluation results in your Azure AI studio project + # Optionally provide your AI Studio project information to track your evaluation results in your Azure AI Studio project azure_ai_project = azure_ai_project, # Optionally provide an output path to dump a json of metric summary, row level data and metric and studio URL output_path="./myevalresults.json" |
ai-studio | Index Build Consume Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/develop/index-build-consume-sdk.md | You must have: - An [Azure AI Search service connection](../../how-to/connections-add.md#create-a-new-connection) to index the sample product and customer data. If you don't have an Azure AI Search service, you can create one from the [Azure portal](https://portal.azure.com/) or see the instructions [here](../../../search/search-create-service-portal.md). - Models for embedding: - You can use an ada-002 embedding model from Azure OpenAI. The instructions to deploy can be found [here](../deploy-models-openai.md).- - OR you can use any another embedding model deployed in your AI studio project. In this example we use Cohere multi-lingual embedding. The instructions to deploy this model can be found [here](../deploy-models-cohere-embed.md). + - OR you can use any another embedding model deployed in your AI Studio project. In this example we use Cohere multi-lingual embedding. The instructions to deploy this model can be found [here](../deploy-models-cohere-embed.md). ## Build and consume an index locally local_index_aoai=build_index( The above code builds an index locally. It uses environment variables to get the AI Search service and also to connect to the Azure OpenAI embedding model. -### Build an index locally using other embedding models deployed in your AI studio project +### Build an index locally using other embedding models deployed in your AI Studio project -To create an index that uses an embedding model deployed in your AI studio project, we configure the connection to the model using a `ConnectionConfig` as shown below. The `subscription`, `resource_group` and `workspace` refers to the project where the embedding model is installed. The `connection_name` refers to the connection name for the model, which can be found in the AI Studio project settings page. +To create an index that uses an embedding model deployed in your AI Studio project, we configure the connection to the model using a `ConnectionConfig` as shown below. The `subscription`, `resource_group` and `workspace` refers to the project where the embedding model is installed. The `connection_name` refers to the connection name for the model, which can be found in the AI Studio project settings page. ```python from promptflow.rag.config import ConnectionConfig embeddings_model_config = IndexModelConfiguration.from_connection( deployment_name="text-embedding-ada-002") ``` -You can connect to embedding model deployed in your AI studio project (non Azure OpenAI models) using the serverless connection. +You can connect to embedding model deployed in your AI Studio project (non Azure OpenAI models) using the serverless connection. ```python from azure.ai.ml.entities import IndexModelConfiguration |
ai-studio | Disaster Recovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/disaster-recovery.md | For more information, see [Availability zone service and regional support](/azur Determine the level of business continuity that you're aiming for. The level might differ between the components of your solution. For example, you might want to have a hot/hot configuration for production pipelines or model deployments, and hot/cold for development. -Azure AI studio is a regional service and stores data both service-side and on a storage account in your subscription. If a regional disaster occurs, service data can't be recovered. But you can recover the data stored by the service on the storage account in your subscription given storage redundancy is enforced. Service-side stored data is mostly metadata (tags, asset names, descriptions). Stored on your storage account is typically non-metadata, for example, uploaded data. +Azure AI Studio is a regional service and stores data both service-side and on a storage account in your subscription. If a regional disaster occurs, service data can't be recovered. But you can recover the data stored by the service on the storage account in your subscription given storage redundancy is enforced. Service-side stored data is mostly metadata (tags, asset names, descriptions). Stored on your storage account is typically non-metadata, for example, uploaded data. For connections, we recommend creating two separate resources in two distinct regions and then create two connections for the hub. For example, if AI Services is a critical resource for business continuity, creating two AI Services resources and two connections for the hub, would be a good strategy for business continuity. With this configuration, if one region goes down there's still one region operational. |
ai-studio | Fine Tune Model Llama | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/fine-tune-model-llama.md | Fine-tuning of Llama 2 models is currently supported in projects located in West An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.-- An [Azure AI hub resource](../how-to/create-azure-ai-resource.md).+- An [Azure AI Studio hub](../how-to/create-azure-ai-resource.md). > [!IMPORTANT]- > For Meta Llama 3.1 models, the pay-as-you-go model fine-tune offering is only available with AI hubs created in **West US 3** regions. + > For Meta Llama 3.1 models, the pay-as-you-go model fine-tune offering is only available with hubs created in **West US 3** regions. -- An [Azure AI project](../how-to/create-projects.md) in Azure AI Studio.+- An [Azure AI Studio project](../how-to/create-projects.md) in Azure AI Studio. - Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Studio. To perform the steps in this article, your user account must be assigned the __owner__ or __contributor__ role for the Azure subscription. Alternatively, your account can be assigned a custom role that has the following permissions: - - On the Azure subscriptionΓÇöto subscribe the Azure AI project to the Azure Marketplace offering, once for each project, per offering: + - On the Azure subscriptionΓÇöto subscribe the AI Studio project to the Azure Marketplace offering, once for each project, per offering: - `Microsoft.MarketplaceOrdering/agreements/offers/plans/read` - `Microsoft.MarketplaceOrdering/agreements/offers/plans/sign/action` - `Microsoft.MarketplaceOrdering/offerTypes/publishers/offers/plans/agreements/read` Fine-tuning of Llama 2 models is currently supported in projects located in West - `Microsoft.SaaS/resources/read` - `Microsoft.SaaS/resources/write` - - On the Azure AI projectΓÇöto deploy endpoints (the Azure AI Developer role contains these permissions already): + - On the AI Studio projectΓÇöto deploy endpoints (the Azure AI Developer role contains these permissions already): - `Microsoft.MachineLearningServices/workspaces/marketplaceModelSubscriptions/*` - `Microsoft.MachineLearningServices/workspaces/serverlessEndpoints/*` |
ai-studio | Fine Tune Phi 3 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/fine-tune-phi-3.md | The model underwent a rigorous enhancement process, incorporating both supervise ## [Phi-3-mini](#tab/phi-3-mini) -The following models are available in Azure AI studio for Phi 3 when fine-tuning as a service with pay-as-you-go: +The following models are available in Azure AI Studio for Phi 3 when fine-tuning as a service with pay-as-you-go: - `Phi-3-mini-4k-instruct` (preview) - `Phi-3-mini-128k-instruct` (preview) Fine-tuning of Phi-3 models is currently supported in projects located in East U ## [Phi-3-medium](#tab/phi-3-medium) -The following models are available in Azure AI studio for Phi 3 when fine-tuning as a service with pay-as-you-go: +The following models are available in Azure AI Studio for Phi 3 when fine-tuning as a service with pay-as-you-go: - `Phi-3-medium-4k-instruct` (preview) - `Phi-3-medium-128k-instruct` (preview) Verify the subscription is registered to the `Microsoft.Network` resource provid 1. Sign in to the [Azure portal](https://portal.azure.com). 1. Select **Subscriptions** from the left menu. 1. Select the subscription you want to use.-1. Select **AI project settings** > **Resource providers** from the left menu. +1. Select **Settings** > **Resource providers** from the left menu. 1. Confirm that **Microsoft.Network** is in the list of resource providers. Otherwise add it. To fine-tune a Phi-3 model: 1. On the model's **Details** page, select **fine-tune**. 1. Select the project in which you want to fine-tune your models. To use the pay-as-you-go model fine-tune offering, your workspace must belong to the **East US 2** region.-1. On the fine-tune wizard, select the link to **Azure AI studio Terms** to learn more about the terms of use. You can also select the **Azure AI studio offer details** tab to learn about pricing for the selected model. -1. If this is your first time fine-tuning the model in the project, you have to subscribe your project for the particular offering (for example, Phi-3-mini-128k-instruct) from Azure AI studio. This step requires that your account has the Azure subscription permissions and resource group permissions listed in the prerequisites. Each project has its own subscription to the particular Azure AI studio offering, which allows you to control and monitor spending. Select **Subscribe and fine-tune**. +1. On the fine-tune wizard, select the link to **Azure AI Studio Terms** to learn more about the terms of use. You can also select the **Azure AI Studio offer details** tab to learn about pricing for the selected model. +1. If this is your first time fine-tuning the model in the project, you have to subscribe your project for the particular offering (for example, Phi-3-mini-128k-instruct) from Azure AI Studio. This step requires that your account has the Azure subscription permissions and resource group permissions listed in the prerequisites. Each project has its own subscription to the particular Azure AI Studio offering, which allows you to control and monitor spending. Select **Subscribe and fine-tune**. > [!NOTE]- > Subscribing a project to a particular Azure AI studio offering (in this case, Phi-3-mini-128k-instruct) requires that your account has **Contributor** or **Owner** access at the subscription level where the project is created. 
Alternatively, your user account can be assigned a custom role that has the Azure subscription permissions and resource group permissions listed in the [prerequisites](#prerequisites). + > Subscribing a project to a particular Azure AI Studio offering (in this case, Phi-3-mini-128k-instruct) requires that your account has **Contributor** or **Owner** access at the subscription level where the project is created. Alternatively, your user account can be assigned a custom role that has the Azure subscription permissions and resource group permissions listed in the [prerequisites](#prerequisites). -1. Once you sign up the project for the particular Azure AI studio offering, subsequent fine-tuning of the _same_ offering in the _same_ project don't require subscribing again. Therefore, you don't need to have the subscription-level permissions for subsequent fine-tune jobs. If this scenario applies to you, select **Continue to fine-tune**. +1. Once you sign up the project for the particular Azure AI Studio offering, subsequent fine-tuning of the _same_ offering in the _same_ project don't require subscribing again. Therefore, you don't need to have the subscription-level permissions for subsequent fine-tune jobs. If this scenario applies to you, select **Continue to fine-tune**. 1. Enter a name for your fine-tuned model and the optional tags and description. 1. Select training data to fine-tune your model. See [data preparation](#data-preparation) for more information. To fine-tune a Phi-3 model: 1. On the model's **Details** page, select **fine-tune**. 1. Select the project in which you want to fine-tune your models. To use the pay-as-you-go model fine-tune offering, your workspace must belong to the **East US 2** region.-1. On the fine-tune wizard, select the link to **Azure AI studio Terms** to learn more about the terms of use. You can also select the **Azure AI studio offer details** tab to learn about pricing for the selected model. -1. If this is your first time fine-tuning the model in the project, you have to subscribe your project for the particular offering (for example, Phi-3-medium-128k-instruct) from Azure AI studio. This step requires that your account has the Azure subscription permissions and resource group permissions listed in the prerequisites. Each project has its own subscription to the particular Azure AI studio offering, which allows you to control and monitor spending. Select **Subscribe and fine-tune**. +1. On the fine-tune wizard, select the link to **Azure AI Studio Terms** to learn more about the terms of use. You can also select the **Azure AI Studio offer details** tab to learn about pricing for the selected model. +1. If this is your first time fine-tuning the model in the project, you have to subscribe your project for the particular offering (for example, Phi-3-medium-128k-instruct) from Azure AI Studio. This step requires that your account has the Azure subscription permissions and resource group permissions listed in the prerequisites. Each project has its own subscription to the particular Azure AI Studio offering, which allows you to control and monitor spending. Select **Subscribe and fine-tune**. > [!NOTE]- > Subscribing a project to a particular Azure AI studio offering (in this case, Phi-3-mini-128k-instruct) requires that your account has **Contributor** or **Owner** access at the subscription level where the project is created. 
Alternatively, your user account can be assigned a custom role that has the Azure subscription permissions and resource group permissions listed in the [prerequisites](#prerequisites). + > Subscribing a project to a particular Azure AI Studio offering (in this case, Phi-3-mini-128k-instruct) requires that your account has **Contributor** or **Owner** access at the subscription level where the project is created. Alternatively, your user account can be assigned a custom role that has the Azure subscription permissions and resource group permissions listed in the [prerequisites](#prerequisites). -1. Once you sign up the project for the particular Azure AI studio offering, subsequent fine-tuning of the _same_ offering in the _same_ project don't require subscribing again. Therefore, you don't need to have the subscription-level permissions for subsequent fine-tune jobs. If this scenario applies to you, select **Continue to fine-tune**. +1. Once you sign up the project for the particular Azure AI Studio offering, subsequent fine-tuning of the _same_ offering in the _same_ project don't require subscribing again. Therefore, you don't need to have the subscription-level permissions for subsequent fine-tune jobs. If this scenario applies to you, select **Continue to fine-tune**. 1. Enter a name for your fine-tuned model and the optional tags and description. 1. Select training data to fine-tune your model. See [data preparation](#data-preparation) for more information. |
ai-studio | Model Catalog Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/model-catalog-overview.md | To set the PNA flag for the AI Studio hub: * If you have an AI Studio hub with a private endpoint created before July 11, 2024, new MaaS endpoints added to projects in this hub won't follow the networking configuration of the hub. Instead, you need to create a new private endpoint for the hub and create new serverless API deployments in the project so that the new deployments can follow the hub's networking configuration. -* If you have an AI studio hub with MaaS deployments created before July 11, 2024, and you enable a private endpoint on this hub, the existing MaaS deployments won't follow the hub's networking configuration. For serverless API deployments in the hub to follow the hub's networking configuration, you need to create the deployments again. +* If you have an AI Studio hub with MaaS deployments created before July 11, 2024, and you enable a private endpoint on this hub, the existing MaaS deployments won't follow the hub's networking configuration. For serverless API deployments in the hub to follow the hub's networking configuration, you need to create the deployments again. * Currently, [Azure OpenAI On Your Data](/azure/ai-services/openai/concepts/use-your-data) support isn't available for MaaS deployments in private hubs, because private hubs have the PNA flag disabled. |
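For context on the public network access (PNA) flag discussed above, an AI Studio hub is managed through the Azure Machine Learning workspace surface, so the flag can typically be toggled from the CLI as well as the portal. This is a hedged sketch rather than a step from the linked article; it assumes the `ml` CLI extension is installed and uses placeholder hub and resource group names.

```bash
# Assumption: the AI Studio hub is addressable through the ML workspace CLI surface.
# Disable public network access on the hub (names are placeholders).
az ml workspace update \
  --name my-ai-studio-hub \
  --resource-group my-rg \
  --public-network-access Disabled
```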
api-management | Cosmosdb Data Source Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cosmosdb-data-source-policy.md | -The `cosmosdb-data-source` resolver policy resolves data for an object type and field in a GraphQL schema by using a [Cosmos DB](../cosmos-db/introduction.md) data source. The schema must be imported to API Management as a GraphQL API. +The `cosmosdb-data-source` resolver policy resolves data for an object type and field in a GraphQL schema by using a [Cosmos DB](/azure/cosmos-db/introduction) data source. The schema must be imported to API Management as a GraphQL API. Use the policy to configure a single query request, read request, delete request, or write request and an optional response from the Cosmos DB data source. Use the policy to configure a single query request, read request, delete request |Name|Description|Required| |-|--|--| | [connection-info](#connection-info-elements) | Specifies connection to container in Cosmos DB database. | Yes |-| [query-request](#query-request-attributes) | Specifies settings for a [query request](../cosmos-db/nosql/how-to-dotnet-query-items.md) to Cosmos DB container. | Configure one of `query-request`, `read-request`, `delete-request`, or `write-request` | -| [read-request](#read-request-elements) | Specifies settings for a [read request](../cosmos-db/nosql/how-to-dotnet-read-item.md) to Cosmos DB container. | Configure one of `query-request`, `read-request`, `delete-request`, or `write-request` | +| [query-request](#query-request-attributes) | Specifies settings for a [query request](/azure/cosmos-db/nosql/how-to-dotnet-query-items) to Cosmos DB container. | Configure one of `query-request`, `read-request`, `delete-request`, or `write-request` | +| [read-request](#read-request-elements) | Specifies settings for a [read request](/azure/cosmos-db/nosql/how-to-dotnet-read-item) to Cosmos DB container. | Configure one of `query-request`, `read-request`, `delete-request`, or `write-request` | | [delete-request](#delete-request-attributes) | Specifies settings for a delete request to Cosmos DB container. | Configure one of `query-request`, `read-request`, `delete-request`, or `write-request` | | [write-request](#write-request-attributes) | Specifies settings for a write request to Cosmos DB container. | Configure one of `query-request`, `read-request`, `delete-request`, or `write-request` | | [response](#response-elements) | Optionally specifies child policies to configure the resolver's response. If not specified, the response is returned from Cosmos DB as JSON. | No | Use the policy to configure a single query request, read request, delete request |-|--|--| | sql-statement | A SQL statement for the query request. | No | | parameters | A list of query parameters, in [parameter](#parameter-attributes) subelements, for the query request. | No |-| [partition-key](#partition-key-attributes) | A Cosmos DB [partition key](../cosmos-db/resource-model.md#azure-cosmos-db-containers) to route the query to the location in the container. | No | -| [paging](#paging-elements) | Specifies settings to split query results into multiple [pages](../cosmos-db/nosql/query/pagination.md). | No | +| [partition-key](#partition-key-attributes) | A Cosmos DB [partition key](/azure/cosmos-db/resource-model#azure-cosmos-db-containers) to route the query to the location in the container. 
| No | +| [paging](#paging-elements) | Specifies settings to split query results into multiple [pages](/azure/cosmos-db/nosql/query/pagination). | No | #### parameter attributes Use the policy to configure a single query request, read request, delete request | Name|Description|Required| |-|--|--|-| [max-item-count](#max-item-count-attribute) | Specifies the [maximum number of items](../cosmos-db/nosql/query/pagination.md) returned by the query. Set to -1 if you don't want to place a limit on the number of results per query execution. | Yes | -| [continuation-token](#continuation-token-attribute) | Specifies the [continuation token](../cosmos-db/nosql/query/pagination.md#continuation-tokens) to attach to the query to get the next set of results. | Yes | +| [max-item-count](#max-item-count-attribute) | Specifies the [maximum number of items](/azure/cosmos-db/nosql/query/pagination) returned by the query. Set to -1 if you don't want to place a limit on the number of results per query execution. | Yes | +| [continuation-token](#continuation-token-attribute) | Specifies the [continuation token](/azure/cosmos-db/nosql/query/pagination#continuation-tokens) to attach to the query to get the next set of results. | Yes | #### max-item-count attribute Use the policy to configure a single query request, read request, delete request | Attribute | Description | Required | Default | | -- | - | -- | - |-| consistency-level | String. Sets the Cosmos DB [consistency level](../cosmos-db/consistency-levels.md) of the delete request. | No | N/A | -| pre-trigger | String. Identifier of a [pre-trigger](../cosmos-db/nosql/how-to-use-stored-procedures-triggers-udfs.md#how-to-run-pre-triggers) function that is registered in your Cosmos DB container. | No | N/A | -| post-trigger | String. Identifier of a [post-trigger](../cosmos-db/nosql/how-to-use-stored-procedures-triggers-udfs.md#how-to-run-post-triggers) function that is registered in your Cosmos DB container. | No | N/A | +| consistency-level | String. Sets the Cosmos DB [consistency level](/azure/cosmos-db/consistency-levels) of the delete request. | No | N/A | +| pre-trigger | String. Identifier of a [pre-trigger](/azure/cosmos-db/nosql/how-to-use-stored-procedures-triggers-udfs#how-to-run-pre-triggers) function that is registered in your Cosmos DB container. | No | N/A | +| post-trigger | String. Identifier of a [post-trigger](/azure/cosmos-db/nosql/how-to-use-stored-procedures-triggers-udfs#how-to-run-post-triggers) function that is registered in your Cosmos DB container. | No | N/A | ### delete-request elements Use the policy to configure a single query request, read request, delete request |-|--|--| | id | Identifier of the item to delete in the container. | Yes | | [partition-key](#partition-key-attributes) | A partition key for the location of the item in the container. | No | -| [etag](#etag-attribute) | Entity tag for the item in the container, used for [optimistic concurrency control](../cosmos-db/nosql/database-transactions-optimistic-concurrency.md#implementing-optimistic-concurrency-control-using-etag-and-http-headers). | No | +| [etag](#etag-attribute) | Entity tag for the item in the container, used for [optimistic concurrency control](/azure/cosmos-db/nosql/database-transactions-optimistic-concurrency#implementing-optimistic-concurrency-control-using-etag-and-http-headers). 
| No | #### write-request attributes | Attribute | Description | Required | Default | | -- | - | -- | - | | type | The type of write request: `insert`, `replace`, or `upsert`. | No | `upsert` |-| consistency-level | String. Sets the Cosmos DB [consistency level](../cosmos-db/consistency-levels.md) of the write request. | No | N/A | -| indexing-directive | The [indexing policy](../cosmos-db/index-policy.md) that determines how the container's items should be indexed. | No | `default` | -| pre-trigger | String. Identifier of a [pre-trigger](../cosmos-db/nosql/how-to-use-stored-procedures-triggers-udfs.md#how-to-run-pre-triggers) function that is registered in your Cosmos DB container. | No | N/A | -| post-trigger | String. Identifier of a [post-trigger](../cosmos-db/nosql/how-to-use-stored-procedures-triggers-udfs.md#how-to-run-post-triggers) function that is registered in your Cosmos DB container. | No | N/A | +| consistency-level | String. Sets the Cosmos DB [consistency level](/azure/cosmos-db/consistency-levels) of the write request. | No | N/A | +| indexing-directive | The [indexing policy](/azure/cosmos-db/index-policy) that determines how the container's items should be indexed. | No | `default` | +| pre-trigger | String. Identifier of a [pre-trigger](/azure/cosmos-db/nosql/how-to-use-stored-procedures-triggers-udfs#how-to-run-pre-triggers) function that is registered in your Cosmos DB container. | No | N/A | +| post-trigger | String. Identifier of a [post-trigger](/azure/cosmos-db/nosql/how-to-use-stored-procedures-triggers-udfs#how-to-run-post-triggers) function that is registered in your Cosmos DB container. | No | N/A | ### write-request elements |Name|Description|Required| |-|--|--| | id | Identifier of the item in the container. | Yes when `type` is `replace`. |-| [etag](#etag-attribute) | Entity tag for the item in the container, used for [optimistic concurrency control](../cosmos-db/nosql/database-transactions-optimistic-concurrency.md#implementing-optimistic-concurrency-control-using-etag-and-http-headers). | No | +| [etag](#etag-attribute) | Entity tag for the item in the container, used for [optimistic concurrency control](/azure/cosmos-db/nosql/database-transactions-optimistic-concurrency#implementing-optimistic-concurrency-control-using-etag-and-http-headers). | No | | [set-body](set-body-policy.md) | Sets the body in the write request. If not provided, the request payload will map arguments into JSON format.| No | ### response elements documents.azure.com:443/; ### Construct parameter input for Cosmos DB query -The following examples show ways to construct Cosmos DB [parameterized queries](../cosmos-db/nosql/query/parameterized-queries.md) using policy expressions. Choose a method based on the form of your parameter input. +The following examples show ways to construct Cosmos DB [parameterized queries](/azure/cosmos-db/nosql/query/parameterized-queries) using policy expressions. Choose a method based on the form of your parameter input. The examples are based on the following sample GraphQL schema, and generate the corresponding Cosmos DB parameterized query. |
app-service | Side By Side Migrate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/side-by-side-migrate.md | Once you're ready to redirect traffic, you can complete the final step of the mi > [!NOTE] > It's important to complete this step as soon as possible. When your App Service Environment is in the hybrid state, it's unable to receive platform upgrades and security patches, which makes it more vulnerable to instability and security threats. >+> **You have 14 days to complete this step. After 14 days, the platform will automatically complete the migration and delete your old environment. If you need more time, you can open a support case to discuss your options**. +> If you discover any issues with your new App Service Environment v3, don't run the command to redirect customer traffic. This command also initiates the deletion of your App Service Environment v2. If you find an issue, contact support. az rest --method get --uri "${ASE_ID}?api-version=2022-03-01" --query properties ### 11. Redirect customer traffic, validate your App Service Environment v3, and complete migration -This step is your opportunity to test and validate your new App Service Environment v3. +This step is your opportunity to test and validate your new App Service Environment v3. ++> [!IMPORTANT] +> You have 14 days to complete this step. After 14 days, the platform will automatically complete the migration and delete your old environment. If you need more time, you can open a support case to discuss your options. +> Once you confirm your apps are working as expected, you can finalize the migration by running the following command. This command also deletes your old environment. |
app-service | Manage Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-backup.md | There are two types of backups in App Service. Automatic backups made for your a | Pricing tiers | **Basic**, **Standard**, **Premium**, **Isolated**. | **Basic**, **Standard**, **Premium**, **Isolated**. | | Configuration required | No. | Yes. | | Backup size | 30 GB. | 10 GB, 4 GB of which can be the linked database. |-| Linked database | Not backed up. | The following linked databases can be backed up: [SQL Database](/azure/azure-sql/database/), [Azure Database for MySQL](../mysql/index.yml), [Azure Database for PostgreSQL](../postgresql/index.yml), [MySQL in-app](https://azure.microsoft.com/blog/mysql-in-app-preview-app-service/). | +| Linked database | Not backed up. | The following linked databases can be backed up: [SQL Database](/azure/azure-sql/database/), [Azure Database for MySQL](/azure/mysql/), [Azure Database for PostgreSQL](/azure/postgresql/), [MySQL in-app](https://azure.microsoft.com/blog/mysql-in-app-preview-app-service/). | | [Storage account](../storage/index.yml) required | No. | Yes. | | Backup frequency | Hourly, not configurable. | Configurable. | | Retention | 30 days, not configurable. <br>- Days 1-3: hourly backups retained.<br>- Days 4-14: every third hourly backup retained.<br>- Days 15-30: every sixth hourly backup retained. | 0-30 days or indefinite. | When [backing up over an Azure Virtual Network](#back-up-and-restore-over-azure- Linked databases are backed up only for custom backups, up to the allowable maximum size. If the maximum backup size (10 GB) or the maximum database size (4 GB) is exceeded, your backup fails. Here are a few common reasons why your linked database isn't backed up: -* Backups of [TLS enabled Azure Database for MySQL](../mysql/concepts-ssl-connection-security.md) isn't supported. If a backup is configured, you get backup failures. -* Backups of [TLS enabled Azure Database for PostgreSQL](../postgresql/concepts-ssl-connection-security.md) isn't supported. If a backup is configured, you get backup failures. +* Backups of [TLS enabled Azure Database for MySQL](/azure/mysql/concepts-ssl-connection-security) isn't supported. If a backup is configured, you get backup failures. +* Backups of [TLS enabled Azure Database for PostgreSQL](/azure/postgresql/concepts-ssl-connection-security) isn't supported. If a backup is configured, you get backup failures. * In-app MySQL databases are automatically backed up without any configuration. If you make manual settings for in-app MySQL databases, such as adding connection strings, the backups might not work correctly. #### What happens if the backup size exceeds the allowable maximum? |
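To complement the automatic-versus-custom comparison above, a one-off custom backup can be triggered from the Azure CLI. This sketch isn't from the linked article; the resource group, app name, backup name, and SAS container URL are placeholders, and linked-database options are omitted.

```bash
# Trigger a custom (on-demand) backup to a storage container you provide via SAS URL.
# Placeholders: <rg>, <app-name>, <storage-account>, <container>, <sas-token>.
az webapp config backup create \
  --resource-group <rg> \
  --webapp-name <app-name> \
  --backup-name manual-backup-01 \
  --container-url "https://<storage-account>.blob.core.windows.net/<container>?<sas-token>"
```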
app-service | Manage Scale Up | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-scale-up.md | If your app depends on other services, such as Azure SQL Database or Azure Stora ![Navigate to resource group page to scale up your Azure app](./media/web-sites-scale/ResourceGroup.png) - To scale up the related resource, see the documentation for the specific resource type. For example, to scale up a single SQL Database, see [Scale single database resources in Azure SQL Database](/azure/azure-sql/database/single-database-scale). To scale up an Azure Database for MySQL resource, see [Scale MySQL resources](../mysql/concepts-pricing-tiers.md#scale-resources). + To scale up the related resource, see the documentation for the specific resource type. For example, to scale up a single SQL Database, see [Scale single database resources in Azure SQL Database](/azure/azure-sql/database/single-database-scale). To scale up an Azure Database for MySQL resource, see [Scale MySQL resources](/azure/mysql/concepts-pricing-tiers#scale-resources). <a name="OtherFeatures"></a> <a name="devfeatures"></a> |
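As a companion to the scale-up guidance quoted above, both the App Service plan and a dependent database can usually be scaled from the CLI. A minimal sketch with placeholder names and SKUs, not taken from the article:

```bash
# Scale up the App Service plan to a larger SKU (placeholder names and SKU).
az appservice plan update \
  --resource-group <rg> \
  --name <plan-name> \
  --sku P1V3

# Example of scaling a dependent Azure SQL database to a higher service objective.
az sql db update \
  --resource-group <rg> \
  --server <sql-server-name> \
  --name <database-name> \
  --service-objective S3
```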
app-service | Migrate Wordpress | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/migrate-wordpress.md | The prerequisite is that the WordPress on Linux Azure App Service must have been > [!NOTE]-> Azure Database for MySQL - Single Server is on the road to retirement by 16 September 2024. If your existing MySQL database is hosted on Azure Database for MySQL - Single Server, consider migrating to Azure Database for MySQL - Flexible Server using the following steps, or using [Azure Database Migration Service (DMS)](../mysql/single-server/whats-happening-to-mysql-single-server.md#migrate-from-single-server-to-flexible-server). +> Azure Database for MySQL - Single Server is on the road to retirement by 16 September 2024. If your existing MySQL database is hosted on Azure Database for MySQL - Single Server, consider migrating to Azure Database for MySQL - Flexible Server using the following steps, or using [Azure Database Migration Service (DMS)](/azure/mysql/single-server/whats-happening-to-mysql-single-server#migrate-from-single-server-to-flexible-server). > 6. If you migrate the database, import the SQL file downloaded from the source database into the database of your newly created WordPress site. You can do it via the PhpMyAdmin dashboard available at **\<sitename\>.azurewebsites.net/phpmyadmin**. If you're unable to one single large SQL file, separate the files into parts and try uploading again. Steps to import the database through phpmyadmin are described [here](https://docs.phpmyadmin.net/en/latest/import_export.html#import). |
app-service | Monitor App Service Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/monitor-app-service-reference.md | |
app-service | Monitor App Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/monitor-app-service.md | |
app-service | Overview App Gateway Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-app-gateway-integration.md | description: Learn how Application Gateway integrates with Azure App Service. ms.assetid: 073eb49c-efa1-4760-9f0c-1fecd5c251cc-+ Last updated 09/29/2023 |
app-service | Overview Manage Costs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-manage-costs.md | description: Learn how to plan for and manage costs for Azure App Service by usi -+ Last updated 06/23/2021 |
app-service | Overview Nat Gateway Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-nat-gateway-integration.md | |
app-service | Quickstart Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-java.md | -> [Connect to Azure Database for PostgreSQL with Java](../postgresql/connect-java.md) +> [Connect to Azure Database for PostgreSQL with Java](/azure/postgresql/connect-java) > [!div class="nextstepaction"] > [Set up CI/CD](deploy-continuous-deployment.md) |
app-service | Quickstart Wordpress | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-wordpress.md | -[WordPress](https://www.wordpress.org) is an open source Content Management System (CMS) used by over 40% of the web to create websites, blogs, and other applications. WordPress can be run on a few different Azure +[WordPress](https://www.wordpress.org) is an open source Content Management System (CMS) used by over 40% of the web to create websites, blogs, and other applications. WordPress can be run on a few different Azure -In this quickstart, you'll learn how to create and deploy your first [WordPress](https://www.wordpress.org/) site to [Azure App Service on Linux](overview.md#app-service-on-linux) with [Azure Database for MySQL - Flexible Server](../mysql/flexible-server/index.yml) using the [WordPress Azure Marketplace item by App Service](https://azuremarketplace.microsoft.com/marketplace/apps/WordPress.WordPress?tab=Overview). This quickstart uses the **Standard** tier for your app and a **Burstable, B2s** tier for your database, and incurs a cost for your Azure Subscription. For pricing, visit [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/linux/), [Azure Database for MySQL pricing](https://azure.microsoft.com/pricing/details/mysql/flexible-server/), [Content Delivery Network pricing](https://azure.microsoft.com/pricing/details/storage/blobs/), and [Azure Blob Storage pricing](https://azure.microsoft.com/pricing/details/storage/blobs/). +In this quickstart, you'll learn how to create and deploy your first [WordPress](https://www.wordpress.org/) site to [Azure App Service on Linux](overview.md#app-service-on-linux) with [Azure Database for MySQL - Flexible Server](/azure/mysql/flexible-server/) using the [WordPress Azure Marketplace item by App Service](https://azuremarketplace.microsoft.com/marketplace/apps/WordPress.WordPress?tab=Overview). This quickstart uses the **Standard** tier for your app and a **Burstable, B2s** tier for your database, and incurs a cost for your Azure Subscription. For pricing, visit [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/linux/), [Azure Database for MySQL pricing](https://azure.microsoft.com/pricing/details/mysql/flexible-server/), [Content Delivery Network pricing](https://azure.microsoft.com/pricing/details/storage/blobs/), and [Azure Blob Storage pricing](https://azure.microsoft.com/pricing/details/storage/blobs/). To complete this quickstart, you need an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs). When no longer needed, you can delete the resource group, App service, and all r - Database username and password of the MySQL Flexible Server are generated automatically. To retrieve these values after the deployment go to Application Settings section of the Configuration page in Azure App Service. The WordPress configuration is modified to use these [Application Settings](reference-app-settings.md#wordpress) to connect to the MySQL database. -- To change the MySQL database password, see [Reset admin password](../mysql/flexible-server/how-to-manage-server-portal.md#reset-admin-password). Whenever the MySQL database credentials are changed, the [Application Settings](reference-app-settings.md#wordpress) need to be updated. 
The [Application Settings for MySQL database](reference-app-settings.md#wordpress) begin with the **`DATABASE_`** prefix. For more information on updating MySQL passwords, see [WordPress on App Service](https://github.com/Azure/wordpress-linux-appservice/blob/main/WordPress/changing_mysql_database_password.md).+- To change the MySQL database password, see [Reset admin password](/azure/mysql/flexible-server/how-to-manage-server-portal#reset-admin-password). Whenever the MySQL database credentials are changed, the [Application Settings](reference-app-settings.md#wordpress) need to be updated. The [Application Settings for MySQL database](reference-app-settings.md#wordpress) begin with the **`DATABASE_`** prefix. For more information on updating MySQL passwords, see [WordPress on App Service](https://github.com/Azure/wordpress-linux-appservice/blob/main/WordPress/changing_mysql_database_password.md). ## Change WordPress admin password |
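The row above points out that rotating the MySQL flexible server password also requires updating the corresponding `DATABASE_*` application settings on the app. A hedged CLI sketch of that two-step rotation, using placeholder resource names:

```bash
# 1) Reset the admin password on the MySQL flexible server (placeholder names).
az mysql flexible-server update \
  --resource-group <rg> \
  --name <mysql-server-name> \
  --admin-password '<new-strong-password>'

# 2) Update the App Service application setting that WordPress reads.
az webapp config appsettings set \
  --resource-group <rg> \
  --name <app-name> \
  --settings DATABASE_PASSWORD='<new-strong-password>'
```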
app-service | Reference App Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/reference-app-settings.md | APACHE_RUN_GROUP | RUN sed -i 's!User ${APACHE_RUN_GROUP}!Group www-data!g' /etc > |`DATABASE_HOST`|Database|-|-|Database host used to connect to WordPress.| > |`DATABASE_NAME`|Database|-|-|Database name used to connect to WordPress.| > |`DATABASE_USERNAME`|Database|-|-|Database username used to connect to WordPress.|-> |`DATABASE_PASSWORD`|Database|-|-|Database password used to connect to the MySQL database. To change the MySQL database password, see [update admin password](../mysql/single-server/how-to-create-manage-server-portal.md#update-admin-password). Whenever the MySQL database password is changed, the Application Settings also need to be updated. | +> |`DATABASE_PASSWORD`|Database|-|-|Database password used to connect to the MySQL database. To change the MySQL database password, see [update admin password](/azure/mysql/single-server/how-to-create-manage-server-portal#update-admin-password). Whenever the MySQL database password is changed, the Application Settings also need to be updated. | > |`WORDPRESS_ADMIN_EMAIL`|Deployment only|-|-|WordPress admin email.| > |`WORDPRESS_ADMIN_PASSWORD`|Deployment only|-|-|WordPress admin password. This is only for deployment purposes. Modifying this value has no effect on the WordPress installation. To change the WordPress admin password, see [resetting your password](https://wordpress.org/support/article/resetting-your-password/#to-change-your-password).| > |`WORDPRESS_ADMIN_USER`|Deployment only|-|-|WordPress admin username| |
app-service | Samples Terraform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/samples-terraform.md | |
app-service | Powershell Deploy Private Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-deploy-private-endpoint.md | ms.assetid: e1cc08d5-91cf-49d7-8d0a-c0e7bd2046ac Last updated 12/06/2022 -+ |
app-service | Template Deploy Private Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/template-deploy-private-endpoint.md | ms.assetid: 49e460d0-7759-4ceb-b5a4-f1357e4fde56 Last updated 07/08/2020 -+ |
app-service | Terraform Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/terraform-backup.md | description: In this quickstart, you create an Azure Windows web app with a back Last updated 07/02/2024 -+ customer intent: As a Terraform user, I want to see how to create an Azure Windows web app with a backup schedule and a .NET application stack. |
app-service | Terraform Secure Backend Frontend | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/terraform-secure-backend-frontend.md | ms.assetid: 3e5d1bbd-5581-40cc-8f65-bc74f1802156 Last updated 12/06/2022 -+ |
app-service | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure App Service description: Lists Azure Policy Regulatory Compliance controls available for Azure App Service. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Last updated 02/06/2024 -+ |
app-service | Tutorial Connect App Access Sql Database As User Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-app-access-sql-database-as-user-dotnet.md | Title: 'Tutorial - Web app accesses SQL Database as the user' description: Secure database connectivity with Microsoft Entra authentication from .NET web app, using the signed-in user. Learn how to apply it to other Azure services. -+ ms.devlang: csharp |
app-service | Tutorial Connect Msi Azure Database | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-azure-database.md | +- [Azure Database for MySQL](/azure/mysql/) +- [Azure Database for PostgreSQL](/azure/postgresql/) > [!NOTE]-> This tutorial doesn't include guidance for [Azure Cosmos DB](../cosmos-db/index.yml), which supports Microsoft Entra authentication differently. For more information, see the Azure Cosmos DB documentation, such as [Use system-assigned managed identities to access Azure Cosmos DB data](../cosmos-db/managed-identity-based-authentication.yml). +> This tutorial doesn't include guidance for [Azure Cosmos DB](/azure/cosmos-db/), which supports Microsoft Entra authentication differently. For more information, see the Azure Cosmos DB documentation, such as [Use system-assigned managed identities to access Azure Cosmos DB data](/azure/cosmos-db/managed-identity-based-authentication). Managed identities in App Service make your app more secure by eliminating secrets from your app, such as credentials in the connection strings. This tutorial shows you how to connect to the above-mentioned databases from App Service using managed identities. The following Azure CLI command uses a `--client-type` parameter. # [Azure Database for MySQL](#tab/mysql-sc) > [!NOTE]-> For Azure Database for MySQL - Flexible Server, you must first [manually set up Microsoft Entra authentication](../mysql/flexible-server/how-to-azure-ad.md), which requires a separate user-assigned managed identity and specific Microsoft Graph permissions. This step can't be automated. +> For Azure Database for MySQL - Flexible Server, you must first [manually set up Microsoft Entra authentication](/azure/mysql/flexible-server/how-to-azure-ad), which requires a separate user-assigned managed identity and specific Microsoft Graph permissions. This step can't be automated. -1. Manually [set up Microsoft Entra authentication for Azure Database for MySQL - Flexible Server](../mysql/flexible-server/how-to-azure-ad.md). +1. Manually [set up Microsoft Entra authentication for Azure Database for MySQL - Flexible Server](/azure/mysql/flexible-server/how-to-azure-ad). 1. Optionally run the command `az webapp connection create mysql-flexible -h` to get the supported client types. To grant database permissions for a Microsoft Entra group, see documentation for Connecting to the Azure database requires additional settings and is beyond the scope of this tutorial. For more information, see one of the following links: -[Configure TLS connectivity in Azure Database for PostgreSQL - Single Server](../postgresql/concepts-ssl-connection-security.md) -[Configure SSL connectivity in your application to securely connect to Azure Database for MySQL](../mysql/howto-configure-ssl.md) +[Configure TLS connectivity in Azure Database for PostgreSQL - Single Server](/azure/postgresql/concepts-ssl-connection-security) +[Configure SSL connectivity in your application to securely connect to Azure Database for MySQL](/azure/mysql/howto-configure-ssl) ## Next steps |
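The row above mentions `az webapp connection create mysql-flexible -h` for discovering supported client types. A fuller invocation might look like the sketch below; treat the exact flag set (in particular `mysql-identity-id`) as an assumption based on the Service Connector CLI, use placeholder names, and remember that, as noted above, Microsoft Entra authentication on MySQL flexible server must be set up manually first with a user-assigned managed identity.

```bash
# Hedged sketch: connect an App Service app to MySQL flexible server with its system-assigned identity.
# Placeholders: <rg>, <app-name>, <mysql-rg>, <mysql-server>, <database>, <entra-admin-umi-resource-id>.
az webapp connection create mysql-flexible \
  --resource-group <rg> \
  --name <app-name> \
  --target-resource-group <mysql-rg> \
  --server <mysql-server> \
  --database <database> \
  --system-identity mysql-identity-id=<entra-admin-umi-resource-id> \
  --client-type java
```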
app-service | Tutorial Dotnetcore Sqldb App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-dotnetcore-sqldb-app.md | Last updated 06/30/2024 ms.devlang: csharp-+ zone_pivot_groups: app-service-portal-azd |
app-service | Tutorial Java Quarkus Postgresql App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-quarkus-postgresql-app.md | zone_pivot_groups: app-service-portal-azd # Tutorial: Build a Quarkus web app with Azure App Service on Linux and PostgreSQL -This tutorial shows how to build, configure, and deploy a secure [Quarkus](https://quarkus.io) application in Azure App Service that's connected to a PostgreSQL database (using [Azure Database for PostgreSQL](../postgresql/index.yml)). Azure App Service is a highly scalable, self-patching, web-hosting service that can easily deploy apps on Windows or Linux. When you're finished, you'll have a Quarkus app running on [Azure App Service on Linux](overview.md). +This tutorial shows how to build, configure, and deploy a secure [Quarkus](https://quarkus.io) application in Azure App Service that's connected to a PostgreSQL database (using [Azure Database for PostgreSQL](/azure/postgresql/)). Azure App Service is a highly scalable, self-patching, web-hosting service that can easily deploy apps on Windows or Linux. When you're finished, you'll have a Quarkus app running on [Azure App Service on Linux](overview.md). :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-browse-app-2.png" alt-text="Screenshot of Quarkus application storing data in PostgreSQL."::: |
app-service | Tutorial Java Spring Cosmosdb | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-spring-cosmosdb.md | -When you are finished, you will have a [Spring Boot](https://spring.io/projects/spring-boot) application storing data in [Azure Cosmos DB](../cosmos-db/index.yml) running on [Azure App Service on Linux](overview.md). +When you are finished, you will have a [Spring Boot](https://spring.io/projects/spring-boot) application storing data in [Azure Cosmos DB](/azure/cosmos-db/) running on [Azure App Service on Linux](overview.md). ![Spring Boot application storing data in Azure Cosmos DB](./media/tutorial-java-spring-cosmosdb/spring-todo-app-running-locally.jpg) az group delete --name <your-azure-group-name> --yes [Azure for Java Developers](/java/azure/) [Spring Boot](https://spring.io/projects/spring-boot), [Spring Data for Azure Cosmos DB](/azure/developer/java/spring-framework/configure-spring-boot-starter-java-app-with-cosmos-db), -[Azure Cosmos DB](../cosmos-db/introduction.md) and +[Azure Cosmos DB](/azure/cosmos-db/introduction) and [App Service Linux](overview.md). Learn more about running Java apps on App Service on Linux in the developer guide. |
app-service | Tutorial Java Tomcat Connect Managed Identity Postgresql Database | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-tomcat-connect-managed-identity-postgresql-database.md | -[Azure App Service](overview.md) provides a highly scalable, self-patching web hosting service in Azure. It also provides a [managed identity](overview-managed-identity.md) for your app, which is a turn-key solution for securing access to [Azure Database for PostgreSQL](../postgresql/index.yml) and other Azure services. Managed identities in App Service make your app more secure by eliminating secrets from your app, such as credentials in the environment variables. In this tutorial, you learn how to: +[Azure App Service](overview.md) provides a highly scalable, self-patching web hosting service in Azure. It also provides a [managed identity](overview-managed-identity.md) for your app, which is a turn-key solution for securing access to [Azure Database for PostgreSQL](/azure/postgresql/) and other Azure services. Managed identities in App Service make your app more secure by eliminating secrets from your app, such as credentials in the environment variables. In this tutorial, you learn how to: > [!div class="checklist"] > * Create a PostgreSQL database. |
app-service | Tutorial Java Tomcat Mysql App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-tomcat-mysql-app.md | -This tutorial shows how to build, configure, and deploy a secure Tomcat application in Azure App Service that connects to a MySQL database (using [Azure Database for MySQL](../mysql/index.yml)). Azure App Service is a highly scalable, self-patching, web-hosting service that can easily deploy apps on Windows or Linux. When you're finished, you'll have a Tomcat app running on [Azure App Service on Linux](overview.md). +This tutorial shows how to build, configure, and deploy a secure Tomcat application in Azure App Service that connects to a MySQL database (using [Azure Database for MySQL](/azure/mysql/)). Azure App Service is a highly scalable, self-patching, web-hosting service that can easily deploy apps on Windows or Linux. When you're finished, you'll have a Tomcat app running on [Azure App Service on Linux](overview.md). :::image type="content" source="./media/tutorial-java-tomcat-mysql-app/azure-portal-browse-app-2.png" alt-text="Screenshot of Tomcat application storing data in MySQL."::: |
app-service | Tutorial Nodejs Mongodb App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-nodejs-mongodb-app.md | Title: Deploy a Node.js web app using MongoDB to Azure description: This article shows you have to deploy a Node.js app using Express.js and a MongoDB database to Azure. Azure App Service is used to host the web application and Azure Cosmos DB to host the database using the 100% compatible MongoDB API built into Azure Cosmos DB. Last updated 09/06/2022-+ ms.role: developer ms.devlang: javascript -[Azure App Service](overview.md) provides a highly scalable, self-patching web hosting service using the Linux operating system. This tutorial shows how to create a secure Node.js app in Azure App Service that's connected to a [Azure Cosmos DB for MongoDB](../cosmos-db/mongodb/mongodb-introduction.md) database. When you're finished, you'll have an Express.js app running on Azure App Service on Linux. +[Azure App Service](overview.md) provides a highly scalable, self-patching web hosting service using the Linux operating system. This tutorial shows how to create a secure Node.js app in Azure App Service that's connected to a [Azure Cosmos DB for MongoDB](/azure/cosmos-db/mongodb/mongodb-introduction) database. When you're finished, you'll have an Express.js app running on Azure App Service on Linux. :::image type="content" source="./media/tutorial-nodejs-mongodb-app/app-diagram.png" alt-text="A diagram showing how the Express.js app will be deployed to Azure App Service and the MongoDB data will be hosted inside of Azure Cosmos DB." lightbox="./media/tutorial-nodejs-mongodb-app/app-diagram-large.png"::: |
app-service | Tutorial Php Mysql App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-php-mysql-app.md | Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps :::row::: :::column span="2"::: **Step 4:** Using the same steps in **Step 3**, create the following app settings:- - **MYSQL_ATTR_SSL_CA**: Use */home/site/wwwroot/ssl/DigiCertGlobalRootCA.crt.pem* as the value. This app setting points to the path of the [TLS/SSL certificate you need to access the MySQL server](../mysql/flexible-server/how-to-connect-tls-ssl.md#download-the-public-ssl-certificate). It's included in the sample repository for convenience. + - **MYSQL_ATTR_SSL_CA**: Use */home/site/wwwroot/ssl/DigiCertGlobalRootCA.crt.pem* as the value. This app setting points to the path of the [TLS/SSL certificate you need to access the MySQL server](/azure/mysql/flexible-server/how-to-connect-tls-ssl#download-the-public-ssl-certificate). It's included in the sample repository for convenience. - **LOG_CHANNEL**: Use *stderr* as the value. This setting tells Laravel to pipe logs to stderr, which makes it available to the App Service logs. - **APP_DEBUG**: Use *true* as the value. It's a [Laravel debugging variable](https://laravel.com/docs/10.x/errors#configuration) that enables debug mode pages. - **APP_KEY**: Use *base64:Dsz40HWwbCqnq0oxMsjq7fItmKIeBfCBGORfspaI1Kw=* as the value. It's a [Laravel encryption variable](https://laravel.com/docs/10.x/encryption#configuration). |
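The app settings listed in the row above can also be applied in a single CLI call instead of one at a time in the portal. The app and resource group names below are placeholders; the setting values are the ones quoted above.

```bash
# Apply the Laravel-related app settings described above (placeholder app and resource group).
az webapp config appsettings set \
  --resource-group <rg> \
  --name <app-name> \
  --settings \
    MYSQL_ATTR_SSL_CA="/home/site/wwwroot/ssl/DigiCertGlobalRootCA.crt.pem" \
    LOG_CHANNEL="stderr" \
    APP_DEBUG="true" \
    APP_KEY="base64:Dsz40HWwbCqnq0oxMsjq7fItmKIeBfCBGORfspaI1Kw="
```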
app-service | Tutorial Python Postgresql App Fastapi | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app-fastapi.md | zone_pivot_groups: app-service-portal-azd # Deploy a Python FastAPI web app with PostgreSQL in Azure -In this tutorial, you deploy a data-driven Python web app (**[FastAPI](https://fastapi.tiangolo.com/)** ) to **[Azure App Service](./overview.md#app-service-on-linux)** with the **[Azure Database for PostgreSQL](../postgresql/index.yml)** relational database service. Azure App Service supports [Python](https://www.python.org/downloads/) in a Linux server environment. +In this tutorial, you deploy a data-driven Python web app (**[FastAPI](https://fastapi.tiangolo.com/)** ) to **[Azure App Service](./overview.md#app-service-on-linux)** with the **[Azure Database for PostgreSQL](/azure/postgresql/)** relational database service. Azure App Service supports [Python](https://www.python.org/downloads/) in a Linux server environment. :::image type="content" border="False" source="./media/tutorial-python-postgresql-app-fastapi/python-postgresql-app-architecture-240px.png" lightbox="./media/tutorial-python-postgresql-app-fastapi/python-postgresql-app-architecture.png" alt-text="An architecture diagram showing an App Service with a PostgreSQL database in Azure."::: |
app-service | Tutorial Python Postgresql App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app.md | zone_pivot_groups: app-service-portal-azd # Deploy a Python (Django or Flask) web app with PostgreSQL in Azure -In this tutorial, you'll deploy a data-driven Python web app (**[Django](https://www.djangoproject.com/)** or **[Flask](https://flask.palletsprojects.com/)**) to **[Azure App Service](./overview.md#app-service-on-linux)** with the **[Azure Database for PostgreSQL](../postgresql/index.yml)** relational database service. Azure App Service supports [Python](https://www.python.org/downloads/) in a Linux server environment. +In this tutorial, you'll deploy a data-driven Python web app (**[Django](https://www.djangoproject.com/)** or **[Flask](https://flask.palletsprojects.com/)**) to **[Azure App Service](./overview.md#app-service-on-linux)** with the **[Azure Database for PostgreSQL](/azure/postgresql/)** relational database service. Azure App Service supports [Python](https://www.python.org/downloads/) in a Linux server environment. :::image type="content" border="False" source="./media/tutorial-python-postgresql-app/python-postgresql-app-architecture-240px.png" lightbox="./media/tutorial-python-postgresql-app/python-postgresql-app-architecture.png" alt-text="An architecture diagram showing an App Service with a PostgreSQL database in Azure."::: |
app-spaces | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-spaces/overview.md | App Spaces only requires information that's needed during the development proces - [Deploy an App Spaces starter app](quickstart-deploy-starter-app.md) - [Compare Container Apps with other Azure contain options](../container-apps/compare-options.md)-- [About Azure Cosmos DB](../cosmos-db/introduction.md)+- [About Azure Cosmos DB](/azure/cosmos-db/introduction) |
azure-arc | Restore Adventureworks Sample Db Into Postgresql Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/restore-adventureworks-sample-db-into-postgresql-server.md | kubectl exec <PostgreSQL pod name> -n <namespace name> -c postgres -- psql --use ## Suggested next steps - Read the concepts and How-to guides of Azure Database for PostgreSQL to distribute your data across multiple PostgreSQL server nodes and to benefit from all the power of Azure Database for PostgreSQL. :- * [Nodes and tables](../../postgresql/hyperscale/concepts-nodes.md) - * [Determine application type](../../postgresql/hyperscale/howto-app-type.md) - * [Choose a distribution column](../../postgresql/hyperscale/howto-choose-distribution-column.md) - * [Table colocation](../../postgresql/hyperscale/concepts-colocation.md) - * [Distribute and modify tables](../../postgresql/hyperscale/howto-modify-distributed-tables.md) - * [Design a multi-tenant database](../../postgresql/hyperscale/tutorial-design-database-multi-tenant.md)* - * [Design a real-time analytics dashboard](../../postgresql/hyperscale/tutorial-design-database-realtime.md)* + * [Nodes and tables](/azure/postgresql/hyperscale/concepts-nodes) + * [Determine application type](/azure/postgresql/hyperscale/howto-app-type) + * [Choose a distribution column](/azure/postgresql/hyperscale/howto-choose-distribution-column) + * [Table colocation](/azure/postgresql/hyperscale/concepts-colocation) + * [Distribute and modify tables](/azure/postgresql/hyperscale/howto-modify-distributed-tables) + * [Design a multi-tenant database](/azure/postgresql/hyperscale/tutorial-design-database-multi-tenant)* + * [Design a real-time analytics dashboard](/azure/postgresql/hyperscale/tutorial-design-database-realtime)* > \* In the documents above, skip the sections **Sign in to the Azure portal**, & **Create an Azure Database for PostgreSQL**. Implement the remaining steps in your Azure Arc deployment. Those sections are specific to the Azure Database for PostgreSQL server offered as a PaaS service in the Azure cloud but the other parts of the documents are directly applicable to your Azure Arc-enabled PostgreSQL server. |
azure-arc | Security Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/security-overview.md | Azure Arc resource bridge follows data residency regulations specific to each re ## Data encryption at rest -Azure Arc resource bridge stores resource information in Azure Cosmos DB. As described in [Encryption at rest in Azure Cosmos DB](../../cosmos-db/database-encryption-at-rest.md), all the data is encrypted at rest. +Azure Arc resource bridge stores resource information in Azure Cosmos DB. As described in [Encryption at rest in Azure Cosmos DB](/azure/cosmos-db/database-encryption-at-rest), all the data is encrypted at rest. ## Security audit logs |
azure-cache-for-redis | Cache Overview Vector Similarity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-overview-vector-similarity.md | Additionally, Redis is often an economical choice because it's already so common There are multiple other solutions on Azure for vector storage and search. Other solutions include: - [Azure AI Search](../search/vector-search-overview.md)-- [Azure Cosmos DB](../cosmos-db/mongodb/vcore/vector-search.md) using the MongoDB vCore API-- [Azure Database for PostgreSQL - Flexible Server](../postgresql/flexible-server/how-to-use-pgvector.md) using `pgvector`+- [Azure Cosmos DB](/azure/cosmos-db/mongodb/vcore/vector-search) using the MongoDB vCore API +- [Azure Database for PostgreSQL - Flexible Server](/azure/postgresql/flexible-server/how-to-use-pgvector) using `pgvector` ## Related content |
azure-fluid-relay | Customer Managed Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/concepts/customer-managed-keys.md | Title: Customer-managed keys for Azure Fluid Relay encryption description: Better understand the data encryption with CMK Last updated 10/08/2021-+ |
azure-fluid-relay | Data Encryption | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/concepts/data-encryption.md | Title: Data encryption in Azure Fluid Relay description: Better understand the data encryption in Fluid Relay Server Last updated 10/08/2021-+ # Data encryption in Azure Fluid Relay -Azure Fluid Relay leverages the encryption-at-rest capability of [Azure Kubernetes Service](/azure/aks/enable-host-encryption), [Azure Cosmos DB](../../cosmos-db/database-encryption-at-rest.md) and [Azure Blob Storage](../../storage/common/storage-service-encryption.md). The service-to-service communication between Azure Fluid Relay and these resources is TLS encrypted and is enclosed in with the Azure Virtual Network boundary, protected from external interference by Network Security Rules. +Azure Fluid Relay leverages the encryption-at-rest capability of [Azure Kubernetes Service](/azure/aks/enable-host-encryption), [Azure Cosmos DB](/azure/cosmos-db/database-encryption-at-rest) and [Azure Blob Storage](../../storage/common/storage-service-encryption.md). The service-to-service communication between Azure Fluid Relay and these resources is TLS encrypted and is enclosed in with the Azure Virtual Network boundary, protected from external interference by Network Security Rules. The diagram below shows at a high level how Azure Fluid Relay is implemented and how it handles data storage. |
azure-functions | Dotnet Isolated Process Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md | Each trigger and binding extension also has its own minimum version requirement, <sup>1</sup> For output scenarios in which you would use an SDK type, you should create and work with SDK clients directly instead of using an output binding. See [Register Azure clients](#register-azure-clients) for a dependency injection example. -<sup>2</sup> The Cosmos DB trigger uses the [Azure Cosmos DB change feed](../cosmos-db/change-feed.md) and exposes change feed items as JSON-serializable types. The absence of SDK types is by-design for this scenario. +<sup>2</sup> The Cosmos DB trigger uses the [Azure Cosmos DB change feed](/azure/cosmos-db/change-feed) and exposes change feed items as JSON-serializable types. The absence of SDK types is by-design for this scenario. > [!NOTE] > When using [binding expressions](./functions-bindings-expressions-patterns.md) that rely on trigger data, SDK types for the trigger itself cannot be used. |
azure-functions | Flex Consumption Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/flex-consumption-plan.md | Keep these other considerations in mind when using Flex Consumption plan during + Continuous deployment using GitHub Actions (`functions-action@v1`) + **Scale**: The lowest maximum scale in preview is `40`. The highest currently supported value is `1000`. + **Managed dependencies**: [Managed dependencies in PowerShell](functions-reference-powershell.md#dependency-management) aren't supported by Flex Consumption. You must instead [define your own custom modules](functions-reference-powershell.md#custom-modules).++ **Diagnostic settings**: Diagnostic settings are not currently supported. ## Related articles |
azure-functions | Functions Add Output Binding Cosmos Db Vs Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-add-output-binding-cosmos-db-vs-code.md | -This article shows you how to use Visual Studio Code to connect [Azure Cosmos DB](../cosmos-db/introduction.md) to the function you created in the previous quickstart article. The output binding that you add to this function writes data from the HTTP request to a JSON document stored in an Azure Cosmos DB container. +This article shows you how to use Visual Studio Code to connect [Azure Cosmos DB](/azure/cosmos-db/introduction) to the function you created in the previous quickstart article. The output binding that you add to this function writes data from the HTTP request to a JSON document stored in an Azure Cosmos DB container. ::: zone pivot="programming-language-csharp" Before you begin, you must complete the [quickstart: Create a C# function in Azure using Visual Studio Code](create-first-function-vs-code-csharp.md). If you already cleaned up resources at the end of that article, go through the steps again to recreate the function app and related resources in Azure. Before you get started, make sure to install the [Azure Databases extension](htt ## Create your Azure Cosmos DB account -Now, you create an Azure Cosmos DB account as a [serverless account type](../cosmos-db/serverless.md). This consumption-based mode makes Azure Cosmos DB a strong option for serverless workloads. +Now, you create an Azure Cosmos DB account as a [serverless account type](/azure/cosmos-db/serverless). This consumption-based mode makes Azure Cosmos DB a strong option for serverless workloads. 1. In Visual Studio Code, select **View** > **Command Palette...** then in the command palette search for `Azure Databases: Create Server...` Now, you create an Azure Cosmos DB account as a [serverless account type](../cos |Prompt| Selection| |--|--|- |**Select an Azure Database Server**| Choose **Core (NoSQL)** to create a document database that you can query by using a SQL syntax or a Query Copilot ([Preview](../cosmos-db/nosql/query/how-to-enable-use-copilot.md)) converting natural language prompts to queries. [Learn more about the Azure Cosmos DB](../cosmos-db/introduction.md). | + |**Select an Azure Database Server**| Choose **Core (NoSQL)** to create a document database that you can query by using a SQL syntax or a Query Copilot ([Preview](/azure/cosmos-db/nosql/query/how-to-enable-use-copilot)) converting natural language prompts to queries. [Learn more about the Azure Cosmos DB](/azure/cosmos-db/introduction). | |**Account name**| Enter a unique name to identify your Azure Cosmos DB account. The account name can use only lowercase letters, numbers, and hyphens (-), and must be between 3 and 31 characters long.|- |**Select a capacity model**| Select **Serverless** to create an account in [serverless](../cosmos-db/serverless.md) mode. + |**Select a capacity model**| Select **Serverless** to create an account in [serverless](/azure/cosmos-db/serverless) mode. |**Select a resource group for new resources**| Choose the resource group where you created your function app in the [previous article](./create-first-function-vs-code-csharp.md). | |**Select a location for new resources**| Select a geographic location to host your Azure Cosmos DB account. Use the location that's closest to you or your users to get the fastest access to your data. 
| Now, you create an Azure Cosmos DB account as a [serverless account type](../cos |--|--| |**Database name** | Type `my-database`.| |**Enter an ID for your collection**| Type `my-container`. |- |**Enter the partition key for the collection**|Type `/id` as the [partition key](../cosmos-db/partitioning-overview.md).| + |**Enter the partition key for the collection**|Type `/id` as the [partition key](/azure/cosmos-db/partitioning-overview).| 1. Select **OK** to create the container and database. |
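The two entries above cover the quickstart that wires an HTTP-triggered function to the `my-database`/`my-container` resources created in VS Code. As a rough orientation only (not taken from the updated article), here is a minimal C# sketch of that kind of Cosmos DB output binding using the isolated worker model; the `CosmosDbConnectionSetting` app setting name and the `MyDocument` shape are assumptions for illustration.

```csharp
using System;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

namespace QuickstartSketch
{
    // Hypothetical document shape; lowercase "id" matches the Cosmos DB item id.
    public class MyDocument
    {
        public string id { get; set; } = Guid.NewGuid().ToString();
        public string message { get; set; } = string.Empty;
    }

    // Wrapper type so the function can return both an HTTP response and a document.
    public class MultiResponse
    {
        // Writes the document to my-container in my-database.
        // "CosmosDbConnectionSetting" is an assumed app setting holding the connection.
        [CosmosDBOutput("my-database", "my-container",
            Connection = "CosmosDbConnectionSetting")]
        public MyDocument Document { get; set; }

        public HttpResponseData HttpResponse { get; set; }
    }

    public class HttpExample
    {
        [Function("HttpExample")]
        public MultiResponse Run(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestData req)
        {
            string name = System.Web.HttpUtility.ParseQueryString(req.Url.Query)["name"] ?? "world";

            var response = req.CreateResponse(System.Net.HttpStatusCode.OK);
            response.WriteString($"Hello, {name}!");

            // The binding persists Document after the function returns.
            return new MultiResponse
            {
                Document = new MyDocument { message = $"Hello, {name}!" },
                HttpResponse = response
            };
        }
    }
}
```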
azure-functions | Functions Bindings Cosmosdb V2 Input | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-input.md | The Azure Cosmos DB input binding uses the SQL API to retrieve one or more Azure For information on setup and configuration details, see the [overview](./functions-bindings-cosmosdb-v2.md). > [!NOTE]-> When the collection is [partitioned](../cosmos-db/partitioning-overview.md#logical-partitions), lookup operations must also specify the partition key value. +> When the collection is [partitioned](/azure/cosmos-db/partitioning-overview#logical-partitions), lookup operations must also specify the partition key value. > ::: zone pivot="programming-language-javascript,programming-language-typescript" The following example shows a [C# function](functions-dotnet-class-library.md) t The example shows how to use a binding expression in the `SqlQuery` parameter. You can pass route data to the `SqlQuery` parameter as shown, but currently [you can't pass query string values](https://github.com/Azure/azure-functions-host/issues/2554#issuecomment-392084583). > [!NOTE]-> If you need to query by just the ID, it is recommended to use a look up, like the [previous examples](#http-trigger-look-up-id-from-query-string-c), as it will consume less [request units](../cosmos-db/request-units.md). Point read operations (GET) are [more efficient](../cosmos-db/optimize-cost-reads-writes.md) than queries by ID. +> If you need to query by just the ID, it is recommended to use a look up, like the [previous examples](#http-trigger-look-up-id-from-query-string-c), as it will consume less [request units](/azure/cosmos-db/request-units). Point read operations (GET) are [more efficient](/azure/cosmos-db/optimize-cost-reads-writes) than queries by ID. > ```cs public class DocByIdFromRoute { The following example shows a Java function that retrieves a single document. The function is triggered by an HTTP request that uses a route parameter to specify the ID to look up. That ID is used to retrieve a document from the specified database and collection, converting the result set to a `ToDoItem[]`, since many documents may be returned, depending on the query criteria. > [!NOTE]-> If you need to query by just the ID, it is recommended to use a look up, like the [previous examples](#http-trigger-look-up-id-from-query-stringpojo-parameter-java), as it will consume less [request units](../cosmos-db/request-units.md). Point read operations (GET) are [more efficient](../cosmos-db/optimize-cost-reads-writes.md) than queries by ID. +> If you need to query by just the ID, it is recommended to use a look up, like the [previous examples](#http-trigger-look-up-id-from-query-stringpojo-parameter-java), as it will consume less [request units](/azure/cosmos-db/request-units). Point read operations (GET) are [more efficient](/azure/cosmos-db/optimize-cost-reads-writes) than queries by ID. > ```java |
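To make the point-read guidance in the entry above concrete, here is a hedged C# sketch (in-process model, extension v4) of an input binding that supplies both `Id` and `PartitionKey`, so the lookup is a point read rather than a query; the `ToDoItems`/`Items` names and the `CosmosDBConnection` setting are illustrative, not taken from the article.

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

namespace PointReadSketch
{
    public class ToDoItem
    {
        // Lowercase to match the Cosmos DB "id" property without extra attributes.
        public string id { get; set; }
        public string Description { get; set; }
    }

    public static class DocByIdPointRead
    {
        // Supplying Id and PartitionKey makes the binding perform a point read,
        // which consumes fewer request units than an equivalent SqlQuery.
        [FunctionName("DocByIdPointRead")]
        public static IActionResult Run(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get",
                Route = "todoitems/{partitionKey}/{id}")] HttpRequest req,
            [CosmosDB(
                databaseName: "ToDoItems",
                containerName: "Items",
                Connection = "CosmosDBConnection",
                Id = "{id}",
                PartitionKey = "{partitionKey}")] ToDoItem toDoItem,
            ILogger log)
        {
            if (toDoItem == null)
            {
                log.LogInformation("ToDo item not found");
                return new NotFoundResult();
            }

            return new OkObjectResult(toDoItem);
        }
    }
}
```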
azure-functions | Functions Bindings Cosmosdb V2 Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-trigger.md | zone_pivot_groups: programming-languages-set-functions # Azure Cosmos DB trigger for Azure Functions 2.x and higher -The Azure Cosmos DB Trigger uses the [Azure Cosmos DB change feed](../cosmos-db/change-feed.md) to listen for inserts and updates across partitions. The change feed publishes new and updated items, not including updates from deletions. +The Azure Cosmos DB Trigger uses the [Azure Cosmos DB change feed](/azure/cosmos-db/change-feed) to listen for inserts and updates across partitions. The change feed publishes new and updated items, not including updates from deletions. For information on setup and configuration details, see the [overview](./functions-bindings-cosmosdb-v2.md). Use the `@CosmosDBTrigger` annotation on parameters that read data from Azure Co |**leaseConnectionStringSetting** | (Optional) The name of an app setting or setting collection that specifies how to connect to the Azure Cosmos DB account that holds the lease container. <br><br> When not set, the `Connection` value is used. This parameter is automatically set when the binding is created in the portal. The connection string for the leases container must have write permissions.| |**leaseDatabaseName** | (Optional) The name of the database that holds the container used to store leases. When not set, the value of the `databaseName` setting is used. | |**leaseContainerName** | (Optional) The name of the container used to store leases. When not set, the value `leases` is used. |-|**createLeaseContainerIfNotExists** | (Optional) When set to `true`, the leases container is automatically created when it doesn't already exist. The default value is `false`. When using Microsoft Entra identities if you set the value to `true`, creating containers isn't [an allowed operation](../cosmos-db/nosql/troubleshoot-forbidden.md#non-data-operations-are-not-allowed) and your Function won't start.| +|**createLeaseContainerIfNotExists** | (Optional) When set to `true`, the leases container is automatically created when it doesn't already exist. The default value is `false`. When using Microsoft Entra identities if you set the value to `true`, creating containers isn't [an allowed operation](/azure/cosmos-db/nosql/troubleshoot-forbidden#non-data-operations-are-not-allowed) and your Function won't start.| |**leasesContainerThroughput** | (Optional) Defines the number of Request Units to assign when the leases container is created. This setting is only used when `CreateLeaseContainerIfNotExists` is set to `true`. This parameter is automatically set when the binding is created using the portal. | |**leaseContainerPrefix** | (Optional) When set, the value is added as a prefix to the leases created in the Lease container for this function. Using a prefix allows two separate Azure Functions to share the same Lease container by using different prefixes. | |**feedPollDelay**| (Optional) The time (in milliseconds) for the delay between polling a partition for new changes on the feed, after all current changes are drained. Default is 5,000 milliseconds, or 5 seconds.| |**leaseAcquireInterval**| (Optional) When set, it defines, in milliseconds, the interval to kick off a task to compute if partitions are distributed evenly among known host instances. Default is 13000 (13 seconds). 
| |**leaseExpirationInterval**| (Optional) When set, it defines, in milliseconds, the interval for which the lease is taken on a lease representing a partition. If the lease isn't renewed within this interval, it will expire and ownership of the partition moves to another instance. Default is 60000 (60 seconds).| |**leaseRenewInterval**| (Optional) When set, it defines, in milliseconds, the renew interval for all leases for partitions currently held by an instance. Default is 17000 (17 seconds). |-|**maxItemsPerInvocation**| (Optional) When set, this property sets the maximum number of items received per Function call. If operations in the monitored container are performed through stored procedures, [transaction scope](../cosmos-db/nosql/stored-procedures-triggers-udfs.md#transactions) is preserved when reading items from the change feed. As a result, the number of items received could be higher than the specified value so that the items changed by the same transaction are returned as part of one atomic batch. | +|**maxItemsPerInvocation**| (Optional) When set, this property sets the maximum number of items received per Function call. If operations in the monitored container are performed through stored procedures, [transaction scope](/azure/cosmos-db/nosql/stored-procedures-triggers-udfs#transactions) is preserved when reading items from the change feed. As a result, the number of items received could be higher than the specified value so that the items changed by the same transaction are returned as part of one atomic batch. | |**startFromBeginning**| (Optional) This option tells the Trigger to read changes from the beginning of the container's change history instead of starting at the current time. Reading from the beginning only works the first time the trigger starts, as in subsequent runs, the checkpoints are already stored. Setting this option to `true` when there are leases already created has no effect. | |**preferredLocations**| (Optional) Defines preferred locations (regions) for geo-replicated database accounts in the Azure Cosmos DB service. Values should be comma-separated. For example, "East US,South Central US,North Europe". | |
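The trigger settings listed in the two rows above map directly to attribute properties in C#. Here is a hedged sketch (in-process model, extension v4) showing a few of them together; the database, container, and connection names are assumptions for illustration.

```csharp
using System.Collections.Generic;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

namespace ChangeFeedSketch
{
    public class ToDoItem
    {
        public string id { get; set; }
        public string Description { get; set; }
    }

    public static class ToDoChangeFeedListener
    {
        [FunctionName("ToDoChangeFeedListener")]
        public static void Run(
            [CosmosDBTrigger(
                databaseName: "ToDoItems",
                containerName: "Items",
                Connection = "CosmosDBConnection",
                // Lease container settings from the table above. Keep
                // CreateLeaseContainerIfNotExists false when using Microsoft Entra identities.
                LeaseContainerName = "leases",
                LeaseContainerPrefix = "todo-",
                CreateLeaseContainerIfNotExists = true,
                // Polling and batching knobs: delay in milliseconds, cap in items.
                FeedPollDelay = 5000,
                MaxItemsPerInvocation = 100,
                StartFromBeginning = false)] IReadOnlyList<ToDoItem> changes,
            ILogger log)
        {
            if (changes == null || changes.Count == 0)
            {
                return;
            }

            foreach (ToDoItem item in changes)
            {
                // Each item is a new or updated document surfaced by the change feed.
                log.LogInformation("Changed document id: {Id}", item.id);
            }
        }
    }
}
```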
azure-functions | Functions Bindings Cosmosdb V2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2.md | zone_pivot_groups: programming-languages-set-functions > * [Version 1](functions-bindings-cosmosdb.md) > * [Version 2 and higher](functions-bindings-cosmosdb-v2.md) -This set of articles explains how to work with [Azure Cosmos DB](../cosmos-db/serverless-computing-database.md) bindings in Azure Functions 2.x and higher. Azure Functions supports trigger, input, and output bindings for Azure Cosmos DB. +This set of articles explains how to work with [Azure Cosmos DB](/azure/cosmos-db/serverless-computing-database) bindings in Azure Functions 2.x and higher. Azure Functions supports trigger, input, and output bindings for Azure Cosmos DB. | Action | Type | ||| _This section describes using a [class library](./functions-dotnet-class-library This version of the Azure Cosmos DB bindings extension introduces the ability to [connect using an identity instead of a secret](./functions-reference.md#configure-an-identity-based-connection). For a tutorial on configuring your function apps with managed identities, see the [creating a function app with identity-based connections tutorial](./functions-identity-based-connections-tutorial.md). -This version also changes the types that you can bind to, replacing the types from the v2 SDK `Microsoft.Azure.DocumentDB` with newer types from the v3 SDK [Microsoft.Azure.Cosmos](../cosmos-db/sql/sql-api-sdk-dotnet-standard.md). Learn more about how these new types are different and how to migrate to them from the [SDK migration guide](../cosmos-db/sql/migrate-dotnet-v3.md), [trigger](./functions-bindings-cosmosdb-v2-trigger.md), [input binding](./functions-bindings-cosmosdb-v2-input.md), and [output binding](./functions-bindings-cosmosdb-v2-output.md) examples. +This version also changes the types that you can bind to, replacing the types from the v2 SDK `Microsoft.Azure.DocumentDB` with newer types from the v3 SDK [Microsoft.Azure.Cosmos](/azure/cosmos-db/sql/sql-api-sdk-dotnet-standard). Learn more about how these new types are different and how to migrate to them from the [SDK migration guide](/azure/cosmos-db/sql/migrate-dotnet-v3), [trigger](./functions-bindings-cosmosdb-v2-trigger.md), [input binding](./functions-bindings-cosmosdb-v2-input.md), and [output binding](./functions-bindings-cosmosdb-v2-output.md) examples. This extension version is available as a [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.CosmosDB), version 4.x. Earlier versions of extensions in the isolated worker process only support bindi |Property |Default |Description | |-|--|| |**connectionMode**|`Gateway`|The connection mode used by the function when connecting to the Azure Cosmos DB service. Options are `Direct` and `Gateway`|-|**protocol**|`Https`|The connection protocol used by the function when connection to the Azure Cosmos DB service. Read [here for an explanation of both modes](../cosmos-db/performance-tips.md#networking). | +|**protocol**|`Https`|The connection protocol used by the function when connection to the Azure Cosmos DB service. Read [here for an explanation of both modes](/azure/cosmos-db/performance-tips#networking). | |**leasePrefix**|n/a|Lease prefix to use across all functions in an app. | |
azure-functions | Functions Bindings Cosmosdb | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb.md | -This article explains how to work with [Azure Cosmos DB](../cosmos-db/serverless-computing-database.md) bindings in Azure Functions. Azure Functions supports trigger, input, and output bindings for Azure Cosmos DB. +This article explains how to work with [Azure Cosmos DB](/azure/cosmos-db/serverless-computing-database) bindings in Azure Functions. Azure Functions supports trigger, input, and output bindings for Azure Cosmos DB. > [!NOTE] > This article is for Azure Functions 1.x. For information about how to use these bindings in Functions 2.x and higher, see [Azure Cosmos DB bindings for Azure Functions 2.x](functions-bindings-cosmosdb-v2.md). This article explains how to work with [Azure Cosmos DB](../cosmos-db/serverless >This binding was originally named DocumentDB. In Azure Functions version 1.x, only the trigger was renamed Azure Cosmos DB; the input binding, output binding, and NuGet package retain the DocumentDB name. > [!NOTE]-> Azure Cosmos DB bindings are only supported for use with the SQL API. For all other Azure Cosmos DB APIs, you should access the database from your function by using the static client for your API, including [Azure Cosmos DB for MongoDB](../cosmos-db/mongodb-introduction.md), [Azure Cosmos DB for Apache Cassandra](../cosmos-db/cassandra-introduction.md), [Azure Cosmos DB for Apache Gremlin](../cosmos-db/graph-introduction.md), and [Azure Cosmos DB for Table](../cosmos-db/table-introduction.md). +> Azure Cosmos DB bindings are only supported for use with the SQL API. For all other Azure Cosmos DB APIs, you should access the database from your function by using the static client for your API, including [Azure Cosmos DB for MongoDB](/azure/cosmos-db/mongodb-introduction), [Azure Cosmos DB for Apache Cassandra](/azure/cosmos-db/cassandra-introduction), [Azure Cosmos DB for Apache Gremlin](/azure/cosmos-db/graph-introduction), and [Azure Cosmos DB for Table](/azure/cosmos-db/table-introduction). ## Packages - Functions 1.x The Azure Cosmos DB bindings for Functions version 1.x are provided in the [Micr ## Trigger -The Azure Cosmos DB Trigger uses the [Azure Cosmos DB Change Feed](../cosmos-db/change-feed.md) to listen for inserts and updates across partitions. The change feed publishes inserts and updates, not deletions. +The Azure Cosmos DB Trigger uses the [Azure Cosmos DB Change Feed](/azure/cosmos-db/change-feed) to listen for inserts and updates across partitions. The change feed publishes inserts and updates, not deletions. ## Trigger - example The following table explains the binding configuration properties that you set i |**collectionName** |**CollectionName** | The name of the collection where the document is created. | |**createIfNotExists** |**CreateIfNotExists** | A boolean value to indicate whether the collection is created when it doesn't exist. The default is *false* because new collections are created with reserved throughput, which has cost implications. For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/documentdb/). 
| |**partitionKey**|**PartitionKey** |When `CreateIfNotExists` is true, defines the partition key path for the created collection.|-|**collectionThroughput**|**CollectionThroughput**| When `CreateIfNotExists` is true, defines the [throughput](../cosmos-db/set-throughput.md) of the created collection.| +|**collectionThroughput**|**CollectionThroughput**| When `CreateIfNotExists` is true, defines the [throughput](/azure/cosmos-db/set-throughput) of the created collection.| |**connection** |**ConnectionStringSetting** |The name of the app setting containing your Azure Cosmos DB connection string. | [!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)] By default, when you write to the output parameter in your function, a document ## Next steps -* [Learn more about serverless database computing with Azure Cosmos DB](../cosmos-db/serverless-computing-database.md) +* [Learn more about serverless database computing with Azure Cosmos DB](/azure/cosmos-db/serverless-computing-database) * [Learn more about Azure Functions triggers and bindings](functions-triggers-bindings.md) <! |
azure-functions | Functions Bindings Storage Table Input | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table-input.md | zone_pivot_groups: programming-languages-set-functions # Azure Tables input bindings for Azure Functions -Use the Azure Tables input binding to read a table in [Azure Cosmos DB for Table](../cosmos-db/table/introduction.md) or [Azure Table Storage](../storage/tables/table-storage-overview.md). +Use the Azure Tables input binding to read a table in [Azure Cosmos DB for Table](/azure/cosmos-db/table/introduction) or [Azure Table Storage](../storage/tables/table-storage-overview.md). For information on setup and configuration details, see the [overview](./functions-bindings-storage-table.md). namespace FunctionAppCloudTable2 } ``` -For more information about how to use CloudTable, see [Get started with Azure Table storage](../cosmos-db/tutorial-develop-table-dotnet.md). +For more information about how to use CloudTable, see [Get started with Azure Table storage](/azure/cosmos-db/tutorial-develop-table-dotnet). If you try to bind to `CloudTable` and get an error message, make sure that you have a reference to [the correct Storage SDK version](./functions-bindings-storage-table.md#azure-storage-sdk-version-in-functions-1x). |
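Beyond the `CloudTable` pattern quoted above, the same input binding can hand the function a single entity. A minimal C# sketch (in-process model), assuming a `Products` table and a `TablesConnection` app setting, neither of which comes from the article:

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

namespace TablesInputSketch
{
    // Azure Tables entities are addressed by PartitionKey and RowKey.
    public class ProductEntity
    {
        public string PartitionKey { get; set; }
        public string RowKey { get; set; }
        public string Name { get; set; }
    }

    public static class ReadProductRow
    {
        [FunctionName("ReadProductRow")]
        public static IActionResult Run(
            [HttpTrigger(AuthorizationLevel.Function, "get",
                Route = "products/{partitionKey}/{rowKey}")] HttpRequest req,
            // Reads one entity from the "Products" table in the account named by the
            // assumed "TablesConnection" setting (Table Storage or Azure Cosmos DB for Table).
            [Table("Products", "{partitionKey}", "{rowKey}",
                Connection = "TablesConnection")] ProductEntity product)
        {
            return product == null
                ? (IActionResult)new NotFoundResult()
                : new OkObjectResult(product);
        }
    }
}
```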
azure-functions | Functions Bindings Storage Table Output | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table-output.md | zone_pivot_groups: programming-languages-set-functions # Azure Tables output bindings for Azure Functions -Use an Azure Tables output binding to write entities to a table in [Azure Cosmos DB for Table](../cosmos-db/table/introduction.md) or [Azure Table Storage](../storage/tables/table-storage-overview.md). +Use an Azure Tables output binding to write entities to a table in [Azure Cosmos DB for Table](/azure/cosmos-db/table/introduction) or [Azure Table Storage](../storage/tables/table-storage-overview.md). For information on setup and configuration details, see the [overview](./functions-bindings-storage-table.md) |
azure-functions | Functions Bindings Storage Table | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table.md | zone_pivot_groups: programming-languages-set-functions-lang-workers # Azure Tables bindings for Azure Functions -Azure Functions integrates with [Azure Tables](../cosmos-db/table/introduction.md) via [triggers and bindings](./functions-triggers-bindings.md). Integrating with Azure Tables allows you to build functions that read and write data using [Azure Cosmos DB for Table](../cosmos-db/table/introduction.md) and [Azure Table Storage](../storage/tables/table-storage-overview.md). +Azure Functions integrates with [Azure Tables](/azure/cosmos-db/table/introduction) via [triggers and bindings](./functions-triggers-bindings.md). Integrating with Azure Tables allows you to build functions that read and write data using [Azure Cosmos DB for Table](/azure/cosmos-db/table/introduction) and [Azure Table Storage](../storage/tables/table-storage-overview.md). | Action | Type | ||| |
azure-functions | Functions Create Cosmos Db Triggered Function | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-cosmos-db-triggered-function.md | -Learn how to create a function in the Azure portal that is triggered when data is added to or changed in Azure Cosmos DB. To learn more about Azure Cosmos DB, see [Azure Cosmos DB: Serverless database computing using Azure Functions](../cosmos-db/serverless-computing-database.md). +Learn how to create a function in the Azure portal that is triggered when data is added to or changed in Azure Cosmos DB. To learn more about Azure Cosmos DB, see [Azure Cosmos DB: Serverless database computing using Azure Functions](/azure/cosmos-db/serverless-computing-database). [!INCLUDE [functions-in-portal-editing-note](../../includes/functions-in-portal-editing-note.md)] Next, you connect to your Azure Cosmos DB account and create the `Items` contain | || | | **Database ID** | Tasks |The name for your new database. This must match the name defined in your function binding. | | **Container ID** | Items | The name for the new container. This must match the name defined in your function binding. |- | **[Partition key](../cosmos-db/partitioning-overview.md)** | /category|A partition key that distributes data evenly to each partition. Selecting the correct partition key is important in creating a performant container. | + | **[Partition key](/azure/cosmos-db/partitioning-overview)** | /category|A partition key that distributes data evenly to each partition. Selecting the correct partition key is important in creating a performant container. | | **Throughput** |400 RU| Use the default value. If you want to reduce latency, you can scale up the throughput later. | 1. Click **OK** to create the Items container. It may take a short time for the container to get created. |
azure-functions | Functions Host Json V1 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-host-json-v1.md | Configuration settings for the [Azure Cosmos DB trigger and bindings](functions- |Property |Default | Description | |||| |GatewayMode|Gateway|The connection mode used by the function when connecting to the Azure Cosmos DB service. Options are `Direct` and `Gateway`|-|Protocol|Https|The connection protocol used by the function when connection to the Azure Cosmos DB service. Read [here for an explanation of both modes](../cosmos-db/performance-tips.md#networking)| +|Protocol|Https|The connection protocol used by the function when connection to the Azure Cosmos DB service. Read [here for an explanation of both modes](/azure/cosmos-db/performance-tips#networking)| |leasePrefix|n/a|Lease prefix to use across all functions in an app.| ## durableTask |
azure-functions | Functions Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-overview.md | The following are a common, _but by no means exhaustive_, set of integrated scen | [Run scheduled task](./functions-scenarios.md#run-scheduled-tasks)| Execute data clean-up code on pre-defined timed intervals. | | [Build a scalable web API](./functions-scenarios.md#build-a-scalable-web-api)| Implement a set of REST endpoints for your web applications using HTTP triggers. | | [Build a serverless workflow](./functions-scenarios.md#build-a-serverless-workflow)| Create an event-driven workflow from a series of functions using Durable Functions. |-| [Respond to database changes](./functions-scenarios.md#respond-to-database-changes)| Run custom logic when a document is created or updated in [Azure Cosmos DB](../cosmos-db/introduction.md). | +| [Respond to database changes](./functions-scenarios.md#respond-to-database-changes)| Run custom logic when a document is created or updated in [Azure Cosmos DB](/azure/cosmos-db/introduction). | | [Create reliable message systems](./functions-scenarios.md#create-reliable-message-systems)| Process message queues using Queue Storage, Service Bus, or Event Hubs. | These scenarios allow you to build event-driven systems using modern architectural patterns. For more information, see [Azure Functions Scenarios](functions-scenarios.md). |
azure-functions | Functions Premium Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-premium-plan.md | These are the currently supported maximum scale-out values for a single plan in |Canada Central| 100 | 100 | |Central India| 100 | 20 | |Central US| 100 | 100 |-|China East 2| 100 | 20 | -|China North 2| 100 | 20 | +|China East 2| 20 | 20 | +|China North 2| 20 | 20 | +|China North 3| 20 | 20 | |East Asia| 100 | 20 | |East US | 100 | 100 |-|East US 2| 100 | 100 | +|East US 2| 80 | 100 | |France Central| 100 | 60 | |Germany West Central| 100 | 20 | |Israel Central| 100 | 20 | These are the currently supported maximum scale-out values for a single plan in |UAE North| 100 | 20 | |UK South| 100 | 100 | |UK West| 100 | 20 |-|USGov Arizona| 100 | 20 | -|USGov Texas| 100 | Not Available | -|USGov Virginia| 100 | 20 | +|USGov Arizona| 20 | 20 | +|USGov Texas| 20 | Not Available | +|USGov Virginia| 80 | 20 | |West Central US| 100 | 20 | |West Europe| 100 | 100 | |West India| 100 | 20 | |
azure-functions | Functions Reference Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-python.md | When you deploy your project to a function app in Azure, the entire contents of ## Connect to a database -Azure Functions integrates well with [Azure Cosmos DB](../cosmos-db/introduction.md) for many [use cases](../cosmos-db/use-cases.md), including IoT, ecommerce, gaming, etc. +Azure Functions integrates well with [Azure Cosmos DB](/azure/cosmos-db/introduction) for many [use cases](/azure/cosmos-db/use-cases), including IoT, ecommerce, gaming, etc. -For example, for [event sourcing](/azure/architecture/patterns/event-sourcing), the two services are integrated to power event-driven architectures using Azure Cosmos DB's [change feed](../cosmos-db/change-feed.md) functionality. The change feed provides downstream microservices the ability to reliably and incrementally read inserts and updates (for example, order events). This functionality can be used to provide a persistent event store as a message broker for state-changing events and drive order processing workflow between many microservices (which can be implemented as [serverless Azure Functions](https://azure.com/serverless)). +For example, for [event sourcing](/azure/architecture/patterns/event-sourcing), the two services are integrated to power event-driven architectures using Azure Cosmos DB's [change feed](/azure/cosmos-db/change-feed) functionality. The change feed provides downstream microservices the ability to reliably and incrementally read inserts and updates (for example, order events). This functionality can be used to provide a persistent event store as a message broker for state-changing events and drive order processing workflow between many microservices (which can be implemented as [serverless Azure Functions](https://azure.com/serverless)). :::image type="content" source="~/reusable-content/ce-skilling/azure/media/cosmos-db/event-sourcing.png" alt-text="Azure Cosmos DB ordering pipeline reference architecture" border="false"::: -To connect to Azure Cosmos DB, first [create an account, database, and container](../cosmos-db/nosql/quickstart-portal.md). Then you can connect your function code to Azure Cosmos DB using [trigger and bindings](functions-bindings-cosmosdb-v2.md), like this [example](functions-add-output-binding-cosmos-db-vs-code.md). +To connect to Azure Cosmos DB, first [create an account, database, and container](/azure/cosmos-db/nosql/quickstart-portal). Then you can connect your function code to Azure Cosmos DB using [trigger and bindings](functions-bindings-cosmosdb-v2.md), like this [example](functions-add-output-binding-cosmos-db-vs-code.md). To implement more complex app logic, you can also use the Python library for Cosmos DB. An asynchronous I/O implementation looks like this: |
azure-functions | Functions Scenarios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-scenarios.md | For example, in a retail solution, a partner system can submit product catalog i [ ![Diagram of a file upload process using Azure Functions.](./media/functions-scenarios/process-file-uploads.png) ](./media/functions-scenarios/process-file-uploads-expanded.png#lightbox) -The following tutorials use an Event Grid trigger to process files in a blob container: +The following tutorials use a Blob trigger (Event Grid based) to process files in a blob container: ::: zone pivot="programming-language-csharp" public static async Task Run([BlobTrigger("catalog-uploads/{name}", Source = Blo } ``` ++ [Event-based Blob storage triggered function that converts PDF documents to text at scale](https://github.com/Azure-Samples/azure-functions-flex-consumption-samples/tree/main/E2E/BLOB-PDF) + [Upload and analyze a file with Azure Functions and Blob Storage](../storage/blobs/blob-upload-function-trigger.md?tabs=dotnet) + [Automate resizing uploaded images using Event Grid](../event-grid/resize-images-on-storage-blob-upload-event.md?tabs=dotnet) + [Trigger Azure Functions on blob containers using an event subscription](functions-event-grid-blob-trigger.md?pivots=programming-language-csharp) public static async Task Run([BlobTrigger("catalog-uploads/{name}", Source = Blo ## Real-time stream and event processing -So much telemetry is generated and collected from cloud applications, IoT devices, and networking devices. Azure Functions can process that data in near real-time as the hot path, then store it in [Azure Cosmos DB](../cosmos-db/introduction.md) for use in an analytics dashboard. +So much telemetry is generated and collected from cloud applications, IoT devices, and networking devices. Azure Functions can process that data in near real-time as the hot path, then store it in [Azure Cosmos DB](/azure/cosmos-db/introduction) for use in an analytics dashboard. Your functions can also use low-latency event triggers, like Event Grid, and real-time outputs like SignalR to process data in near-real-time. |
azure-functions | Functions Target Based Scaling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-target-based-scaling.md | Examples for the Node.js v4 programming model aren't yet available. > [!NOTE]-> Since Azure Cosmos DB is a partitioned workload, the target instance count for the database is capped by the number of physical partitions in your container. To learn more about Azure Cosmos DB scaling, see [physical partitions](../cosmos-db/nosql/change-feed-processor.md#dynamic-scaling) and [lease ownership](../cosmos-db/nosql/change-feed-processor.md#dynamic-scaling). +> Since Azure Cosmos DB is a partitioned workload, the target instance count for the database is capped by the number of physical partitions in your container. To learn more about Azure Cosmos DB scaling, see [physical partitions](/azure/cosmos-db/nosql/change-feed-processor#dynamic-scaling) and [lease ownership](/azure/cosmos-db/nosql/change-feed-processor#dynamic-scaling). ### Apache Kafka |
azure-functions | Manage Connections | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/manage-connections.md | http.request(options, onResponseCallback); # [C#](#tab/csharp) -[CosmosClient](/dotnet/api/microsoft.azure.cosmos.cosmosclient) connects to an Azure Cosmos DB instance. The Azure Cosmos DB documentation recommends that you [use a singleton Azure Cosmos DB client for the lifetime of your application](../cosmos-db/performance-tips-dotnet-sdk-v3-sql.md#sdk-usage). The following example shows one pattern for doing that in a function: +[CosmosClient](/dotnet/api/microsoft.azure.cosmos.cosmosclient) connects to an Azure Cosmos DB instance. The Azure Cosmos DB documentation recommends that you [use a singleton Azure Cosmos DB client for the lifetime of your application](/azure/cosmos-db/performance-tips-dotnet-sdk-v3-sql#sdk-usage). The following example shows one pattern for doing that in a function: ```cs #r "Microsoft.Azure.Cosmos" Also, create a file named "function.proj" for your trigger and add the below con # [JavaScript](#tab/javascript) -[CosmosClient](/javascript/api/@azure/cosmos/cosmosclient) connects to an Azure Cosmos DB instance. The Azure Cosmos DB documentation recommends that you [use a singleton Azure Cosmos DB client for the lifetime of your application](../cosmos-db/performance-tips.md#sdk-usage). The following example shows one pattern for doing that in a function: +[CosmosClient](/javascript/api/@azure/cosmos/cosmosclient) connects to an Azure Cosmos DB instance. The Azure Cosmos DB documentation recommends that you [use a singleton Azure Cosmos DB client for the lifetime of your application](/azure/cosmos-db/performance-tips#sdk-usage). The following example shows one pattern for doing that in a function: ```javascript const cosmos = require('@azure/cosmos'); |
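The row above only shows the opening lines of the singleton-client examples, so here is a self-contained C# sketch of the same pattern; the queue name, database, container, and the `COSMOS_CONNECTION_STRING` app setting are assumptions for illustration.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

namespace SingletonClientSketch
{
    public static class WriteItem
    {
        // Created once per host instance and reused across invocations, so the
        // function doesn't exhaust connections under load.
        private static readonly CosmosClient cosmosClient = new CosmosClient(
            Environment.GetEnvironmentVariable("COSMOS_CONNECTION_STRING"));

        [FunctionName("WriteItem")]
        public static async Task Run(
            [QueueTrigger("incoming-items")] string message,
            ILogger log)
        {
            Container container = cosmosClient.GetContainer("my-database", "my-container");

            // In this sketch the container's partition key path is assumed to be /id.
            var item = new { id = Guid.NewGuid().ToString(), payload = message };
            await container.CreateItemAsync(item, new PartitionKey(item.id));

            log.LogInformation("Stored item {Id}", item.id);
        }
    }
}
```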
azure-functions | Migrate Cosmos Db Version 3 Version 4 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-cosmos-db-version-3-version-4.md | Update your `.csproj` project file to use the latest extension version for your ## Azure Cosmos DB SDK changes -The underlying SDK used by extension changed to use the Azure Cosmos DB V3 SDK, for cases where you were using SDK related types, please look at the [Azure Cosmos DB SDK V3 migration guide](../cosmos-db/nosql/migrate-dotnet-v3.md) for more information. +The underlying SDK used by extension changed to use the Azure Cosmos DB V3 SDK, for cases where you were using SDK related types, please look at the [Azure Cosmos DB SDK V3 migration guide](/azure/cosmos-db/nosql/migrate-dotnet-v3) for more information. The following table only includes attributes that were renamed or were removed f |**CollectionName** |**ContainerName** | The name of the container being monitored. | |**LeaseConnectionStringSetting** |**LeaseConnection** | (Optional) The name of an app setting or setting collection that specifies how to connect to the Azure Cosmos DB account that holds the lease container. <br><br> When not set, the `Connection` value is used. This parameter is automatically set when the binding is created in the portal. The connection string for the leases container must have write permissions.| |**LeaseCollectionName** |**LeaseContainerName** | (Optional) The name of the container used to store leases. When not set, the value `leases` is used. |-|**CreateLeaseCollectionIfNotExists** |**CreateLeaseContainerIfNotExists** | (Optional) When set to `true`, the leases container is automatically created when it doesn't already exist. The default value is `false`. When using Microsoft Entra identities if you set the value to `true`, creating containers isn't [an allowed operation](../cosmos-db/nosql/troubleshoot-forbidden.md#non-data-operations-are-not-allowed) and your Function won't be able to start.| +|**CreateLeaseCollectionIfNotExists** |**CreateLeaseContainerIfNotExists** | (Optional) When set to `true`, the leases container is automatically created when it doesn't already exist. The default value is `false`. When using Microsoft Entra identities if you set the value to `true`, creating containers isn't [an allowed operation](/azure/cosmos-db/nosql/troubleshoot-forbidden#non-data-operations-are-not-allowed) and your Function won't be able to start.| |**LeasesCollectionThroughput** |**LeasesContainerThroughput** | (Optional) Defines the number of Request Units to assign when the leases container is created. This setting is only used when `CreateLeaseContainerIfNotExists` is set to `true`. This parameter is automatically set when the binding is created using the portal. | |**LeaseCollectionPrefix** |**LeaseContainerPrefix** | (Optional) When set, the value is added as a prefix to the leases created in the Lease container for this function. Using a prefix allows two separate Azure Functions to share the same Lease container by using different prefixes. | |**UseMultipleWriteLocations** |*Removed* | This attribute is no longer needed as it's automatically detected. |-|**UseDefaultJsonSerialization** |*Removed* | This attribute is no longer needed as you can fully customize the serialization using built in support in the [Azure Cosmos DB version 3 .NET SDK](../cosmos-db/nosql/migrate-dotnet-v3.md#customize-serialization). 
| +|**UseDefaultJsonSerialization** |*Removed* | This attribute is no longer needed as you can fully customize the serialization using built in support in the [Azure Cosmos DB version 3 .NET SDK](/azure/cosmos-db/nosql/migrate-dotnet-v3#customize-serialization). | |**CheckpointInterval**|*Removed* | This attribute has been removed in the version 4 extension. | |**CheckpointDocumentCount** |*Removed* | This attribute has been removed in the version 4 extension. | The following table only includes attributes that changed or were removed from t |**collectionName** |**containerName** | The name of the container being monitored. | |**leaseConnectionStringSetting** |**leaseConnection** | (Optional) The name of an app setting or setting collection that specifies how to connect to the Azure Cosmos DB account that holds the lease container. <br><br> When not set, the `connection` value is used. This parameter is automatically set when the binding is created in the portal. The connection string for the leases container must have write permissions.| |**leaseCollectionName** |**leaseContainerName** | (Optional) The name of the container used to store leases. When not set, the value `leases` is used. |-|**createLeaseCollectionIfNotExists** |**createLeaseContainerIfNotExists** | (Optional) When set to `true`, the leases container is automatically created when it doesn't already exist. The default value is `false`. When using Microsoft Entra identities if you set the value to `true`, creating containers isn't [an allowed operation](../cosmos-db/nosql/troubleshoot-forbidden.md#non-data-operations-are-not-allowed) and your Function won't be able to start.| +|**createLeaseCollectionIfNotExists** |**createLeaseContainerIfNotExists** | (Optional) When set to `true`, the leases container is automatically created when it doesn't already exist. The default value is `false`. When using Microsoft Entra identities if you set the value to `true`, creating containers isn't [an allowed operation](/azure/cosmos-db/nosql/troubleshoot-forbidden#non-data-operations-are-not-allowed) and your Function won't be able to start.| |**leasesCollectionThroughput** |**leasesContainerThroughput** | (Optional) Defines the number of Request Units to assign when the leases container is created. This setting is only used when `createLeaseContainerIfNotExists` is set to `true`. This parameter is automatically set when the binding is created using the portal. | |**leaseCollectionPrefix** |**leaseContainerPrefix** | (Optional) When set, the value is added as a prefix to the leases created in the Lease container for this function. Using a prefix allows two separate Azure Functions to share the same Lease container by using different prefixes. | |**useMultipleWriteLocations** |*Removed* | This attribute is no longer needed as it's automatically detected. | The following table only includes attributes that changed or were removed from t ## Modify your function code -The Azure Functions extension version 4 is built on top of the Azure Cosmos DB .NET SDK version 3, which removed support for the [`Document` class](../cosmos-db/nosql/migrate-dotnet-v3.md#major-name-changes-from-v2-sdk-to-v3-sdk). Instead of receiving a list of `Document` objects with each function invocation, which you must then deserialize into your own object type, you can now directly receive a list of objects of your own type. 
+The Azure Functions extension version 4 is built on top of the Azure Cosmos DB .NET SDK version 3, which removed support for the [`Document` class](/azure/cosmos-db/nosql/migrate-dotnet-v3#major-name-changes-from-v2-sdk-to-v3-sdk). Instead of receiving a list of `Document` objects with each function invocation, which you must then deserialize into your own object type, you can now directly receive a list of objects of your own type. This example refers to a simple `ToDoItem` type. |
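To make that last change concrete, here is a hedged sketch of the version 4 trigger signature binding straight to a custom type; the old version 3 shape appears only in the comment, and the names are illustrative.

```csharp
using System.Collections.Generic;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

namespace MigrationSketch
{
    public class ToDoItem
    {
        public string id { get; set; }
        public string Description { get; set; }
    }

    public static class ChangeFeedFunction
    {
        // Version 3 extension: the trigger delivered Document objects that you
        // deserialized yourself, roughly:
        //   [CosmosDBTrigger(...)] IReadOnlyList<Document> input
        //   var item = JsonConvert.DeserializeObject<ToDoItem>(doc.ToString());
        //
        // Version 4 extension: bind directly to your own type.
        [FunctionName("ChangeFeedFunction")]
        public static void Run(
            [CosmosDBTrigger(
                databaseName: "ToDoItems",
                containerName: "Items",
                Connection = "CosmosDBConnection",
                LeaseContainerName = "leases")] IReadOnlyList<ToDoItem> input,
            ILogger log)
        {
            foreach (ToDoItem item in input)
            {
                log.LogInformation("Modified item: {Id}", item.id);
            }
        }
    }
}
```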
azure-government | Compare Azure Government Global Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compare-azure-government-global-azure.md | To learn how to embed analytical content within your business process applicatio This section outlines variations and considerations when using Databases services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-api-for-fhir,data-factory,sql-server-stretch-database,redis-cache,database-migration,synapse-analytics,postgresql,mariadb,mysql,sql-database,cosmos-db&regions=usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true). -### [Azure Database for MySQL](../mysql/index.yml) +### [Azure Database for MySQL](/azure/mysql/) The following Azure Database for MySQL **features aren't currently available** in Azure Government: - Advanced Threat Protection -### [Azure Database for PostgreSQL](../postgresql/index.yml) +### [Azure Database for PostgreSQL](/azure/postgresql/) -For Flexible Server availability in Azure Government regions, see [Azure Database for PostgreSQL – Flexible Server](../postgresql/flexible-server/overview.md#azure-regions). +For Flexible Server availability in Azure Government regions, see [Azure Database for PostgreSQL – Flexible Server](/azure/postgresql/flexible-server/overview#azure-regions). The following Azure Database for PostgreSQL **features aren't currently available** in Azure Government: -- Azure Cosmos DB for PostgreSQL, formerly Azure Database for PostgreSQL – Hyperscale (Citus). For more information about supported regions, see [Regional availability for Azure Cosmos DB for PostgreSQL](../cosmos-db/postgresql/resources-regions.md).+- Azure Cosmos DB for PostgreSQL, formerly Azure Database for PostgreSQL – Hyperscale (Citus). For more information about supported regions, see [Regional availability for Azure Cosmos DB for PostgreSQL](/azure/cosmos-db/postgresql/resources-regions). - The following features of the Single Server deployment option - Advanced Threat Protection - Backup with long-term retention |
azure-government | Azure Services In Fedramp Auditscope | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md | This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and | [Azure Arc-enabled servers](../../azure-arc/servers/index.yml) | ✅ | ✅ | | [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/index.yml) | ✅ | ✅ | | [Azure Cache for Redis](../../azure-cache-for-redis/index.yml) | ✅ | ✅ |-| [Azure Cosmos DB](../../cosmos-db/index.yml) | ✅ | ✅ | +| [Azure Cosmos DB](/azure/cosmos-db/) | ✅ | ✅ | | [Azure Container Apps](../../container-apps/index.yml) | ✅ | ✅ |-| [Azure Database for MariaDB](../../mariadb/index.yml) | ✅ | ✅ | -| [Azure Database for MySQL](../../mysql/index.yml) | ✅ | ✅ | -| [Azure Database for PostgreSQL](../../postgresql/index.yml) | ✅ | ✅ | +| [Azure Database for MariaDB](/azure/mariadb/) | ✅ | ✅ | +| [Azure Database for MySQL](/azure/mysql/) | ✅ | ✅ | +| [Azure Database for PostgreSQL](/azure/postgresql/) | ✅ | ✅ | | [Azure Databricks](/azure/databricks/) ****** | ✅ | ✅ | | [Azure Fluid Relay](../../azure-fluid-relay/index.yml) | ✅ | ✅ | | [Azure for Education](https://azureforeducation.microsoft.com/) | ✅ | ✅ | This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and | [Data Explorer](/azure/data-explorer/) | ✅ | ✅ | | [Data Factory](../../data-factory/index.yml) | ✅ | ✅ | | [Data Share](../../data-share/index.yml) | ✅ | ✅ |-| [Database Migration Service](../../dms/index.yml) | ✅ | ✅ | +| [Database Migration Service](/azure/dms/) | ✅ | ✅ | | [Dataverse](/powerapps/maker/data-platform/) (incl. [Azure Synapse Link for Dataverse](/powerapps/maker/data-platform/export-to-data-lake)) | ✅ | ✅ | | [DDoS Protection](../../ddos-protection/index.yml) | ✅ | ✅ | | **Service** | **FedRAMP High** | **DoD IL2** | This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and | [Azure Arc-enabled servers](../../azure-arc/servers/index.yml) | ✅ | ✅ | ✅ | ✅ | | | **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | | [Azure Cache for Redis](../../azure-cache-for-redis/index.yml) | ✅ | ✅ | ✅ | ✅ | ✅ |-| [Azure Cosmos DB](../../cosmos-db/index.yml) | ✅ | ✅ | ✅ | ✅ | ✅ | +| [Azure Cosmos DB](/azure/cosmos-db/) | ✅ | ✅ | ✅ | ✅ | ✅ | | [Azure CXP Nomination Portal](https://cxp.azure.com/nominationportal/nominationform/fasttrack) | ✅ | ✅ | ✅ | ✅ | |-| [Azure Database for MariaDB](../../mariadb/index.yml) | ✅ | ✅ | ✅ | ✅ | | -| [Azure Database for MySQL](../../mysql/index.yml) | ✅ | ✅ | ✅ | ✅ | | -| [Azure Database for PostgreSQL](../../postgresql/index.yml) | ✅ | ✅ | ✅ | ✅ | | +| [Azure Database for MariaDB](/azure/mariadb/) | ✅ | ✅ | ✅ | ✅ | | +| [Azure Database for MySQL](/azure/mysql/) | ✅ | ✅ | ✅ | ✅ | | +| [Azure Database for PostgreSQL](/azure/postgresql/) | ✅ | ✅ | ✅ | ✅ | | | [Azure Databricks](/azure/databricks/) | ✅ | ✅ | ✅ | ✅ | | | [Azure Information Protection](/azure/information-protection/) ****** | ✅ | ✅ | ✅ | ✅ | ✅ | | [Azure Kubernetes Service (AKS)](/azure/aks/) | ✅ | ✅ | ✅ | ✅ | ✅ | This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and | [Data Explorer](/azure/data-explorer/) | ✅ | ✅ | ✅ | ✅ | ✅ | | [Data Factory](../../data-factory/index.yml) | ✅ | ✅ | ✅ | ✅ | ✅ | | [Data Share](../../data-share/index.yml) | ✅ | ✅ | ✅ | ✅ | |-| [Database Migration Service](../../dms/index.yml) | ✅ | ✅ | ✅ | ✅ | | +| [Database Migration Service](/azure/dms/) | ✅ | ✅ | ✅ | ✅ | 
| | [Dataverse](/powerapps/maker/data-platform/) (formerly Common Data Service) | ✅ | ✅ | ✅ | ✅ | | | [DDoS Protection](../../ddos-protection/index.yml) | ✅ | ✅ | ✅ | ✅ | | | [Dedicated HSM](/azure/dedicated-hsm/) | ✅ | ✅ | ✅ | ✅ | | |
azure-government | Documentation Government Impact Level 5 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-impact-level-5.md | For Containers services availability in Azure Government, see [Products availabl For Databases services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-sql,sql-server-stretch-database,redis-cache,database-migration,postgresql,mariadb,mysql,sql-database,cosmos-db&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true). For a list of services in scope for DoD IL5 PA, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope). Guidance below is provided only for IL5 PA services that require extra configuration to support IL5 workloads. -### [Azure Cosmos DB](../cosmos-db/index.yml) +### [Azure Cosmos DB](/azure/cosmos-db/) -- Data stored in your Azure Cosmos DB account is automatically and seamlessly encrypted with keys managed by Microsoft (service-managed keys). Optionally, you can choose to add a second layer of encryption with keys you manage (customer-managed keys). For more information, see [Configure customer-managed keys for your Azure Cosmos DB account with Azure Key Vault](../cosmos-db/how-to-setup-cmk.md).+- Data stored in your Azure Cosmos DB account is automatically and seamlessly encrypted with keys managed by Microsoft (service-managed keys). Optionally, you can choose to add a second layer of encryption with keys you manage (customer-managed keys). For more information, see [Configure customer-managed keys for your Azure Cosmos DB account with Azure Key Vault](/azure/cosmos-db/how-to-setup-cmk). -### [Azure Database for MySQL](../mysql/index.yml) +### [Azure Database for MySQL](/azure/mysql/) -- Data encryption with customer-managed keys for Azure Database for MySQL enables you to bring your own key (BYOK) for data protection at rest. This encryption is set at the server level. For a given server, a customer-managed key, called the key encryption key (KEK), is used to encrypt the data encryption key (DEK) used by the service. For more information, see [Azure Database for MySQL data encryption with a customer-managed key](../mysql/concepts-data-encryption-mysql.md).+- Data encryption with customer-managed keys for Azure Database for MySQL enables you to bring your own key (BYOK) for data protection at rest. This encryption is set at the server level. For a given server, a customer-managed key, called the key encryption key (KEK), is used to encrypt the data encryption key (DEK) used by the service. For more information, see [Azure Database for MySQL data encryption with a customer-managed key](/azure/mysql/concepts-data-encryption-mysql). -### [Azure Database for PostgreSQL](../postgresql/index.yml) +### [Azure Database for PostgreSQL](/azure/postgresql/) -- Data encryption with customer-managed keys for Azure Database for PostgreSQL Single Server is set at the server level. For a given server, a customer-managed key, called the key encryption key (KEK), is used to encrypt the data encryption key (DEK) used by the service. 
For more information, see [Azure Database for PostgreSQL Single Server data encryption with a customer-managed key](../postgresql/concepts-data-encryption-postgresql.md).+- Data encryption with customer-managed keys for Azure Database for PostgreSQL Single Server is set at the server level. For a given server, a customer-managed key, called the key encryption key (KEK), is used to encrypt the data encryption key (DEK) used by the service. For more information, see [Azure Database for PostgreSQL Single Server data encryption with a customer-managed key](/azure/postgresql/concepts-data-encryption-postgresql). ### [Azure Healthcare APIs](../healthcare-apis/index.yml) (formerly Azure API for FHIR) |
azure-maps | How To Secure Daemon App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-daemon-app.md | To acquire the access token: :::image type="content" border="true" source="./media/how-to-manage-authentication/get-token-params.png" alt-text="Copy token parameters."::: -This article uses the [Postman](https://www.postman.com/) application to create the token request, but you can use a different API development environment. +This article uses the [bruno](https://www.usebruno.com/) application to create the token request, but you can use a different API development environment. -1. In the Postman app, select **New**. +1. Open the bruno app, select **NEW REQUEST** to create the request. -2. In the **Create New** window, select **HTTP Request**. +1. In the **NEW REQUEST** window, set **Type** to **HTTP**. Enter a **Name** for the request, such as *POST Token Request*. -3. Enter a **Request name** for the request, such as *POST Token Request*. --4. Select the **POST** HTTP method. --5. Enter the following URL to address bar (replace `{Tenant-ID}` with the Directory (Tenant) ID, the `{Client-ID}` with the Application (Client) ID, and `{Client-Secret}` with your client secret: +1. Select the **POST** HTTP method in the **URL** drop-down list, then enter the following URL: ```http https://login.microsoftonline.com/{Tenant-ID}/oauth2/v2.0/token?response_type=token&grant_type=client_credentials&client_id={Client-ID}&client_secret={Client-Secret}&scope=https://atlas.microsoft.com/.default ``` -6. Select **Send** + > [!NOTE] + > Replace: + > - `{Tenant-ID}` with the Directory (Tenant) ID + > - `{Client-ID}` with the Application (Client) ID + > - `{Client-Secret}` with your client secret. ++1. Select the run button. -7. You should see the following JSON response: +You should see the following JSON response: ```json {- "token_type": "Bearer", - "expires_in": 86399, - "ext_expires_in": 86399, - "access_token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6Im5PbzNaRHJPRFhFSzFq..." +"token_type": "Bearer", +"expires_in": 86399, +"ext_expires_in": 86399, +"access_token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6Im5PbzNaRHJPRFhFSzFq..." } ``` |
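In a real daemon, the same token request is usually issued from code rather than an API client. Here is a minimal C# sketch of the client credentials call, sending the parameters in the form body; the placeholder tenant, client ID, and secret values are yours to substitute, and a production app would more likely use MSAL or Azure.Identity rather than raw HTTP.

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

class TokenRequestSketch
{
    static async Task Main()
    {
        // Placeholders; replace with your own values.
        string tenantId = "{Tenant-ID}";
        string clientId = "{Client-ID}";
        string clientSecret = "{Client-Secret}";

        using var http = new HttpClient();

        // Client credentials grant against the Microsoft identity platform v2.0 token endpoint.
        var body = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            ["grant_type"] = "client_credentials",
            ["client_id"] = clientId,
            ["client_secret"] = clientSecret,
            ["scope"] = "https://atlas.microsoft.com/.default"
        });

        HttpResponseMessage response = await http.PostAsync(
            $"https://login.microsoftonline.com/{tenantId}/oauth2/v2.0/token", body);
        response.EnsureSuccessStatusCode();

        using JsonDocument json = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
        string accessToken = json.RootElement.GetProperty("access_token").GetString();

        // The access token is then sent to Azure Maps as a bearer token.
        Console.WriteLine($"access_token length: {accessToken?.Length}");
    }
}
```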
azure-maps | How To Secure Sas App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-sas-app.md | az rest --method GET --url 'https://us.atlas.microsoft.com/search/address/json?a ## Real-world example -You can run requests to Azure Maps APIs from most clients, like C#, Java, or JavaScript. [Postman](https://learning.postman.com/docs/sending-requests/generate-code-snippets) converts an API request into a basic client code snippet in almost any programming language or framework you choose. You can use this generated code snippet in your front-end applications. +You can run requests to Azure Maps APIs from most clients, like C#, Java, or JavaScript. API development platforms like [bruno](https://www.usebruno.com) or [Postman](https://learning.postman.com/docs/sending-requests/generate-code-snippets) can convert an API request into a basic client code snippet in almost any programming language or framework you choose. You can use the generated code snippets in your front-end applications. The following small JavaScript code example shows how you could use your SAS token with the JavaScript [Fetch API](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch#supplying_request_options) to get and return Azure Maps information. The example uses [Get Search Address](/rest/api/maps/search/get-search-address) API version 1.0. Supply your own value for `<your SAS token>`. |
azure-maps | How To Use Best Practices For Routing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-best-practices-for-routing.md | The Route Directions and Route Matrix APIs in Azure Maps [Route service] can be For more information about the coverage of the Route service, see [Routing Coverage]. -This article uses the [Postman] application to build REST calls, but you can choose any API development environment. +You can use any API development environment such as [Postman] or [bruno] to run the HTTP request samples shown in this article or to build REST calls. ## Choose between Route Directions and Matrix Routing Consider calling Matrix Routing API if your scenario is to: * Calculate the travel time or distance between a set of origins and destinations. For example, you have 12 drivers and you need to find the closest available driver to pick up the food delivery from the restaurant. * Sort potential routes by their actual travel distance or time. The Matrix API returns only travel times and distances for each origin and destination combination.-* Cluster data based on travel time or distances. For example, your company has 50 employees, find all employees that live within 20 minute Drive Time from your office. +* Cluster data based on travel time or distances. For example, your company has 50 employees, find all employees that live within 20 minute drive time from your office. Here's a comparison to show some capabilities of the Route Directions and Matrix APIs: To learn more, please see: [Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account [Azure Maps npm Package]: https://www.npmjs.com/package/azure-maps-rest [Azure Maps Route service]: /rest/api/maps/route+[bruno]: https://www.usebruno.com/ [How to use the Service module]: how-to-use-services-module.md [Point of Interest]: /rest/api/maps/search/getsearchpoi?view=rest-maps-1.0&preserve-view=true [Post Route Directions API documentation]: /rest/api/maps/route/postroutedirections#supportingpoints |
azure-maps | How To Use Best Practices For Search | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-best-practices-for-search.md | This article explains how to apply sound practices when you call data from Azure * An [Azure Maps account] * A [subscription key] -This article uses the [Postman] application to build REST calls, but you can choose any API development environment. +You can use any API development environment such as [Postman] or [bruno] to run the HTTP request samples shown in this article or to build REST calls. ## Best practices to geocode addresses Use the `language` parameter to set the language for the returned search results For more information, see [Azure Maps supported languages]. - ### Use predictive mode (automatic suggestions) To find more matches for partial queries, set the `typeahead` parameter to `true`. This query is interpreted as a partial input, and the search enters predictive mode. If you don't set the `typeahead` parameter to `true`, then the service assumes that all relevant information has been passed in. https://atlas.microsoft.com/search/address/json?subscription-key={Your-Azure-Map } ``` --### Encode a URI to handle special characters +### Encode a URI to handle special characters To find cross street addresses, you must encode the URI to handle special characters in the address. Consider this address example: *1st Avenue & Union Street, Seattle*. Here, encode the ampersand character (`&`) before you send the request. https://atlas.microsoft.com/search/poi/json?subscription-key={Your-Azure-Maps-Su } ``` - ### Airport search By using the Search POI API, you can look for airports by using their official code. For example, you can use *SEA* to find the Seattle-Tacoma International Airport: To learn more, please see: > [!div class="nextstepaction"] > [Search service API documentation](/rest/api/maps/search?view=rest-maps-1.0&preserve-view=true) -[Search service]: /rest/api/maps/search?view=rest-maps-1.0&preserve-view=true -[Search Fuzzy]: /rest/api/maps/search/getsearchfuzzy?view=rest-maps-1.0&preserve-view=true [Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account-[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account -[Postman]: https://www.postman.com/downloads/ +[Azure Maps supported languages]: supported-languages.md +[bruno]: https://www.usebruno.com/ [Geocoding coverage]: geocoding-coverage.md-[Search Address Reverse]: /rest/api/maps/search/getsearchaddressreverse?view=rest-maps-1.0&preserve-view=true -[POI category search]: /rest/api/maps/search/getsearchpoicategory?view=rest-maps-1.0&preserve-view=true -[Search Nearby]: /rest/api/maps/search/getsearchnearby?view=rest-maps-1.0&preserve-view=true [Get Search Address]: /rest/api/maps/search/getsearchaddress?view=rest-maps-1.0&preserve-view=true--[Azure Maps supported languages]: supported-languages.md +[POI category search]: /rest/api/maps/search/getsearchpoicategory?view=rest-maps-1.0&preserve-view=true +[Postman]: https://www.postman.com/downloads/ +[Search Address Reverse]: /rest/api/maps/search/getsearchaddressreverse?view=rest-maps-1.0&preserve-view=true [Search Address]: /rest/api/maps/search/getsearchaddress?view=rest-maps-1.0&preserve-view=true+[Search Fuzzy]: /rest/api/maps/search/getsearchfuzzy?view=rest-maps-1.0&preserve-view=true +[Search Nearby]: /rest/api/maps/search/getsearchnearby?view=rest-maps-1.0&preserve-view=true +[Search POIs inside the geometry]: 
/rest/api/maps/search/postsearchinsidegeometry?view=rest-maps-1.0&preserve-view=true [Search Polygon service]: /rest/api/maps/search/getsearchpolygon?view=rest-maps-1.0&preserve-view=true+[Search service]: /rest/api/maps/search?view=rest-maps-1.0&preserve-view=true [Set up a geofence]: tutorial-geofence.md-[Search POIs inside the geometry]: /rest/api/maps/search/postsearchinsidegeometry?view=rest-maps-1.0&preserve-view=true +[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account |
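Tying back to the "Encode a URI to handle special characters" guidance in the row above, here's a small hedged sketch that uses Python's standard `urllib.parse.quote` to percent-encode the cross-street example before calling Search Address; the subscription key is a placeholder.

```python
# Sketch: percent-encode a cross-street address (the "&" must not be sent raw)
# before calling the Search Address API. The subscription key is a placeholder.
from urllib.parse import quote

import requests

address = "1st Avenue & Union Street, Seattle"
encoded = quote(address)  # -> "1st%20Avenue%20%26%20Union%20Street%2C%20Seattle"

response = requests.get(
    f"https://atlas.microsoft.com/search/address/json?api-version=1.0&query={encoded}",
    params={"subscription-key": "<Your-Azure-Maps-Subscription-key>"},
    timeout=30,
)
response.raise_for_status()
print(response.json()["results"][0]["position"])
```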
azure-maps | Tutorial Iot Hub Maps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-iot-hub-maps.md | If you don't have an Azure subscription, create a [free account] before you begi * The [rentalCarSimulation] C# project > [!TIP]-> You can download the entire [rentalCarSimulation] C# project from GitHub as a single ZIP file by going to [the root of the sample] and selecting the green **<> Code** button, then **Download ZIP**. +> You can download the entire [rentalCarSimulation] C# project from GitHub as a single ZIP file by going to [the root of the sample] and selecting the green **Code** button, then **Download ZIP**. -This tutorial uses the [Postman] application, but you can choose a different API development environment. +This tutorial uses the [bruno] application, but you can choose a different API development environment. ->[!IMPORTANT] +> [!IMPORTANT] > In the URL examples, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key. ## Use case: rental car tracking Now, set up your Azure function. 1. After the app is created, you add a function to it. Go to the function app. Select the **Create in Azure Portal** button. - >[!IMPORTANT] + > [!IMPORTANT] > The **Azure Event ***Hub*** Trigger** and the **Azure Event ***Grid*** Trigger** templates have similar names. Make sure you select the **Azure Event ***Grid*** Trigger** template. :::image type="content" source="./media/tutorial-iot-hub-maps/function-create.png" alt-text="Screenshot of create a function in Azure Portal."::: In your example scenario, you only want to receive messages when the rental car :::image type="content" source="./media/tutorial-iot-hub-maps/hub-filter.png" alt-text="Screenshot of filter routing messages."::: ->[!TIP] +> [!TIP] >There are various ways to query IoT device-to-cloud messages. To learn more about message routing syntax, see [IoT Hub message routing]. ## Send telemetry data to IoT Hub To learn more about how to send device-to-cloud telemetry, and the other way aro [Azure Functions]: ../azure-functions/functions-overview.md [Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account [Azure Maps REST APIs]: /rest/api/maps/spatial/getgeofence+[bruno]: https://www.usebruno.com/ [C# script]: https://github.com/Azure-Samples/iothub-to-azure-maps-geofencing/blob/master/src/Azure%20Function/run.csx [create a storage account]: ../storage/common/storage-account-create.md?tabs=azure-portal [Create an Azure storage account]: #create-an-azure-storage-account To learn more about how to send device-to-cloud telemetry, and the other way aro [IoT Plug and Play]: ../iot/overview-iot-plug-and-play.md [geofence JSON data file]: https://raw.githubusercontent.com/Azure-Samples/iothub-to-azure-maps-geofencing/master/src/Data/geofence.json?token=AKD25BYJYKDJBJ55PT62N4C5LRNN4 [Plug and Play schema for geospatial data]: https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v1-preview/schemas/geospatial.md-[Postman]: https://www.postman.com/ [register a new device in the IoT hub]: ../iot-hub/create-connect-device.md [rentalCarSimulation]: https://github.com/Azure-Samples/iothub-to-azure-maps-geofencing/tree/master/src/rentalCarSimulation [resource group]: ../azure-resource-manager/management/manage-resource-groups-portal.md#create-resource-groups |
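The tutorial above drives its scenario with the C# rentalCarSimulation project. Purely as a hypothetical alternative sketch (not part of the tutorial), a registered device could push similar location telemetry from Python with the `azure-iot-device` SDK; the connection string and payload fields below are placeholders, not the tutorial's exact schema.

```python
# Hypothetical sketch: send one car-location telemetry message to IoT Hub.
# The device connection string and the payload shape are placeholder assumptions.
import json

from azure.iot.device import IoTHubDeviceClient, Message

CONNECTION_STRING = "<device-connection-string>"  # from your registered IoT device

client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
client.connect()

message = Message(json.dumps({"latitude": 47.638928, "longitude": -122.133370}))
message.content_type = "application/json"
message.content_encoding = "utf-8"

client.send_message(message)  # IoT Hub then routes the message per your routing query
client.shutdown()
```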
azure-monitor | Azure Monitor Agent Data Field Differences | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-data-field-differences.md | Title: Data field differences between MMA and AMA - description: Documents the field-level data changes made in the migration. Last updated 06/21/2024- Customer intent: As an Azure administrator, I want to understand which Log Analytics Workspace queries I may need to update after AMA migration.- # AMA agent data field differences from MMA+ [Azure Monitor Agent (AMA)](./agents-overview.md) replaces the Log Analytics agent, also known as Microsoft Monitor Agent (MMA) and OMS, for Windows and Linux machines, in Azure and non-Azure environments, on-premises and other clouds. The agent introduces a simplified, flexible method of configuring data collection using [Data Collection Rules (DCRs)](../essentials/data-collection-rule-overview.md). The article provides information on the data fields that change when collected by AMA, which is critical information for you to migrate your LAW queries. Each of the data changes was carefully considered and the rationale for each change is provided in the table. If you encounter a data field that isn't in the tables, file a support request. Your help keeping the tables current and complete is appreciated. ## Log Analytics workspace tables -### W3CIISLog Table for Internet Information Services (IIS) ++### W3CIISLog table for Internet Information Services (IIS) + This table collects log data from Internet Information Services (IIS) on Windows systems. -|LAW Field | Difference | Reason| Additional Information | -||||| +| LAW Field | Difference | Reason | Additional Information | +|--||--|| | sSiteName | Not populated | Depends on customer data collection configuration | The MMA agent could turn on collection by default, but by principle is restricted from making configuration changes in other services.<p>Enable the `Service Name (s-sitename)` field in W3C logging of IIS. See [Select W3C Fields to Log](/iis/manage/provisioning-and-managing-iis/configure-logging-in-iis#select-w3c-fields-to-log).|-| Fileuri | No longer populated | not required for MMA parity | MMA doesn't collect this field. This field was only populated for IIS logs collected from Azure Cloud Services through the Azure Diagnostics Extension.| +| Fileuri | No longer populated | Not required for MMA parity | MMA doesn't collect this field. This field was only populated for IIS logs collected from Azure Cloud Services through the Azure Diagnostics Extension. | ### Windows event table+ This table collects Events from the Windows Event log. There are two other tables that are used to store Windows events, the SecurityEvent and Event tables. -|LAW Field | Difference | Reason| Additional Information | -||||| -| UserName | MMA enriches the event with the username prior to sending the event for ingestion. AMA do not do the same enrichment. | The AMA enrichment is not yet implemented. | AMA principles dictate that the event data should remain unchanged by default. Adding and enriched field adds possible processing errors and additional cost for storage. In this case, the customer demand for the field is very high and work is underway to add the username. | +| LAW Field | Difference | Reason | Additional Information | +|--||--|| +| UserName | MMA enriches the event with the username before sending the event for ingestion. AMA doesn't do the same enrichment. | The AMA enrichment isn't implemented yet. 
| AMA principles dictate that the event data should remain unchanged by default. Adding an enriched field adds possible processing errors and extra costs for storage. In this case, the customer demand for the field is very high and work is underway to add the username. | ++### Perf table for performance counters ++The Perf table collects performance counters from Windows and Linux agents. It offers valuable insights into the performance of hardware components, operating systems, and applications. The following table shows key differences in how data is reported between OMS and Azure Monitor Agent (AMA). +| LAW Field | Difference | Reason | Additional Information | +|--||--|| +| InstanceName | Reported as **_Total** by OMS<br>Reported as **total** by AMA | | Where `ObjectName` is **"Logical Disk"** and `CounterName` is **"% Used Space"**, the `InstanceName` value is reported as **_Total** for records ingested by the OMS agent, and as **total** for records ingested by the Azure Monitor Agent (AMA).\* | +| CounterValue | Is rounded to the nearest whole number by OMS but not rounded by AMA | | Where `ObjectName` is **"Logical Disk"** and `CounterName` is **"% Used Space"**, the `CounterValue` value is rounded to the nearest whole number for records ingested by the OMS agent but not rounded for records ingested by the Azure Monitor Agent (AMA).\* | ++\* Doesn't apply to records ingested by the Microsoft Monitoring Agent (MMA) on Windows. + ## Next steps-- [Azure Monitor Agent migration helper workbook](./azure-monitor-agent-migration-helper-workbook.md)-- [DCR Config Generator](./azure-monitor-agent-migration-data-collection-rule-generator.md)++* [Azure Monitor Agent migration helper workbook](./azure-monitor-agent-migration-helper-workbook.md) +* [DCR Config Generator](./azure-monitor-agent-migration-data-collection-rule-generator.md) |
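For readers updating workspace queries after migration, the following is a hedged sketch of running a Perf query that tolerates both `InstanceName` spellings called out in the table above (`_Total` from OMS, `total` from AMA), using the `azure-monitor-query` and `azure-identity` Python packages; the workspace ID is a placeholder.

```python
# Sketch: match Perf records written by either agent, since OMS reports the
# instance as "_Total" and AMA reports it as "total". Workspace ID is a placeholder.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

QUERY = """
Perf
| where ObjectName == "Logical Disk" and CounterName == "% Used Space"
| where InstanceName in ("_Total", "total")
| summarize avg(CounterValue) by Computer, InstanceName
"""

client = LogsQueryClient(DefaultAzureCredential())
result = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",
    query=QUERY,
    timespan=timedelta(days=1),
)

for table in result.tables:
    for row in table.rows:
        print(list(row))
```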
azure-monitor | Azure Monitor Agent Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md | If however you're using System Center Operations Manager (SCOM), keep the MMA ag A SCOM Admin Management Pack exists and can help you remove the workspace configurations at scale while retaining the SCOM Management Group configuration. For more information on the SCOM Admin Management Pack, see [SCOM Admin Management Pack](https://github.com/thekevinholman/SCOM.Management). -## Known parity gaps that may impact your migration -+## Known migration issues - IIS Logs: When IIS log collection is enabled, AMA might not populate the `sSiteName` column of the `W3CIISLog` table. This field gets collected by default when IIS log collection is enabled for the legacy agent. If you need to collect the `sSiteName` field using AMA, enable the `Service Name (s-sitename)` field in W3C logging of IIS. For steps to enable this field, see [Select W3C Fields to Log](/iis/manage/provisioning-and-managing-iis/configure-logging-in-iis#select-w3c-fields-to-log).--- Sentinel: Windows Firewall logs aren't generally available (GA) yet. - SQL Assessment Solution: This is now part of SQL best practice assessment. The deployment policies require one Log Analytics Workspace per subscription, which isn't the best practice recommended by the AMA team.-- Microsoft Defender for cloud: Some features for the new agent-less solution are in development. Your migration maybe impacted if you use File Integrity Monitoring (FIM), Endpoint protection discovery recommendations, OS Misconfigurations (Azure Security Benchmark (ASB) recommendations) and Adaptive Application controls.+- Microsoft Defender for Cloud: This service is moving to an agentless solution. Some features won't be ready by the deprecation date. Customers should stay on MMA for machines that use File Integrity Monitoring (FIM), Endpoint protection discovery recommendations, OS Misconfigurations (Azure Security Benchmark (ASB) recommendations), and Adaptive Application controls. +- Update Management is moving to an agentless solution, but it won't be ready by the MMA deprecation date. Customers that use Update Management should stay on MMA until the new Azure Update Manager service is ready. + ## Next steps |
azure-monitor | Data Collection Performance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-performance.md | -**Performance counters** is one of the data sources used in a [data collection rule (DCR)](../essentials/data-collection-rule-create-edit.md). Details for the creation of the DCR are provided in [Collect data with Azure Monitor Agent](./azure-monitor-agent-data-collection.md). This article provides additional details for the Windows events data source type. ++**Performance counters** is one of the data sources used in a [data collection rule (DCR)](../essentials/data-collection-rule-create-edit.md). Details for the creation of the DCR are provided in [Collect data with Azure Monitor Agent](./azure-monitor-agent-data-collection.md). This article provides more details for the performance counters data source type. Performance counters provide insight into the performance of hardware components, operating systems, and applications. [Azure Monitor Agent](azure-monitor-agent-overview.md) can collect performance counters from Windows and Linux machines at frequent intervals for near real time analysis. ## Prerequisites -- If you are going to send performance data to a [Log Analytics workspace](../logs/log-analytics-workspace-overview.md), then you must have one created where you have at least [contributor rights](../logs/manage-access.md#azure-rbac)..-- Either a new or existing DCR described in [Collect data with Azure Monitor Agent](./azure-monitor-agent-data-collection.md).+* If you're going to send performance data to a [Log Analytics workspace](../logs/log-analytics-workspace-overview.md), then you must have one created where you have at least [contributor rights](../logs/manage-access.md#azure-rbac). +* Either a new or existing DCR described in [Collect data with Azure Monitor Agent](./azure-monitor-agent-data-collection.md). -## Configure performance counters data source +## Configure performance counters data source Create a data collection rule, as described in [Collect data with Azure Monitor Agent](./azure-monitor-agent-data-collection.md). In the **Collect and deliver** step, select **Performance Counters** from the **Data source type** dropdown. For performance counters, select from a predefined set of objects and their samp Select **Custom** to specify an [XPath](https://www.w3schools.com/xml/xpath_syntax.asp) to collect any performance counters not available by default. Use the format `\PerfObject(ParentInstance/ObjectInstance#InstanceIndex)\Counter`. If the counter name contains an ampersand (&), replace it with `&amp;`. For example, `\Memory\Free &amp; Zero Page List Bytes`. You can view the default counters for examples. :::image type="content" source="media/data-collection-performance/data-source-performance-custom.png" lightbox="media/data-collection-performance/data-source-performance-custom.png" alt-text="Screenshot that shows the Azure portal form to select custom performance counters in a data collection rule." border="false":::- > [!NOTE] > At this time, Microsoft.HybridCompute ([Azure Arc-enabled servers](../../azure-arc/servers/overview.md)) resources can't be viewed in [Metrics Explorer](../essentials/metrics-getting-started.md) (the Azure portal UX), but they can be acquired via the Metrics REST API (Metric Namespaces - List, Metric Definitions - List, and Metrics - List). - ## Destinations Performance counters data can be sent to the following locations. 
-| Destination | Table / Namespace | -|:|:| -| Log Analytics workspace | [Perf](/azure/azure-monitor/reference/tables/perf) | -| Azure Monitor Metrics | Windows: Virtual Machine Guest<br>Linux: azure.vm.linux.guestmetrics +| Destination | Table / Namespace | +|:|:| +| Log Analytics workspace | Perf (see [Azure Monitor Logs reference](/azure/azure-monitor/reference/tables/perf#columns)) | +| Azure Monitor Metrics | Windows: Virtual Machine Guest<br>Linux: azure.vm.linux.guestmetrics | - > [!NOTE] > On Linux, using Azure Monitor Metrics as the only destination is supported in v1.10.9.0 or higher. :::image type="content" source="media/data-collection-performance/destination-metrics.png" lightbox="media/data-collection-performance/destination-metrics.png" alt-text="Screenshot that shows configuration of an Azure Monitor Logs destination in a data collection rule."::: +## Log queries with performance records ++The following queries are examples to retrieve performance records. ++#### All performance data from a particular computer ++```query +Perf +| where Computer == "MyComputer" +``` ++#### Average CPU utilization across all computers ++```query +Perf +| where ObjectName == "Processor" and CounterName == "% Processor Time" and InstanceName == "_Total" +| summarize AVGCPU = avg(CounterValue) by Computer +``` ++#### Hourly average, minimum, maximum, and 75-percentile CPU usage for a specific computer ++```query +Perf +| where CounterName == "% Processor Time" and InstanceName == "_Total" and Computer == "MyComputer" +| summarize ["min(CounterValue)"] = min(CounterValue), ["avg(CounterValue)"] = avg(CounterValue), ["percentile75(CounterValue)"] = percentile(CounterValue, 75), ["max(CounterValue)"] = max(CounterValue) by bin(TimeGenerated, 1h), Computer +``` ++> [!NOTE] +> Additional query examples are available at [Queries for the Perf table](/azure/azure-monitor/reference/queries/perf). + ## Next steps -- [Collect text logs by using Azure Monitor Agent](data-collection-text-log.md).-- Learn more about [Azure Monitor Agent](azure-monitor-agent-overview.md).-- Learn more about [data collection rules](../essentials/data-collection-rule-overview.md).+* [Collect text logs by using Azure Monitor Agent](data-collection-text-log.md). +* Learn more about [Azure Monitor Agent](azure-monitor-agent-overview.md). +* Learn more about [data collection rules](../essentials/data-collection-rule-overview.md). |
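To complement the custom performance counter guidance in the row above, here's a hedged sketch, written as a Python dict, of what the `performanceCounters` data source fragment of a DCR could look like. The data source name, stream, sampling interval, and counter list are illustrative assumptions rather than values taken from the article, with the article's ampersand example escaped as `&amp;`.

```python
# Hedged sketch of a DCR "performanceCounters" data source fragment, expressed
# as a Python dict you could serialize into an ARM template or REST payload.
# All names and values here are illustrative assumptions.
import json

data_sources = {
    "performanceCounters": [
        {
            "name": "perfCounterDataSource60",  # illustrative name
            "streams": ["Microsoft-Perf"],
            "samplingFrequencyInSeconds": 60,
            "counterSpecifiers": [
                "\\Processor(_Total)\\% Processor Time",
                "\\Memory\\Available Bytes",
                # Custom counter whose name contains an ampersand, escaped as &amp;
                "\\Memory\\Free &amp; Zero Page List Bytes",
            ],
        }
    ]
}

print(json.dumps({"dataSources": data_sources}, indent=2))
```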
azure-monitor | Asp Net Dependencies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-dependencies.md | Application Insights SDKs for .NET and .NET Core ship with `DependencyTrackingTe |[Azure Blob Storage, Table Storage, or Queue Storage](https://www.nuget.org/packages/WindowsAzure.Storage/) | Calls made with the Azure Storage client. | |[Azure Event Hubs client SDK](https://nuget.org/packages/Azure.Messaging.EventHubs) | Use the latest package: https://nuget.org/packages/Azure.Messaging.EventHubs. | |[Azure Service Bus client SDK](https://nuget.org/packages/Azure.Messaging.ServiceBus)| Use the latest package: https://nuget.org/packages/Azure.Messaging.ServiceBus. |-|[Azure Cosmos DB](https://www.nuget.org/packages/Microsoft.Azure.Cosmos) | Tracked automatically if HTTP/HTTPS is used. Tracing for operations in direct mode with TCP will also be captured automatically using preview package >= [3.33.0-preview](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/3.33.0-preview). For more details visit the [documentation](../../cosmos-db/nosql/sdk-observability.md). | +|[Azure Cosmos DB](https://www.nuget.org/packages/Microsoft.Azure.Cosmos) | Tracked automatically if HTTP/HTTPS is used. Tracing for operations in direct mode with TCP will also be captured automatically using preview package >= [3.33.0-preview](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/3.33.0-preview). For more details visit the [documentation](/azure/cosmos-db/nosql/sdk-observability). | If you're missing a dependency or using a different SDK, make sure it's in the list of [autocollected dependencies](#dependency-auto-collection). If the dependency isn't autocollected, you can track it manually with a [track dependency call](./api-custom-events-metrics.md#trackdependency). Dependencies are automatically collected by using one of the following technique The following examples of dependencies, which aren't automatically collected, require manual tracking: -* Azure Cosmos DB is tracked automatically only if [HTTP/HTTPS](../../cosmos-db/performance-tips.md#networking) is used. TCP mode won't be automatically captured by Application Insights for SDK versions older than [`2.22.0-Beta1`](https://github.com/microsoft/ApplicationInsights-dotnet/blob/main/CHANGELOG.md#version-2220-beta1). +* Azure Cosmos DB is tracked automatically only if [HTTP/HTTPS](/azure/cosmos-db/performance-tips#networking) is used. TCP mode won't be automatically captured by Application Insights for SDK versions older than [`2.22.0-Beta1`](https://github.com/microsoft/ApplicationInsights-dotnet/blob/main/CHANGELOG.md#version-2220-beta1). * Redis For those dependencies not automatically collected by SDK, you can track them manually by using the [TrackDependency API](api-custom-events-metrics.md#trackdependency) that's used by the standard autocollection modules. |
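The row above concerns the .NET SDK's automatic dependency collection and its `TrackDependency` API. Purely as an illustrative aside in Python (not the article's API), the Azure Monitor OpenTelemetry distro exports manually created CLIENT spans as dependency telemetry, which is one way to record a call, such as Redis, that isn't auto-collected; the connection string is a placeholder.

```python
# Illustrative aside (not the .NET TrackDependency API): with the Azure Monitor
# OpenTelemetry distro, a manually created CLIENT span shows up as dependency
# telemetry in Application Insights. The connection string is a placeholder.
from azure.monitor.opentelemetry import configure_azure_monitor
from opentelemetry import trace
from opentelemetry.trace import SpanKind

configure_azure_monitor(connection_string="<your-Application-Insights-connection-string>")

tracer = trace.get_tracer(__name__)

# Wrap a call that isn't auto-collected (for example, a Redis command).
with tracer.start_as_current_span("GET my-key", kind=SpanKind.CLIENT) as span:
    span.set_attribute("db.system", "redis")
    # ... perform the Redis call here ...
```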
azure-monitor | Data Collection Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-monitor.md | DCRLogErrors ```kusto DCRLogErrors | where _ResourceId == "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-resource-group/providers/microsoft.insights/datacollectionrules/my-dcr"-| where InputStream == "Custom-MyTable_CL" +| where InputStreamId == "Custom-MyTable_CL" ``` |
azure-monitor | Resource Logs Schema | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs-schema.md | The schema for resource logs varies depending on the resource and log category. | Azure Container Instances | [Logging for Azure Container Instances](../../container-instances/container-instances-log-analytics.md#log-schema) | | Azure Container Registry | [Logging for Azure Container Registry](../../container-registry/monitor-service.md) | | Azure Content Delivery Network | [Diagnostic logs for Azure Content Delivery Network](../../cdn/cdn-azure-diagnostic-logs.md) |-| Azure Cosmos DB | [Azure Cosmos DB logging](../../cosmos-db/monitor-cosmos-db.md) | +| Azure Cosmos DB | [Azure Cosmos DB logging](/azure/cosmos-db/monitor-cosmos-db) | | Azure Data Explorer | [Azure Data Explorer logs](/azure/data-explorer/using-diagnostic-logs) | | Azure Data Factory | [Monitor Data Factory by using Azure Monitor](../../data-factory/monitor-using-azure-monitor.md) | | Azure Data Lake Analytics |[Accessing logs for Azure Data Lake Analytics](../../data-lake-analytics/data-lake-analytics-diagnostic-logs.md) | | Azure Data Lake Storage |[Accessing logs for Azure Data Lake Storage](../../data-lake-store/data-lake-store-diagnostic-logs.md) |-| Azure Database for MySQL | [Azure Database for MySQL diagnostic logs](../../mysql/concepts-server-logs.md#diagnostic-logs) | -| Azure Database for PostgreSQL | [Azure Database for PostgreSQL logs](../../postgresql/concepts-server-logs.md#resource-logs) | +| Azure Database for MySQL | [Azure Database for MySQL diagnostic logs](/azure/mysql/concepts-server-logs#diagnostic-logs) | +| Azure Database for PostgreSQL | [Azure Database for PostgreSQL logs](/azure/postgresql/concepts-server-logs#resource-logs) | | Azure Databricks | [Diagnostic logging in Azure Databricks](/azure/databricks/administration-guide/account-settings/azure-diagnostic-logs) | | Azure DDoS Protection | [Logging for Azure DDoS Protection](../../ddos-protection/ddos-view-diagnostic-logs.md#example-log-queries) | | Azure Digital Twins | [Set up Azure Digital Twins diagnostics](../../digital-twins/troubleshoot-diagnostics.md#log-schemas) |
azure-monitor | Insights Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/insights-overview.md | The following table lists the available curated visualizations and information a | [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/storageInsights) | Provides comprehensive monitoring of your Azure Storage accounts by delivering a unified view of your Azure Storage services performance, capacity, and availability. | | [Azure Backup](../../backup/backup-azure-monitoring-use-azuremonitor.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_DataProtection/BackupCenterMenuBlade/backupReportsConfigure/menuId/backupReportsConfigure) | Provides built-in monitoring and alerting capabilities in a Recovery Services vault. | |**Databases**||||-| [Azure Cosmos DB Insights](../../cosmos-db/cosmosdb-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/cosmosDBInsights) | Provides a view of the overall performance, failures, capacity, and operational health of all your Azure Cosmos DB resources in a unified interactive experience. | +| [Azure Cosmos DB Insights](/azure/cosmos-db/cosmosdb-insights-overview) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/cosmosDBInsights) | Provides a view of the overall performance, failures, capacity, and operational health of all your Azure Cosmos DB resources in a unified interactive experience. | | [Azure Monitor for Azure Cache for Redis (preview)](../../azure-cache-for-redis/redis-cache-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/redisCacheInsights) | Provides a unified, interactive view of overall performance, failures, capacity, and operational health. | |**Analytics**|||| | [Azure Data Explorer Insights](/azure/data-explorer/data-explorer-insights) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/adxClusterInsights) | Azure Data Explorer Insights provides comprehensive monitoring of your clusters by delivering a unified view of your cluster performance, operations, usage, and failures. | |
azure-monitor | Analyze Usage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/analyze-usage.md | Title: Analyze usage in a Log Analytics workspace in Azure Monitor description: Methods and queries to analyze the data in your Log Analytics workspace to help you understand usage and potential cause for high usage. Previously updated : 10/23/2023 Last updated : 08/14/2024 # Analyze usage in a Log Analytics workspace find where TimeGenerated between(startofday(ago(1d))..startofday(now())) project | sort by eventCount desc nulls last ``` -### Querying for data volumes excluding known free data types -The following query will return the monthly data volume in GB, excluding all data types which are supposed to be free from data ingestion charges: --```kusto -let freeTables = dynamic([ -"AppAvailabilityResults","AppSystemEvents","ApplicationInsights","AzureActivity","AzureNetworkAnalyticsIPDetails_CL", -"AzureNetworkAnalytics_CL","AzureTrafficAnalyticsInsights_CL","ComputerGroup","DefenderIoTRawEvent","Heartbeat", -"MAApplication","MAApplicationHealth","MAApplicationHealthIssues","MAApplicationInstance","MAApplicationInstanceReadiness", -"MAApplicationReadiness","MADeploymentPlan","MADevice","MADeviceNotEnrolled","MADeviceReadiness","MADriverInstanceReadiness", -"MADriverReadiness","MAProposedPilotDevices","MAWindowsBuildInfo","MAWindowsCurrencyAssessment", -"MAWindowsCurrencyAssessmentDailyCounts","MAWindowsDeploymentStatus","NTAIPDetails_CL","NTANetAnalytics_CL", -"OfficeActivity","Operation","SecurityAlert","SecurityIncident","UCClient","UCClientReadinessStatus", -"UCClientUpdateStatus","UCDOAggregatedStatus","UCDOStatus","UCDeviceAlert","UCServiceUpdateStatus","UCUpdateAlert", -"Usage","WUDOAggregatedStatus","WUDOStatus","WaaSDeploymentStatus","WaaSInsiderStatus","WaaSUpdateStatus"]); -Usage -| where DataType !in (freeTables) -| where TimeGenerated > ago(30d) -| summarize MonthlyGB=sum(Quantity)/1000 -``` --To look for data which might not have IsBillable correctly set (and which could result in incorrect billing, or more specifically under-billing), use this query on your workspace: --```kusto -let freeTables = dynamic([ -"AppAvailabilityResults","AppSystemEvents","ApplicationInsights","AzureActivity","AzureNetworkAnalyticsIPDetails_CL", -"AzureNetworkAnalytics_CL","AzureTrafficAnalyticsInsights_CL","ComputerGroup","DefenderIoTRawEvent","Heartbeat", -"MAApplication","MAApplicationHealth","MAApplicationHealthIssues","MAApplicationInstance","MAApplicationInstanceReadiness", -"MAApplicationReadiness","MADeploymentPlan","MADevice","MADeviceNotEnrolled","MADeviceReadiness","MADriverInstanceReadiness", -"MADriverReadiness","MAProposedPilotDevices","MAWindowsBuildInfo","MAWindowsCurrencyAssessment", -"MAWindowsCurrencyAssessmentDailyCounts","MAWindowsDeploymentStatus","NTAIPDetails_CL","NTANetAnalytics_CL", -"OfficeActivity","Operation","SecurityAlert","SecurityIncident","UCClient","UCClientReadinessStatus", -"UCClientUpdateStatus","UCDOAggregatedStatus","UCDOStatus","UCDeviceAlert","UCServiceUpdateStatus","UCUpdateAlert", -"Usage","WUDOAggregatedStatus","WUDOStatus","WaaSDeploymentStatus","WaaSInsiderStatus","WaaSUpdateStatus"]); -Usage -| where DataType !in (freeTables) -| where TimeGenerated > ago(30d) -| where IsBillable == false -| summarize MonthlyPotentialUnderbilledGB=sum(Quantity)/1000 by DataType -``` - ## Querying for common data types If you find that you have excessive billable data for a particular data type, you might need to perform a query to 
analyze data in that table. The following queries provide samples for some common data types: |
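As a hedged companion to the usage analysis this row discusses, the sketch below runs a billable-volume-by-table query with the `azure-monitor-query` Python package; the workspace ID is a placeholder, and dividing `Quantity` by 1,000 to get GB follows the article's own Usage queries.

```python
# Sketch: billable ingestion volume (GB) per table over the last 30 days.
# The workspace ID is a placeholder.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

QUERY = """
Usage
| where TimeGenerated > ago(30d)
| where IsBillable == true
| summarize BillableGB = sum(Quantity) / 1000 by DataType
| sort by BillableGB desc
"""

client = LogsQueryClient(DefaultAzureCredential())
result = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",
    query=QUERY,
    timespan=timedelta(days=30),
)

for row in result.tables[0].rows:
    print(f"{row[0]}: {row[1]:.2f} GB")
```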
azure-monitor | Snapshot Debugger Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-data.md | You can view debug snapshots in the portal to see the call stack and inspect var ## View Snapshots in the Portal -After an exception has occurred in your application and a snapshot has been created, you should have snapshots to view in the Azure portal within 5 to 10 minutes. To view snapshots, in the **Failure** pane, either: +After an exception has occurred in your application and a snapshot is created, you can view snapshots in the Azure portal within 5 to 10 minutes. To view snapshots, in the **Failure** pane, either: * Select the **Operations** button when viewing the **Operations** tab, or * Select the **Exceptions** button when viewing the **Exceptions** tab. :::image type="content" source="./media/snapshot-debugger/failures-page.png" alt-text="Screenshot showing the Failures Page in Azure portal."::: -Select an operation or exception in the right pane to open the **End-to-End Transaction Details** pane, then select the exception event. If a snapshot is available for the given exception, an **Open Debug Snapshot** button appears on the right pane with details for the [exception](../app/asp-net-exceptions.md). +Select an operation or exception in the right pane to open the **End-to-End Transaction Details** pane, then select the exception event. +- If a snapshot is available for the given exception, select the **Open debug snapshot** button on the right pane to see details for the [exception](../app/asp-net-exceptions.md). +- If you don't see this button, a snapshot might not be available. See the [troubleshooting guide](./snapshot-debugger-troubleshoot.md#use-the-snapshot-health-check). :::image type="content" source="./media/snapshot-debugger/e2e-transaction-page.png" alt-text="Screenshot showing the Open Debug Snapshot button on exception."::: In the Debug Snapshot view, you see a call stack and a variables pane. When you :::image type="content" source="./media/snapshot-debugger/open-snapshot-portal.png" alt-text="Screenshot showing the Open debug snapshot highlighted in the Azure portal."::: -Snapshots might include sensitive information. By default, you can only view snapshots if you've been assigned the `Application Insights Snapshot Debugger` role. +Snapshots might include sensitive information. By default, you can only view snapshots if you are assigned the `Application Insights Snapshot Debugger` role. -## View Snapshots in Visual Studio 2017 Enterprise or above +## View Snapshots in Visual Studio 2017 Enterprise or later 1. Click the **Download Snapshot** button to download a `.diagsession` file, which can be opened by Visual Studio Enterprise. -1. To open the `.diagsession` file, you need to have the Snapshot Debugger Visual Studio component installed. 
The Snapshot Debugger component is a required component of the ASP.NET workload in Visual Studio and can be selected from the Individual Component list in the Visual Studio installer. If you're using a version of Visual Studio before Visual Studio 2017 version 15.5, you need to install the extension from the [Visual Studio Marketplace](https://aka.ms/snapshotdebugger). 1. After you open the snapshot file, the Minidump Debugging page in Visual Studio appears. Click **Debug Managed Code** to start debugging the snapshot. The snapshot opens to the line of code where the exception was thrown so that you can debug the current state of the process. |
azure-netapp-files | Application Volume Group Considerations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/application-volume-group-considerations.md | This article describes the requirements and considerations you need to be aware ## Requirements and considerations * You need to use the [manual QoS capacity pool](manage-manual-qos-capacity-pool.md) functionality. -* You must create a proximity placement group (PPG) and anchor it to your SAP HANA compute resources. Application volume group for SAP HANA needs this setup to search for an Azure NetApp Files resource that is close to the SAP HANA servers. For more information, see [Best practices about Proximity Placement Groups](#best-practices-about-proximity-placement-groups) and [Create a Proximity Placement Group using the Azure portal](../virtual-machines/windows/proximity-placement-groups-portal.md). +* You must create a proximity placement group (PPG) and anchor it to your SAP HANA compute resources. Application volume group for SAP HANA needs this setup to search for an Azure NetApp Files resource that is close to the SAP HANA servers. For more information, see [Best practices about Proximity Placement Groups](#best-practices-about-proximity-placement) and [Create a Proximity Placement Group using the Azure portal](../virtual-machines/windows/proximity-placement-groups-portal.md). >[!NOTE] >Do not delete the PPG. Deleting a PPG removes the pinning and can cause subsequent volume groups to be created in sub-optimal locations which could lead to increased latency. This article describes the requirements and considerations you need to be aware * Extension 1 supports [availability zone volume placement](use-availability-zones.md) as the new default method for placement. This upgrade mitigates the need for AVset pinning and eliminates the need for proximity placement groups. With support for availability zone volume placement, you only need to select the same availability zone as the database servers. Using availability zone volume placement aligns with the Microsoft recommendation on how to deploy SAP HANA infrastructures to achieve best performance with high-availability, maximum flexibility, and simplified deployment. If regions do not support availability zones, you can select a regional deployment or choose proximity placement groups. -## Best practices about proximity placement groups +## Best practices about proximity placement To deploy SAP HANA volumes using the application volume group, you need to ensure that your HANA database VMs and the Azure NetApp Files resources are in close proximity to ensure lowest possible latency. You can achieve close proximity using either of the following deployment methods: |
azure-netapp-files | Application Volume Group Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/application-volume-group-introduction.md | Application volume group for SAP HANA is supported for all regions where Azure N Application volume group for SAP HANA helps you simplify the deployment process and increase the storage performance for SAP HANA workloads. Some of the new features are as follows: * Use of proximity placement group (PPG) instead of manual pinning.- * You anchor the SAP HANA VMs using a PPG to guaranty lowest possible latency. The PPG enforces the creation of data, log, and shared volumes in the close proximity to the SAP HANA VMs. See [Best practices about Proximity Placement Groups](application-volume-group-considerations.md#best-practices-about-proximity-placement-groups) for details. + * You anchor the SAP HANA VMs using a PPG to guarantee the lowest possible latency. The PPG enforces the creation of data, log, and shared volumes in close proximity to the SAP HANA VMs. See [Best practices about Proximity Placement Groups](application-volume-group-considerations.md#best-practices-about-proximity-placement) for details. * Creation of separate storage endpoints (with different IP addresses) for data and log volumes. * This deployment method provides better performance and throughput for the SAP HANA database. |
azure-netapp-files | Azure Netapp Files Network Topologies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md | -Azure NetApp Files volumes are designed to be contained in a special purpose subnet called a [delegated subnet](../virtual-network/virtual-network-manage-subnet.md) within your Azure Virtual Network. Therefore, you can access the volumes directly from within Azure over VNet peering or from on-premises over a Virtual Network Gateway (ExpressRoute or VPN Gateway). The subnet is dedicated to Azure NetApp Files and there's no connectivity to the Internet. +Azure NetApp Files volumes are designed to be contained in a special purpose subnet called a [delegated subnet](../virtual-network/virtual-network-manage-subnet.md) within your Azure Virtual Network. Therefore, you can access the volumes directly from within Azure over virtual network (VNet) peering or from on-premises over a Virtual Network Gateway (ExpressRoute or VPN Gateway). The subnet is dedicated to Azure NetApp Files and there's no connectivity to the Internet. <a name="regions-standard-network-features"></a>The option to set Standard network features on new volumes and to modify network features for existing volumes is supported in all Azure NetApp Files-enabled regions. The following table describes what's supported for each network features confi | Features | Standard network features | Basic network features | ||||-| Number of IPs in a VNet (including immediately peered VNets) accessing volumes in an Azure NetApp Files hosting VNet | [Same standard limits as VMs](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits) | 1000 | +| Number of IPs in a VNet (including immediately peered VNets) accessing volumes in an Azure NetApp Files hosting VNet | [Same standard limits as virtual machines (VMs)](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits) | 1000 | | Azure NetApp Files delegated subnets per VNet | 1 | 1 | | [Network Security Groups](../virtual-network/network-security-groups-overview.md) (NSGs) on Azure NetApp Files delegated subnets | Yes | No | | [User-defined routes](../virtual-network/virtual-networks-udr-overview.md#user-defined) (UDRs) on Azure NetApp Files delegated subnets | Yes | No | If the VNet is peered with another VNet, you can't expand the VNet address space > > It's also recommended that the size of the delegated subnet be at least /25 for SAP workloads and /26 for other workload scenarios. -### UDRs and NSGs +### <a name="udrs-and-nsgs"></a> User-defined routes (UDRs) and network security groups (NSGs) If the subnet has a combination of volumes with the Standard and Basic network features, user-defined routes (UDRs) and network security groups (NSGs) applied on the delegated subnets will only apply to the volumes with the Standard network features. |
azure-netapp-files | Azure Netapp Files Quickstart Set Up Account Create Volumes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-quickstart-set-up-account-create-volumes.md | Use the Azure portal, PowerShell, or the Azure CLI to delete the resource group. ## Next steps -> [!div class="nextstepaction"] -> [Storage hierarchy of Azure NetApp Files](azure-netapp-files-understand-storage-hierarchy.md) --> [!div class="nextstepaction"] -> [Service levels for Azure NetApp Files](azure-netapp-files-service-levels.md) --> [!div class="nextstepaction"] -> [Create an NFS volume](azure-netapp-files-create-volumes.md) --> [!div class="nextstepaction"] -> [Solution architectures using Azure NetApp Files](azure-netapp-files-solution-architectures.md) --> [!div class="nextstepaction"] -> [Application resilience FAQs for Azure NetApp Files](faq-application-resilience.md) +- [Storage hierarchy of Azure NetApp Files](azure-netapp-files-understand-storage-hierarchy.md) +- [Service levels for Azure NetApp Files](azure-netapp-files-service-levels.md) +- [Create an NFS volume](azure-netapp-files-create-volumes.md) +- [Solution architectures using Azure NetApp Files](azure-netapp-files-solution-architectures.md) +- [Application resilience FAQs for Azure NetApp Files](faq-application-resilience.md) |
azure-netapp-files | Azure Netapp Files Resource Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resource-limits.md | The following table describes resource limits for Azure NetApp Files: | Number of volumes per subscription | 500 | Yes | | Number of volumes per capacity pool | 500 | Yes | | Number of snapshots per volume | 255 | No |-| Number of IPs in a virtual network (including immediately peered VNets) accessing volumes in an Azure NetApp Files hosting VNet | <ul><li>**Basic**: 1000</li><li>**Standard**: [Same standard limits as VMs](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits)</li></ul> | No | +| Number of IPs in a virtual network (including immediately peered virtual networks [VNets]) accessing volumes in an Azure NetApp Files hosting VNet | <ul><li>**Basic**: 1000</li><li>**Standard**: [Same standard limits as VMs](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits)</li></ul> | No | | Minimum size of a single capacity pool | 1 TiB* | No | | Maximum size of a single capacity pool | 2,048 TiB | No | | Minimum size of a single regular volume | 50 GiB | No | |
azure-netapp-files | Azure Netapp Files Smb Performance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-smb-performance.md | This article helps you understand SMB performance and best practices for Azure N ## SMB Multichannel -SMB Multichannel is enabled by default in SMB shares. All SMB shares pre-dating existing SMB volumes have had the feature enabled, and all newly created volumes will also have the feature enabled at time of creation. +SMB Multichannel is enabled by default in SMB shares. All SMB shares pre-dating existing SMB volumes have the feature enabled; all newly created volumes also have the feature enabled at time of creation. -Any SMB connection established before the feature enablement will need to be reset to take advantage of the SMB Multichannel functionality. To reset, you can disconnect and reconnect the SMB share. +Any SMB connection established before the feature enablement needs to be reset to take advantage of the SMB Multichannel functionality. To reset, you can disconnect and reconnect the SMB share. -Windows has supported SMB Multichannel since Windows 2012 to enable best performance. See [Deploy SMB Multichannel](/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/dn610980(v%3Dws.11)) and [The basics of SMB Multichannel](/archive/blogs/josebda/the-basics-of-smb-multichannel-a-feature-of-windows-server-2012-and-smb-3-0) for details. +Windows has supported SMB Multichannel since Windows 2012 to enable best performance. See [Deploy SMB Multichannel](/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/dn610980(v%3Dws.11)) and [The basics of SMB Multichannel](/archive/blogs/josebda/the-basics-of-smb-multichannel-a-feature-of-windows-server-2012-and-smb-3-0) for details. ### Benefits of SMB Multichannel The following tests and graphs demonstrate the power of SMB Multichannel on sing #### Random I/O -With SMB Multichannel disabled on the client, pure 4 KiB read and write tests were performed using FIO and a 40 GiB working set. The SMB share was detached between each test, with increments of the SMB client connection count per RSS network interface settings of `1`,`4`,`8`,`16`, `set-SmbClientConfiguration -ConnectionCountPerRSSNetworkInterface <count>`. The tests show that the default setting of `4` is sufficient for I/O intensive workloads; incrementing to `8` and `16` had negligible effect. +With SMB Multichannel disabled on the client, pure 4 KiB read and write tests were performed using FIO and a 40 GiB working set. The SMB share was detached between each test, with increments of the SMB client connection count per RSS network interface settings of `1`,`4`,`8`,`16`, and `set-SmbClientConfiguration -ConnectionCountPerRSSNetworkInterface <count>`. The tests show that the default setting of `4` is sufficient for I/O intensive workloads; incrementing to `8` and `16` had negligible effect. -The command `netstat -na | findstr 445` proved that additional connections were established with increments from `1` to `4` to `8` and to `16`. Four CPU cores were fully utilized for SMB during each test, as confirmed by the perfmon `Per Processor Network Activity Cycles` statistic (not included in this article.) +The command `netstat -na | findstr 445` proved that additional connections were established with increments from `1` to `4`, `4` to `8`, and `8` to `16`. 
Four CPU cores were fully utilized for SMB during each test, as confirmed by the perfmon `Per Processor Network Activity Cycles` statistic (not included in this article.) ![Chart that shows random I/O comparison of SMB Multichannel.](./media/azure-netapp-files-smb-performance/azure-netapp-files-random-io-tests.png) -The Azure virtual machine does not affect SMB (nor NFS) storage I/O limits. As shown in the following chart, the D32ds instance type has a limited rate of 308,000 for cached storage IOPS and 51,200 for uncached storage IOPS. However, the graph above shows significantly more I/O over SMB. +The Azure virtual machine (VM) doesn't affect SMB (nor NFS) storage I/O limits. As shown in the following chart, the D32ds instance type has a limited rate of 308,000 for cached storage IOPS and 51,200 for uncached storage IOPS. However, the graph above shows significantly more I/O over SMB. ![Chart that shows random I/O comparison test.](./media/azure-netapp-files-smb-performance/azure-netapp-files-random-io-tests-list.png) -#### Sequential IO +#### Sequential I/O -Tests similar to the random I/O tests described previously were performed with 64-KiB sequential I/O. Although the increases in client connection count per RSS network interface beyond 4ΓÇÖ had no noticeable effect on random I/O, the same does not apply to sequential I/O. As the following graph shows, each increase is associated with a corresponding increase in read throughput. Write throughput remained flat due to network bandwidth restrictions placed by Azure for each instance type/size. +Tests similar to the random I/O tests described previously were performed with 64-KiB sequential I/O. Although the increases in client connection count per RSS network interface beyond four had no noticeable effect on random I/O, the same doesn't apply to sequential I/O. As the following graph shows, each increase is associated with a corresponding increase in read throughput. Write throughput remained flat due to network bandwidth restrictions placed by Azure for each instance type and size. ![Chart that shows throughput test comparison.](./media/azure-netapp-files-smb-performance/azure-netapp-files-sequential-io-tests.png) -Azure places network rate limits on each virtual machine type/size. The rate limit is imposed on outbound traffic only. The number of NICs present on a virtual machine has no bearing on the total amount of bandwidth available to the machine. For example, the D32ds instance type has an imposed network limit of 16,000 Mbps (2,000 MiB/s). As the sequential graph above shows, the limit affects the outbound traffic (writes) but not multichannel reads. +Azure places network rate limits on each VM type and size. The rate limit is imposed on outbound traffic only. The number of NICs present on a VM has no bearing on the total amount of bandwidth available to the machine. For example, the D32ds instance type has an imposed network limit of 16,000 Mbps (2,000 MiB/s). As the sequential graph above shows, the limit affects the outbound traffic (writes) but not multichannel reads. ![Chart that shows sequential I/O comparison test.](./media/azure-netapp-files-smb-performance/azure-netapp-files-sequential-io-tests-list.png) SMB Signing is supported for all SMB protocol versions that are supported by Azu ### Performance impact of SMB Signing -SMB Signing has a deleterious effect upon SMB performance. 
Among other potential causes of the performance degradation, the digital signing of each packet consumes additional client-side CPU as the perfmon output below shows. In this case, Core 0 appears responsible for SMB, including SMB Signing. A comparison with the non-multichannel sequential read throughput numbers in the previous section shows that SMB Signing reduces overall throughput from 875MiB/s to approximately 250MiB/s. +SMB Signing has a deleterious effect upon SMB performance. Among other potential causes of the performance degradation, the digital signing of each packet consumes additional client-side CPU as the perfmon output below shows. In this case, Core 0 appears responsible for SMB, including SMB Signing. A comparison with the non-multichannel sequential read throughput numbers in the previous section shows that SMB Signing reduces overall throughput from 875MiB/s to approximately 250MiB/s. ![Chart that shows SMB Signing performance impact.](./media/azure-netapp-files-smb-performance/azure-netapp-files-smb-signing-performance.png) ## Performance for a single instance with a 1-TB dataset -To provide more detailed insight into workloads with read/write mixes, the following two charts show the performance of a single, Ultra service-level cloud volume of 50 TB with a 1-TB dataset and with SMB multichannel of 4. An optimal IODepth of 16 was used, and Flexible IO (FIO) parameters were used to ensure the full use of the network bandwidth (`numjobs=16`). +To provide more detailed insight into workloads with read/write mixes, the following two charts show the performance of a single, Ultra service-level cloud volume of 50 TB with a 1-TB dataset and with SMB multichannel of 4. An optimal `IODepth` of 16 was used; Flexible I/O (FIO) parameters were used to ensure the full use of the network bandwidth (`numjobs=16`). The following chart shows the results for 4k random I/O, with a single VM instance and a read/write mix at 10% intervals: You can check for activity on each of the adapters in Windows Performance Monito ![Screenshot that shows Performance Monitor Add Counter interface.](./media/azure-netapp-files-smb-performance/smb-performance-performance-monitor-add-counter.png) -After you have data traffic running in your volumes, you can monitor your adapters in Windows Performance Monitor. If you do not use all of these 16 virtual adapters, you might not be maximizing your network bandwidth capacity. +After you have data traffic running in your volumes, you can monitor your adapters in Windows Performance Monitor. If you don't use all of these 16 virtual adapters, you might not be maximizing your network bandwidth capacity. ![Screenshot that shows Performance Monitor output.](./media/azure-netapp-files-smb-performance/smb-performance-performance-monitor-output.png) Windows 10, Windows 2012, and later versions support SMB encryption. SMB encryption is enabled at the share level for Azure NetApp Files. SMB 3.0 employs AES-CCM algorithm, while SMB 3.1.1 employs the AES-GCM algorithm. -SMB encryption is not required. As such, it is only enabled for a given share if the user requests that Azure NetApp Files enable it. Azure NetApp Files shares are never exposed to the internet. They are only accessible from within a given VNet, over VPN or express route, so Azure NetApp Files shares are inherently secure. The choice to enable SMB encryption is entirely up to the user. Be aware of the anticipated performance penalty before enabling this feature. +SMB encryption isn't required. 
As such, it's only enabled for a given share if the user requests that Azure NetApp Files enable it. Azure NetApp Files shares are never exposed to the internet. They're only accessible from within a given VNet, over VPN or express route, so Azure NetApp Files shares are inherently secure. The choice to enable SMB encryption is entirely up to the user. Be aware of the anticipated performance penalty before enabling this feature. ### <a name="smb_encryption_impact"></a>Impact of SMB encryption on client workloads Although SMB encryption has impact to both the client (CPU overhead for encrypti ## Accelerated Networking -For maximum performance, it is recommended that you configure [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-powershell.md) on your virtual machines where possible. Keep the following considerations in mind: +For maximum performance, it's recommended that you configure [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-powershell.md) on your VMs where possible. Keep the following considerations in mind: -* The Azure portal enables Accelerated Networking by default for virtual machines supporting this feature. However, other deployment methods such as Ansible and similar configuration tools may not. Failure to enable Accelerated Networking can hobble the performance of a machine. -* If Accelerated Networking is not enabled on the network interface of a virtual machine due to its lack of support for an instance type or size, it will remain disabled with larger instance types. You will need manual intervention in those cases. -* There is no need to set accelerated networking for the NICs in the dedicated subnet of Azure NetApp Files. Accelerated networking is a capability that only applies to Azure virtual machines. Azure NetApp Files NICs are optimized by design. +* The Azure portal enables Accelerated Networking by default for VMs supporting this feature. However, other deployment methods such as Ansible and similar configuration tools might not. Failure to enable Accelerated Networking can hobble the performance of a machine. +* If Accelerated Networking isn't enabled on the network interface of a VM due to its lack of support for an instance type or size, it remains disabled with larger instance types. You need manual intervention in those cases. +* There's no need to set accelerated networking for the NICs in the dedicated subnet of Azure NetApp Files. Accelerated networking is a capability that only applies to Azure VMs. Azure NetApp Files NICs are optimized by design. -## RSS -Azure NetApp Files supports receive-side-scaling (RSS). +## Receive side scaling -Azure NetApp Files supports receive side scaling (RSS). With SMB Multichannel enabled, an SMB3 client establishes multiple TCP connections to the Azure NetApp Files SMB server over a network interface card (NIC) that is single RSS capable. -To see if your Azure virtual machine NICs support RSS, run the command +To see if your Azure VM NICs support RSS, run the command `Get-SmbClientNetworkInterface` as follows and check the field `RSS Capable`: -![Screenshot that shows RSS output for Azure virtual machine.](./media/azure-netapp-files-smb-performance/azure-netapp-files-formance-rss-support.png) +![Screenshot that shows RSS output for Azure VM.](./media/azure-netapp-files-smb-performance/azure-netapp-files-formance-rss-support.png) ## Multiple NICs on SMB clients -You should not configure multiple NICs on your client for SMB. 
The SMB client will match the NIC count returned by the SMB server. Each storage volume is accessible from one and only one storage endpoint. That means that only one NIC will be used for any given SMB relationship. +You shouldn't configure multiple NICs on your client for SMB. The SMB client matches the NIC count returned by the SMB server. Each storage volume is accessible from one and only one storage endpoint, meaning only one NIC is used for any given SMB relationship. -As the output of `Get-SmbClientNetworkInterace` below shows, the virtual machine has 2 network interfaces--15 and 12. As shown under the following command `Get-SmbMultichannelConnection`, even though there are two RSS-capable NICS, only interface 12 is used in connection with the SMB share; interface 15 is not in use. +As the output of `Get-SmbClientNetworkInterface` below shows, the VM has two network interfaces: 15 and 12. As shown under the following command `Get-SmbMultichannelConnection`, even though there are two RSS-capable NICs, only interface 12 is used in connection with the SMB share; interface 15 isn't in use. -![Screeshot that shows output for RSS-capable NICS.](./media/azure-netapp-files-smb-performance/azure-netapp-files-rss-capable-nics.png) +![Screenshot that shows output for RSS-capable NICs.](./media/azure-netapp-files-smb-performance/azure-netapp-files-rss-capable-nics.png) ## Next steps -- [SMB FAQs](faq-smb.md)-- See the [Azure NetApp Files: Managed Enterprise File Shares for SMB Workloads](https://cloud.netapp.com/hubfs/Resources/ANF%20SMB%20Quickstart%20doc%20-%2027-Aug-2019.pdf?__hstc=177456119.bb186880ac5cfbb6108d962fcef99615.1550595766408.1573471687088.1573477411104.328&__hssc=177456119.1.1573486285424&__hsfp=1115680788&hsCtaTracking=cd03aeb4-7f3a-4458-8680-1ddeae3f045e%7C5d5c041f-29b4-44c3-9096-b46a0a15b9b1) about using SMB file shares with Azure NetApp Files.+- [SMB FAQs](faq-smb.md) |
azure-netapp-files | Backup Requirements Considerations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-requirements-considerations.md | Azure NetApp Files backup in a region can only protect an Azure NetApp Files vol * In a [cross-region replication](cross-region-replication-introduction.md) (CRR) or [cross-zone replication](cross-zone-replication-introduction.md) (CZR) setting, Azure NetApp Files backup can be configured on a source volume. - Backups on a destination volume are only supported for manually created snapshots. To take backups of a destination volume, create a snapshot on the source volume then wait for the snapshot to be replicated to the destination volume. From the destination volume, you can select the snapshot for backup, where you can select this snapshot for backup. Scheduled backups on a destination volume aren't supported. + Backups on a destination volume are only supported for manually created snapshots. To take backups of a destination volume, create a snapshot on the source volume then wait for the snapshot to be replicated to the destination volume. From the destination volume, you select the snapshot for backup. Scheduled backups on a destination volume aren't supported. * See [Restore a backup to a new volume](backup-restore-new-volume.md) for additional considerations related to restoring backups. |
azure-netapp-files | Configure Application Volume Group Sap Hana Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-application-volume-group-sap-hana-api.md | The following list describes all the possible volume types for application volum > A capacity pool can be resized at any time. For more information about changing a capacity pool, refer to [Manage a manual QoS capacity pool](manage-manual-qos-capacity-pool.md). 1. Create a NetApp storage account. 2. Create a manual QoS capacity pool.-1. **Create AvSet and proximity placement group (PPG):** For production landscapes, you should create an AvSet that is manually pinned to a data center where Azure NetApp Files resources are available in proximity. The AvSet pinning ensures that VMs won't be moved on restart. The proximity placement group (PPG) needs to be assigned to the AvSet. With the help of application volume groups, the PPG can find the closest Azure NetApp Files hardware. For more information, see [Best practices about proximity placement groups](application-volume-group-considerations.md#best-practices-about-proximity-placement-groups). +1. **Create AvSet and proximity placement group (PPG):** For production landscapes, you should create an AvSet that is manually pinned to a data center where Azure NetApp Files resources are available in proximity. The AvSet pinning ensures that VMs won't be moved on restart. The proximity placement group (PPG) needs to be assigned to the AvSet. With the help of application volume groups, the PPG can find the closest Azure NetApp Files hardware. For more information, see [Best practices about proximity placement groups](application-volume-group-considerations.md#best-practices-about-proximity-placement). 1. Create AvSet. 2. Create PPG. 3. Assign PPG to AvSet. |
azure-netapp-files | Cross Region Replication Requirements Considerations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-requirements-considerations.md | This article describes requirements and considerations about [using the volume c * You can delete manual snapshots on the source volume of a replication relationship when the replication relationship is active or broken, and also after the replication relationship is deleted. You can't delete manual snapshots for the destination volume until the replication relationship is broken. * You can revert a source or destination volume of a cross-region replication to a snapshot, provided the snapshot is newer than the most recent SnapMirror snapshot. Snapshots older than the SnapMirror snapshot can't be used for a volume revert operation. For more information, see [Revert a volume using snapshot revert](snapshots-revert-volume.md). * Data replication volumes support [customer-managed keys](configure-customer-managed-keys.md).+* If you are copying large data sets into a volume that has cross-region replication enabled and you have spare capacity in the capacity pool, you should set the replication interval to 10 minutes, increase the volume size to allow for the changes to be stored, and temporarily disable replication. * If you use the cool access feature, see [Manage Azure NetApp Files standard storage with cool access](manage-cool-access.md#considerations) for more considerations. * [Large volumes](large-volumes-requirements-considerations.md) are supported with cross-region replication only with an hourly or daily replication schedule. |
azure-netapp-files | Directory Sizes Concept | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/directory-sizes-concept.md | Directory sizes are specific to a single directory and don't combine in sizes. F ## Determine if a directory is approaching the limit size <a name="directory-limit"></a> For a 320-MiB directory, the number of blocks is 655360, with each block size being 512 bytes. (That is, 320x1024x1024/512.) This number translates to approximately 4-5 million files maximum for a 320-MiB directory. However, the actual number of maximum files might be lower, depending on factors such as the number of files with non-ASCII characters in the directory. + You can use the `stat` command from a client to see whether a directory is approaching the maximum size limit for directory metadata (320 MB). If you reach the maximum size limit for a single directory for Azure NetApp Files, the error `No space left on device` occurs. For a 320-MB directory, the number of blocks is 655,360, with each block size being 512 bytes. (That is, 320x1024x1024/512.) This number translates to approximately 4 million files maximum for a 320-MB directory. However, the actual number of maximum files might be lower, depending on factors such as the number of files with non-ASCII characters in the directory. For information on how to monitor the maxdirsize, see [Monitoring `maxdirsize`](). In this case, consider corrective actions such as moving or deleting files. ## More information * [Azure NetApp Files resources limits](azure-netapp-files-resource-limits.md)+* [Understand `maxfiles`](maxfiles-concept.md) * [Understand file path lengths in Azure NetApp Files](understand-path-lengths.md) |
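The `stat`-based check described in this entry can be scripted. The following is a minimal sketch (Python, not part of the article; the mount path is a placeholder) that compares a directory's reported metadata size against the 320-MiB `maxdirsize` limit. The exact size reported depends on the client and mount options, so treat the result as an approximation.

```python
import os

# Hypothetical NFS mount path of an Azure NetApp Files volume; adjust to your environment.
DIRECTORY = "/mnt/anf-volume/data"

MAX_DIR_SIZE = 320 * 1024 * 1024   # 320-MiB maxdirsize limit described in this entry
BLOCK_SIZE = 512                   # block size used in the article's calculation

st = os.stat(DIRECTORY)            # comparable to running `stat` against the directory itself
used = st.st_size                  # size of the directory's metadata, not of the files inside it
blocks = used // BLOCK_SIZE

print(f"Directory metadata size: {used} bytes ({blocks} of ~{MAX_DIR_SIZE // BLOCK_SIZE} blocks)")
print(f"Used: {used / MAX_DIR_SIZE:.1%} of the 320-MiB limit")

if used > 0.9 * MAX_DIR_SIZE:
    print("Warning: directory is approaching maxdirsize; consider moving or deleting files.")
```

Running a check like this periodically against very large directories gives early warning before the `No space left on device` error appears.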
azure-netapp-files | Understand Path Lengths | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/understand-path-lengths.md | -File and path length refers to the number of Unicode characters in a file path, including directories. This limit is a factor in the individual character lengths, which are determined by the size of the character in bytes. For instance, NFS and SMB allow path components of 255 bytes. The file encoding format of ASCII uses 8-bit encoding, meaning file path components (such as a file or folder name) in ASCII can be up to 255 characters since ASCII characters are 1 byte in size. +File and path length refers to the number of Unicode characters in a file path, including directories. This limit is a factor in the individual character lengths, which are determined by the size of the character in bytes. For instance, NFS and SMB allow path components of 255 bytes. The file encoding format of American Standard Code for Information Interchange (ASCII) uses 8-bit encoding, meaning file path components (such as a file or folder name) in ASCII can be up to 255 characters since ASCII characters are 1 byte in size. The following table shows the supported component and path lengths in Azure NetApp Files volumes: The path length max can be queried using the `getconf PATH_MAX /NFSmountpoint` c ## Dual-protocol volume considerations -When using Azure NetApp Files for dual-protocol access, the difference in how path lengths are handled in NFS and SMB protocols can create incompatibilities across file and folders. For instance, Windows SMB supports up to 32,767 characters in a path (provided the long path feature is enabled on the SMB client), but NFS support can exceed that amount. As such, if a path length is created in NFS that exceeds the support of SMB, clients are unable to access the data once the path length maximums have been reached. In those cases, either take care to consider the lower end limits of file path lengths across protocols when creating file and folder names (and folder path depth) or map SMB shares closer to the desired folder path to reduce the path length. +When using Azure NetApp Files for dual-protocol access, the difference in how path lengths are handled in NFS and SMB protocols can create incompatibilities across files and folders. For instance, Windows SMB supports up to 32,767 characters in a path (provided the long path feature is enabled on the SMB client), but NFS support can exceed that amount. As such, if a path length is created in NFS that exceeds the support of SMB, clients are unable to access the data once the path length maximums have been reached. In those cases, either take care to consider the lower end limits of file path lengths across protocols when creating file and folder names (and folder path depth) or map SMB shares closer to the desired folder path to reduce the path length. Instead of mapping the SMB share to the top level of the volume to navigate down to a path of `\\share\folder1\folder2\folder3\folder4`, consider mapping the SMB share to the entire path of `\\share\folder1\folder2\folder3\folder4`. As a result, a drive letter mapping to `Z:` lands in the desired folder and reduces the path length from `Z:\folder1\folder2\folder3\folder4\file` to `Z:\file`. |
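Because the per-component limit is expressed in bytes while the SMB limit is expressed in characters, it can help to pre-check names before creating them on a dual-protocol volume. The sketch below is illustrative only (the sample path and helper are made up); it applies the 255-byte component limit and the 32,767-character SMB path limit quoted in this entry.

```python
COMPONENT_LIMIT_BYTES = 255      # per-component limit for NFS and SMB (bytes)
SMB_PATH_LIMIT_CHARS = 32767     # Windows SMB path limit with long paths enabled (characters)

def check_path(path: str, encoding: str = "utf-8") -> None:
    """Report the encoded size of each path component and the total character count."""
    for component in filter(None, path.split("/")):
        size = len(component.encode(encoding))
        status = "OK" if size <= COMPONENT_LIMIT_BYTES else "TOO LONG"
        print(f"{component!r}: {size} bytes [{status}]")
    total_chars = len(path)
    verdict = "within" if total_chars <= SMB_PATH_LIMIT_CHARS else "beyond"
    print(f"Full path: {total_chars} characters ({verdict} the SMB limit)")

# Made-up example path mixing ASCII and multi-byte characters.
check_path("/folder1/folder2/folder3/folder4/資料.txt")
```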
azure-netapp-files | Understand Volume Languages | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/understand-volume-languages.md | For best practices, see [Character set best practices](#character-set-best-pract ## Character encoding in Azure NetApp Files NFS and SMB volumes -In an Azure NetApp Files file sharing environment, file and folder names are represented by a series of characters that end users read and interpret. The way those characters are displayed depends on how the client sends and receives encoding of those characters. For instance, if a client is sending legacy [ASCII (American Standard Code for Information Interchange)](https://www.ascii-code.com/) encoding to the Azure NetApp Files volume when accessing it, then it's limited to displaying only characters that are supported in the ASCII format. +In an Azure NetApp Files file sharing environment, file and folder names are represented by a series of characters that end users read and interpret. The way those characters are displayed depends on how the client sends and receives encoding of those characters. For instance, if a client is sending legacy [American Standard Code for Information Interchange (ASCII)](https://www.ascii-code.com/) encoding to the Azure NetApp Files volume when accessing it, then it's limited to displaying only characters that are supported in the ASCII format. For instance, the Japanese character for data is 資. Since this character can't be represented in ASCII, a client using ASCII encoding show a “?” instead of 資. Unicode uses [Unicode Transformation Format](https://unicode.org/faq/utf_bom.htm Unicode leverages 17 planes of 65,536 characters (256 code points multiplied by 256 boxes in the plane), with Plane 0 as the [Basic Multilingual Plane (BMP)](https://en.wikipedia.org/wiki/Plane_%28Unicode%29#Basic_Multilingual_Plane). This plane contains the most commonly used characters across multiple languages. Because the world's languages and character sets exceed 65536 characters, more planes are needed to support less commonly used character sets. -For instance, Plane 1 (the [Supplementary Multilingual Planes (SMP)](https://unicodeplus.com/plane/1)) includes historic scripts like cuneiform and Egyptian hieroglyphs as well as some [Osage](https://en.wikipedia.org/wiki/Osage_script), [Warang Citi](https://en.wikipedia.org/wiki/Warang_Citi), [Adlam](https://en.wikipedia.org/wiki/Adlam_script), [Wancho](https://en.wikipedia.org/wiki/Wancho_language#Orthography) and [Toto](https://en.wikipedia.org/wiki/Toto_language#Writing_system). Plane 1 also includes some symbols and [emoticon](https://en.wikipedia.org/wiki/Emoticons_(Unicode_block)) characters. +For instance, Plane 1 (the [Supplementary Multilingual Planes (SMP)](https://unicodeplus.com/plane/1)) includes historic scripts like cuneiform and Egyptian hieroglyphs as well as some [Osage](https://en.wikipedia.org/wiki/Osage_script), [Warang Citi](https://en.wikipedia.org/wiki/Warang_Citi), [Adlam](https://en.wikipedia.org/wiki/Adlam_script), [Wancho](https://en.wikipedia.org/wiki/Wancho_language#Orthography), and [Toto](https://en.wikipedia.org/wiki/Toto_language#Writing_system). Plane 1 also includes some symbols and [emoticon](https://en.wikipedia.org/wiki/Emoticons_(Unicode_block)) characters. Plane 2 – the [Supplementary Ideographic Plane (SIP)](https://unicodeplus.com/plane/2) – contains Chinese/Japanese/Korean (CJK) Unified Ideographs. Characters in planes 1 and 2 generally are 4 bytes in size. 
Azure NetApp Files supports most UTF-16 characters, including surrogate pairs. I ## Character set handling over remote clients -Remote connections to clients that mount Azure NetApp Files volumes (such as SSH connections to Linux clients to access NFS mounts) can be configured to send and receive specific volume language encodings. The language encoding sent to the client via the remote connection utility controls how character sets are created and viewed. As a result, a remote connection that uses a different language encoding than another remote connection (such as two different PuTTY windows) may show different results for characters when listing file and folder names in the Azure NetApp Files volume. In most cases, this won't create discrepancies (such as for Latin/English characters), but in the cases of special characters, such as emojis, results can vary. +Remote connections to clients that mount Azure NetApp Files volumes (such as SSH connections to Linux clients to access NFS mounts) can be configured to send and receive specific volume language encodings. The language encoding sent to the client via the remote connection utility controls how character sets are created and viewed. As a result, a remote connection that uses a different language encoding than another remote connection (such as two different PuTTY windows) can show different results for characters when listing file and folder names in the Azure NetApp Files volume. In most cases, this won't create discrepancies (such as for Latin/English characters), but in the cases of special characters, such as emojis, results can vary. For instance, using an encoding of UTF-8 for the remote connection shows predictable results for characters in Azure NetApp Files volumes since C.UTF-8 is the volume language. The Japanese character for "data" (資) displays differently depending on the encoding being sent by the terminal. When using SMB with Azure NetApp Files volumes, characters that exceed 3 bytes u How the characters display on the client depends on the system font and the language and locale settings. In general, characters that fall into the BMP are supported across all protocols, regardless if the encoding is UTF-8 or UTF-16. -When using either CMD or [PowerShell](/powershell/scripting/dev-cross-plat/vscode/understanding-file-encoding), the character set view may depend on the font settings. These utilities have limited font choices by default. CMD uses Consolas as the default font. +When using either CMD or [PowerShell](/powershell/scripting/dev-cross-plat/vscode/understanding-file-encoding), the character set display depends on the font settings. These utilities have limited font choices by default. CMD uses Consolas as the default font. :::image type="content" source="./media/understand-volume-languages/command-prompt-font.png" alt-text="Screenshot of command prompt font options."::: If the volume is enabled for dual-protocol (both NFS and SMB), you might observe ## NFS behaviors -How NFS displays special characters depends on the version of NFS used, the client's locale settings, installed fonts, and the settings of the remote connection client in use. For instance, using Bastion to access an Ubuntu client may handle character displays differently than a PuTTY client set to a different locale on the same VM. 
The ensuing NFS examples rely on these locale settings for the Ubuntu VM: +How NFS displays special characters depends on the version of NFS used, the client's locale settings, installed fonts, and the settings of the remote connection client in use. For instance, using Bastion to access an Ubuntu client handles character displays differently than a PuTTY client set to a different locale on the same VM. The ensuing NFS examples rely on these locale settings for the Ubuntu VM: ``` ~$ locale LC\_ALL= ### NFSv3 behavior -NFSv3 doesn't enforce UTF encoding on files and folders. In most cases, special character sets should have no issues. However, the connection client being used can affect how characters are sent and received. For instance, using Unicode characters outside of the BMP for a folder name in the Azure connection client Bastion can result in some unexpected behavior due to how the client encoding works. +NFSv3 doesn't enforce UTF encoding on files and folders. In most cases, special character sets should have no issues. However, the connection client used can affect how characters are sent and received. For instance, using Unicode characters outside of the BMP for a folder name in the Azure connection client Bastion can result in some unexpected behavior due to how the client encoding works. In the following screenshot, Bastion is unable to copy and paste the values to the CLI prompt from outside of the browser when naming a directory over NFSv3. When attempting to copy and paste the value of `NFSv3Bastion𓀀𫝁😃𐒸`, the special characters display as quotation marks in the input. NFSv4.x enforces UTF-8 encoding in file and folder names per the [RFC-8881 inte As a result, if a special character is sent with non-UTF-8 encoding, NFSv4.x might not allow the value. -In some cases, a command may be allowed using a character outside of the [Basic Multilingual Plane (BMP)](https://en.wikipedia.org/wiki/Plane_%28Unicode%29#Basic_Multilingual_Plane), but it might not display the value after it's created. +In some cases, a command can be allowed using a character outside of the [Basic Multilingual Plane (BMP)](https://en.wikipedia.org/wiki/Plane_%28Unicode%29#Basic_Multilingual_Plane), but it might not display the value after it's created. For instance, issuing `mkdir` with a folder name including the characters "𓀀𫝁😃𐒸" (characters in the [Supplementary Multilingual Planes (SMP)](https://unicodeplus.com/plane/1) and the [Supplementary Ideographic Plane (SIP)](https://unicodeplus.com/plane/2)) seems to succeed in NFSv4.x. The folder won't be visible when running the `ls` command. Windows clients are the primary type of clients that are used to access SMB shar :::image type="content" source="./media/understand-volume-languages/region-settings.png" alt-text="Screenshot of region settings window."::: -When a file or folder is created over an SMB share in Azure NetApp Files, the character set in use encode as UTF-16. As a result, clients using UTF-8 encoding (such as Linux-based NFS clients) may not be able to translate some character sets properly – particularly characters that fall outside of the [Basic Multilingual Plane (BMP)](https://en.wikipedia.org/wiki/Plane_%28Unicode%29#Basic_Multilingual_Plane). +When a file or folder is created over an SMB share in Azure NetApp Files, the character set encodes as UTF-16. 
As a result, clients using UTF-8 encoding (such as Linux-based NFS clients) might not be able to translate some character sets properly – particularly characters that fall outside of the [Basic Multilingual Plane (BMP)](https://en.wikipedia.org/wiki/Plane_%28Unicode%29#Basic_Multilingual_Plane). ##### Unsupported character behavior |
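As a quick, unofficial illustration of the encoding behavior recorded in this entry, the following Python snippet shows why the sample characters behave differently across clients: ASCII-only clients can't represent 資 at all, and characters outside the BMP take 4 bytes in both UTF-8 and UTF-16 (a surrogate pair).

```python
# Sample characters drawn from the article, plus plain ASCII for comparison.
samples = {
    "A":  "ASCII / BMP",
    "資": "BMP (CJK ideograph)",
    "😃": "outside the BMP (emoticon, Plane 1)",
    "𓀀": "outside the BMP (Egyptian hieroglyph, Plane 1)",
}

for char, plane in samples.items():
    utf8 = char.encode("utf-8")
    utf16 = char.encode("utf-16-le")
    ascii_ok = all(b < 128 for b in utf8)
    print(f"{char} ({plane}): UTF-8 {len(utf8)} bytes, UTF-16 {len(utf16)} bytes, "
          f"ASCII-representable: {ascii_ok}")
```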
azure-resource-manager | Bicep Config Linter | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config-linter.md | Title: Linter settings for Bicep config description: Describes how to customize configuration values for the Bicep linter Previously updated : 07/19/2024 Last updated : 07/30/2024 # Add linter settings in the Bicep config file The following example shows the rules that are available for configuration. "level": "warning", "maxAllowedAgeInDays": 730 },+ "use-recent-module-versions": { + "level": "warning", + }, "use-resource-id-functions": { "level": "warning" }, |
azure-resource-manager | Linter Rule Use Recent Module Versions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-use-recent-module-versions.md | + + Title: Linter rule - use recent module versions +description: Linter rule - use recent module versions ++ Last updated : 07/30/2024+++# Linter rule - use recent module versions ++This rule looks for old [public module](./modules.md#public-module-registry) versions. It's best to use the most recent module versions. ++> [!NOTE] +> This rule is off by default, change the level in [bicepconfig.json](./bicep-config-linter.md) to enable it. ++## Linter rule code ++To customize rule settings, use the following value in the [Bicep configuration file](bicep-config-linter.md) : ++`use-recent-module-versions` ++## Solution ++The following example fails this test because an older module version is used: ++```bicep +module storage 'br/public:avm/res/storage/storage-account:0.6.0' = { + name: 'myStorage' + params: { + name: 'store${resourceGroup().name}' + } +} +``` ++Use the most recent module version. ++Use **Quick Fix** to use the latest module versions: +++## Next steps ++For more information about the linter, see [Use Bicep linter](./linter.md). |
azure-resource-manager | Linter | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter.md | Title: Use Bicep linter description: Learn how to use Bicep linter. Previously updated : 07/19/2024 Last updated : 07/30/2024 # Use Bicep linter The default set of linter rules is minimal and taken from [arm-ttk test cases](. - [use-parent-property](./linter-rule-use-parent-property.md) - [use-protectedsettings-for-commandtoexecute-secrets](./linter-rule-use-protectedsettings-for-commandtoexecute-secrets.md) - [use-recent-api-versions](./linter-rule-use-recent-api-versions.md)+- [use-recent-module-versions](./linter-rule-use-recent-module-versions.md) - [use-resource-id-functions](./linter-rule-use-resource-id-functions.md) - [use-resource-symbol-reference](./linter-rule-use-resource-symbol-reference.md) - [use-safe-access](./linter-rule-use-safe-access.md) |
azure-resource-manager | Azure Services Resource Providers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-services-resource-providers.md | The resource providers for database services are: | Resource provider namespace | Azure service | | | - | | Microsoft.Cache | [Azure Cache for Redis](../../azure-cache-for-redis/index.yml) |-| Microsoft.DBforMariaDB | [Azure Database for MariaDB](../../mariadb/index.yml) | -| Microsoft.DBforMySQL | [Azure Database for MySQL](../../mysql/index.yml) | -| Microsoft.DBforPostgreSQL | [Azure Database for PostgreSQL](../../postgresql/index.yml) | -| Microsoft.DocumentDB | [Azure Cosmos DB](../../cosmos-db/index.yml) | +| Microsoft.DBforMariaDB | [Azure Database for MariaDB](/azure/mariadb/) | +| Microsoft.DBforMySQL | [Azure Database for MySQL](/azure/mysql/) | +| Microsoft.DBforPostgreSQL | [Azure Database for PostgreSQL](/azure/postgresql/) | +| Microsoft.DocumentDB | [Azure Cosmos DB](/azure/cosmos-db/) | | Microsoft.Sql | [Azure SQL Database](/azure/azure-sql/database/index)<br /> [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/index) <br />[Azure Synapse Analytics](/azure/sql-data-warehouse/) | | Microsoft.SqlVirtualMachine | [SQL Server on Azure Virtual Machines](/azure/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview) | | Microsoft.AzureData | [SQL Server enabled by Azure Arc](/sql/sql-server/azure-arc/overview) | The resource providers for migration services are: | Microsoft.ClassicInfrastructureMigrate | Classic deployment model migration | | Microsoft.DataBox | [Azure Data Box](../../databox/index.yml) | | Microsoft.DataBoxEdge | [Azure Stack Edge](../../databox-online/azure-stack-edge-overview.md) |-| Microsoft.DataMigration | [Azure Database Migration Service](../../dms/index.yml) | +| Microsoft.DataMigration | [Azure Database Migration Service](/azure/dms/) | | Microsoft.OffAzure | [Azure Migrate](../../migrate/migrate-services-overview.md) | | Microsoft.Migrate | [Azure Migrate](../../migrate/migrate-services-overview.md) | |
azure-resource-manager | Azure Subscription Service Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-subscription-service-limits.md | For Azure Container Apps limits, see [Quotas in Azure Container Apps](../../cont ## Azure Cosmos DB limits -For Azure Cosmos DB limits, see [Limits in Azure Cosmos DB](../../cosmos-db/concepts-limits.md). +For Azure Cosmos DB limits, see [Limits in Azure Cosmos DB](/azure/cosmos-db/concepts-limits). ## Azure Data Explorer limits For Azure Cosmos DB limits, see [Limits in Azure Cosmos DB](../../cosmos-db/conc ## Azure Database for MySQL -For Azure Database for MySQL limits, see [Limitations in Azure Database for MySQL](../../mysql/concepts-limits.md). +For Azure Database for MySQL limits, see [Limitations in Azure Database for MySQL](/azure/mysql/concepts-limits). ## Azure Database for PostgreSQL -For Azure Database for PostgreSQL limits, see [Limitations in Azure Database for PostgreSQL](../../postgresql/concepts-limits.md). +For Azure Database for PostgreSQL limits, see [Limitations in Azure Database for PostgreSQL](/azure/postgresql/concepts-limits). ## Azure Deployment Environments limits |
azure-resource-manager | Move Support Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-support-resources.md | Before starting your move operation, review the [checklist](./move-resource-grou > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |-> | servers | **Yes** | **Yes** | You can use a cross-region read replica to move an existing server. [Learn more](../../postgresql/howto-move-regions-portal.md).<br/><br/> If the service is provisioned with geo-redundant backup storage, you can use geo-restore to restore in other regions. [Learn more](../../mariadb/concepts-business-continuity.md#recovery-from-an-azure-regional-datacenter-outage). +> | servers | **Yes** | **Yes** | You can use a cross-region read replica to move an existing server. [Learn more](/azure/postgresql/howto-move-regions-portal).<br/><br/> If the service is provisioned with geo-redundant backup storage, you can use geo-restore to restore in other regions. [Learn more](/azure/mariadb/concepts-business-continuity#recovery-from-an-azure-regional-datacenter-outage). ## Microsoft.DBforMySQL Before starting your move operation, review the [checklist](./move-resource-grou > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- | > | flexibleServers | **Yes** | **Yes** | No |-> | servers | **Yes** | **Yes** | You can use a cross-region read replica to move an existing server. [Learn more](../../mysql/howto-move-regions-portal.md). +> | servers | **Yes** | **Yes** | You can use a cross-region read replica to move an existing server. [Learn more](/azure/mysql/howto-move-regions-portal). ## Microsoft.DBforPostgreSQL Before starting your move operation, review the [checklist](./move-resource-grou > | - | -- | - | -- | > | flexibleServers | **Yes** | **Yes** | No | > | servergroups | No | No | No |-> | servers | **Yes** | **Yes** | You can use a cross-region read replica to move an existing server. [Learn more](../../postgresql/howto-move-regions-portal.md). +> | servers | **Yes** | **Yes** | You can use a cross-region read replica to move an existing server. [Learn more](/azure/postgresql/howto-move-regions-portal). > | serversv2 | **Yes** | **Yes** | No | ## Microsoft.DeploymentManager |
azure-resource-manager | Tag Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-resources.md | The following limitations apply to tags: * Tag names can't contain these characters: `<`, `>`, `%`, `&`, `\`, `?`, `/` > [!NOTE]- > * Azure Domain Name System (DNS) zones don't support the use of spaces in the tag or a tag that starts with a number. Azure DNS tag names don't support special and unicode characters. The value can contain all characters. + > * Azure Domain Name System (DNS) zones don't support the use of spaces or parentheses in the tag or a tag that starts with a number. Azure DNS tag names don't support special and unicode characters. The value can contain all characters. > > * Traffic Manager doesn't support the use of spaces, `#` or `:` in the tag name. The tag name can't start with a number. > |
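The character restrictions recorded in this entry are easy to validate before applying tags. The helper below is a hypothetical sketch (it isn't part of any Azure SDK or the article); the character rules it encodes are the general rule plus the stricter DNS zone and Traffic Manager notes quoted above.

```python
# Hypothetical pre-flight check for tag names; the rules come from this entry, the helper does not.
INVALID_ANYWHERE = set('<>%&\\?/')

def validate_tag_name(name: str, resource_type: str = "generic") -> list[str]:
    problems = []
    if any(c in INVALID_ANYWHERE for c in name):
        problems.append("contains a character tag names never allow: < > % & \\ ? /")
    if resource_type == "dns_zone":
        if " " in name or "(" in name or ")" in name:
            problems.append("DNS zones don't allow spaces or parentheses in tag names")
        if name[:1].isdigit():
            problems.append("DNS zone tag names can't start with a number")
        if not name.isascii():
            problems.append("DNS zone tag names don't support special or unicode characters")
    elif resource_type == "traffic_manager":
        if " " in name or "#" in name or ":" in name:
            problems.append("Traffic Manager tag names can't contain spaces, # or :")
        if name[:1].isdigit():
            problems.append("Traffic Manager tag names can't start with a number")
    return problems

print(validate_tag_name("cost center (2024)", resource_type="dns_zone"))
```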
azure-web-pubsub | Howto Connect Mqtt Websocket Client | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-connect-mqtt-websocket-client.md | + + Title: How to connect MQTT clients to Azure Web PubSub +description: How to connect MQTT clients to Azure Web PubSub +++ Last updated : 06/17/2024++++# How to connect MQTT clients to Azure Web PubSub ++MQTT is a lightweight pub/sub messaging protocol designed for devices with constrained resources. ++In this article, we introduce how to connect MQTT clients to the service, so that the clients can publish and subscribe messages. ++## Connection parameters ++WebSocket connection URI: `wss://{serviceName}.webpubsub.azure.com/clients/mqtt/hubs/{hub}?access_token={token}`. ++* {hub} is a mandatory parameter that provides isolation for different applications. +* {token} is required by default. Alternatively, you can include the token in the `Authorization` header in the format `Bearer {token}`. You can bypass the token requirement by enabling anonymous access to the hub. <!--TODO MQTT allow anonymous access to the hub--> ++If client library doesn't accept a URI, then you probably need to split the information in the URI into multiple parameters: ++* Host: `{serviceName}.webpubsub.azure.com` +* Path: `/clients/mqtt/hubs/{hub}?access_token={token}` +* Port: 443 +* Transport: WebSockets with [TLS](https://wikipedia.org/wiki/Transport_Layer_Security). +++By default MQTT clients don't have any permissions to publish or subscribe to any topics. You need to grant [permissions](#permissions) to MQTT clients. ++## Permissions ++A client can publish to other clients only when it's *authorized* to do so. A client's permissions can be granted when it's being connected or during the lifetime of the connection. ++| Role | Permission | +||| +| Not specified | The client can send event requests. | +| `webpubsub.joinLeaveGroup` | The client can join or leave any group. | +| `webpubsub.sendToGroup` | The client can publish messages to any group. | +| `webpubsub.joinLeaveGroup.<group>` | The client can join or leave group `<group>`. | +| `webpubsub.sendToGroup.<group>` | The client can publish messages to group `<group>`. | +| | | ++## Authentication and authorization ++There are two workflows supported by Web PubSub to authenticate and authorize MQTT clients, so that they have proper permissions. ++These workflows can be used individually or in combination. If they're used in together, the auth result in the latter workflow would be honored by the service. ++### 1. JWT workflow ++This is the default workflow, shown as follows: ++![Diagram of MQTT auth workflow with JWT.](./media/howto-connect-mqtt-websocket-client/mqtt-jwt-auth-workflow.png) ++1. The client negotiates with your auth server. The auth server contains the authorization middleware, which handles the client request and signs a JWT for the client to connect to the service. +1. The auth server returns the JWT to the client. +1. The client tries to connect to the Web PubSub service with the JWT token returned from the auth server. The token can be in either the query string, as `/clients/mqtt/hubs/{hub}?access_token={token}`, or the `Authorization` header, as `Authorization: Bearer {token}`. 
++#### Supported claims +You could also configure properties for the client connection when generating the access token by specifying special claims inside the JWT token: ++| Description | Claim type | Claim value | Notes | +| | | | | +| The [permissions](#permissions) the client connection initially has | `role` | the role value defined in [permissions](#permissions) | Specify multiple `role` claims if the client has multiple permissions. | +| The lifetime of the token | `exp` | the expiration time | The `exp` (expiration time) claim identifies the expiration time on or after which the token MUST NOT be accepted for processing. | +| The initial groups that the client connection joins once it connects to Azure Web PubSub | `group` | the group to join | Specify multiple `group` claims if the client joins multiple groups. | +| The `userId` for the client connection | `sub` | the userId | Only one `sub` claim is allowed. | ++You could also add custom claims into the access token, and these values are preserved as the `claims` property in [connect upstream request body](./reference-mqtt-cloud-events.md#system-connect-event). ++[Server SDKs](./howto-generate-client-access-url.md#generate-from-service-sdk) provides APIs to generate the access token for MQTT clients. Note that you must specify the client protocol to `Mqtt`. ++# [JavaScript](#tab/javascript) ++1. Follow [Getting started with server SDK](./reference-server-sdk-js.md#getting-started) to create a `WebPubSubServiceClient` object `service` ++> [!NOTE] +> Generating MQTT client access URL is supported since [version 1.1.3](https://www.npmjs.com/package/@azure/web-pubsub/v/1.1.3?activeTab=versions). ++2. Generate Client Access URL by calling `WebPubSubServiceClient.getClientAccessToken`: ++ ```js + let token = await serviceClient.getClientAccessToken({ clientProtocol: "mqtt" }); + ``` ++# [C#](#tab/csharp) ++1. Follow [Getting started with server SDK](./reference-server-sdk-csharp.md#getting-started) to create a `WebPubSubServiceClient` object `service` ++> [!NOTE] +> Generating MQTT client access URL is supported since [version 1.4.0](https://www.nuget.org/packages/Azure.Messaging.WebPubSub/1.4.0). ++2. Generate Client Access URL by calling `WebPubSubServiceClient.GetClientAccessUri`: ++ ```csharp + var url = service.GetClientAccessUri(clientProtocol: WebPubSubClientProtocol.Mqtt); + ``` ++# [Python](#tab/python) ++1. Follow [Getting started with server SDK](./reference-server-sdk-python.md#install-the-package) to create a `WebPubSubServiceClient` object `service` ++> [!NOTE] +> Generating MQTT client access URL is supported since [version 1.2.0](https://pypi.org/project/azure-messaging-webpubsubservice/1.2.0/). ++2. Generate Client Access URL by calling `WebPubSubServiceClient.get_client_access_token`: ++ ```python + token = service.get_client_access_token(client_protocol="MQTT") + ``` ++# [Java](#tab/java) ++1. Follow [Getting started with server SDK](./reference-server-sdk-java.md#getting-started) to create a `WebPubSubServiceClient` object `service` ++> [!NOTE] +> Generating MQTT client access URL is supported since [version 1.3.0](https://central.sonatype.com/artifact/com.azure/azure-messaging-webpubsub/1.3.0). +++2. 
Generate Client Access URL by calling `WebPubSubServiceClient.getClientAccessToken`: ++ ```java + GetClientAccessTokenOptions option = new GetClientAccessTokenOptions(); + option.setWebPubSubClientProtocol(WebPubSubClientProtocol.MQTT); + WebPubSubClientAccessToken token = service.getClientAccessToken(option); + ``` +++### 2. Upstream server workflow ++The MQTT client sends an MQTT CONNECT packet after it establishes a WebSocket connection with the service, then the service calls an API in the upstream server. The upstream server can auth the client according to the username and password fields in the MQTT connection request, and the TLS certificate from the client. ++![Diagram of MQTT auth workflow with upstream server](./media/howto-connect-mqtt-websocket-client/mqtt-upstream-auth-workflow.png) ++This workflow needs explicit configuration. +* [Tutorial - Authenticate and authorize MQTT clients based on client certificates](./tutorial-upstream-auth-mqtt-client.md) +* For details about how to use upstream server to auth the clients, see [How to configure event handler](./howto-develop-eventhandler.md) ++## Troubleshooting ++If you're experiencing failure of connection, or unable to publish or subscribe messages, please check the reason code / return code from the service, or see [How to troubleshoot with resource logs](./howto-troubleshoot-resource-logs.md). +++ |
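In practice, the JWT workflow above is wrapped in a small negotiate endpoint on your server. As a sketch only (the connection string, hub, and group names are placeholders, and the exact shape of the returned dictionary can vary by SDK version), the Python server SDK shown in this entry can mint an MQTT client access URL that carries the `role`, `exp`, and `sub` claims from the table:

```python
from azure.messaging.webpubsubservice import WebPubSubServiceClient

# Placeholders: supply your own connection string and hub name.
service = WebPubSubServiceClient.from_connection_string("<connection-string>", hub="<hub_name>")

# client_protocol="MQTT" requires azure-messaging-webpubsubservice >= 1.2.0, as noted above.
token = service.get_client_access_token(
    client_protocol="MQTT",
    user_id="client1",                             # becomes the `sub` claim
    roles=[
        "webpubsub.sendToGroup.group1",            # may publish to topic/group `group1`
        "webpubsub.joinLeaveGroup.group2",         # may join (subscribe to) topic/group `group2`
    ],
    minutes_to_expire=60,                          # drives the `exp` claim
)

# The returned dictionary typically carries both the raw JWT and the full client access URL.
print(token["token"])
print(token["url"])
```

The resulting URL is the MQTT WebSocket connection URI described at the top of this entry (`wss://.../clients/mqtt/hubs/...`).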
azure-web-pubsub | Howto Develop Event Listener | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-develop-event-listener.md | Last updated 09/30/2022 > [!NOTE] > Event listener feature is in preview.+> Sending MQTT client events to event listener is not supported yet. ## Overview Find your Azure Web PubSub service from **Azure portal**. Navigate to **Identity In this article, you learned how event listeners work and how to configure an event listener with an event hub endpoint. To learn the data format sent to Event Hubs, read the following specification. -> [!div class="nextstepaction"] +> [!div class="nextstepaction"] > [Specification: CloudEvents AMQP extension for Azure Web PubSub](./reference-cloud-events-amqp.md) <!--TODO: Add demo--> |
azure-web-pubsub | Howto Generate Client Access Url | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-generate-client-access-url.md | -A client, be it a browser 💻, a mobile app 📱, or an IoT device 💡, uses a **Client Access URL** to connect and authenticate with your resource. This URL follows a pattern of `wss://<service_name>.webpubsub.azure.com/client/hubs/<hub_name>?access_token=<token>`. This article shows you several ways to get the Client Access URL. +A client, be it a browser 💻, a mobile app 📱, or an IoT device 💡, uses a **Client Access URL** to connect and authenticate with your resource. ++The URL follows the below pattern: +* For MQTT clients, it's `wss://<service_name>.webpubsub.azure.com/clients/mqtt/hubs/<hub_name>?access_token=<token>`. +* For all other clients, it's `wss://<service_name>.webpubsub.azure.com/client/hubs/<hub_name>?access_token=<token>`. ++This article shows you several ways to get the Client Access URL. - For quick start, copy one from the Azure portal-- For development, generate the value using [Web PubSub server SDK](./reference-server-sdk-js.md)+- For development, generate the value using [Web PubSub server SDK](#generate-from-service-sdk) - If you're using Microsoft Entra ID, you can also invoke the [Generate Client Token REST API](/rest/api/webpubsub/dataplane/web-pub-sub/generate-client-token) ## Copy from the Azure portal In the Keys tab in Azure portal, there's a Client URL Generator tool to quickly generate a Client Access URL for you, as shown in the following diagram. Values input here aren't stored. +Note that for MQTT clients, you should select "MQTT Client" in the dropdown menu in front of the "Client Access URL" text box. + :::image type="content" source="./media/howto-websocket-connect/generate-client-url.png" alt-text="Screenshot of the Web PubSub Client URL Generator."::: ## Generate from service SDK The same Client Access URL can be generated by using the Web PubSub server SDK. 2. Generate Client Access URL by calling `WebPubSubServiceClient.getClientAccessToken`: + - Generate MQTT client access token ++ ```js + let token = await serviceClient.getClientAccessToken({ clientProtocol: "mqtt" }); + ``` + - Configure user ID ```js The same Client Access URL can be generated by using the Web PubSub server SDK. 2. Generate Client Access URL by calling `WebPubSubServiceClient.GetClientAccessUri`: + - Generate MQTT client access token ++ ```csharp + var url = service.GetClientAccessUri(clientProtocol: WebPubSubClientProtocol.Mqtt); + ``` + - Configure user ID ```csharp The same Client Access URL can be generated by using the Web PubSub server SDK. 2. Generate Client Access URL by calling `WebPubSubServiceClient.get_client_access_token`: + - Generate MQTT client access token ++ ```python + token = service.get_client_access_token(client_protocol="MQTT") + ``` + - Configure user ID ```python The same Client Access URL can be generated by using the Web PubSub server SDK. 2. 
Generate Client Access URL by calling `WebPubSubServiceClient.getClientAccessToken`: - - Configure user ID + - Generate MQTT client access token ++ ```java + GetClientAccessTokenOptions option = new GetClientAccessTokenOptions(); + option.setWebPubSubClientProtocol(WebPubSubClientProtocol.MQTT); + WebPubSubClientAccessToken token = service.getClientAccessToken(option); + ``` ++ - Configure user ID ```java GetClientAccessTokenOptions option = new GetClientAccessTokenOptions(); The same Client Access URL can be generated by using the Web PubSub server SDK. In real-world code, we usually have a server side to host the logic generating the Client Access URL. When a client request comes in, the server side can use the general authentication/authorization workflow to validate the client request. Only valid client requests can get the Client Access URL back. -## Invoke the Generate Client Token REST API +## Invoke the "Generate Client Token" REST API You can enable Microsoft Entra ID in your service and use the Microsoft Entra token to invoke [Generate Client Token rest API](/rest/api/webpubsub/dataplane/web-pub-sub/generate-client-token) to get the token for the client to use. 1. Follow [Authorize from application](./howto-authorize-from-application.md) to enable Microsoft Entra ID. 2. Follow [Get Microsoft Entra token](./howto-authorize-from-application.md#use-postman-to-get-the-microsoft-entra-token) to get the Microsoft Entra token with Postman. 3. Use the Microsoft Entra token to invoke `:generateToken` with Postman:- + > [!NOTE] > Please use the latest version of Postman. Old versions of Postman have [some issue](https://github.com/postmanlabs/postman-app-support/issues/3994#issuecomment-893453089) supporting colon `:` in path. - 1. For the URI, enter `https://{Endpoint}/api/hubs/{hub}/:generateToken?api-version=2022-11-01` + 1. For the URI, enter `https://{Endpoint}/api/hubs/{hub}/:generateToken?api-version=2024-01-01`. If you'd like to generate token for MQTT clients, append query parameter `&clientType=mqtt` to the URL. 2. On the **Auth** tab, select **Bearer Token** and paste the Microsoft Entra token fetched in the previous step 3. Select **Send** and you see the Client Access Token in the response: You can enable Microsoft Entra ID in your service and use the Microsoft Entra to "token": "ABCDEFG.ABC.ABC" } ```--5. The Client Access URI is in the format of `wss://<endpoint>/client/hubs/<hub_name>?access_token=<token>` |
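For the REST-based flow in this entry, the Postman call translates directly into code. The sketch below uses Python with `requests` and `azure-identity`; the endpoint and hub are placeholders, and the token scope is an assumption about the Web PubSub audience rather than something stated in the article. It posts to `:generateToken` with `clientType=mqtt` and assembles the MQTT Client Access URL from the response.

```python
import requests
from azure.identity import DefaultAzureCredential

# Placeholders: replace with your resource endpoint and hub.
endpoint = "https://<service_name>.webpubsub.azure.com"
hub = "<hub_name>"

# Assumption: this scope is the Web PubSub audience for Microsoft Entra tokens.
aad_token = DefaultAzureCredential().get_token("https://webpubsub.azure.com/.default").token

# URI and api-version as quoted in this entry; clientType=mqtt requests a token for MQTT clients.
url = f"{endpoint}/api/hubs/{hub}/:generateToken?api-version=2024-01-01&clientType=mqtt"
response = requests.post(url, headers={"Authorization": f"Bearer {aad_token}"})
response.raise_for_status()

client_token = response.json()["token"]
print(f"wss://<service_name>.webpubsub.azure.com/clients/mqtt/hubs/{hub}?access_token={client_token}")
```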
azure-web-pubsub | Overview Mqtt | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/overview-mqtt.md | + + Title: MQTT support in Azure Web PubSub service +description: Get an overview of Azure Web PubSub's support for the MQTT protocols, understand typical use case scenarios to use MQTT in Azure Web PubSub, and learn the key benefits of MQTT in Azure Web PubSub. +keywords: MQTT, MQTT on Azure Web PubSub, MQTT over WebSocket ++ Last updated : 07/15/2024++++# Overview: MQTT in Azure Web PubSub service (Preview) ++[MQTT](https://mqtt.org/) is a lightweight pub/sub messaging protocol designed for devices with constrained resources. Azure Web PubSub service now natively supports MQTT over WebSocket transport. ++You can use MQTT protocols in Web PubSub service for the following scenarios: ++* Pub/Sub among MQTT clients and Web PubSub native clients. +* Broadcast messages to MQTT clients. +* Get notifications for MQTT client lifetime events. ++> [!NOTE] +> MQTT support in Azure Web PubSub is in preview stage. ++## Key features ++### Standard MQTT protocols support ++Web PubSub service supports MQTT 3.1.1 and 5.0 protocols in a standard way that any MQTT SDK with WebSocket transport support can connect to Web PubSub. Users who wish to use Web PubSub in a programming language that doesn't have a native Web PubSub SDK can still connect and communicate using MQTT. ++### Cross-protocol communication ++MQTT clients can communicate with clients of other Web PubSub protocols. Find more details [here](./reference-mqtt-cross-protocol-communication.md) ++### Easy MQTT adoption for current Web PubSub users ++Current users of Azure Web PubSub can use MQTT protocol with minimal modifications to their existing upstream servers. The Web PubSub REST API is already equipped to handle MQTT connections, simplifying the transition process. ++### Client-to-server request/response model ++In addition to the client-to-client pub/sub model provided by the MQTT protocols, Web PubSub also support a client-to-server request/response model. Basically Web PubSub converts a specific kind of MQTT application messages into HTTP requests to registered webhooks, and sends the HTTP responses as application messages back to the MQTT clients. ++For more details, see [MQTT custom event handler protocol](./reference-mqtt-cloud-events.md#user-custom_event-event). ++## MQTT feature support status ++Web PubSub support MQTT protocol version 3.1.1 and 5.0. The supported features include but not limited to: ++* All the levels of Quality Of Service including at most once, at least once and exactly once. +* Persistent session. MQTT sessions are preserved for up to 30 seconds when client connections are interrupted. +* Last Will & Testament +* Client Certificate Authentication ++### Additional features supported for MQTT 5.0 ++* Message Expiry Interval and Session Expiry Interval +* Subscription Identifier. +* Assigned Client ID. +* Flow Control +* Server-Sent Disconnect ++### Not supported feature ++* Wildcard subscription +* Retained messages +* Topic alias +* Shared subscription ++## How MQTT is adapted into Web PubSub's system ++This section assumes you have basic knowledge about MQTT protocols and Web PubSub. You can find the definitions of MQTT terms in [MQTT V5.0.0 Spec](https://docs.oasis-open.org/mqtt/mqtt/v5.0/os/mqtt-v5.0-os.html#_Toc3901003). You can also learn basic concepts of Web PubSub in [Basic Concepts](./key-concepts.md). 
++The following table shows similar or equivalent term mappings between MQTT and Web PubSub. It helps you understand how we adapt MQTT concepts into the Web PubSub's system. It's essential if you want to use the [data-plane REST API](./reference-rest-api-data-plane.md) or [client event handlers](./howto-develop-eventhandler.md) to interact with MQTT clients. +++## Client authentication and authorization ++In general, a server to authenticate and authorize MQTT clients is required. There are two workflows supported by Web PubSub to authenticate and authorize MQTT clients. ++* Workflow 1: The MQTT client gets a [JWT(JSON Web Token)](https://jwt.io) from somewhere with its credential, usually from an auth server. Then the client includes the token in the WebSocket upgrading request to the Web PubSub service, and the Web PubSub service validates the token and auth the client. This workflow is enabled by default. ++![Diagram of MQTT Auth Workflow With JWT.](./media/howto-connect-mqtt-websocket-client/mqtt-jwt-auth-workflow.png) ++* Workflow 2: The MQTT client sends an MQTT CONNECT packet after it establishes a WebSocket connection with the service, then the service calls an API in the upstream server. The upstream server can auth the client according to the username and password fields in the MQTT connection request, and the TLS certificate from the client. This workflow needs explicit configuration. +<!--Add link to tutorial and configuration--> ++![Diagram of MQTT Auth Workflow With Upstream Server.](./media/howto-connect-mqtt-websocket-client/mqtt-upstream-auth-workflow.png) ++These two workflows can be used individually or in combination. If they're used in together, the auth result in the latter workflow would be honored by the service. ++For details on client authentication and authorization, see [How To Connect MQTT Clients to Web PubSub](./howto-connect-mqtt-websocket-client.md). ++## Client lifetime event notification ++You can register event handlers to get notification when a Web PubSub client connection is started or ended, that is, an MQTT session started or ended. ++* [Event handler in Azure Web PubSub service](./howto-develop-eventhandler.md) +* [MQTT CloudEvents Protocol](./reference-mqtt-cloud-events.md) ++## REST API support ++You can use REST API to do the following things: ++* Publish messages to a topic, a connection, a Web PubSub user, or all the connections. +* Manage client permissions and subscriptions. ++[REST API specification for MQTT](./reference-rest-api-mqtt.md) ++## Event listener support ++> [!NOTE] +> Sending MQTT client events to Event Hubs is not supported yet. ++## Next step ++> [!div class="nextstepaction"] +> [Quickstart: Pub/Sub among MQTT clients](./quickstarts-pubsub-among-mqtt-clients.md) ++> [!div class="nextstepaction"] +> [How To Connect MQTT Clients to Web PubSub](./howto-connect-mqtt-websocket-client.md) |
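As a small illustration of the REST API support summarized in this entry: because MQTT topics surface as Web PubSub groups, a backend can broadcast to every MQTT subscriber of a topic by publishing to the group of the same name. This is a sketch under that assumption; the connection string, hub, and group name are placeholders.

```python
from azure.messaging.webpubsubservice import WebPubSubServiceClient

# Placeholders: supply your own connection string and hub name.
service = WebPubSubServiceClient.from_connection_string("<connection-string>", hub="<hub_name>")

# Assumption based on this entry's term mapping: an MQTT topic surfaces as a Web PubSub group,
# so publishing to group `group2` reaches MQTT clients subscribed to topic `group2`.
service.send_to_group("group2", "sensor reading: 21.5", content_type="text/plain")
```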
azure-web-pubsub | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/overview.md | description: Better understand what typical use cases and app scenarios Azure We - Previously updated : 07/26/2024+ Last updated : 07/30/2024 # What is Azure Web PubSub service? -Azure Web PubSub Service makes it easy to build web applications where server and clients need to exchange data in real-time. Real-time data exchange is the bedrock of certain time-sensitive apps developers build and maintain. Developers have used the service in a variety of applications and industries, for example, in chat apps, real-time dashboards, multi-player games, online auctions, multi-user collaborative apps, location tracking, notifications, and more. +Azure Web PubSub Service makes it easy to build web applications where server and clients need to exchange data in real-time. Real-time data exchange is the bedrock of certain time-sensitive apps developers build and maintain. Developers have used the service in a variety of applications and industries, for example, in chat apps, real-time dashboards, multi-player games, online auctions, multi-user collaborative apps, location tracking, notifications, and more. -When an app's usage is small, developers typically opt for a polling mechanism to provide real-time communication between server and clients - clients send repeated HTTP requests to server over a time interval. However, developers often report that while polling mechanism is straightforward to implement, it suffers three important drawbacks. -- Outdated data. -- Inconsistent data. +When an app's usage is small, developers typically opt for a polling mechanism to provide real-time communication between server and clients - clients send repeated HTTP requests to server over a time interval. However, developers often report that while polling mechanism is straightforward to implement, it suffers three important drawbacks. +- Outdated data. +- Inconsistent data. - Wasted bandwidth and compute resources. These drawbacks are the primary motivations that drive developers to look for alternatives. This article provides an overview of Azure Web PubSub service and how developers can use it to build real-time communication channel fast and at scale. These drawbacks are the primary motivations that drive developers to look for al ## What is Azure Web PubSub service used for? ### Streaming token in AI-assisted chatbot-With the recent surge in interest in AI, Web PubSub has become an invaluable tool to developers building AI-enabled applications for token streaming. The service is battle-tested to scale to tens of millions of concurrent connections and offers ultra-low latency. +With the recent surge in interest in AI, Web PubSub has become an invaluable tool to developers building AI-enabled applications for token streaming. The service is battle-tested to scale to tens of millions of concurrent connections and offers ultra-low latency. ### Delivering real-time updates-Any app scenario where updates at the data resource need to be delivered to other components across network can benefit from using Azure Web PubSub. As the name suggests, the service facilities the communication between a publisher and subscribers. A publisher is a component that publishes data updates. A subscriber is a component that subscribes to data updates. +Any app scenario where updates at the data resource need to be delivered to other components across network can benefit from using Azure Web PubSub. 
As the name suggests, the service facilities the communication between a publisher and subscribers. A publisher is a component that publishes data updates. A subscriber is a component that subscribes to data updates. -Azure Web PubSub service is used in a multitude of industries and app scenarios where data is time-sensitive. Here's a partial list of some common use cases. +Azure Web PubSub service is used in a multitude of industries and app scenarios where data is time-sensitive. Here's a partial list of some common use cases. |Use case |Example applications | |-|-| |High frequency data updates | Multi-player games, social media voting, opinion polling, online auctioning | |Live dashboards and monitoring | Company dashboard, financial market data, instant sales update, game leaderboard, IoT monitoring | |Cross-platform chat| Live chat room, online customer support, real-time shopping assistant, messenger, in-game chat |-|Location tracking | Vehicle asset tracking, delivery status tracking, transportation status updates, ride-hailing apps | +|Location tracking | Vehicle asset tracking, delivery status tracking, transportation status updates, ride-hailing apps | |Multi-user collaborative apps | coauthoring, collaborative whiteboard and team meeting apps |-|Cross-platform push notifications | Social media, email, game status, travel alert | +|Cross-platform push notifications | Social media, email, game status, travel alert | |IoT and connected devices | Real-time IoT metrics, managing charging network for electric vehicles, live concert engagement |-|Automation | Real-time trigger from upstream events | +|Automation | Real-time trigger from upstream events | ## What are the benefits using Azure Web PubSub service? Azure Web PubSub service offers real-time, bi-directional communication between |Broadcast to all clients | A server sends data updates to all connected clients. | |Broadcast to a subset of clients | A server sends data updates to a subset of clients arbitrarily defined by you. | |Broadcast to all clients owned by a specific human user | A human user can have multiple browser tabs or device open, you can broadcast to the user so that all the web clients used by the user are synchronized. |-|Client pub/sub | A client sends messages to clients that are in a group arbitrarily defined by you without your server's involvement.| -|Clients to server | Clients send messages to server at low latency. | +|Client pub/sub | A client sends messages to clients that are in a group arbitrarily defined by you without your server's involvement.| +|Clients to server | Clients send messages to server at low latency. | ## How to use the Azure Web PubSub service? There are many different ways to program with Azure Web PubSub service, as some of the samples listed here: -- **Build serverless real-time applications**: Use Azure Functions' integration with Azure Web PubSub service to build serverless real-time applications in languages such as JavaScript, C#, Java and Python. -- **Use WebSocket subprotocol to do client-side only Pub/Sub** - Azure Web PubSub service provides WebSocket subprotocols to empower authorized clients to publish to other clients in a convenient manner.+- **Build serverless real-time applications**: Use Azure Functions' integration with Azure Web PubSub service to build serverless real-time applications in languages such as JavaScript, C#, Java and Python. 
+- **Use WebSocket subprotocol to do client-side only Pub/Sub** - Azure Web PubSub service provides WebSocket subprotocols including MQTT to empower authorized clients to publish to other clients in a convenient manner. - **Use provided SDKs to manage the WebSocket connections in self-host app servers** - Azure Web PubSub service provides SDKs in C#, JavaScript, Java and Python to manage the WebSocket connections easily, including broadcast messages to the connections, add connections to some groups, or close the connections, etc. - **Send messages from server to clients via REST API** - Azure Web PubSub service provides REST API to enable applications to post messages to clients connected, in any REST capable programming languages. |
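To make the server-side options above concrete, here's a minimal sketch of broadcasting from an app server with the `@azure/web-pubsub` server SDK. The hub name `chat`, the group name, and the `WebPubSubConnectionString` environment variable are placeholders chosen for this example only, not values from the article.

```javascript
// Minimal broadcast sketch using the @azure/web-pubsub server SDK.
// "chat" (hub), the group name, and WebPubSubConnectionString are placeholders.
const { WebPubSubServiceClient } = require("@azure/web-pubsub");

async function main() {
  const serviceClient = new WebPubSubServiceClient(
    process.env.WebPubSubConnectionString,
    "chat"
  );

  // Broadcast a JSON payload to every client connected to the hub.
  await serviceClient.sendToAll({ from: "server", text: "Hello, clients!" });

  // Or target only the clients that joined a specific group.
  await serviceClient.group("dashboard-viewers").sendToAll({ metric: "cpu", value: 42 });
}

main().catch(console.error);
```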
azure-web-pubsub | Quickstarts Pubsub Among Mqtt Clients | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstarts-pubsub-among-mqtt-clients.md | + + Title: PubSub among MQTT clients ++description: A quickstarts guide that shows to how to subscribe to messages on a topic and send messages to a topic without the involvement of a typical application server ++++ Last updated : 06/14/2024+++# Publish/subscribe among MQTT clients ++This quickstart guide demonstrates how to +> [!div class="checklist"] +> * **connect** to your Web PubSub resource +> * **subscribe** to messages on a specific topic +> * **publish** messages to a topic ++## Prerequisites +- A Web PubSub resource. If you haven't created one, you can follow the guidance: [Create a Web PubSub resource](./howto-develop-create-instance.md) +- A code editor, such as Visual Studio Code +- Install the dependencies for the language you plan to use ++> [!NOTE] +> Except for the MQTT client libraries mentioned belows, you can choose any standard MQTT client libraries that meet the following requirements to connect to Web PubSub: +> * Support WebSocket transport. +> * Support MQTT protocol 3.1.1 or 5.0. ++# [JavaScript](#tab/javascript) ++```bash +mkdir pubsub_among_clients +cd pubsub_among_clients ++npm install mqtt +``` ++# [C#](#tab/csharp) ++```bash +mkdir pubsub_among_clients +cd pubsub_among_clients ++# Create a new .net console project +dotnet new console ++dotnet add package MqttNet +``` ++# [Python](#tab/python) +```bash +mkdir pubsub_among_clients +cd pubsub_among_clients ++pip install paho-mqtt +``` ++<!--Java, Go, C++(Using VCPKG)--> +++## Connect to Web PubSub ++An MQTT uses a **Client Access URL** to connect and authenticate with your resource. This URL follows a pattern of `wss://<service_name>.webpubsub.azure.com/clients/mqtt/hubs/<hub_name>?access_token=<token>`. ++A client can have a few ways to obtain the Client Access URL. It's best practice to not hard code the Client Access URL in your code. In the production world, we usually set up an app server to return this URL on demand. [Generate Client Access URL](./howto-generate-client-access-url.md) describes the practice in detail. ++For this quick start, you can copy and paste one from Azure portal shown in the following diagram. ++![The diagram shows how to get MQTT client access url.](./media/quickstarts-pubsub-among-mqtt-clients/portal-mqtt-client-access-uri-generation.png) ++As shown in the preceding code, the client has the permissions to send messages to topic `group1` and to subscribe to topic `group2`. +++The following code shows how to connect MQTT clients to WebPubSub with MQTT protocol version 5.0, clean start, 30-seconds session expiry interval. 
++# [JavaScript](#tab/javascript) ++Create a file with name `index.js` and add following code ++```javascript +const mqtt = require('mqtt'); +var client = mqtt.connect(`wss://<service_name>.webpubsub.azure.com/clients/mqtt/hubs/<hub_name>?access_token=<token>`, + { + clientId: "client1", + protocolVersion: 5, // Use MQTT 5.0 protocol + clean: true, + properties: { + sessionExpiryInterval: 30, + }, + }); +``` ++# [C#](#tab/csharp) ++Edit the `Program.cs` file and add following code ++```csharp +using MQTTnet; +using MQTTnet.Client; ++var mqttFactory = new MqttFactory(); +var client = mqttFactory.CreateMqttClient(); +var mqttClientOptions = new MqttClientOptionsBuilder() + .WithWebSocketServer((MqttClientWebSocketOptionsBuilder b) => + b.WithUri("wss://<service_name>.webpubsub.azure.com/clients/mqtt/hubs/<hub_name>?access_token=<token>")) + .WithClientId("client1") + .WithProtocolVersion(MQTTnet.Formatter.MqttProtocolVersion.V500) + .WithCleanStart() + .WithSessionExpiryInterval(30) + .Build(); +await client.ConnectAsync(mqttClientOptions, CancellationToken.None); +``` ++# [Python](#tab/python) +```python +import paho.mqtt.client as mqtt +from paho.mqtt.packettypes import PacketTypes ++def on_connect(client, userdata, flags, reasonCode, properties): + print("Connected with result code "+str(reasonCode)) ++def on_connect_fail(client, userData): + print("Connection failed") + print(userData) ++def on_log(client, userdata, level, buf): + print("log: ", buf) ++host = "<service_name>.webpubsub.azure.com" +port = 443 +client = mqtt.Client(client_id= client_id, transport="websockets", protocol= mqtt.MQTTv5) +client.ws_set_options(path="/clients/mqtt/hubs/<hub_name>?access_token=<token>") +client.tls_set() +client.on_connect = on_connect +client.on_connect_fail = on_connect_fail +client.on_log = on_log +connect_properties.SessionExpiryInterval = 30 +client.connect(host, port, clean_start = True, properties=connect_properties) +``` +++### Troubleshooting ++If your client failed to connect, you could use the Azure Monitor for troubleshooting. See [Monitor Azure Web PubSub](./howto-azure-monitor.md) for more details. ++You can check the connection parameters and get more detailed error messages from the Azure Monitor. For example, the following screenshot of Azure Log Analytics shows that the connection was rejected because it set an invalid keep alive interval. +![Screenshot of Azure Log Analytics.](./media/quickstarts-pubsub-among-mqtt-clients/diagnostic-log.png) ++## Subscribe to a topic ++To receive messages from topics, the client +- must subscribe to the topic it wishes to receive messages from +- has a callback to handle message event ++The following code shows a client subscribes to topics named `group2`. ++# [JavaScript](#tab/javascript) ++```javascript +// ...code from the last step ++// Provide callback to the message event. +client.on("message", async (topic, payload, packet) => { + console.log(topic, payload) +}); ++// Subscribe to a topic. +client.subscribe("group2", { qos: 1 }, (err, granted) => { console.log("subscribe", granted); }) ++``` ++# [C#](#tab/csharp) ++```csharp +// ...code from the last step ++// Provide callback to the message event. +client.ApplicationMessageReceivedAsync += (args) => +{ + Console.WriteLine($"Received message on topic '{args.ApplicationMessage.Topic}': {System.Text.Encoding.UTF8.GetString(args.ApplicationMessage.PayloadSegment)}"); + return Task.CompletedTask; +}; +// Subscribe to a topic "topic". 
+await client.SubscribeAsync("group2", MQTTnet.Protocol.MqttQualityOfServiceLevel.AtLeastOnce); +``` ++# [Python](#tab/python) ++```python +# ...code from the last step ++# Provide callback to the message event. +def subscriber_on_message(client, userdata, msg): + print(msg.topic+" "+str(msg.payload)) +client.on_message = subscriber_on_message ++# Subscribe to a topic "topic". +client.subscribe("group2") ++# Blocking call that processes network traffic, dispatches callbacks and +# handles reconnecting. +# Other loop*() functions are available that give a threaded interface and a +# manual interface. +client.loop_forever() +``` ++++## Publish a message to a group +In the previous step, we've set up everything needed to receive messages from `group1`, now we send messages to that group. ++# [JavaScript](#tab/javascript) ++```javascript +// ...code from the last step ++// Send message "Hello World" in the "text" format to "group1". +client.publish("group1", "Hello World!") +``` ++# [C#](#tab/csharp) ++```csharp +// ...code from the last step ++// Send message "Hello World" in the "text" format to "group1". +await client.PublishStringAsync("group1", "Hello World!"); +``` ++# [Python](#tab/python) ++```python +# ...code from the last step ++# Send message "Hello World" in the "text" format to "group1". +client.publish("group1", "Hello World!") +``` +++## Next steps +By using the client SDK, you now know how to +> [!div class="checklist"] +> * **connect** to your Web PubSub resource +> * **subscribe** to topics +> * **publish** messages to topics ++Next, you learn how to **push messages in real-time** from an application server to your clients. +> [!div class="nextstepaction"] +> [Push message from application server](quickstarts-push-messages-from-server.md) |
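For convenience, the following sketch consolidates the JavaScript steps from this quickstart (connect with MQTT 5.0, subscribe to `group2`, publish to `group1`) into a single runnable file. `<service_name>`, `<hub_name>`, and `<token>` remain placeholders you must replace with values from your own resource.

```javascript
// Consolidated sketch of the quickstart steps: connect over MQTT 5.0,
// subscribe to "group2", then publish to "group1".
const mqtt = require("mqtt");

const client = mqtt.connect(
  "wss://<service_name>.webpubsub.azure.com/clients/mqtt/hubs/<hub_name>?access_token=<token>",
  {
    clientId: "client1",
    protocolVersion: 5,                        // MQTT 5.0
    clean: true,                               // clean start
    properties: { sessionExpiryInterval: 30 }, // 30-second session expiry
  }
);

client.on("connect", () => {
  // Subscribe with QoS 1, then publish once the subscription is granted.
  client.subscribe("group2", { qos: 1 }, (err, granted) => {
    if (err) return console.error(err);
    console.log("subscribed:", granted);
    client.publish("group1", "Hello World!");
  });
});

client.on("message", (topic, payload) => {
  console.log(`message on ${topic}: ${payload.toString()}`);
});

client.on("error", console.error);
```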
azure-web-pubsub | Reference Mqtt Cloud Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-mqtt-cloud-events.md | + + Title: Reference - CloudEvents extension for Azure Web PubSub MQTT event handler with HTTP protocol +description: This reference describes CloudEvents extensions for the Azure Web PubSub MQTT event handler with HTTP protocol. ++++ Last updated : 07/16/2024+++# CloudEvents extension for Azure Web PubSub MQTT event handler with HTTP protocol ++The Web PubSub service delivers client events to the upstream webhook using the [CloudEvents HTTP protocol binding](https://github.com/cloudevents/spec/blob/v1.0.1/http-protocol-binding.md). ++Data sent from the Web PubSub service to the server is always in CloudEvents `binary` format. ++- [Webhook Validation](#webhook-validation) +- [Web PubSub CloudEvents Attribute Extension](#web-pubsub-cloudevents-attribute-extension) +- [Events](#events) + - [Blocking Events](#blocking-events) + - [System `connect` Event](#system-connect-event) + - [User Events](#user-custom_event-event) + - [Unblocking Events](#unblocking-events) + - [System `connected` Event](#system-connected-event) + - [System `disconnected` Event](#system-disconnected-event) ++## Webhook Validation ++The webhook validation follows [CloudEvents](https://github.com/cloudevents/spec/blob/v1.0/http-webhook.md#4-abuse-protection). Each webhook endpoint should accept HTTP OPTIONS requests containing `WebHook-Request-Origin: xxx.webpubsub.azure.com` in the header and reply to the request by including the `WebHook-Allowed-Origin` header. For example: ++`WebHook-Allowed-Origin: *` ++Or: ++`WebHook-Allowed-Origin: xxx.webpubsub.azure.com` ++Currently, [WebHook-Request-Rate](https://github.com/cloudevents/spec/blob/v1.0/http-webhook.md#414-webhook-request-rate) and [WebHook-Request-Callback](https://github.com/cloudevents/spec/blob/v1.0/http-webhook.md#413-webhook-request-callback) aren't supported. ++## Web PubSub CloudEvents Attribute Extension ++This extension defines the attributes used by Web PubSub for every event it produces. +++| Name | Type | Description | Example | +|||-|| +| `userId` | `string` | The authenticated user ID of the connection. | | +| `hub` | `string` | The hub the connection belongs to. | | +| `connectionId` | `string` | The client ID in the MQTT protocol. | | +| `eventName` | `string` | The name of the event without a prefix. | | +| `subprotocol` | `string` | It's always `mqtt`. | | +| `connectionState` | `string` | Defines the state of the connection. You can use the same response header to reset the value of the state. Multiple `connectionState` headers aren't allowed. Encode the string value in base64 if it contains complex characters; for example, use `base64(jsonString)` to pass complex objects using this attribute. | | +| `signature` | `string` | The signature for the upstream webhook to validate if the incoming request is from the expected origin. The service calculates the value using both primary and secondary access keys as the HMAC key: `Hex_encoded(HMAC_SHA256(accessKey, connectionId))`. The upstream should verify if the request is valid before processing it. | | +| `physicalConnectionId` | `string` | A unique ID generated by the service for each physical connection. Its format may change, so you shouldn't parse it. | | +| `sessionId` | `string` | A unique ID generated by the service for each session. It doesn't exist in the [`connect`](#system-connect-event) event. 
Its format may change, so you shouldn't parse it. | | ++## Events ++There are two types of events: *blocking* events, where the service waits for the response of the event to continue, and *unblocking* events, where the service doesn't wait for the response of such event before processing the next message. ++### Blocking Events +- [System `connect` Event](#system-connect-event) +- [User Events](#user-custom_event-event) ++### Unblocking Events +- [System `connected` Event](#system-connected-event) +- [System `disconnected` Event](#system-disconnected-event) ++> [!NOTE] +> * For a TypeSpec version of this specification, see [TypeSpec](https://github.com/Azure/azure-webpubsub/blob/main/protocols/server/cloud-events/main.tsp). You may want to open the filein the [TypeSpec Playground](https://typespec.io/playground) for a better reading experience and a generated Swagger UI. +> * For a Swagger version of this specification, see [Swagger](https://github.com/Azure/azure-webpubsub/blob/main/protocols/server/cloud-events/tsp-output/%40typespec/openapi3/openapi.yaml). You may want to open the file in the [Swagger Editor](https://editor.swagger.io/) for a better reading experience and a generated Swagger UI. ++### System `connect` Event ++* `ce-type`: `azure.webpubsub.sys.connect` +* `Content-Type`: `application/json` ++#### Request Format ++Every time the services receive a CONNECT packet from clients, it sends a `connect` request to upstream. ++```HTTP +POST /upstream HTTP/1.1 +Host: xxxxxx +WebHook-Request-Origin: xxx.webpubsub.azure.com +Content-Type: application/json; charset=utf-8 +Content-Length: nnnn +ce-specversion: 1.0 +ce-type: azure.webpubsub.sys.connect +ce-source: /hubs/{hub}/client/{clientId}/{physicalConnectionId} +ce-id: {eventId} +ce-time: 2021-01-01T00:00:00Z +ce-signature: sha256={connection-id-hash-primary},sha256={connection-id-hash-secondary} +ce-connectionId: {clientId} +ce-hub: {hub} +ce-eventName: connect +ce-physicalConnectionId: {physicalConnectionId} ++{ + "mqtt": { + "protocolVersion": 4, + "cleanStart": true, + "username": null, + "password": null, + "userProperties": null + }, + "claims": { + "type1": ["value1"] + }, + "query": { + "queryKey1": ["queryValue1"] + }, + "headers": { + "Connection": ["Upgrade"] + }, + "subprotocols": ["mqtt"], + "clientCertificates": [ + { + "thumbprint": "3ce9b08a37566915dec4d1662cd2102121a99868", + "content": "{string content of PEM format certificate}" + } + ] +} +``` ++* `mqtt.protocolVersion`: `{MQTT protocol version of the client}` + An integer; possible values are `4` (MQTT 3.1.1) and `5` (MQTT 5.0). ++* `mqtt.username`: `{Username field in the MQTT CONNECT packet}` + A UTF-8 encoded string. `mqtt.username` and `mqtt.password` can be used for upstream webhook authentication of the clients. ++* `mqtt.password`: `{Password field in the MQTT CONNECT packet}` + Base64-encoded binary data. Combined with the username for client authentication. ++* `mqtt.userProperties`: `{User properties field in the MQTT CONNECT packet}` + A list of string key-value pairs. It's additional diagnostic or other information provided by clients whose protocols support user properties. Currently, only MQTT 5.0 supports it. ++#### Success Response Format ++* Header `ce-connectionState`: If this header exists, the connection's state will be updated to the value of the header. Note that only *blocking* events can update the connection state. The sample below uses a base64 encoded JSON string to store the complex state for the connection. 
++* HTTP Status Code: + * `204`: Success, with no content. + * `200`: Success; the content SHOULD be in JSON format, with the following properties allowed: ++```HTTP +HTTP/1.1 200 OK ++{ + "groups": ["newGroup1"], + "subProtocol": "mqtt", + "userId": "userId1", + "roles": ["webpubsub.sendToGroup", "webpubsub.joinLeaveGroup"], + "mqtt": { + "userProperties": [ + { + "name": "name1", + "value": "value1" + } + ] + } +} +``` ++* `subprotocol`: It should always be `mqtt` or null. When it's null, it defaults to `mqtt`. ++* `userId`: `{authenticated user ID}` + As the service allows anonymous connections, it's the `connect` event's responsibility to tell the service the user ID of the client connection. The service will read the user ID from the response payload `userId` if it exists. + This property takes effect only when a new session is created for the physical connection. If the physical connection is connected to an existing session, this property is ignored, and the user ID of the existing session is used. ++* `groups`: `{groups to join}` + This property provides a convenient way to add the connection to one or multiple groups. In terms of MQTT, it assigns one or multiple subscriptions with default subscription options to the client, and the group names are topic filters. + This property takes effect only when a new session is created for the physical connection. If the physical connection is connected to an existing session, this property is ignored. ++* `roles`: `{roles the client has}` + This property provides a way for the upstream webhook to authorize the client. There are different roles to grant initial permissions for PubSub WebSocket clients. Details about the permissions are described in [Client Permissions](./concept-client-protocols.md#permissions). + This property takes effect only when a new session is created for the physical connection. If the physical connection is connected to an existing session, this property is ignored. ++* `mqtt.userProperties`: `{user properties that will be sent to clients in the CONNACK packet}` + A list of string key-value pairs. They will be converted to user properties in the CONNACK and sent to clients whose protocols support user properties. Currently, only MQTT 5.0 supports user properties. The upstream webhook can use this property for additional diagnostics or other information. ++Once the service receives a success response from upstream, it sends a successful CONNACK packet to the client. ++#### Error Response Format ++* `4xx` or `5xx`: Error. The content-type should be `application/json`. Once the service receives an error response, it sends a failed CONNACK packet to the client accordingly. ++```HTTP +HTTP/1.1 401 Unauthorized +{ + "mqtt": { + "code": 138, // The CONNACK return code / reason code + "reason": "banned by server", // The reason string + "userProperties": [{ "name": "name1", "value": "value1" }] + } +} +``` ++* `mqtt.code`: `{return code (MQTT 3.1.1) or reason code (MQTT 5.0) that will be sent to clients in the CONNACK packet}` + An integer indicating the failure reason. It will be sent to the clients in the CONNACK packet as a [return code (MQTT 3.1.1)](https://docs.oasis-open.org/mqtt/mqtt/v3.1.1/errata01/os/mqtt-v3.1.1-errata01-os-complete.html#_Toc385349257) or [reason code (MQTT 5.0)](https://docs.oasis-open.org/mqtt/mqtt/v5.0/os/mqtt-v5.0-os.html#_Toc3901079). The upstream webhook should select a valid integer value defined by the MQTT protocols according to the protocol versions of the clients. 
If the upstream webhook sets an invalid value, clients will receive "unspecified error" in the CONNACK packet. ++* `mqtt.reason`: `{failure reason string}` + A human-readable failure reason string designed for diagnostics. It will be sent to clients whose protocols support the reason string in the CONNACK packet. Currently, only MQTT 5.0 supports it. ++* `mqtt.userProperties`: `{user properties that will be sent to clients in the CONNACK packet}` + A list of string key-value pairs. They will be converted to user properties in the CONNACK packet and sent to clients whose protocols support user properties. Currently, only MQTT 5.0 supports user properties. The upstream webhook can use this property for additional diagnostics or other information. ++### System `connected` Event ++The service uses this event to notify the upstream that a **new** session is created. If a client connects to an **existing** session, there is no upstream call. In terms of MQTT, the service sends this event when it sends a Session Present flag 0 CONNACK packet to clients. ++* `ce-type`: `azure.webpubsub.sys.connected` +* `Content-Type`: `application/json` ++Request body is empty JSON. ++#### Request Format ++```HTTP +POST /upstream HTTP/1.1 +Host: xxxxxx +WebHook-Request-Origin: xxx.webpubsub.azure.com +Content-Type: application/json; charset=utf-8 +Content-Length: nnnn +ce-specversion: 1.0 +ce-type: azure.webpubsub.sys.connected +ce-source: /hubs/{hub}/client/{clientId}/{physicalConnectionId} +ce-id: {eventId} +ce-time: 2021-01-01T00:00:00Z +ce-signature: sha256={connection-id-hash-primary},sha256={connection-id-hash-secondary} +ce-connectionId: {clientId} +ce-hub: {hub} +ce-eventName: connected +ce-physicalConnectionId: {physicalConnectionId} +ce-sessionId: {sessionId} ++{} +``` ++Compared to the `connect` event, the `connected` event includes a new header `ce-sessionId`. It's a unique ID generated by the service for each session. For the rest of the event types, the session ID header is also included. ++#### Response Format ++* `2xx`: Success response. ++The `connected` event is asynchronous. When the response status code isn't successful, the service logs an error. ++```HTTP +HTTP/1.1 200 OK +``` ++### System `disconnected` Event ++The service uses this event to notify upstream that a session has expired or ended. ++* `ce-type`: `azure.webpubsub.sys.disconnected` +* `Content-Type`: `application/json` ++#### Request Format ++```HTTP +POST /upstream HTTP/1.1 +Host: xxxxxx +WebHook-Request-Origin: xxx.webpubsub.azure.com +Content-Type: application/json; charset=utf-8 +Content-Length: nnnn +ce-specversion: 1.0 +ce-type: azure.webpubsub.sys.disconnected +ce-source: /hubs/{hub}/client/{clientId}/{physicalConnectionId} +ce-id: {eventId} +ce-time: 2021-01-01T00:00:00Z +ce-signature: sha256={connection-id-hash-primary},sha256={connection-id-hash-secondary} +ce-connectionId: {clientId} +ce-hub: {hub} +ce-eventName: disconnected +ce-physicalConnectionId: {physicalConnectionId} +ce-sessionId: {sessionId} ++{ + "reason":"", + "mqtt":{ + "initiatedByClient": true, + "disconnectPacket":{ + "code": 0, + "userProperties": [{ "name": "name1", "value": "value1" }] + } + } +} +``` ++* `reason`: `{nullable string of the disconnect reason}` + A human-readable string describing why the client disconnected. It may be null if it's a normal disconnection. If multiple connections connect and disconnect in a session, this property stands for the reason of the last disconnection. 
The reason is provided by clients or the service, depending on who initiated the disconnection. + For MQTT, the reason may come from the reason string in the DISCONNECT packet from MQTT 5.0 clients or from the service. MQTT 3.1.1 protocol doesn't have a reason string in the DISCONNECT packet, so this property for MQTT 3.1.1 clients may be null or provided by the service. ++* `mqtt.initiatedByClient`: A boolean flag indicating whether the client initiates the disconnection by sending a DISCONNECT packet. It's `true` when the client sends a DISCONNECT packet to end the connection; otherwise, it's `false`. ++* `mqtt.disconnectPacket`: `{nullable object containing properties of the last delivered DISCONNECT packet}` + It's null when the connection is disconnected without the client or service sending a DISCONNECT packet, for example, due to an IO error or network failure. + The upstream can use `mqtt.initiatedByClient` to determine who sent the DISCONNECT packet. ++* `mqtt.disconnectPacket.code`: `{integer reason code in the DISCONNECT packet}` + For MQTT 3.1.1 clients, as there's no reason code in the DISCONNECT packet, this property defaults to 0. + For MQTT 5.0, it's the reason code in the DISCONNECT packet sent from the client or service. ++* `mqtt.disconnectPacket.userProperties`: `{user properties in the DISCONNECT packet}` + A list of string key-value pairs. Clients can use this property to send additional diagnostics or other information to the upstream. If the DISCONNECT packet is sent by the service, it's null. ++#### Response Format ++* `2xx`: Success response. ++The `disconnected` event is asynchronous. When the response status code isn't successful, the service logs an error. ++```HTTP +HTTP/1.1 200 OK +``` ++### User `{custom_event}` Event ++The service converts specific messages published by MQTT clients to HTTP requests to the upstream webhook and converts the responses from the upstream to messages and sends them to clients. ++#### Trigger Conditions ++* An MQTT client publishes a message to a topic in the format `$webpubsub/server/events/{eventName}`. `{eventName}` cannot contain the `/` character. +* The MQTT client has permission to publish to that topic. +* If the client's protocol is MQTT 5.0, and the PUBLISH packet contains a content type field, the content type value should be a valid MIME type because it will be converted to the `Content-Type` header of an HTTP request. ++#### Request Format ++##### MQTT Request Packet ++The following table shows the usage of fields in an MQTT request message. ++| MQTT Request Fields | Usage | +|--|--| +| Topic | Indicates the message is a request to upstream, and specifies the event name.| +| Payload | Be the body of the HTTP request. | +| Content Type | Be the content type header of the HTTP request. | +| Correlation Data | Be the correlation data field in the **response** message, used by sender to identify which request the response message is for. | +| QoS | Be the level of assurance for delivery for both request and response message. | +| User Properties | Becomes HTTP headers prefixed with `mqtt-` in the HTTP requests. Provides additional information between clients and upstream webhook. | ++The following code block shows a sample MQTT PUBLISH packet in the JSON format. 
+```json +{ + "topic": "$webpubsub/server/events/{eventName}", + "payload": "{mqtt-request-payload}", + "content-type": "{request/MIME}", + "correlation-data": "{correlation-data}", + "QoS": "{qos}", + "user-properties": [ + { + "name": "{request-property-1}", + "value": "{request-property-value1}" + } + ] +} +``` ++The following code block shows the HTTP request converted from the MQTT PUBLISH packet. ++##### HTTP Request +```HTTP +POST /upstream HTTP/1.1 +Host: xxxxxx +WebHook-Request-Origin: xxx.webpubsub.azure.com +Content-Type: {request/MIME} +Content-Length: nnnn +ce-specversion: 1.0 +ce-type: azure.webpubsub.user.{eventName} +ce-source: /hubs/{hub}/client/{clientId}/{physicalConnectionId} +ce-id: {eventId} +ce-time: 2021-01-01T00:00:00Z +ce-signature: sha256={connection-id-hash-primary},sha256={connection-id-hash-secondary} +ce-connectionId: {clientId} +ce-hub: {hub} +ce-eventName: {eventName} +ce-physicalConnectionId: {physicalConnectionId} +ce-sessionId: {sessionId} +mqtt-{request-property-1}: {request-property-value1} ++{mqtt-request-payload} ++``` ++#### Response format ++The following table shows the usage of different fields in the HTTP response. ++| HTTP Response Field| Usage| +|--|--| +| Content Type | Be the content type field of response MQTT message. | +| Body | Be the payload of response MQTT message.| +| Headers prefixed with `mqtt-` | Become user properties in the response MQTT message. Provides additional information between clients and upstream webhook. | +| Status Code | Indicates whether the request succeeds. If it's successful (2xx), the response topic is `$webpubsub/server/events/{eventName}/succeeded`, otherwise `$webpubsub/server/events/{eventName}/failed`. It also becomes a user property named `azure-status-code` in the response MQTT message. | ++The following code block shows a sample HTTP response. +##### HTTP Response +```HTTP +HTTP/1.1 200 OK +Host: xxxxxx +Content-Type: {response/MIME} +Content-Length: nnnn +ce-connectionState: eyJrZXkiOiJhIn0= +mqtt-response-property-1: response-property-value1 ++{mqtt-response-payload} ++``` ++##### MQTT Response +The following code block shows a sample MQTT response message converted from the HTTP response. +```json +{ + "topic": "$webpubsub/server/events/{eventName}/succeeded", + "payload": "{mqtt-response-payload}", + "content-type": "{response/MIME}", + "correlation-data": "{correlation-data}", + "QoS": "{qos}", + "user-properties": [ + { + "name": "{response-property-1}", + "value": "{response-property-value1}" + } + ] +} +``` ++## Next steps + |
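As an illustration of the webhook validation and `ce-signature` rules described in this reference, here's a minimal sketch of an upstream endpoint in Node.js. Express, the `/upstream` route, and the `WEBPUBSUB_ACCESS_KEYS` environment variable are assumptions made for this example; the signature formula and response shapes come from the sections above.

```javascript
// Sketch of an upstream webhook: abuse-protection handshake plus ce-signature
// validation, following the rules in this reference. Express and the
// WEBPUBSUB_ACCESS_KEYS environment variable are assumptions for this example.
const crypto = require("crypto");
const express = require("express");

const app = express();
app.use(express.json());

// Abuse protection: reply to OPTIONS with WebHook-Allowed-Origin.
app.options("/upstream", (req, res) => {
  res.set("WebHook-Allowed-Origin", req.get("WebHook-Request-Origin") ?? "*");
  res.status(200).end();
});

// ce-signature is "sha256={hash-primary},sha256={hash-secondary}", where each
// hash is Hex_encoded(HMAC_SHA256(accessKey, connectionId)).
function isValidSignature(req) {
  const connectionId = req.get("ce-connectionId");
  if (!connectionId) return false;
  const signatures = (req.get("ce-signature") ?? "").split(",");
  const accessKeys = (process.env.WEBPUBSUB_ACCESS_KEYS ?? "").split(",");
  return accessKeys.some((key) => {
    const expected =
      "sha256=" + crypto.createHmac("sha256", key).update(connectionId).digest("hex");
    return signatures.includes(expected);
  });
}

app.post("/upstream", (req, res) => {
  if (!isValidSignature(req)) return res.status(401).end();
  if (req.get("ce-type") === "azure.webpubsub.sys.connect") {
    // Accept the MQTT connection; see the success response format above.
    return res
      .status(200)
      .json({ roles: ["webpubsub.sendToGroup", "webpubsub.joinLeaveGroup"] });
  }
  res.status(200).end(); // connected/disconnected events only need a 2xx.
});

app.listen(8080);
```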
azure-web-pubsub | Reference Mqtt Cross Protocol Communication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-mqtt-cross-protocol-communication.md | + + Title: Cross-protocol communication between MQTT clients and Azure Web PubSub clients +description: Describes the behavior of cross-protocol communication between MQTT clients and Web PubSub clients +keywords: MQTT, MQTT on Azure Web PubSub, MQTT over WebSocket ++ Last updated : 07/30/2024+++++# Cross-protocol communication between MQTT clients and Web PubSub clients ++Sometimes you'd like to have MQTT clients and other clients using Azure Web PubSub's protocols together in one hub, enabling cross-protocol communication. This document defines how such communication works. ++## Concepts ++First, let's clarify the concepts in the context of cross-protocol communication. ++* **MQTT clients**: Clients using [MQTT](https://mqtt.org/) protocols. +* **Web PubSub clients**: Clients using Web PubSub's own protocols with pub/sub capabilities. Examples include `json.webpubsub.azure.v1`, `protobuf.webpubsub.azure.v1`, `json.reliable.webpubsub.azure.v1`, and `protobuf.reliable.webpubsub.azure.v1`. You can find an overview of Web PubSub's client protocols [here](./concept-client-protocols.md). +* **Reliable Web PubSub clients**: A subset of Web PubSub clients using Web PubSub reliable protocols, specifically `json.reliable.webpubsub.azure.v1`, and `protobuf.reliable.webpubsub.azure.v1`. ++## Concept mappings ++### Message routing behavior ++From the [Overview: MQTT in Azure Web PubSub Service](./overview-mqtt.md), we learn that joining a group in Web PubSub's protocols works the same as subscribing to the same named topic in MQTT. Similarly, sending to a group means publishing to the same named topic. This means if a client using Web PubSub protocols joins group `a`, it'll' get messages from MQTT clients sending to topic `a`, and vice versa. ++### Message content type conversion ++In Web PubSub protocols, there are four message data types: Text, Binary, JSON, and Protobuf. ++In MQTT protocols, there's no field to indicate message content type in MQTT 3.1.1, but there's a string "content type" field in MQTT 5.0. ++Here's the conversion between the MQTT "content type" field and Web PubSub message data type: ++| MQTT "content type" | Web PubSub "message data type" | +|--|--| +| `application/json` | JSON | +| `text/plain` | Text | +| `application/x-protobuf` | Protobuf | +| `application/octet-stream` | Binary | +| Absent or MQTT 3.1.1 | Binary | ++### Message content conversion ++For text-based Web PubSub message data types, including `Text` and `Json`, they convert to and from MQTT by UTF-8 encoding. For binary-based Web PubSub message data types, including `Protobuf` and `Binary`, they remain exactly the same in the MQTT message content. ++### Message quality of service (QoS) conversion ++In Web PubSub protocols, the QoS of a message a client receives is determined by the client's protocol. Reliable clients get only QoS 1 messages, while other clients get only QoS 0 messages. ++In MQTT protocols, the QoS of a message a client receives is determined by both the message QoS (sending QoS) and the granted subscription QoS, specifically the smaller value of the two. 
++When messages transfer across protocols, the received QoS is defined as follows: ++| Message sender | Message receiver | QoS evaluation | +|--|--|--| +| MQTT clients | Reliable Web PubSub clients | QoS is always 1 | +| MQTT clients | Other Web PubSub clients | QoS is always 0 | +| Web PubSub clients | MQTT clients | Min(1, granted subscription QoS) | ++### Others ++Only the message properties listed below take effect across protocols; other properties don't carry over. ++* MQTT message expiry interval |
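The following sketch illustrates the routing and content-type mapping described above: an MQTT 5.0 client publishes JSON to topic `a`, and a client using a Web PubSub protocol that joined group `a` receives it as the JSON data type. The `@azure/web-pubsub-client` package, the placeholder client access URLs, and the `webpubsub.joinLeaveGroup` role assignment are assumptions for this example.

```javascript
// Cross-protocol sketch: MQTT publish to topic "a" reaches a Web PubSub-protocol
// client that joined group "a". Client access URLs are placeholders.
const mqtt = require("mqtt");
const { WebPubSubClient } = require("@azure/web-pubsub-client");

async function main() {
  // Web PubSub-protocol client: join group "a" and listen for group messages.
  // Joining a group requires the webpubsub.joinLeaveGroup role.
  const wpsClient = new WebPubSubClient("<web-pubsub-client-access-url>");
  wpsClient.on("group-message", (e) => {
    console.log(`group=${e.message.group} dataType=${e.message.dataType}`, e.message.data);
  });
  await wpsClient.start();
  await wpsClient.joinGroup("a");

  // MQTT 5.0 client: publish JSON to topic "a"; the "application/json" content
  // type maps to the JSON data type for Web PubSub-protocol receivers.
  const mqttClient = mqtt.connect("<mqtt-client-access-url>", { protocolVersion: 5 });
  mqttClient.on("connect", () => {
    mqttClient.publish("a", JSON.stringify({ temperature: 21 }), {
      qos: 1,
      properties: { contentType: "application/json" },
    });
  });
}

main().catch(console.error);
```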
azure-web-pubsub | Reference Rest Api Mqtt | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-rest-api-mqtt.md | + + Title: Azure Web PubSub service data plane REST API specification for MQTT +description: Clarifies the meanings of the Web PubSub data-plane REST API in the context of MQTT ++++ Last updated : 07/23/2024+++# REST API specification for MQTT ++This document clarifies the meanings of the Web PubSub data-plane REST API in the context of MQTT. The existing Web PubSub REST API documentation is focused on Web PubSubΓÇÖs own protocols, which may make its application to MQTT unclear. ++## Term mappings ++To begin, familiarize yourself with the term mappings between Web PubSub and MQTT. If you are already familiar with these terms, you may skip this section. +++## Operation mappings ++For a comprehensive list of available operations, refer to the [REST API reference](/rest/api/webpubsub/dataplane/web-pub-sub). ++The REST API operations are categorized into the following groups: ++* [Message sending operations](#message-sending-operations) +* [Subscription management operations](#subscription-management-operations) +* [Permission management operations](#permission-management-operations) +* [Existence management operations](#existence-management-operations) +* [Client token generation operations](#client-token-generation-operations) ++Each of these categories is defined below. ++### Message sending operations ++| REST API Operation | Effect on MQTT | +| | -- | +| Send to Group | MQTT connections subscribed to the topic named with the group name will receive the message. | +| Send to All<br>Send to User<br>Send to Connection | The respective MQTT connections will receive a message with the topic `$webpubsub/server/messages`. | ++Messages are published with a QoS of 1. The QoS of received messages may be downgraded based on the clients' subscription options, following the standard MQTT downgrading rules. ++### Subscription management operations ++| REST API Operation | Effect on MQTT | +| | -- | +| Add Connections to Groups<br>Add Connection to Group | Adds a subscription for the specified connections. | +| Add User to Group | Adds a subscription for all connections of the specified user. | +| Remove Connection from All Groups<br>Remove Connection from Group<br>Remove Connections from Groups<br>Remove User from All Groups<br>Remove User from Group | Removes one or all subscriptions for the specified connections or users. | ++The group name corresponds to the MQTT topic filter. When adding connections or users to groups, default MQTT subscription options are used. ++### Permission management operations ++These operations are straightforward in the context of MQTT and thus the definition is ignored. +* Check Permission +* Grant Permission +* Revoke Permission ++### Existence management operations ++| REST API Operation | Effect on MQTT | +| | -- | +| Connection Exists<br>Group Exists<br>User Exists | Checks whether a session exists for the specified connection, user, or group. Note that this differs from checking if a connection is currently online. | +| Close All Connections<br>Close Group Connections<br>Close User Connections | Ends the specified sessions and terminates the corresponding physical connections. | ++### Client token generation operations ++| REST API Operation | Effect on MQTT | +| | -- | +| Generate Client Token | Generates the connection token and URL for MQTT clients to connect. 
| ++MQTT support is available starting with REST API version `2024-01-01`. You must specify the query parameter `clientType=MQTT` for MQTT clients. |
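As an example of the message sending operations, the following sketch calls Send to Group through the `@azure/web-pubsub` server SDK, which wraps the data-plane REST API. MQTT connections subscribed to the topic with the same name receive the message at QoS 1, subject to their subscription options. The hub name `hub1` and the connection string variable are placeholders for this example.

```javascript
// Send to Group via the server SDK: MQTT clients subscribed to the topic
// "sensors/temperature" receive this message. Hub name and connection string
// are placeholders.
const { WebPubSubServiceClient } = require("@azure/web-pubsub");

async function main() {
  const serviceClient = new WebPubSubServiceClient(
    process.env.WebPubSubConnectionString,
    "hub1"
  );

  // The group name corresponds to the MQTT topic filter, per the term mappings above.
  await serviceClient
    .group("sensors/temperature")
    .sendToAll({ deviceId: "d1", value: 21.5 });
}

main().catch(console.error);
```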
azure-web-pubsub | Tutorial Upstream Auth Mqtt Client | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/tutorial-upstream-auth-mqtt-client.md | + + Title: Tutorial - Authenticate and authorize MQTT clients with Azure Web PubSub event handlers +description: A tutorial to walk through how to authenticate and authorize MQTT clients based on client certificates, username, and password. ++++ Last updated : 07/12/2024+++# Tutorial - Authenticate and authorize MQTT clients based on client certificates with event handlers ++In this tutorial, you'll learn how to write a .NET web server to authenticate and authorize MQTT clients. ++## Prerequisites ++* An Azure account with an active subscription. If you don't have an Azure account, you can [create an account for free](https://azure.microsoft.com/free/). +* An Azure Web PubSub service (must be Standard tier or above). +* A client certificate in PEM format. +* [.NET Runtime](https://dotnet.microsoft.com/download/dotnet) installed. +* [Node.js](https://nodejs.org) ++## Deploy Azure Web PubSub service ++Here are the Bicep/Azure Resource Manager templates to deploy an Azure Web PubSub service with client certificate authentication enabled and event handlers configured. ++We configure the `connect` event handler to tell the service the webhook endpoint for authenticating and authorizing clients. We set it to `tunnel:///MqttConnect`. `tunnel://` is a special syntax leveraging the [awps-tunnel](./howto-web-pubsub-tunnel-tool.md) tool to expose your local auth server to public network. `/MqttConnect` is the endpoint that will be exposed by your local auth server. ++We enable client certificate authentication via the property `tls.clientCertEnabled` so that the client certificate is sent to your server in the `connect` event. ++Also note that `anonymousConnectPolicy` needs to be set to `allow` so clients no longer need to send access tokens. 
++# [Bicep](#tab/bicep) ++```bicep +param name string +param hubName string = 'hub1' +param eventHandlerUrl string = 'tunnel:///MqttConnect' +param location string = resourceGroup().location ++resource awps 'Microsoft.SignalRService/WebPubSub@2023-03-01-preview' = { + name: name + location: location + sku: { + name: 'Standard_S1' + tier: 'Standard' + size: 'S1' + capacity: 1 + } + properties: { + tls: { + clientCertEnabled: true + } + } +} ++resource hub 'Microsoft.SignalRService/WebPubSub/hubs@2023-03-01-preview' = { + parent: awps + name: '${hubName}' + properties: { + eventHandlers: [ + { + urlTemplate: eventHandlerUrl + userEventPattern: '*' + systemEvents: [ + 'connect' + ] + } + ] + anonymousConnectPolicy: 'allow' + } +} +``` ++# [Azure Resource Manager](#tab/arm) ++```json +{ + "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "name": { + "type": "String" + }, + "hubName": { + "defaultValue": "hub1", + "type": "String" + }, + "eventHandlerUrl": { + "defaultValue": "tunnel:///MqttConnect", + "type": "String" + }, + "location": { + "defaultValue": "[resourceGroup().location]", + "type": "String" + } + }, + "resources": [ + { + "type": "Microsoft.SignalRService/WebPubSub", + "apiVersion": "2023-03-01-preview", + "name": "[parameters('name')]", + "location": "[parameters('location')]", + "sku": { + "name": "Standard_S1", + "tier": "Standard", + "size": "S1", + "capacity": 1 + }, + "properties": { + "tls": { + "clientCertEnabled": true + } + } + }, + { + "type": "Microsoft.SignalRService/WebPubSub/hubs", + "apiVersion": "2023-03-01-preview", + "name": "[concat(parameters('name'), '/', parameters('hubName'))]", + "dependsOn": [ + "[resourceId('Microsoft.SignalRService/WebPubSub', parameters('name'))]" + ], + "properties": { + "eventHandlers": [ + { + "urlTemplate": "[parameters('eventHandlerUrl')]", + "userEventPattern": "*", + "systemEvents": [ + "connect" + ] + } + ], + "anonymousConnectPolicy": "allow" + } + } + ] +} +``` ++++## Set up auth server ++We've provided an auth server sample [here](https://github.com/Azure/azure-webpubsub/tree/main/samples/csharp/mqttAuthServer). Please download the project. ++Let's take a look at the project structure: +``` +- mqttAuthServer + - Models + - MqttConnectEventRequest.cs + - ... + - MqttAuthServer.csproj + - Program.cs +``` ++The `Models` directory contains all the model files to describe the request and response body of MQTT `connect` event. The `Program.cs` contains the logic to handle MQTT `connect` event, including parsing the client certificate contents from request, validating the certificates, and authorizing the client. ++The following code snippet is the main logic of handling `connect` event request: +```cs + var request = await httpContext.Request.ReadFromJsonAsync<MqttConnectEventRequest>(); + var certificates = request.ClientCertificates.Select(cert => GetCertificateFromPemString(cert.Content)); + // Simulate Logic to validate client certificate + if (!request.Query.TryGetValue("failure", out _)) + { + // As a demo, we just accept all client certificates and grant the clients with permissions to publish and subscribe to all the topics when the query parameter "success" is present. 
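+        // (Note: the `if` condition above accepts the connection when the "failure" query parameter is absent; the `else` branch below rejects it when "failure" is present.)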
+ await httpContext.Response.WriteAsJsonAsync(new MqttConnectEventSuccessResponse() + { + Roles = ["webpubsub.joinLeaveGroup", "webpubsub.sendToGroup"] + }); + } + else + { + // If you want to reject the connection, you can return a MqttConnectEventFailureResponse + var mqttCodeForUnauthorized = request.Mqtt.ProtocolVersion switch + { + 4 => 5, // UnAuthorized Return Code in Mqtt 3.1.1 + 5 => 0x87, // UnAuthorized Reason Code in Mqtt 5.0 + _ => throw new NotSupportedException($"{request.Mqtt.ProtocolVersion} is not supported.") + }; + httpContext.Response.StatusCode = (int)HttpStatusCode.Unauthorized; + await httpContext.Response.WriteAsJsonAsync(new MqttConnectEventFailureResponse(new MqttConnectEventFailureResponseProperties() + { + Code = mqttCodeForUnauthorized, + Reason = "Invalid Certificate" + } + )); + } +``` ++To run the project, execute the following command in the root directory. +```dotnetcli +dotnet run +``` +++### Expose server endpoint to public network ++#### Download and install awps-tunnel +The tool runs on [Node.js](https://nodejs.org/) version 16 or higher. ++```bash +npm install -g @azure/web-pubsub-tunnel-tool +``` ++#### Use the service connection string and run +```bash +export WebPubSubConnectionString="<your connection string>" +awps-tunnel run --hub {hubName} --upstream http://localhost:{portExposedByYourAuthServer} +``` ++## Implement MQTT clients ++We will implement the client side in Node.JS. ++Initialize a NodeJS project with the following command. +```bash +npm init +``` ++Install the `mqtt` module. +```bash +npm install mqtt +``` ++Create a new file named `index.js`, and add the following code to the file. ++```javascript +const mqtt = require('mqtt'); ++var client = mqtt.connect(`wss://{serviceName}.webpubsub.azure.com/clients/mqtt/hubs/{hubName}`, + { + clientId: "client1", + cert: `--BEGIN CERTIFICATE-- +{Complete the certificate here} +--END CERTIFICATE--`, + key: `--BEGIN PRIVATE KEY-- +{Complete the private key here} +--END PRIVATE KEY--`, + protocolVersion: 5, + }); +client.on("connect", (connack) => { + console.log("connack", connack); +}); +client.on("error", (err) => { + console.log(err); +}); +``` ++Update the `index.js`: +* Update the `{serviceName}` and `{hubName}` variable according to the resources you created. +* Complete the client certificate and key in the file. ++Then you're able to run the project with command +```bash +node index.js +``` ++If everything works well, you'll be able to see a successful CONNACK response printed in the console. ++``` +connack Packet { + cmd: 'connack', + retain: false, + qos: 0, + dup: false, + length: 2, + topic: null, + payload: null, + sessionPresent: false, + returnCode: 0 +} +``` ++To simulate the certificate validation failure, append a failure query to the connection URL as this +```js +var client = mqtt.connect(`wss://{serviceName}.webpubsub.azure.com/clients/mqtt/hubs/{hubName}?failure=xxx`, +``` ++And rerun the client, you'll be able to see an unauthorized CONNACK response. ++## Next step ++Now that you have known that how to authenticate and authorize MQTT clients e2e. +Next, you can check our event handler protocol for MQTT clients. ++> [!div class="nextstepaction"] +> [Reference - CloudEvents extension for Azure Web PubSub MQTT event handler with HTTP protocol](./reference-mqtt-cloud-events.md) + |
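As a small variation on the client above, you might prefer to load the certificate and private key from PEM files rather than pasting them inline. The file names `cert.pem` and `key.pem` are assumed for this sketch; `{serviceName}` and `{hubName}` remain placeholders.

```javascript
// Variant of the MQTT client that loads the certificate and key from PEM files.
const fs = require("fs");
const mqtt = require("mqtt");

const client = mqtt.connect(
  "wss://{serviceName}.webpubsub.azure.com/clients/mqtt/hubs/{hubName}",
  {
    clientId: "client1",
    cert: fs.readFileSync("cert.pem"), // client certificate (PEM)
    key: fs.readFileSync("key.pem"),   // private key (PEM)
    protocolVersion: 5,
  }
);

client.on("connect", (connack) => console.log("connack", connack));
client.on("error", (err) => console.error(err));
```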
backup | Backup Azure Database Postgresql Flex Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-database-postgresql-flex-overview.md | +- You can extend your backup retention beyond 35 days which is the maximum supported limit by the operational tier backup capability of PostgreSQL flexible database. [Learn more](/azure/postgresql/flexible-server/concepts-backup-restore#backup-retention). - The backups are copied to an isolated storage environment outside of customer tenant and subscription, thus providing protection against ransomware attacks. - Azure Backup provides enhanced backup resiliency by protecting the source data from different levels of data loss ranging from accidental deletion to ransomware attacks. - The zero-infrastructure solution with Azure Backup service managing the backups with automated retention and backup scheduling. |
backup | Backup Azure Database Postgresql Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-database-postgresql-overview.md | Azure Backup and Azure Database Services have come together to build an enterpri - Backups are stored in separate security and fault domains. If the source server or subscription is compromised in any circumstances, the backups remain safe in the [Backup vault](./backup-vault-overview.md) (in Azure Backup managed storage accounts). - Use of **pg_dump** allows a greater flexibility in restores. This helps you restore across database versions -You can use this solution independently or in addition to the [native backup solution offered by Azure PostgreSQL](../postgresql/concepts-backup.md) that offers retention up to 35 days. The native solution is suited for operational recoveries, such as when you want to recover from the latest backups. The Azure Backup solution helps you with your compliance needs and more granular and flexible backup/restore. +You can use this solution independently or in addition to the [native backup solution offered by Azure PostgreSQL](/azure/postgresql/concepts-backup) that offers retention up to 35 days. The native solution is suited for operational recoveries, such as when you want to recover from the latest backups. The Azure Backup solution helps you with your compliance needs and more granular and flexible backup/restore. ## Backup process |
backup | Backup Azure Database Postgresql Support Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-database-postgresql-support-matrix.md | Azure Database for PostgreSQL server backup is available in all regions, except |Scenarios | Details | || |-|Deployments | [Azure Database for PostgreSQL - Single Server](../postgresql/overview.md#azure-database-for-postgresqlsingle-server) | +|Deployments | [Azure Database for PostgreSQL - Single Server](/azure/postgresql/overview#azure-database-for-postgresqlsingle-server) | |Azure PostgreSQL versions | 9.5, 9.6, 10, 11 | ## Feature considerations and limitations |
backup | Backup Postgresql Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-postgresql-cli.md | -This article explains how to back up [Azure PostgreSQL database](../postgresql/overview.md#azure-database-for-postgresqlsingle-server) using Azure CLI. +This article explains how to back up [Azure PostgreSQL database](/azure/postgresql/overview#azure-database-for-postgresqlsingle-server) using Azure CLI. In this article, you'll learn how to: |
backup | Backup Postgresql Ps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-postgresql-ps.md | -This article explains how to back up [Azure PostgreSQL database](../postgresql/overview.md#azure-database-for-postgresqlsingle-server) using Azure PowerShell. +This article explains how to back up [Azure PostgreSQL database](/azure/postgresql/overview#azure-database-for-postgresqlsingle-server) using Azure PowerShell. In this article, you'll learn how to: |
backup | Quick Backup Postgresql Database Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/quick-backup-postgresql-database-portal.md | -Azure Database for PostgreSQL is a managed service that you use to run, manage, and scale highly available PostgreSQL databases in the cloud. This quickstart shows you how to back up Azure Database for PostgreSQL server running on an Azure VM to an Azure Backup Recovery Services vault. To create Azure Database for PostgreSQL server, see the [tutorial](../postgresql/tutorial-design-database-using-azure-portal.md). +Azure Database for PostgreSQL is a managed service that you use to run, manage, and scale highly available PostgreSQL databases in the cloud. This quickstart shows you how to back up Azure Database for PostgreSQL server running on an Azure VM to an Azure Backup Recovery Services vault. To create Azure Database for PostgreSQL server, see the [tutorial](/azure/postgresql/tutorial-design-database-using-azure-portal). ## Prerequisites Before you start back up of Azure PostgreSQL database: - Identify or [create a Backup Vault](tutorial-postgresql-backup.md#create-a-backup-vault) in the same region where you want to back up the Azure Database for PostgreSQL server instance.-- Check that Azure Database for PostgreSQL server is named in accordance with naming guidelines for Azure Backup. [Learn more](../postgresql/tutorial-design-database-using-azure-portal.md#create-an-azure-database-for-postgresql)+- Check that Azure Database for PostgreSQL server is named in accordance with naming guidelines for Azure Backup. [Learn more](/azure/postgresql/tutorial-design-database-using-azure-portal#create-an-azure-database-for-postgresql) - [Create secrets in the key vault](backup-azure-database-postgresql.md#create-secrets-in-the-key-vault). - [Grant privileges to database users using PowerShell scripts](backup-azure-database-postgresql.md#run-powershell-script-to-grant-privileges-to-database-users). - [Allow access permissions for the relevant key vault](backup-azure-database-postgresql-overview.md#access-permissions-on-the-azure-key-vault-associated-with-the-postgresql-server). |
backup | Restore Postgresql Database Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-postgresql-database-cli.md | -This article explains how to restore [Azure PostgreSQL databases](../postgresql/overview.md#azure-database-for-postgresqlsingle-server) to an Azure PostgreSQL server backed-up by Azure Backup. +This article explains how to restore [Azure PostgreSQL databases](/azure/postgresql/overview#azure-database-for-postgresqlsingle-server) to an Azure PostgreSQL server backed-up by Azure Backup. Being a PaaS database, the Original Location Recovery (OLR) option to restore by replacing the existing database (from where the backups were taken) isn't supported. You can restore from a recovery point to create a new database in the same Azure PostgreSQL server or in any other PostgreSQL server, which is called Alternate-Location Recovery (ALR) that helps to keep both - the source database and the restored (new) database. |
backup | Restore Postgresql Database Ps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-postgresql-database-ps.md | -This article explains how to restore [Azure PostgreSQL databases](../postgresql/overview.md#azure-database-for-postgresqlsingle-server) to an Azure PostgreSQL server backed-up by Azure Backup. +This article explains how to restore [Azure PostgreSQL databases](/azure/postgresql/overview#azure-database-for-postgresqlsingle-server) to an Azure PostgreSQL server backed-up by Azure Backup. Being a PaaS database, the Original-Location Recovery (OLR) option to restore by replacing the existing database (from where the backups were taken) isn't supported. You can restore from a recovery point to create a new database in the same Azure PostgreSQL server or in other PostgreSQL server. This is called Alternate-Location Recovery (ALR) that helps to keep both - the source database and the restored (new) database. |
backup | Restore Postgresql Database Use Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-postgresql-database-use-rest-api.md | -This article explains how to restore [Azure PostgreSQL databases](../postgresql/overview.md#azure-database-for-postgresqlsingle-server) to an Azure PostgreSQL server backed-up by Azure Backup. +This article explains how to restore [Azure PostgreSQL databases](/azure/postgresql/overview#azure-database-for-postgresqlsingle-server) to an Azure PostgreSQL server backed-up by Azure Backup. Being a PaaS database, the Original-Location Recovery (OLR) option to restore by replacing the existing database (from where the backups were taken) isn't supported. You can restore from a recovery point to create a new database in the same Azure PostgreSQL server or in any other PostgreSQL server. This is called Alternate-Location Recovery (ALR) that helps to keep both - the source database and the restored (new) database. |
backup | Tutorial Postgresql Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-postgresql-backup.md | This tutorial shows you how to back up Azure Database for PostgreSQL server runn Before you back up your Azure Database for PostgreSQL server: - Identify or create a Backup Vault in the same region where you want to back up the Azure Database for PostgreSQL server instance.-- Check that Azure Database for PostgreSQL server is named in accordance with naming guidelines for Azure Backup. [Learn more](../postgresql/tutorial-design-database-using-azure-portal.md#create-an-azure-database-for-postgresql)+- Check that Azure Database for PostgreSQL server is named in accordance with naming guidelines for Azure Backup. [Learn more](/azure/postgresql/tutorial-design-database-using-azure-portal#create-an-azure-database-for-postgresql) - [Create secrets in the key vault](backup-azure-database-postgresql.md#create-secrets-in-the-key-vault). - [Allow access permissions for the relevant key vault](backup-azure-database-postgresql-overview.md#access-permissions-on-the-azure-key-vault-associated-with-the-postgresql-server). - [Provide database user's backup privileges on the database](backup-azure-database-postgresql-overview.md#database-users-backup-privileges-on-the-database). |
chaos-studio | Chaos Studio Fault Library | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-fault-library.md | This section applies to the `Microsoft.Cache/redis` resource type. [Learn more a ## Cosmos DB -This section applies to the `Microsoft.DocumentDB/databaseAccounts` resource type. [Learn more about Cosmos DB](../cosmos-db/introduction.md). +This section applies to the `Microsoft.DocumentDB/databaseAccounts` resource type. [Learn more about Cosmos DB](/azure/cosmos-db/introduction). | Fault name | Applicable scenarios | ||-| Currently, a maximum of 4 process names can be listed in the processNames parame |-|-| | Capability name | Failover-1.0 | | Target type | Microsoft-CosmosDB |-| Description | Causes an Azure Cosmos DB account with a single write region to fail over to a specified read region to simulate a [write region outage](../cosmos-db/high-availability.md). | +| Description | Causes an Azure Cosmos DB account with a single write region to fail over to a specified read region to simulate a [write region outage](/azure/cosmos-db/high-availability). | | Prerequisites | None. | | Urn | `urn:csci:microsoft:cosmosDB:failover/1.0` | | Fault type | Continuous. | |
chaos-studio | Chaos Studio Tutorial Service Direct Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-service-direct-cli.md | You can use these same steps to set up and run an experiment for any service-dir ## Prerequisites - An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]-- An Azure Cosmos DB account. If you don't have an Azure Cosmos DB account, you can [create one](../cosmos-db/sql/create-cosmosdb-resources-portal.md).+- An Azure Cosmos DB account. If you don't have an Azure Cosmos DB account, you can [create one](/azure/cosmos-db/sql/create-cosmosdb-resources-portal). - At least one read and one write region setup for your Azure Cosmos DB account. ## Open Azure Cloud Shell |
chaos-studio | Chaos Studio Tutorial Service Direct Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-service-direct-portal.md | You can use these same steps to set up and run an experiment for any service-dir ## Prerequisites - An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]-- An Azure Cosmos DB account. If you don't have an Azure Cosmos DB account, follow these steps to [create one](../cosmos-db/sql/create-cosmosdb-resources-portal.md).+- An Azure Cosmos DB account. If you don't have an Azure Cosmos DB account, follow these steps to [create one](/azure/cosmos-db/sql/create-cosmosdb-resources-portal). - At least one read and one write region set up for your Azure Cosmos DB account. ## Enable Chaos Studio on your Azure Cosmos DB account |
cloud-services | Cloud Services Python How To Use Service Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-python-how-to-use-service-management.md | sms.delete_deployment('myhostedservice', 'v1') ``` ## <a name="CreateStorageService"> </a>Create a storage service-A [storage service](../storage/common/storage-account-create.md) gives you access to Azure [blobs](../storage/blobs/storage-quickstart-blobs-python.md), [tables](../cosmos-db/table-storage-how-to-use-python.md), and [queues](/azure/storage/queues/storage-quickstart-queues-python?tabs=passwordless%2Croles-azure-portal%2Cenvironment-variable-windows%2Csign-in-azure-cli). To create a storage service, you need a name for the service (between 3 and 24 lowercase characters and unique within Azure). You also need a description, a label (up to 100 characters, automatically encoded to base64), and a location. The following example shows how to create a storage service by specifying a location: +A [storage service](../storage/common/storage-account-create.md) gives you access to Azure [blobs](../storage/blobs/storage-quickstart-blobs-python.md), [tables](/azure/cosmos-db/table-storage-how-to-use-python), and [queues](/azure/storage/queues/storage-quickstart-queues-python?tabs=passwordless%2Croles-azure-portal%2Cenvironment-variable-windows%2Csign-in-azure-cli). To create a storage service, you need a name for the service (between 3 and 24 lowercase characters and unique within Azure). You also need a description, a label (up to 100 characters, automatically encoded to base64), and a location. The following example shows how to create a storage service by specifying a location: ```python from azure import * |
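Because the excerpt above truncates the Python example right after the import, here is a minimal sketch of a storage service creation call using the legacy `azure-servicemanagement-legacy` package; the subscription ID, certificate path, and account name are placeholders, and the keyword arguments should be verified against the full article.

```python
# Minimal sketch using the legacy Service Management SDK
# (azure-servicemanagement-legacy); names and paths are placeholders.
from azure.servicemanagement import ServiceManagementService

subscription_id = "<subscription-id>"
certificate_path = "mycert.pem"

sms = ServiceManagementService(subscription_id, certificate_path)

# Create a storage service (account): the name must be 3-24 lowercase characters
# and unique within Azure; the label is encoded to base64 automatically.
sms.create_storage_account(
    service_name="mystorageaccount",
    description="My storage service",
    label="mystorageaccount",
    location="West US",
)
```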
cloud-services | Cloud Services Python Ptvs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-python-ptvs.md | For more details about using Azure services from your web and worker roles, such [Blob Service]:../storage/blobs/storage-python-how-to-use-blob-storage.md [Queue Service]: ../storage/queues/storage-python-how-to-use-queue-storage.md-[Table Service]:../cosmos-db/table-storage-how-to-use-python.md +[Table Service]:/azure/cosmos-db/table-storage-how-to-use-python [Service Bus Queues]: ../service-bus-messaging/service-bus-python-how-to-use-queues.md [Service Bus Topics]: ../service-bus-messaging/service-bus-python-how-to-use-topics-subscriptions.md |
cloud-shell | Faq Troubleshooting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/faq-troubleshooting.md | description: This article answers common questions and explains how to troubleshoot Cloud Shell issues. ms.contributor: jahelmic Previously updated : 11/08/2023 Last updated : 08/14/2024 tags: azure-resource-manager your anticipated usage date. ### I created some files in Cloud Shell, but they're gone. What happened? The machine that provides your Cloud Shell session is temporary and is recycled after your session-is inactive for 20 minutes. Cloud Shell uses an Azure fileshare mounted to the `clouddrive` folder -in your session. The fileshare contains the image file that contains your `$HOME` directory. Only -files that you upload or create in the `clouddrive` folder are persisted across sessions. Any files -created outside your `clouddrive` directory aren't persisted. +is inactive for 20 minutes. -Files stored in the `clouddrive` directory are visible in the Azure portal using Storage browser. -However, any files created in the `$HOME` directory are stored in the image file and aren't visible -in the portal. +When you started Cloud Shell the first time, you were prompted to choose a storage option. ++- If you chose the **Mount storage account** option, Cloud Shell mounts an Azure fileshare to the + `clouddrive` folder in your session. Files stored in the `clouddrive` folder are visible in the + Azure portal using Storage browser. Files stored in the `clouddrive` folder persist across + sessions. ++- If you chose the **No storage account required** option, you can only write files to your `$HOME` + folder. ++In both scenarios, you can write files to the `$HOME` folder. However, the `$HOME` folder only +exists in the Cloud Shell container image that you're currently using. Files in the `$HOME` folder +aren't visible in the Storage browser and are deleted when your session ends. ### I create a file in the Azure: drive, but I don't see it. What happened? -PowerShell users can use the `Azure:` drive to access Azure resources. The `Azure:` drive is created -by a PowerShell provider that structures data as a file system drive. The `Azure:` drive is a -virtual drive that doesn't allow you to create files. +Cloud Shell loads a PowerShell provider for Azure that presents Azure resource data as a file system +drive. PowerShell users can use the `Azure:` drive to access Azure resources. The `Azure:` drive is +a virtual drive that doesn't allow you to create files. Files that you create a new file using other tools, such as `vim` or `nano` while your current-location is the `Azure:` drive, are saved to your `$HOME` directory. +location is the `Azure:` drive, are saved to your `$HOME` folder. ### I want to install a tool in Cloud Shell that requires `sudo`. Is that possible? |
communication-services | Email Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/logs/email-logs.md | Communication Services offers the following types of logs that you can enable: ## Email Status Update operational logs -*Email status update operational logs* provide in-depth insights into message and recipient level delivery status updates on your sendmail API requests. These logs offer message-specific details, such as the time of delivery, as well as recipient-level details, such as email addresses and delivery status updates. By tracking these logs, you can ensure full visibility into your email delivery process, quickly identifying any issues that may arise and taking corrective action as necessary. +*Email status update operational logs* provide in-depth insights into message-level and recipient-level delivery status updates on your sendmail API requests. +- Message-level status updates provide the status of the long-running email send operation (similar to the status updates you receive through calling our GET APIs). These are marked by the absence of the `RecipientId` property because these updates are for the entire message and aren't applicable to a specific recipient in that message request. The `DeliveryStatus` property contains the message-level delivery status. Possible values for `DeliveryStatus` for this type of event are `Dropped`, `OutForDelivery`, and `Queued`. +- Recipient-level status updates provide the status of email delivery for each individual recipient to whom the email was sent in a single message. These contain a `RecipientId` property with the recipient's email address. Recipient-level delivery status is provided in the `DeliveryStatus` property. Possible values for `DeliveryStatus` for this type of event are `Delivered`, `Expanded`, `Failed`, `Quarantined`, `FilteredSpam`, `Suppressed`, and `Bounced`. +By tracking these logs, you can ensure full visibility into your email delivery process, quickly identifying any issues that may arise and taking corrective action as necessary. | Property | Description | | -- | | Communication Services offers the following types of logs that you can enable: | `OperationVersion` | The `api-version` associated with the operation, if the `operationName` was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. | | `Category` | The log category of the event. The category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. | | `CorrelationID` | The ID for correlated events. Can be used to identify correlated events between multiple tables. For all Email operational logs, the CorrelationId is mapped to the MessageId, which is returned from a successful SendMail request. |-| `RecipientId` | The email address for the targeted recipient. If this is a message-level event, the property will be empty. | -| `DeliveryStatus` | The terminal status of the message. | +| `RecipientId` | The email address for the targeted recipient. It's present only for recipient-level events. If this is a message-level event, the property is empty. | +| `DeliveryStatus` | The terminal status of the message. Possible values for a message-level event are: `Dropped`, `OutForDelivery`, `Queued`.
Possible values for a recipient-level event are: `Delivered`, `Expanded`, `Failed`, `Quarantined`, `FilteredSpam`, `Suppressed`, `Bounced`. | | `SmtpStatusCode` | SMTP status code returned from the recipient email server in response to a send mail request. | `EnhancedSmtpStatusCode` | Enhanced SMTP status code returned from the recipient email server. | `SenderDomain` | The domain portion of the SenderAddress used in sending emails. Communication Services offers the following types of logs that you can enable: "EngagementContext":"https://www.contoso.com/support?id=12345", "UserAgent":"Mozilla/5.0" }-``` +``` |
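Given the message-level and recipient-level `DeliveryStatus` values described above, the sketch below shows one way to pull recipient-level failures out of a Log Analytics workspace with the `azure-monitor-query` client. The workspace ID, the `ACSEmailStatusUpdateOperational` table name, and the exact column names are assumptions for illustration.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Minimal sketch: pull recipient-level failures (for example Bounced or
# Suppressed) from the email operational logs in a Log Analytics workspace.
# The workspace ID, table name, and column names below are assumptions.
client = LogsQueryClient(DefaultAzureCredential())
workspace_id = "<log-analytics-workspace-id>"

query = """
ACSEmailStatusUpdateOperational
| where isnotempty(RecipientId)                      // recipient-level events only
| where DeliveryStatus in ("Bounced", "Suppressed", "Failed")
| project TimeGenerated, CorrelationId, RecipientId, DeliveryStatus, SmtpStatusCode
| order by TimeGenerated desc
"""

response = client.query_workspace(workspace_id, query, timespan=timedelta(days=1))
for table in response.tables:
    for row in table.rows:
        print(list(row))
```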
communication-services | Troubleshooting Pstn Call Failures | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/troubleshooting-pstn-call-failures.md | -localization_priority: Normal # Troubleshooting Azure Communication Services PSTN call failures For more information about common error codes and suggested actions, see [Troubl ## Related articles -For more information, see [Troubleshooting in Azure Communication Services](../troubleshooting-info.md). +For more information, see [Troubleshooting in Azure Communication Services](../troubleshooting-info.md). |
communication-services | Audio Conferencing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/audio-conferencing.md | -In this article, you learn how to use Azure Communication Services Calling SDK to retrieve Microsoft Teams Meeting audio conferencing details. This functionality allows users who are already connected to a Microsoft Teams Meeting to be able to get the conference ID and dial in phone number associated with the meeting. Teams audio conferencing feature returns a collection of all toll and toll-free numbers, with concomitant country names and city names, giving users control on what Teams meeting dial-in details to use. +In this article, you learn how to use Azure Communication Services Calling SDK to retrieve Microsoft Teams Meeting Audio Conferencing details. This functionality allows users who are already connected to a Microsoft Teams Meeting to be able to get the conference ID and dial in phone number associated with the meeting. Teams Meeting Audio Conferencing feature returns a collection of all toll and toll-free numbers, with concomitant country names and city names, giving users control on what Teams meeting dial-in details to use. ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). In this article, you learn how to use Azure Communication Services Calling SDK t - A user access token to enable the calling client. For more information, see [Create and manage access tokens](../../quickstarts/identity/access-tokens.md). - Optional: Complete the quickstart to [add voice calling to your application](../../quickstarts/voice-video-calling/getting-started-with-calling.md) +## Support +The following tables define support for Audio Conferencing in Azure Communication Services. ++### Identities and call types +The following table shows support for call and identity types. ++|Identities | Teams meeting | Room | 1:1 call | Group call | 1:1 Teams interop call | Group Teams interop call | +|--|||-|||--| +|Communication Services user | ✔️ | | | | | | +|Microsoft 365 user | ✔️ | | | | | | ++### Operations +The following table shows support for individual APIs in calling SDK for individual identity types. ++|Operations | Communication Services user | Microsoft 365 user | +|--||-| +|Get audio conferencing details | ✔️ | ✔️ | ++### SDKs +The following table shows support for the Audio Conferencing feature in individual Azure Communication Services SDKs. ++| Platforms | Web | Web UI | iOS | iOS UI | Android | Android UI | Windows | +||--|--|--|--|-|--|| +|Is Supported | ✔️ | | | | | | | + [!INCLUDE [Audio Conferencing Client-side JavaScript](./includes/audio-conferencing/audio-conferencing-web.md)] ## Next steps |
communication-services | Breakoutrooms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/breakoutrooms.md | + + Title: Tutorial - Integrate Microsoft Teams breakout rooms ++description: Use Azure Communication Services SDKs to access BreakoutRooms. +++++ Last updated : 07/15/2024++++# BreakoutRooms +In this article, you learn how to implement Microsoft Teams breakout rooms with Azure Communication Services. This capability allows Azure Communication Services users in Teams meetings to participate in breakout rooms. Teams administrators control availability of breakout rooms in Teams meeting with Teams meeting policy. You can find additional information about breakout rooms in [Teams documentation](https://support.microsoft.com/office/use-breakout-rooms-in-microsoft-teams-meetings-7de1f48a-da07-466c-a5ab-4ebace28e461). +++## Prerequisites ++- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +- A deployed Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md). +- A user access token to enable the calling client. For more information, see [Create and manage access tokens](../../quickstarts/identity/access-tokens.md). +- Teams meeting organizer needs to assign Teams meeting policy that enables breakout rooms.[Teams meeting policy](/powershell/module/teams/set-csteamsmeetingpolicy?view=teams-ps&preserve-view=true) +- Optional: Complete the quickstart to [add voice calling to your application](../../quickstarts/voice-video-calling/getting-started-with-calling.md) ++Only Microsoft 365 Users with Organizer, Co-Organizer, or Breakout Room manager roles can manage the breakout rooms. ++## Support +The following tables define support of breakout rooms in Azure Communication Services. +### Identities and call types +The following tables show support of breakout rooms for specific call type and identity. ++|Identities | Teams meeting | Room | 1:1 call | Group call | 1:1 Teams interop call | Group Teams interop call | +|--|||-|||--| +|Communication Services user | ✔️ | | | | | | +|Microsoft 365 user | ✔️ | | | | | | ++++### Operations +The following tables show support of individual APIs in calling SDK to individual identity types. ++|Operations | Communication Services user | Microsoft 365 user | +|--||-| +|Get assigned breakout room | ✔️ | ✔️ | +|Get all breakout rooms | | ✔️[1] | +|Join breakout room | ✔️ | ✔️ | +|Manage breakout rooms | | | +|Participate in breakout room chat | | ✔️[2] | +|Get breakout room settings|✔️ | ✔️ | ++[1] Only Microsoft 365 user with role organizer, co-organizer, or breakout room manager. ++[2] Microsoft 365 users can use Graph API to participate in breakout room chat. The thread ID of the chat is provided in the assigned breakout room object. ++### SDKs +The following tables show support of breakout rooms feature in individual Azure Communication Services SDKs. ++| | Web | Web UI | iOS | iOS UI | Android | Android UI | Windows | +|-|--|--|--|--|-|--|| +|Is Supported | ✔️ | | | | | | | ++## Breakout rooms +++## Next steps +- [Learn how to manage calls](./manage-calls.md) +- [Learn how to manage video](./manage-video.md) +- [Learn how to record calls](./record-calls.md) +- [Learn how to transcribe calls](./call-transcription.md) |
communication-services | Together Mode | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/together-mode.md | + + Title: Together Mode ++description: Make your Microsoft Teams virtual meetings feel more personal with Teams together mode. +++++ Last updated : 07/17/2024+++++# Together Mode +In this article, you learn how to implement Microsoft Teams Together Mode with Azure Communication Services Calling SDKs. This feature enhances virtual meetings and calls, making them feel more personal. By creating a unified view that places everyone in a shared background, participants can connect seamlessly and collaborate effectively. +++## Prerequisites ++- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +- A deployed Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md). +- A user's access token to enable the calling client. For more information, see [Create and manage access tokens](../../quickstarts/identity/access-tokens.md). +- Optional: Complete the quickstart to [add voice calling to your application](../../quickstarts/voice-video-calling/getting-started-with-calling.md) ++## Support +The following tables define support for Together Mode in Azure Communication Services. ++### Identities and call types +The following table shows support for call and identity types. ++|Identities | Teams meeting | Room | 1:1 call | Group call | 1:1 Teams interop call | Group Teams interop call | +|--|||-|||--| +|Communication Services user | ✔️ | | | ✔️ | | ✔️ | +|Microsoft 365 user | ✔️ | | | ✔️ | | ✔️ | ++### Operations +The following table shows support for individual APIs in Calling SDK to individual identity types. ++|Operations | Communication Services user | Microsoft 365 user | +|--||-| +| Start together mode stream | | ✔️ [1] | +| Get together mode stream | ✔️ | ✔️ | +| Get scene size | ✔️ | ✔️ | +| Get seating map | ✔️ | ✔️ | +| Change scene | | | +| Change seat assignment | | | ++[1] Start Together Mode can only be called by a Microsoft 365 user with the role of organizer, co-organizer, or presenter. + +### SDKs +The following table shows support for Together Mode feature in individual Azure Communication Services SDKs. ++| Platforms | Web | Web UI | iOS | iOS UI | Android | Android UI | Windows | +||--|--|--|--|-|--|| +|Is Supported | ✔️ | | | | | | | ++## Together Mode ++++## Next steps +- [Learn how to manage calls](./manage-calls.md) +- [Learn how to manage video](./manage-video.md) +- [Learn how to record calls](./record-calls.md) +- [Learn how to transcribe calls](./call-transcription.md) |
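The three calling how-tos above (Audio Conferencing, breakout rooms, and Together Mode) all list a user access token as a prerequisite. A minimal sketch of minting such a token with the `azure-communication-identity` Python package follows; the connection string is a placeholder, and the VoIP scope is shown only as an example.

```python
from azure.communication.identity import (
    CommunicationIdentityClient,
    CommunicationTokenScope,
)

# Minimal sketch: create a Communication Services user and issue a VoIP-scoped
# access token that a calling client (for example, the web Calling SDK) can use.
# The connection string is a placeholder for your Communication Services resource.
connection_string = "endpoint=https://<resource>.communication.azure.com/;accesskey=<key>"
identity_client = CommunicationIdentityClient.from_connection_string(connection_string)

user, token_response = identity_client.create_user_and_token(
    scopes=[CommunicationTokenScope.VOIP]
)
print("Created user:", user)
print("Token expires on:", token_response.expires_on)
```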
confidential-computing | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/overview.md | Confidential computing is an industry term established by the [Confidential Comp > > These secure and isolated environments prevent unauthorized access or modification of applications and data while they are in use, thereby increasing the security level of organizations that manage sensitive and regulated data. +Microsoft is one of the founding members of the CCC and provides Trusted Execution Environments (TEEs) in Azure based on this CCC definition. + ## Reducing the attack surface :::image type="content" source="media/overview/three-states-and-confidential-computing-consortium-definition.png" alt-text="Diagram of three states of data protection, with confidential computing's data in use highlighted."::: +Azure already encrypts data at rest and in transit. Confidential computing helps protect data in use, including cryptographic keys. Azure confidential computing helps customers prevent unauthorized access to data in use, including from the cloud operator, by processing data in a hardware-based and attested Trusted Execution Environment (TEE). When Azure confidential computing is enabled and properly configured, Microsoft is not able to access unencrypted customer data. + The threat model aims to reduce trust or remove the ability of a cloud provider operator or other actors in the tenant's domain to access code and data while it's being executed. This is achieved in Azure using a hardware root of trust not controlled by the cloud provider, which is designed to prevent unauthorized access to or modification of the environment. When used with data encryption at rest and in transit, confidential computing extends data protections further to protect data while it's in use. This is beneficial for organizations seeking further protections for sensitive data and applications hosted in cloud environments. |
connectors | Built In | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/built-in.md | Built-in connectors provide ways for you to control your workflow's schedule and For a smaller number of services, systems, and protocols, Azure Logic Apps provides a built-in version alongside the managed version. The number and range of built-in connectors vary based on whether you create a Consumption logic app workflow that runs in multitenant Azure Logic Apps or a Standard logic app workflow that runs in single-tenant Azure Logic Apps. In most cases, the built-in version provides better performance, capabilities, pricing, and so on. In a few cases, some built-in connectors are available only in one logic app workflow type and not the other. -For example, a Standard workflow can use both managed connectors and built-in connectors for Azure Blob Storage, Azure Cosmos DB, Azure Event Hubs, Azure Service Bus, FTP, IBM DB2, IBM MQ, SFTP, and SQL Server. A Consumption workflow doesn't have the built-in versions. A Consumption workflow can use built-in connectors for Azure API Management, and Azure App Services, while a Standard workflow doesn't have these built-in connectors. +For example, a Standard workflow can use both managed connectors and built-in connectors for Azure Blob Storage, Azure Cosmos DB, Azure Event Hubs, Azure Service Bus, FTP, IBM DB2, IBM MQ, SFTP, and SQL Server. A Consumption workflow doesn't have the built-in versions. A Consumption workflow can use built-in connectors for Azure API Management, and Azure App Service, while a Standard workflow doesn't have these built-in connectors. Also, in Standard workflows, some [built-in connectors with specific attributes are informally known as *service providers*](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). Some built-in connectors support only a single way to authenticate a connection to the underlying service. Other built-in connectors can offer a choice, such as using a connection string, Microsoft Entra ID, or a managed identity. All built-in connectors run in the same process as the Azure Logic Apps runtime. For more information, review [Single-tenant versus multitenant and integration service environment (ISE)](../logic-apps/single-tenant-overview-compare.md). 
The following table lists the current and expanding galleries of built-in connec | Consumption | Standard | |-|-|-| Azure API Management<br>Azure App Services <br>Azure Functions <br>Azure Logic Apps <br>Batch <br>Control <br>Data Operations <br>Date Time <br>Flat File <br>HTTP <br>Inline Code <br>Integration Account <br>Liquid <br>Request <br>Schedule <br>Variables <br>XML | AS2 (v2) <br>Azure AI Search* <br>Azure Automation* <br>Azure Blob Storage* <br>Azure Cosmos DB* <br>Azure Event Grid Publisher* <br>Azure Event Hubs* <br>Azure File Storage* <br>Azure Functions <br>Azure Key Vault* <br>Azure OpenAI* <br>Azure Queue Storage* <br>Azure Service Bus* <br>Azure Table Storage* <br>Batch Operations <br>Control <br>Data Mapper Operations <br>Data Operations <br>Date Time <br>EDIFACT <br>File System* <br>Flat File <br>FTP* <br>HTTP <br>IBM 3270* <br>IBM CICS* <br>IBM DB2* <br>IBM Host File* <br>IBM IMS* <br>IBM MQ* <br>Inline Code <br>Integration Account <br>JDBC* <br>Liquid Operations <br>Request <br>RosettaNet <br>SAP* <br>Schedule <br>SFTP* <br>SMTP* <br>SQL Server* <br>SWIFT <br>Variables <br>Workflow Operations <br>X12 <br>XML Operations | +| Azure API Management<br>Azure App Service <br>Azure Functions <br>Azure Logic Apps <br>Batch <br>Control <br>Data Operations <br>Date Time <br>Flat File <br>HTTP <br>Inline Code <br>Integration Account <br>Liquid <br>Request <br>Schedule <br>Variables <br>XML | AS2 (v2) <br>Azure AI Search* <br>Azure Automation* <br>Azure Blob Storage* <br>Azure Cosmos DB* <br>Azure Event Grid Publisher* <br>Azure Event Hubs* <br>Azure File Storage* <br>Azure Functions <br>Azure Key Vault* <br>Azure OpenAI* <br>Azure Queue Storage* <br>Azure Service Bus* <br>Azure Table Storage* <br>Batch Operations <br>Control <br>Data Mapper Operations <br>Data Operations <br>Date Time <br>EDIFACT <br>File System* <br>Flat File <br>FTP* <br>HTTP <br>IBM 3270* <br>IBM CICS* <br>IBM DB2* <br>IBM Host File* <br>IBM IMS* <br>IBM MQ* <br>Inline Code <br>Integration Account <br>JDBC* <br>Liquid Operations <br>Request <br>RosettaNet <br>SAP* <br>Schedule <br>SFTP* <br>SMTP* <br>SQL Server* <br>SWIFT <br>Variables <br>Workflow Operations <br>X12 <br>XML Operations | <a name="service-provider-interface-implementation"></a> You can use the following built-in connectors to perform general tasks, for exam :::row::: :::column::: [![Schedule icon][schedule-icon]][schedule-doc]- \ - \ - [**Schedule**][schedule-doc] - \ - \ - [**Recurrence**][schedule-recurrence-doc]: Trigger a workflow based on the specified recurrence. - \ - \ - [**Sliding Window**][schedule-sliding-window-doc]<br>(*Consumption workflow only*): <br>Trigger a workflow that needs to handle data in continuous chunks. - \ - \ - [**Delay**][schedule-delay-doc]: Pause your workflow for the specified duration. - \ - \ - [**Delay until**][schedule-delay-until-doc]: Pause your workflow until the specified date and time. + <br><br>[**Schedule**][schedule-doc] + <br><br>[**Recurrence**][schedule-recurrence-doc]: Trigger a workflow based on the specified recurrence. + <br><br>[**Sliding Window**][schedule-sliding-window-doc] + <br>(*Consumption workflow only*) + <br>Trigger a workflow that needs to handle data in continuous chunks. + <br><br>[**Delay**][schedule-delay-doc]: Pause your workflow for the specified duration. + <br><br>[**Delay until**][schedule-delay-until-doc]: Pause your workflow until the specified date and time. 
:::column-end::: :::column::: [![HTTP trigger and action icon][http-icon]][http-doc]- \ - \ - [**HTTP**][http-doc] - \ - \ - Call an HTTP or HTTPS endpoint by using either the HTTP trigger or action. - \ - \ - You can also use these other built-in HTTP triggers and actions: + <br><br>[**HTTP**][http-doc] + <br><br>Call an HTTP or HTTPS endpoint by using either the HTTP trigger or action. + <br><br>You can also use these other built-in HTTP triggers and actions: - [HTTP + Swagger][http-swagger-doc] - [HTTP + Webhook][http-webhook-doc] :::column-end::: :::column::: [![Request trigger icon][http-request-icon]][http-request-doc]- \ - \ - [**Request**][http-request-doc] - \ - \ - [**When a HTTP request is received**][http-request-doc]: Wait for a request from another workflow, app, or service. This trigger makes your workflow callable without having to be checked or polled on a schedule. - \ - \ - [**Response**][http-request-doc]: Respond to a request received by the **When a HTTP request is received** trigger in the same workflow. + <br><br>[**Request**][http-request-doc] + <br><br>[**When a HTTP request is received**][http-request-doc]: Wait for a request from another workflow, app, or service. This trigger makes your workflow callable without having to be checked or polled on a schedule. + <br><br>[**Response**][http-request-doc]: Respond to a request received by the **When a HTTP request is received** trigger in the same workflow. :::column-end::: :::column::: [![Batch icon][batch-icon]][batch-doc]- \ - \ - [**Batch**][batch-doc] - \ - \ - [**Batch messages**][batch-doc]: Trigger a workflow that processes messages in batches. - \ - \ - [**Send messages to batch**][batch-doc]: Call an existing workflow that currently starts with a **Batch messages** trigger. + <br><br>[**Batch**][batch-doc] + <br><br>[**Batch messages**][batch-doc]: Trigger a workflow that processes messages in batches. + <br><br>[**Send messages to batch**][batch-doc]: Call an existing workflow that currently starts with a **Batch messages** trigger. :::column-end::: :::row-end::: :::row::: :::column::: [![File System icon][file-system-icon]][file-system-doc]- \ - \ - [**File System**][file-system-doc]<br>(*Standard workflow only*) - \ - \ - Connect to a file system on your network machine to create and manage files. + <br><br>[**File System**][file-system-doc]<br>(*Standard workflow only*) + <br><br>Connect to a file system on your network machine to create and manage files. :::column-end::: :::column::: [![FTP icon][ftp-icon]][ftp-doc]- \ - \ - [**FTP**][ftp-doc]<br>(*Standard workflow only*) - \ - \ - Connect to an FTP or FTPS server in your Azure virtual network so that you can work with your files and folders. + <br><br>[**FTP**][ftp-doc]<br>(*Standard workflow only*) + <br><br>Connect to an FTP or FTPS server in your Azure virtual network so that you can work with your files and folders. :::column-end::: :::column::: [![SFTP-SSH icon][sftp-ssh-icon]][sftp-doc]- \ - \ - [**SFTP**][sftp-doc]<br>(*Standard workflow only*) - \ - \ - Connect to an SFTP server in your Azure virtual network so that you can work with your files and folders. + <br><br>[**SFTP**][sftp-doc]<br>(*Standard workflow only*) + <br><br>Connect to an SFTP server in your Azure virtual network so that you can work with your files and folders. :::column-end::: :::column::: [![SMTP icon][smtp-icon]][smtp-doc]- \ - \ - [**SMTP**][smtp-doc]<br>(*Standard workflow only*) - \ - \ - Connect to an SMTP server so that you can send email. 
- :::column-end::: - :::column::: + <br><br>[**SMTP**][smtp-doc]<br>(*Standard workflow only*) + <br><br>Connect to an SMTP server so that you can send email. :::column-end::: :::row-end::: You can use the following built-in connectors to access specific services and sy :::row::: :::column::: [![Azure AI Search icon][azure-ai-search-icon]][azure-ai-search-doc]- \ - \ - [**Azure AI Search**][azure-ai-search-doc]<br>(*Standard workflow only*) - \ - \ - Connect to AI Search so that you can perform document indexing and search operations in your workflow. + <br><br>[**Azure AI Search**][azure-ai-search-doc]<br>(*Standard workflow only*) + <br><br>Connect to AI Search so that you can perform document indexing and search operations in your workflow. :::column-end::: :::column::: [![Azure API Management icon][azure-api-management-icon]][azure-api-management-doc]- \ - \ - [**Azure API Management**][azure-api-management-doc]<br>(*Consumption workflow only*) - \ - \ - Call your own triggers and actions in APIs that you define, manage, and publish using [Azure API Management](../api-management/api-management-key-concepts.md). <br><br>**Note**: Not supported when using [Consumption tier for API Management](../api-management/api-management-features.md). - :::column-end::: - :::column::: - [![Azure App Services icon][azure-app-services-icon]][azure-app-services-doc] - \ - \ - [**Azure App Services**][azure-app-services-doc]<br>(*Consumption workflow only*) - \ - \ - Call apps that you create and host on [Azure App Service](../app-service/overview.md), for example, API Apps and Web Apps. - \ - \ - When Swagger is included, the triggers and actions defined by these apps appear like any other first-class triggers and actions in Azure Logic Apps. + <br><br>[**Azure API Management**][azure-api-management-doc]<br>(*Consumption workflow only*) + <br><br>Call your own triggers and actions in APIs that you define, manage, and publish using [Azure API Management](../api-management/api-management-key-concepts.md). <br><br>**Note**: Not supported when using [Consumption tier for API Management](../api-management/api-management-features.md). + :::column-end::: + :::column::: + [![Azure App Service icon][azure-app-service-icon]][azure-app-service-doc] + <br><br>[**Azure App Service**][azure-app-service-doc]<br>(*Consumption workflow only*) + <br><br>Call apps that you create and host on [Azure App Service](../app-service/overview.md), for example, API Apps and Web Apps. + <br><br>When Swagger is included, the triggers and actions defined by these apps appear like any other first-class triggers and actions in Azure Logic Apps. :::column-end::: :::column::: [![Azure Automation icon][azure-automation-icon]][azure-automation-doc]- \ - \ - [**Azure Automation**][azure-automation-doc]<br>(*Standard workflow only*) - \ - \ - Connect to your Azure Automation accounts so you can create and manage Azure Automation jobs. + <br><br>[**Azure Automation**][azure-automation-doc]<br>(*Standard workflow only*) + <br><br>Connect to your Azure Automation accounts so you can create and manage Azure Automation jobs. :::column-end::: :::column::: [![Azure Blob Storage icon][azure-blob-storage-icon]][azure-blob-storage-doc]- \ - \ - [**Azure Blob Storage**][azure-blob-storage-doc]<br>(*Standard workflow only*) - \ - \ - Connect to your Azure Blob Storage account so you can create and manage blob content. 
+ <br><br>[**Azure Blob Storage**][azure-blob-storage-doc]<br>(*Standard workflow only*) + <br><br>Connect to your Azure Blob Storage account so you can create and manage blob content. :::column-end::: :::row-end::: :::row::: :::column::: [![Azure Cosmos DB icon][azure-cosmos-db-icon]][azure-cosmos-db-doc]- \ - \ - [**Azure Cosmos DB**][azure-cosmos-db-doc]<br>(*Standard workflow only*) - \ - \ - Connect to Azure Cosmos DB so that you can access and manage Azure Cosmos DB documents. + <br><br>[**Azure Cosmos DB**][azure-cosmos-db-doc]<br>(*Standard workflow only*) + <br><br>Connect to Azure Cosmos DB so that you can access and manage Azure Cosmos DB documents. :::column-end::: :::column::: [![Azure Event Grid Publisher icon][azure-event-grid-publisher-icon]][azure-event-grid-publisher-doc]- \ - \ - [**Azure Event Grid Publisher**][azure-event-grid-publisher-doc]<br>(*Standard workflow only*) - \ - \ - Connect to Azure Event Grid for event-based programming using pub-sub semantics. + <br><br>[**Azure Event Grid Publisher**][azure-event-grid-publisher-doc]<br>(*Standard workflow only*) + <br><br>Connect to Azure Event Grid for event-based programming using pub-sub semantics. :::column-end::: :::column::: [![Azure Event Hubs icon][azure-event-hubs-icon]][azure-event-hubs-doc]- \ - \ - [**Azure Event Hubs**][azure-event-hubs-doc]<br>(*Standard workflow only*) - \ - \ - Consume and publish events through an event hub. For example, get output from your workflow with Event Hubs, and then send that output to a real-time analytics provider. + <br><br>[**Azure Event Hubs**][azure-event-hubs-doc]<br>(*Standard workflow only*) + <br><br>Consume and publish events through an event hub. For example, get output from your workflow with Event Hubs, and then send that output to a real-time analytics provider. :::column-end::: :::column::: [![Azure File Storage icon][azure-file-storage-icon]][azure-file-storage-doc]- \ - \ - [**Azure File Storage**][azure-file-storage-doc]<br>(*Standard workflow only*) - \ - \ - Connect to your Azure Storage account so that you can create, update, and manage files. + <br><br>[**Azure File Storage**][azure-file-storage-doc]<br>(*Standard workflow only*) + <br><br>Connect to your Azure Storage account so that you can create, update, and manage files. :::column-end::: :::column::: [![Azure Functions icon][azure-functions-icon]][azure-functions-doc]- \ - \ - [**Azure Functions**][azure-functions-doc] - \ - \ - Call [Azure-hosted functions](../azure-functions/functions-overview.md) to run your own *code snippets* (C# or Node.js) within your workflow. + <br><br>[**Azure Functions**][azure-functions-doc] + <br><br>Call [Azure-hosted functions](../azure-functions/functions-overview.md) to run your own *code snippets* (C# or Node.js) within your workflow. :::column-end::: :::row-end::: :::row::: :::column::: [![Azure Key Vault icon][azure-key-vault-icon]][azure-key-vault-doc]- \ - \ - [**Azure Key Vault**][azure-key-vault-doc]<br>(*Standard workflow only*) - \ - \ - Connect to Azure Key Vault to store, access, and manage secrets. + <br><br>[**Azure Key Vault**][azure-key-vault-doc]<br>(*Standard workflow only*) + <br><br>Connect to Azure Key Vault to store, access, and manage secrets. 
:::column-end::: :::column::: [![Azure Logic Apps icon][azure-logic-apps-icon]][nested-logic-app-doc]- \ - \ - [**Azure Logic Apps**][nested-logic-app-doc]<br>(*Consumption workflow*) <br><br>-or-<br><br>**Workflow Operations**<br>(*Standard workflow*) - \ - \ - Call other workflows that start with the Request trigger named **When a HTTP request is received**. + <br><br>[**Azure Logic Apps**][nested-logic-app-doc]<br>(*Consumption workflow*) <br><br>-or-<br><br>**Workflow Operations**<br>(*Standard workflow*) + <br><br>Call other workflows that start with the Request trigger named **When a HTTP request is received**. :::column-end::: :::column::: [![Azure OpenAI icon][azure-openai-icon]][azure-openai-doc]- \ - \ - [**Azure OpenAI**][azure-openai-doc]<br>(*Standard workflow only*) - \ - \ - Connect to Azure OpenAI to perform operations on large language models. + <br><br>[**Azure OpenAI**][azure-openai-doc]<br>(*Standard workflow only*) + <br><br>Connect to Azure OpenAI to perform operations on large language models. :::column-end::: :::column::: [![Azure Service Bus icon][azure-service-bus-icon]][azure-service-bus-doc]- \ - \ - [**Azure Service Bus**][azure-service-bus-doc]<br>(*Standard workflow only*) - \ - \ - Manage asynchronous messages, queues, sessions, topics, and topic subscriptions. + <br><br>[**Azure Service Bus**][azure-service-bus-doc]<br>(*Standard workflow only*) + <br><br>Manage asynchronous messages, queues, sessions, topics, and topic subscriptions. :::column-end::: :::column::: [![Azure Table Storage icon][azure-table-storage-icon]][azure-table-storage-doc]- \ - \ - [**Azure Table Storage**][azure-table-storage-doc]<br>(*Standard workflow only*) - \ - \ - Connect to your Azure Storage account so that you can create, update, query, and manage tables. + <br><br>[**Azure Table Storage**][azure-table-storage-doc]<br>(*Standard workflow only*) + <br><br>Connect to your Azure Storage account so that you can create, update, query, and manage tables. :::column-end::: :::row-end::: :::row::: :::column::: [![Azure Queue Storage][azure-queue-storage-icon]][azure-queue-storage-doc]- \ - \ - [**Azure Queue Storage**][azure-queue-storage-doc]<br>(*Standard workflow only*) - \ - \ - Connect to your Azure Storage account so that you can create, update, and manage queues. + <br><br>[**Azure Queue Storage**][azure-queue-storage-doc]<br>(*Standard workflow only*) + <br><br>Connect to your Azure Storage account so that you can create, update, and manage queues. :::column-end::: :::column::: [![IBM 3270 icon][ibm-3270-icon]][ibm-3270-doc]- \ - \ - [**IBM 3270**][ibm-3270-doc]<br>(*Standard workflow only*) - \ - \ - Call 3270 screen-driven apps on IBM mainframes from your workflow. + <br><br>[**IBM 3270**][ibm-3270-doc]<br>(*Standard workflow only*) + <br><br>Call 3270 screen-driven apps on IBM mainframes from your workflow. :::column-end::: :::column::: [![IBM CICS icon][ibm-cics-icon]][ibm-cics-doc]- \ - \ - [**IBM CICS**][ibm-cics-doc]<br>(*Standard workflow only*) - \ - \ - Call CICS programs on IBM mainframes from your workflow. + <br><br>[**IBM CICS**][ibm-cics-doc]<br>(*Standard workflow only*) + <br><br>Call CICS programs on IBM mainframes from your workflow. :::column-end::: :::column::: [![IBM DB2 icon][ibm-db2-icon]][ibm-db2-doc]- \ - \ - [**IBM DB2**][ibm-db2-doc]<br>(*Standard workflow only*) - \ - \ - Connect to IBM DB2 in the cloud or on-premises. Update a row, get a table, and more. 
+ <br><br>[**IBM DB2**][ibm-db2-doc]<br>(*Standard workflow only*) + <br><br>Connect to IBM DB2 in the cloud or on-premises. Update a row, get a table, and more. :::column-end::: :::column::: [![IBM Host File icon][ibm-host-file-icon]][ibm-host-file-doc]- \ - \ - [**IBM Host File**][ibm-host-file-doc]<br>(*Standard workflow only*) - \ - \ - Connect to IBM Host File and generate or parse contents. + <br><br>[**IBM Host File**][ibm-host-file-doc]<br>(*Standard workflow only*) + <br><br>Connect to IBM Host File and generate or parse contents. :::column-end::: :::row-end::: :::row::: :::column::: [![IBM IMS icon][ibm-ims-icon]][ibm-ims-doc]- \ - \ - [**IBM IMS**][ibm-ims-doc]<br>(*Standard workflow only*) - \ - \ - Call IMS programs on IBM mainframes from your workflow. + <br><br>[**IBM IMS**][ibm-ims-doc]<br>(*Standard workflow only*) + <br><br>Call IMS programs on IBM mainframes from your workflow. :::column-end::: :::column::: [![IBM MQ icon][ibm-mq-icon]][ibm-mq-doc]- \ - \ - [**IBM MQ**][ibm-mq-doc]<br>(*Standard workflow only*) - \ - \ - Connect to IBM MQ on-premises or in Azure to send and receive messages. + <br><br>[**IBM MQ**][ibm-mq-doc]<br>(*Standard workflow only*) + <br><br>Connect to IBM MQ on-premises or in Azure to send and receive messages. :::column-end::: :::column::: [![JDBC icon][jdbc-icon]][jdbc-doc]- \ - \ - [**JDBC**][jdbc-doc]<br>(*Standard workflow only*) - \ - \ - Connect to a relational database using JDBC drivers. + <br><br>[**JDBC**][jdbc-doc]<br>(*Standard workflow only*) + <br><br>Connect to a relational database using JDBC drivers. :::column-end::: :::column::: [![SAP icon][sap-icon]][sap-doc]- \ - \ - [**SAP**][sap-doc]<br>(*Standard workflow only*) - \ - \ - Connect to SAP so you can send or receive messages and invoke actions. + <br><br>[**SAP**][sap-doc]<br>(*Standard workflow only*) + <br><br>Connect to SAP so you can send or receive messages and invoke actions. :::column-end::: :::column::: [![SQL Server icon][sql-server-icon]][sql-server-doc]- \ - \ - [**SQL Server**][sql-server-doc]<br>(*Standard workflow only*) - \ - \ - Connect to your SQL Server on premises or an Azure SQL Database in the cloud so that you can manage records, run stored procedures, or perform queries. + <br><br>[**SQL Server**][sql-server-doc]<br>(*Standard workflow only*) + <br><br>Connect to your SQL Server on premises or an Azure SQL Database in the cloud so that you can manage records, run stored procedures, or perform queries. :::column-end::: :::row-end::: Azure Logic Apps provides the following built-in actions for running your own co :::row::: :::column::: [![Azure Functions icon][azure-functions-icon]][azure-functions-doc]- \ - \ - [**Azure Functions**][azure-functions-doc] - \ - \ - Call [Azure-hosted functions](../azure-functions/functions-overview.md) to run your own *code snippets* (C# or Node.js) within your workflow. + <br><br>[**Azure Functions**][azure-functions-doc] + <br><br>Call [Azure-hosted functions](../azure-functions/functions-overview.md) to run your own *code snippets* (C# or Node.js) within your workflow. :::column-end::: :::column::: [![Inline Code action icon][inline-code-icon]][inline-code-doc]- \ - \ - [**Inline Code**][inline-code-doc] - \ - \ - [Add and run inline JavaScript code snippets](../logic-apps/logic-apps-add-run-inline-code.md) from your workflow. + <br><br>[**Inline Code**][inline-code-doc] + <br><br>[Add and run inline JavaScript code snippets](../logic-apps/logic-apps-add-run-inline-code.md) from your workflow. 
:::column-end::: :::column::: [![Local Function Operations icon][local-function-icon]][local-function-doc]- \ - \ - [**Local Function Operations**][local-function-doc]<br>(Standard workflow only) - \ - \ - [Create and run .NET Framework code](../logic-apps/create-run-custom-code-functions.md) from your workflow. + <br><br>[**Local Function Operations**][local-function-doc]<br>(Standard workflow only) + <br><br>[Create and run .NET Framework code](../logic-apps/create-run-custom-code-functions.md) from your workflow. :::column-end::: :::column::: :::column-end::: Azure Logic Apps provides the following built-in actions for structuring and con :::row::: :::column::: [![Condition action icon][condition-icon]][condition-doc]- \ - \ - [**Condition**][condition-doc] - \ - \ - Evaluate a condition and run different actions based on whether the condition is true or false. + <br><br>[**Condition**][condition-doc] + <br><br>Evaluate a condition and run different actions based on whether the condition is true or false. :::column-end::: :::column::: [![For Each action icon][for-each-icon]][for-each-doc]- \ - \ - [**For Each**][for-each-doc] - \ - \ - Perform the same actions on every item in an array. + <br><br>[**For Each**][for-each-doc] + <br><br>Perform the same actions on every item in an array. :::column-end::: :::column::: [![Scope action icon][scope-icon]][scope-doc]- \ - \ - [**Scope**][scope-doc] - \ - \ - Group actions into *scopes*, which get their own status after the actions in the scope finish running. + <br><br>[**Scope**][scope-doc] + <br><br>Group actions into *scopes*, which get their own status after the actions in the scope finish running. :::column-end::: :::column::: [![Switch action icon][switch-icon]][switch-doc]- \ - \ - [**Switch**][switch-doc] - \ - \ - Group actions into *cases*, which are assigned unique values except for the default case. Run only that case whose assigned value matches the result from an expression, object, or token. If no matches exist, run the default case. + <br><br>[**Switch**][switch-doc] + <br><br>Group actions into *cases*, which are assigned unique values except for the default case. Run only that case whose assigned value matches the result from an expression, object, or token. If no matches exist, run the default case. :::column-end::: :::row-end::: :::row::: :::column::: [![Terminate action icon][terminate-icon]][terminate-doc]- \ - \ - [**Terminate**][terminate-doc] - \ - \ - Stop an actively running workflow. + <br><br>[**Terminate**][terminate-doc] + <br><br>Stop an actively running workflow. :::column-end::: :::column::: [![Until action icon][until-icon]][until-doc]- \ - \ - [**Until**][until-doc] - \ - \ - Repeat actions until the specified condition is true or some state has changed. + <br><br>[**Until**][until-doc] + <br><br>Repeat actions until the specified condition is true or some state has changed. :::column-end::: :::column::: :::column-end::: Azure Logic Apps provides the following built-in actions for working with data o :::row::: :::column::: [![Data Operations icon][data-operations-icon]][data-operations-doc]- \ - \ - [**Data Operations**][data-operations-doc] - \ - \ - Perform operations with data. - \ - \ - **Compose**: Create a single output from multiple inputs with various types. - \ - \ - **Create CSV table**: Create a comma-separated-value (CSV) table from an array with JSON objects. - \ - \ - **Create HTML table**: Create an HTML table from an array with JSON objects. 
- \ - \ - **Filter array**: Create an array from items in another array that meet your criteria. - \ - \ - **Join**: Create a string from all items in an array and separate those items with the specified delimiter. - \ - \ - **Parse JSON**: Create user-friendly tokens from properties and their values in JSON content so that you can use those properties in your workflow. - \ - \ - **Select**: Create an array with JSON objects by transforming items or values in another array and mapping those items to specified properties. + <br><br>[**Data Operations**][data-operations-doc] + <br><br>Perform operations with data. + <br><br>**Compose**: Create a single output from multiple inputs with various types. + <br><br>**Create CSV table**: Create a comma-separated-value (CSV) table from an array with JSON objects. + <br><br>**Create HTML table**: Create an HTML table from an array with JSON objects. + <br><br>**Filter array**: Create an array from items in another array that meet your criteria. + <br><br>**Join**: Create a string from all items in an array and separate those items with the specified delimiter. + <br><br>**Parse JSON**: Create user-friendly tokens from properties and their values in JSON content so that you can use those properties in your workflow. + <br><br>**Select**: Create an array with JSON objects by transforming items or values in another array and mapping those items to specified properties. :::column-end::: :::column::: ![Date Time action icon][date-time-icon]- \ - \ - **Date Time** - \ - \ - Perform operations with timestamps. - \ - \ - **Add to time**: Add the specified number of units to a timestamp. - \ - \ - **Convert time zone**: Convert a timestamp from the source time zone to the target time zone. - \ - \ - **Current time**: Return the current timestamp as a string. - \ - \ - **Get future time**: Return the current timestamp plus the specified time units. - \ - \ - **Get past time**: Return the current timestamp minus the specified time units. - \ - \ - **Subtract from time**: Subtract a number of time units from a timestamp. + <br><br>**Date Time** + <br><br>Perform operations with timestamps. + <br><br>**Add to time**: Add the specified number of units to a timestamp. + <br><br>**Convert time zone**: Convert a timestamp from the source time zone to the target time zone. + <br><br>**Current time**: Return the current timestamp as a string. + <br><br>**Get future time**: Return the current timestamp plus the specified time units. + <br><br>**Get past time**: Return the current timestamp minus the specified time units. + <br><br>**Subtract from time**: Subtract a number of time units from a timestamp. :::column-end::: :::column::: [![Variables action icon][variables-icon]][variables-doc]- \ - \ - [**Variables**][variables-doc] - \ - \ - Perform operations with variables. - \ - \ - **Append to array variable**: Insert a value as the last item in an array stored by a variable. - \ - \ - **Append to string variable**: Insert a value as the last character in a string stored by a variable. - \ - \ - **Decrement variable**: Decrease a variable by a constant value. - \ - \ - **Increment variable**: Increase a variable by a constant value. - \ - \ - **Initialize variable**: Create a variable and declare its data type and initial value. - \ - \ - **Set variable**: Assign a different value to an existing variable. + <br><br>[**Variables**][variables-doc] + <br><br>Perform operations with variables. 
+ <br><br>**Append to array variable**: Insert a value as the last item in an array stored by a variable. + <br><br>**Append to string variable**: Insert a value as the last character in a string stored by a variable. + <br><br>**Decrement variable**: Decrease a variable by a constant value. + <br><br>**Increment variable**: Increase a variable by a constant value. + <br><br>**Initialize variable**: Create a variable and declare its data type and initial value. + <br><br>**Set variable**: Assign a different value to an existing variable. :::column-end::: :::column::: :::column-end::: For more information, review the following documentation: :::row::: :::column::: [![AS2 v2 icon][as2-v2-icon]][as2-doc]- \ - \ - [**AS2 (v2)**][as2-doc]<br>(*Standard workflow only*) - \ - \ - Encode and decode messages that use the AS2 protocol. + <br><br>[**AS2 (v2)**][as2-doc]<br>(*Standard workflow only*) + <br><br>Encode and decode messages that use the AS2 protocol. :::column-end::: :::column::: [![EDIFACT icon][edifact-icon]][edifact-doc]- \ - \ - [**EDIFACT**][edifact-doc] - \ - \ - Encode and decode messages that use the EDIFACT protocol. + <br><br>[**EDIFACT**][edifact-doc] + <br><br>Encode and decode messages that use the EDIFACT protocol. :::column-end::: :::column::: [![Flat File icon][flat-file-icon]][flat-file-doc]- \ - \ - [**Flat File**][flat-file-doc] - \ - \ - Encode and decode XML messages between trading partners. + <br><br>[**Flat File**][flat-file-doc] + <br><br>Encode and decode XML messages between trading partners. :::column-end::: :::column::: [![Integration account icon][integration-account-icon]][integration-account-doc]- \ - \ - [**Integration Account Artifact Lookup**][integration-account-doc] - \ - \ - Get custom metadata for artifacts, such as trading partners, agreements, schemas, and so on, in your integration account. + <br><br>[**Integration Account Artifact Lookup**][integration-account-doc] + <br><br>Get custom metadata for artifacts, such as trading partners, agreements, schemas, and so on, in your integration account. :::column-end::: :::column::: [![Liquid Operations icon][liquid-icon]][liquid-transform-doc]- \ - \ - [**Liquid Operations**][liquid-transform-doc] - \ - \ - Convert the following formats by using Liquid templates: <br><br>- JSON to JSON <br>- JSON to TEXT <br>- XML to JSON <br>- XML to TEXT + <br><br>[**Liquid Operations**][liquid-transform-doc] + <br><br>Convert the following formats by using Liquid templates: <br><br>- JSON to JSON <br>- JSON to TEXT <br>- XML to JSON <br>- XML to TEXT :::column-end::: :::row-end::: :::row::: :::column::: [![RosettaNet icon][rosettanet-icon]][rosettanet-doc]- \ - \ - [**RosettaNet**][rosettanet-doc] - \ - \ - Encode and decode messages that use the RosettaNet protocol. + <br><br>[**RosettaNet**][rosettanet-doc] + <br><br>Encode and decode messages that use the RosettaNet protocol. :::column-end::: :::column::: [![SWIFT icon][swift-icon]][swift-doc]- \ - \ - [**SWIFT**][swift-doc]<br>(*Standard workflow only*) - \ - \ - Encode and decode Society for Worldwide Interbank Financial Telecommuncation (SIWFT) transactions in flat-file XML message format. + <br><br>[**SWIFT**][swift-doc]<br>(*Standard workflow only*) + <br><br>Encode and decode Society for Worldwide Interbank Financial Telecommuncation (SIWFT) transactions in flat-file XML message format. 
:::column-end::: :::column::: [![Transform XML icon][xml-transform-icon]][xml-transform-doc]- \ - \ - [**Transform XML**][xml-transform-doc] - \ - \ - Convert the source XML format to another XML format. + <br><br>[**Transform XML**][xml-transform-doc] + <br><br>Convert the source XML format to another XML format. :::column-end::: :::column::: [![X12 icon][x12-icon]][x12-doc]- \ - \ - [**X12**][x12-doc] - \ - \ - Encode and decode messages that use the X12 protocol. + <br><br>[**X12**][x12-doc] + <br><br>Encode and decode messages that use the X12 protocol. :::column-end::: :::column::: [![XML validation icon][xml-validate-icon]][xml-validate-doc]- \ - \ - [**XML Validation**][xml-validate-doc] - \ - \ - Validate XML documents against the specified schema. + <br><br>[**XML Validation**][xml-validate-doc] + <br><br>Validate XML documents against the specified schema. :::column-end::: :::row-end::: For more information, review the following documentation: <!-- Built-in icons --> [azure-ai-search-icon]: ./media/apis-list/azure-ai-search.png [azure-api-management-icon]: ./media/apis-list/azure-api-management.png-[azure-app-services-icon]: ./media/apis-list/azure-app-services.png +[azure-app-service-icon]: ./media/apis-list/azure-app-service.png [azure-automation-icon]: ./media/apis-list/azure-automation.png [azure-blob-storage-icon]: ./media/apis-list/azure-blob-storage.png [azure-cosmos-db-icon]: ./media/apis-list/azure-cosmos-db.png For more information, review the following documentation: <!--Built-in doc links--> [azure-ai-search-doc]: https://techcommunity.microsoft.com/t5/azure-integration-services-blog/public-preview-of-azure-openai-and-ai-search-in-app-connectors/ba-p/4049584 "Connect to AI Search so that you can perform document indexing and search operations in your workflow" [azure-api-management-doc]: ../api-management/get-started-create-service-instance.md "Create an Azure API Management service instance for managing and publishing your APIs"-[azure-app-services-doc]: ../logic-apps/logic-apps-custom-api-host-deploy-call.md "Integrate logic app workflows with App Service API Apps" +[azure-app-service-doc]: ../logic-apps/logic-apps-custom-api-host-deploy-call.md "Integrate logic app workflows with App Service API Apps" [azure-automation-doc]: /azure/logic-apps/connectors/built-in/reference/azureautomation/ "Connect to your Azure Automation accounts so you can create and manage Azure Automation jobs" [azure-blob-storage-doc]: /azure/logic-apps/connectors/built-in/reference/azureblob/ "Manage files in your blob container with Azure Blob storage" [azure-cosmos-db-doc]: /azure/logic-apps/connectors/built-in/reference/azurecosmosdb/ "Connect to Azure Cosmos DB so you can access and manage Azure Cosmos DB documents" |
connectors | Connectors Create Api Cosmos Db | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-cosmos-db.md | You can connect to Azure Cosmos DB from both **Logic App (Consumption)** and **L - Currently, only stateful workflows in a **Logic App (Standard)** resource can use both the managed connector operations and built-in operations. Stateless workflows can use only built-in operations. -- The Azure Cosmos DB connector supports only Azure Cosmos DB accounts created with [Azure Cosmos DB for NoSQL](../cosmos-db/choose-api.md#coresql-api).+- The Azure Cosmos DB connector supports only Azure Cosmos DB accounts created with [Azure Cosmos DB for NoSQL](/azure/cosmos-db/choose-api#coresql-api). ## Prerequisites - An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). -- An [Azure Cosmos DB account](../cosmos-db/sql/create-cosmosdb-resources-portal.md).+- An [Azure Cosmos DB account](/azure/cosmos-db/sql/create-cosmosdb-resources-portal). - A logic app workflow from which you want to access your Azure Cosmos DB account. To use the Azure Cosmos DB trigger, you need to [create your logic app using the **Logic App (Standard)** resource type](../logic-apps/create-single-tenant-workflows-azure-portal.md), and add a blank workflow. You can connect to Azure Cosmos DB from both **Logic App (Consumption)** and **L In Azure Logic Apps, every workflow must start with a [trigger](../logic-apps/logic-apps-overview.md#logic-app-concepts), which fires when a specific event happens or when a specific condition is met. -If you're working with the **Logic App (Standard)** resource type, the built-in trigger called **When an item is created or modified (preview)** is available and is based on the [Azure Cosmos DB change feed pattern](../cosmos-db/sql/change-feed-design-patterns.md). This trigger is unavailable for the **Logic App (Consumption)** resource type. +If you're working with the **Logic App (Standard)** resource type, the built-in trigger called **When an item is created or modified (preview)** is available and is based on the [Azure Cosmos DB change feed pattern](/azure/cosmos-db/sql/change-feed-design-patterns). This trigger is unavailable for the **Logic App (Consumption)** resource type. ### [Consumption](#tab/consumption) To add an Azure Cosmos DB built-in trigger to a logic app workflow in single-ten | **Database Id** | Yes | <*database-name*> | The name of the database with the container that you want to monitor. This database should also have the lease container. If you don't already have a lease container, the connector will create one for you in a later step. | | **Monitored Container Id** | Yes | <*container-name*> | The name of the container that you want to monitor. This container should already exist in the specified database. | | **Lease Container Id** | Yes | <*lease-container-name*> | The name of either an existing lease container or a new container that you want created for you. The trigger pre-fills `leases` as a common default name. |- | **Create Lease Container** | No | **No** or **Yes** | If the lease container already exists in the specified database, select **No**. If you want the trigger to create this container, select **Yes**. If you select **Yes** and are using manual throughput dedicated for each container, make sure to open the **Add new parameter** list to select the **Lease Container Throughput** property. 
Enter the number of [request units (RUs)](../cosmos-db/request-units.md) that you want to provision for this container. | + | **Create Lease Container** | No | **No** or **Yes** | If the lease container already exists in the specified database, select **No**. If you want the trigger to create this container, select **Yes**. If you select **Yes** and are using manual throughput dedicated for each container, make sure to open the **Add new parameter** list to select the **Lease Container Throughput** property. Enter the number of [request units (RUs)](/azure/cosmos-db/request-units) that you want to provision for this container. | ||||| The following image shows an example trigger: |
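As a rough illustration of the lease container described above, the following Azure CLI sketch pre-creates a `leases` container with dedicated manual throughput so that **Create Lease Container** can stay set to **No**. The resource group, account, and database names are placeholders, not values from the article; only the container name `leases` and the `/id` partition key follow the common change-feed convention.

```azurecli-interactive
# Sketch only: pre-create the lease container that the built-in trigger uses.
# Replace the resource group, account, and database names with your own.
az cosmosdb sql container create \
  --resource-group "my-resource-group" \
  --account-name "my-cosmos-account" \
  --database-name "my-database" \
  --name "leases" \
  --partition-key-path "/id" \
  --throughput 400
```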
connectors | Managed | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/managed.md | In the Standard workflow designer, *all* managed connectors appear under the **A :::row::: :::column::: [![Azure Blob Storage icon][azure-blob-storage-icon]][azure-blob-storage-doc]- \ - \ - [**Azure Blob Storage**][azure-blob-storage-doc] - \ - \ - Connect to your Azure Storage account so that you can create and manage blob content. + <br><br>[**Azure Blob Storage**][azure-blob-storage-doc] + <br><br>Connect to your Azure Storage account so that you can create and manage blob content. :::column-end::: :::column::: [![Azure Event Hubs icon][azure-event-hubs-icon]][azure-event-hubs-doc]- \ - \ - [**Azure Event Hubs**][azure-event-hubs-doc] - \ - \ - Consume and publish events through an event hub. For example, get output from your workflow with Event Hubs, and then send that output to a real-time analytics provider. + <br><br>[**Azure Event Hubs**][azure-event-hubs-doc] + <br><br>Consume and publish events through an event hub. For example, get output from your workflow with Event Hubs, and then send that output to a real-time analytics provider. :::column-end::: :::column::: [![Azure Queues icon][azure-queues-icon]][azure-queues-doc]- \ - \ - [**Azure Queues**][azure-queues-doc] - \ - \ - Connect to your Azure Storage account so that you can create and manage queues and messages. + <br><br>[**Azure Queues**][azure-queues-doc] + <br><br>Connect to your Azure Storage account so that you can create and manage queues and messages. :::column-end::: :::column::: [![Azure Service Bus icon][azure-service-bus-icon]][azure-service-bus-doc]- \ - \ - [**Azure Service Bus**][azure-service-bus-doc] - \ - \ - Manage asynchronous messages, sessions, and topic subscriptions with the most commonly used connector in Logic Apps. + <br><br>[**Azure Service Bus**][azure-service-bus-doc] + <br><br>Manage asynchronous messages, sessions, and topic subscriptions with the most commonly used connector in Logic Apps. :::column-end::: :::row-end::: :::row::: :::column::: [![Azure Table Storage icon][azure-table-storage-icon]][azure-table-storage-doc]- \ - \ - [**Azure Table Storage**][azure-table-storage-doc] - \ - \ - Connect to your Azure Storage account so that you can create, update, query, and manage tables. + <br><br>[**Azure Table Storage**][azure-table-storage-doc] + <br><br>Connect to your Azure Storage account so that you can create, update, query, and manage tables. :::column-end::: :::column::: [![File System icon][file-system-icon]][file-system-doc]- \ - \ - [**File System**][file-system-doc] - \ - \ - Connect to your on-premises file share so that you can create and manage files. + <br><br>[**File System**][file-system-doc] + <br><br>Connect to your on-premises file share so that you can create and manage files. :::column-end::: :::column::: [![FTP icon][ftp-icon]][ftp-doc]- \ - \ - [**FTP**][ftp-doc] - \ - \ - Connect to FTP servers you can access from the internet so that you can work with your files and folders. + <br><br>[**FTP**][ftp-doc] + <br><br>Connect to FTP servers you can access from the internet so that you can work with your files and folders. :::column-end::: :::column::: [![Office 365 Outlook icon][office-365-outlook-icon]][office-365-outlook-doc]- \ - \ - [**Office 365 Outlook**][office-365-outlook-doc] - \ - \ - Connect to your work or school email account so that you can create and manage emails, tasks, calendar events and meetings, contacts, requests, and more. 
+ <br><br>[**Office 365 Outlook**][office-365-outlook-doc] + <br><br>Connect to your work or school email account so that you can create and manage emails, tasks, calendar events and meetings, contacts, requests, and more. :::column-end::: :::row-end::: :::row::: :::column::: [![Salesforce icon][salesforce-icon]][salesforce-doc]- \ - \ - [**Salesforce**][salesforce-doc] - \ - \ - Connect to your Salesforce account so that you can create and manage items such as records, jobs, objects, and more. + <br><br>[**Salesforce**][salesforce-doc] + <br><br>Connect to your Salesforce account so that you can create and manage items such as records, jobs, objects, and more. :::column-end::: :::column::: [![SharePoint Online icon][sharepoint-online-icon]][sharepoint-online-doc]- \ - \ - [**SharePoint Online**][sharepoint-online-doc] - \ - \ - Connect to SharePoint Online so that you can manage files, attachments, folders, and more. + <br><br>[**SharePoint Online**][sharepoint-online-doc] + <br><br>Connect to SharePoint Online so that you can manage files, attachments, folders, and more. :::column-end::: :::column::: [![SFTP-SSH icon][sftp-ssh-icon]][sftp-ssh-doc]- \ - \ - [**SFTP-SSH**][sftp-ssh-doc] - \ - \ - Connect to SFTP servers that you can access from the internet by using SSH so that you can work with your files and folders. + <br><br>[**SFTP-SSH**][sftp-ssh-doc] + <br><br>Connect to SFTP servers that you can access from the internet by using SSH so that you can work with your files and folders. :::column-end::: :::column::: [![SQL Server icon][sql-server-icon]][sql-server-doc]- \ - \ - [**SQL Server**][sql-server-doc] - \ - \ - Connect to your SQL Server on premises or an Azure SQL Database in the cloud so that you can manage records, run stored procedures, or perform queries. + <br><br>[**SQL Server**][sql-server-doc] + <br><br>Connect to your SQL Server on premises or an Azure SQL Database in the cloud so that you can manage records, run stored procedures, or perform queries. 
:::column-end::: :::row-end::: In the Standard workflow designer, *all* managed connectors appear under the **A :::row::: :::column::: [![IBM 3270 icon][ibm-3270-icon]][ibm-3270-doc]- \ - \ - [**IBM 3270**][ibm-3270-doc] + <br><br>[**IBM 3270**][ibm-3270-doc] :::column-end::: :::column::: [![IBM MQ icon][ibm-mq-icon]][ibm-mq-doc]- \ - \ - [**MQ**][ibm-mq-doc] + <br><br>[**MQ**][ibm-mq-doc] :::column-end::: :::column::: [![SAP icon][sap-icon]][sap-connector-doc]- \ - \ - [**SAP**][sap-connector-doc] + <br><br>[**SAP**][sap-connector-doc] :::column-end::: :::column::: :::column-end::: For a Consumption workflow, this section lists example [Standard connectors](#st :::row::: :::column::: [![Apache Impala][apache-impala-icon]][apache-impala-doc]- \ - \ - [**Apache Impala**][apache-impala-doc] + <br><br>[**Apache Impala**][apache-impala-doc] :::column-end::: :::column::: [![Biztalk Server icon][biztalk-server-icon]][biztalk-server-doc]- \ - \ - [**Biztalk Server**][biztalk-server-doc] + <br><br>[**Biztalk Server**][biztalk-server-doc] :::column-end::: :::column::: [![File System icon][file-system-icon]][file-system-doc]- \ - \ - [**File System**][file-system-doc] + <br><br>[**File System**][file-system-doc] :::column-end::: :::column::: [![IBM DB2 icon][ibm-db2-icon]][ibm-db2-doc]- \ - \ - [**IBM DB2**][ibm-db2-doc] + <br><br>[**IBM DB2**][ibm-db2-doc] :::column-end::: :::column::: [![IBM Informix icon][ibm-informix-icon]][ibm-informix-doc]- \ - \ - [**IBM Informix**][ibm-informix-doc] + <br><br>[**IBM Informix**][ibm-informix-doc] :::column-end::: :::column::: [![MySQL icon][mysql-icon]][mysql-doc]- \ - \ - [**MySQL**][mysql-doc] + <br><br>[**MySQL**][mysql-doc] :::column-end::: :::row-end::: :::row::: :::column::: [![Oracle DB icon][oracle-db-icon]][oracle-db-doc]- \ - \ - [**Oracle DB**][oracle-db-doc] + <br><br>[**Oracle DB**][oracle-db-doc] :::column-end::: :::column::: [![PostgreSQL icon][postgre-sql-icon]][postgre-sql-doc]- \ - \ - [**PostgreSQL**][postgre-sql-doc] + <br><br>[**PostgreSQL**][postgre-sql-doc] :::column-end::: :::column::: [![SAP icon][sap-icon]][sap-connector-doc]- \ - \ - [**SAP**][sap-connector-doc] + <br><br>[**SAP**][sap-connector-doc] :::column-end::: :::column::: [![SharePoint Server icon][sharepoint-server-icon]][sharepoint-server-doc]- \ - \ - [**SharePoint Server**][sharepoint-server-doc] + <br><br>[**SharePoint Server**][sharepoint-server-doc] :::column-end::: :::column::: [![SQL Server icon][sql-server-icon]][sql-server-doc]- \ - \ - [**SQL Server**][sql-server-doc] + <br><br>[**SQL Server**][sql-server-doc] :::column-end::: :::column::: [![Teradata icon][teradata-icon]][teradata-doc]- \ - \ - [**Teradata**][teradata-doc] + <br><br>[**Teradata**][teradata-doc] :::column-end::: :::row-end::: For more information, review the following documentation: :::row::: :::column::: [![AS2 Decode v2 icon][as2-v2-icon]][as2-doc]- \ - \ - [**AS2 Decode (v2)**][as2-doc] + <br><br>[**AS2 Decode (v2)**][as2-doc] :::column-end::: :::column::: [![AS2 Encode (v2) icon][as2-v2-icon]][as2-doc]- \ - \ - [**AS2 Encode (v2)**][as2-doc] + <br><br>[**AS2 Encode (v2)**][as2-doc] :::column-end::: :::column::: [![AS2 decoding icon][as2-icon]][as2-doc]- \ - \ - [**AS2 decoding**][as2-doc] + <br><br>[**AS2 decoding**][as2-doc] :::column-end::: :::column::: [![AS2 encoding icon][as2-icon]][as2-doc]- \ - \ - [**AS2 encoding**][as2-doc] + <br><br>[**AS2 encoding**][as2-doc] :::column-end::: :::row-end::: :::row::: :::column::: [![EDIFACT decoding icon][edifact-icon]][edifact-decode-doc]- \ - \ - 
[**EDIFACT decoding**][edifact-decode-doc] + <br><br>[**EDIFACT decoding**][edifact-decode-doc] :::column-end::: :::column::: [![EDIFACT encoding icon][edifact-icon]][edifact-encode-doc]- \ - \ - [**EDIFACT encoding**][edifact-encode-doc] + <br><br>[**EDIFACT encoding**][edifact-encode-doc] :::column-end::: :::column::: [![X12 decoding icon][x12-icon]][x12-decode-doc]- \ - \ - [**X12 decoding**][x12-decode-doc] + <br><br>[**X12 decoding**][x12-decode-doc] :::column-end::: :::column::: [![X12 encoding icon][x12-icon]][x12-encode-doc]- \ - \ - [**X12 encoding**][x12-encode-doc] + <br><br>[**X12 encoding**][x12-encode-doc] :::column-end::: :::row-end::: In an integration service environment (ISE), these managed connectors also have :::row::: :::column::: [![AS2 ISE icon][as2-icon]][as2-doc]- \ - \ - [**AS2** ISE][as2-doc] + <br><br>[**AS2** ISE][as2-doc] :::column-end::: :::column::: [![Azure Automation ISE icon][azure-automation-icon]][azure-automation-doc]- \ - \ - [**Azure Automation** ISE][azure-automation-doc] + <br><br>[**Azure Automation** ISE][azure-automation-doc] :::column-end::: :::column::: [![Azure Blob Storage ISE icon][azure-blob-storage-icon]][azure-blob-storage-doc]- \ - \ - [**Azure Blob Storage** ISE][azure-blob-storage-doc] + <br><br>[**Azure Blob Storage** ISE][azure-blob-storage-doc] :::column-end::: :::column::: [![Azure Cosmos DB ISE icon][azure-cosmos-db-icon]][azure-cosmos-db-doc]- \ - \ - [**Azure Cosmos DB** ISE][azure-cosmos-db-doc] + <br><br>[**Azure Cosmos DB** ISE][azure-cosmos-db-doc] :::column-end::: :::row-end::: :::row::: :::column::: [![Azure Event Hubs ISE icon][azure-event-hubs-icon]][azure-event-hubs-doc]- \ - \ - [**Azure Event Hubs** ISE][azure-event-hubs-doc] + <br><br>[**Azure Event Hubs** ISE][azure-event-hubs-doc] :::column-end::: :::column::: [![Azure Event Grid ISE icon][azure-event-grid-icon]][azure-event-grid-doc]- \ - \ - [**Azure Event Grid** ISE][azure-event-grid-doc] + <br><br>[**Azure Event Grid** ISE][azure-event-grid-doc] :::column-end::: :::column::: [![Azure Files ISE icon][azure-file-storage-icon]][azure-file-storage-doc]- \ - \ - [**Azure Files** ISE][azure-file-storage-doc] + <br><br>[**Azure Files** ISE][azure-file-storage-doc] :::column-end::: :::column::: [![Azure Key Vault ISE icon][azure-key-vault-icon]][azure-key-vault-doc]- \ - \ - [**Azure Key Vault** ISE][azure-key-vault-doc] + <br><br>[**Azure Key Vault** ISE][azure-key-vault-doc] :::column-end::: :::row-end::: :::row::: :::column::: [![Azure Monitor Logs ISE icon][azure-monitor-logs-icon]][azure-monitor-logs-doc]- \ - \ - [**Azure Monitor Logs** ISE][azure-monitor-logs-doc] + <br><br>[**Azure Monitor Logs** ISE][azure-monitor-logs-doc] :::column-end::: :::column::: [![Azure Service Bus ISE icon][azure-service-bus-icon]][azure-service-bus-doc]- \ - \ - [**Azure Service Bus** ISE][azure-service-bus-doc] + <br><br>[**Azure Service Bus** ISE][azure-service-bus-doc] :::column-end::: :::column::: [![Azure Synapse Analytics ISE icon][azure-sql-data-warehouse-icon]][azure-sql-data-warehouse-doc]- \ - \ - [**Azure Synapse Analytics** ISE][azure-sql-data-warehouse-doc] + <br><br>[**Azure Synapse Analytics** ISE][azure-sql-data-warehouse-doc] :::column-end::: :::column::: [![Azure Table Storage ISE icon][azure-table-storage-icon]][azure-table-storage-doc]- \ - \ - [**Azure Table Storage** ISE][azure-table-storage-doc] + <br><br>[**Azure Table Storage** ISE][azure-table-storage-doc] :::column-end::: :::row-end::: :::row::: :::column::: [![Azure Queues ISE 
icon][azure-queues-icon]][azure-queues-doc]- \ - \ - [**Azure Queues** ISE][azure-queues-doc] + <br><br>[**Azure Queues** ISE][azure-queues-doc] :::column-end::: :::column::: [![EDIFACT ISE icon][edifact-icon]][edifact-doc]- \ - \ - [**EDIFACT** ISE][edifact-doc] + <br><br>[**EDIFACT** ISE][edifact-doc] :::column-end::: :::column::: [![File System ISE icon][file-system-icon]][file-system-doc]- \ - \ - [**File System** ISE][file-system-doc] + <br><br>[**File System** ISE][file-system-doc] :::column-end::: :::column::: [![FTP ISE icon][ftp-icon]][ftp-doc]- \ - \ - [**FTP** ISE][ftp-doc] + <br><br>[**FTP** ISE][ftp-doc] :::column-end::: :::row-end::: :::row::: :::column::: [![IBM 3270 ISE icon][ibm-3270-icon]][ibm-3270-doc]- \ - \ - [**IBM 3270** ISE][ibm-3270-doc] + <br><br>[**IBM 3270** ISE][ibm-3270-doc] :::column-end::: :::column::: [![IBM DB2 ISE icon][ibm-db2-icon]][ibm-db2-doc]- \ - \ - [**IBM DB2** ISE][ibm-db2-doc] + <br><br>[**IBM DB2** ISE][ibm-db2-doc] :::column-end::: :::column::: [![IBM MQ ISE icon][ibm-mq-icon]][ibm-mq-doc]- \ - \ - [**IBM MQ** ISE][ibm-mq-doc] + <br><br>[**IBM MQ** ISE][ibm-mq-doc] :::column-end::: :::column::: [![SAP ISE icon][sap-icon]][sap-connector-doc]- \ - \ - [**SAP** ISE][sap-connector-doc] + <br><br>[**SAP** ISE][sap-connector-doc] :::column-end::: :::row-end::: :::row::: :::column::: [![SFTP-SSH ISE icon][sftp-ssh-icon]][sftp-ssh-doc]- \ - \ - [**SFTP-SSH** ISE][sftp-ssh-doc] + <br><br>[**SFTP-SSH** ISE][sftp-ssh-doc] :::column-end::: :::column::: [![SMTP ISE icon][smtp-icon]][smtp-doc]- \ - \ - [**SMTP** ISE][smtp-doc] + <br><br>[**SMTP** ISE][smtp-doc] :::column-end::: :::column::: [![SQL Server ISE icon][sql-server-icon]][sql-server-doc]- \ - \ - [**SQL Server** ISE][sql-server-doc] + <br><br>[**SQL Server** ISE][sql-server-doc] :::column-end::: :::column::: [![X12 ISE icon][x12-icon]][x12-doc]- \ - \ - [**X12** ISE][x12-doc] + <br><br>[**X12** ISE][x12-doc] :::column-end::: :::row-end::: |
container-apps | Microservices Dapr Bindings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr-bindings.md | cd bindings-dapr-nodejs-cron-postgres | Parameter | Description | | | -- | | Environment Name | Prefix for the resource group created to hold all Azure resources. |- | Azure Location | The Azure location for your resources. [Make sure you select a location available for Azure PostgreSQL](../postgresql/flexible-server/overview.md#azure-regions). | + | Azure Location | The Azure location for your resources. [Make sure you select a location available for Azure PostgreSQL](/azure/postgresql/flexible-server/overview#azure-regions). | | Azure Subscription | The Azure subscription for your resources. | 1. Run `azd up` to provision the infrastructure and deploy the application to Azure Container Apps in a single command. cd bindings-dapr-python-cron-postgres | Parameter | Description | | | -- | | Environment Name | Prefix for the resource group created to hold all Azure resources. |- | Azure Location | The Azure location for your resources. [Make sure you select a location available for Azure PostgreSQL](../postgresql/flexible-server/overview.md#azure-regions). | + | Azure Location | The Azure location for your resources. [Make sure you select a location available for Azure PostgreSQL](/azure/postgresql/flexible-server/overview#azure-regions). | | Azure Subscription | The Azure subscription for your resources. | 1. Run `azd up` to provision the infrastructure and deploy the application to Azure Container Apps in a single command. cd bindings-dapr-csharp-cron-postgres | Parameter | Description | | | -- | | Environment Name | Prefix for the resource group created to hold all Azure resources. |- | Azure Location | The Azure location for your resources. [Make sure you select a location available for Azure PostgreSQL](../postgresql/flexible-server/overview.md#azure-regions). | + | Azure Location | The Azure location for your resources. [Make sure you select a location available for Azure PostgreSQL](/azure/postgresql/flexible-server/overview#azure-regions). | | Azure Subscription | The Azure subscription for your resources. | 1. Run `azd up` to provision the infrastructure and deploy the application to Azure Container Apps in a single command. |
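A minimal sketch of the Azure Developer CLI flow that the steps above describe; on the first run, `azd up` prompts for the environment name, Azure location, and subscription listed in the parameter tables.

```azurecli-interactive
# Sketch only: sign in, then provision the infrastructure and deploy the app in one command.
azd auth login
azd up
```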
container-apps | Tutorial Java Quarkus Connect Managed Identity Postgresql Database | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/tutorial-java-quarkus-connect-managed-identity-postgresql-database.md | -[Azure Container Apps](overview.md) provides a [managed identity](managed-identity.md) for your app, which is a turn-key solution for securing access to [Azure Database for PostgreSQL](../postgresql/index.yml) and other Azure services. Managed identities in Container Apps make your app more secure by eliminating secrets from your app, such as credentials in the environment variables. +[Azure Container Apps](overview.md) provides a [managed identity](managed-identity.md) for your app, which is a turn-key solution for securing access to [Azure Database for PostgreSQL](/azure/postgresql/) and other Azure services. Managed identities in Container Apps make your app more secure by eliminating secrets from your app, such as credentials in the environment variables. -This tutorial walks you through the process of building, configuring, deploying, and scaling Java container apps on Azure. At the end of this tutorial, you'll have a [Quarkus](https://quarkus.io) application storing data in a [PostgreSQL](../postgresql/index.yml) database with a managed identity running on [Container Apps](overview.md). +This tutorial walks you through the process of building, configuring, deploying, and scaling Java container apps on Azure. At the end of this tutorial, you'll have a [Quarkus](https://quarkus.io) application storing data in a [PostgreSQL](/azure/postgresql/) database with a managed identity running on [Container Apps](overview.md). What you will learn: az acr create \ ## 3. Clone the sample app and prepare the container image -This tutorial uses a sample Fruits list app with a web UI that calls a Quarkus REST API backed by [Azure Database for PostgreSQL](../postgresql/index.yml). The code for the app is available [on GitHub](https://github.com/quarkusio/quarkus-quickstarts/tree/main/hibernate-orm-panache-quickstart). To learn more about writing Java apps using Quarkus and PostgreSQL, see the [Quarkus Hibernate ORM with Panache Guide](https://quarkus.io/guides/hibernate-orm-panache) and the [Quarkus Datasource Guide](https://quarkus.io/guides/datasource). +This tutorial uses a sample Fruits list app with a web UI that calls a Quarkus REST API backed by [Azure Database for PostgreSQL](/azure/postgresql/). The code for the app is available [on GitHub](https://github.com/quarkusio/quarkus-quickstarts/tree/main/hibernate-orm-panache-quickstart). To learn more about writing Java apps using Quarkus and PostgreSQL, see the [Quarkus Hibernate ORM with Panache Guide](https://quarkus.io/guides/hibernate-orm-panache) and the [Quarkus Datasource Guide](https://quarkus.io/guides/datasource). Run the following commands in your terminal to clone the sample repo and set up the sample app environment. |
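As a sketch of the clone step mentioned above, the commands below fetch the Quarkus quickstart repository referenced in the article and switch to the sample's directory; the directory name comes from that repository's layout.

```azurecli-interactive
# Sketch only: clone the sample Fruits list app referenced above.
git clone https://github.com/quarkusio/quarkus-quickstarts
cd quarkus-quickstarts/hibernate-orm-panache-quickstart
```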
container-registry | Tasks Agent Pools | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tasks-agent-pools.md | This feature is available in the **Premium** container registry service tier. Fo - Task agent pools currently support Linux nodes. Windows nodes aren't currently supported. - Task agent pools are available in preview in the following regions: West US 2, South Central US, East US 2, East US, Central US, West Europe, North Europe, Canada Central, East Asia, Switzerland North, USGov Arizona, USGov Texas, and USGov Virginia. - For each registry, the default total vCPU (core) quota is 16 for all standard agent pools and is 0 for isolated agent pools. Open a [support request][open-support-ticket] for additional allocation.-- You can't currently cancel a task run on an agent pool. ## Prerequisites |
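For context on the agent pool limits described above, here's a hedged Azure CLI sketch that creates a standard-tier Linux agent pool in a Premium registry; the registry and pool names are placeholders, and the S1 tier is just one of the available standard tiers.

```azurecli-interactive
# Sketch only: create a Linux task agent pool in a Premium registry.
az acr agentpool create \
  --registry "myregistry" \
  --name "myagentpool" \
  --tier S1
```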
copilot | Manage Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/manage-access.md | Title: Manage access to Microsoft Copilot in Azure description: Learn how administrators can manage user access to Microsoft Copilot in Azure. Previously updated : 05/28/2024 Last updated : 08/13/2024 -By default, Copilot in Azure is available to all users in a tenant. However, [Global Administrators](/entra/identity/role-based-access-control/permissions-reference#global-administrator) can choose to control access to Copilot in Azure for their organization. If you turn off access for your tenant, you can still grant access to specific Microsoft Entra users or groups. +By default, Copilot in Azure is available to all users in a tenant. However, [Global Administrators](/entra/identity/role-based-access-control/permissions-reference#global-administrator) can manage access to Copilot in Azure for their organization. Access can also be optionally granted to specific Microsoft Entra users or groups. ++If Copilot in Azure is not available for a user, they'll see an unauthorized message when they select the **Copilot** button in the Azure portal. ++> [!NOTE] +> In some cases, your tenant may not have access to Copilot in Azure by default. Global Administrators can enable access by following the steps described in this article at any time. As always, Microsoft Copilot in Azure only has access to resources that the user has access to. It can only take actions that the user has permission to perform, and requires confirmation before making changes. Copilot in Azure complies with all existing access management rules and protections such as Azure role-based access control (Azure RBAC), Privileged Identity Management, Azure Policy, and resource locks. Global Administrators for a tenant can change the **Access management** selectio > [!IMPORTANT] > In order to use Microsoft Copilot in Azure, your organization must allow websocket connections to `https://directline.botframework.com`. Please ask your network administrator to enable this connection. ++ ## Next steps - [Learn more about Microsoft Copilot in Azure](overview.md). |
cosmos-db | Access Key Vault Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/access-key-vault-managed-identity.md | - Title: Use a managed identity to access Azure Key Vault from Azure Cosmos DB -description: Use managed identity in Azure Cosmos DB to access Azure Key Vault. ---- Previously updated : 06/01/2022-----# Access Azure Key Vault from Azure Cosmos DB using a managed identity --Azure Cosmos DB may need to read secret/key data from Azure Key Vault. For example, your Azure Cosmos DB may require a customer-managed key stored in Azure Key Vault. To do this, Azure Cosmos DB should be configured with a managed identity, and then an Azure Key Vault access policy should grant the managed identity access. ---## Prerequisites --- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- An existing Azure Cosmos DB API for NoSQL account. [Create an Azure Cosmos DB API for NoSQL account](nosql/quickstart-portal.md)-- An existing Azure Key Vault resource. [Create a key vault using the Azure CLI](/azure/key-vault/general/quick-create-cli)-- To perform the steps in this article, install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in to Azure](/cli/azure/authenticate-azure-cli).--## Prerequisite check --1. In a terminal or command window, store the names of your Azure Key Vault resource, Azure Cosmos DB account and resource group as shell variables named ``keyVaultName``, ``cosmosName``, and ``resourceGroupName``. -- ```azurecli-interactive - # Variable for function app name - keyVaultName="msdocs-keyvault" - - # Variable for Azure Cosmos DB account name - cosmosName="msdocs-cosmos-app" -- # Variable for resource group name - resourceGroupName="msdocs-cosmos-keyvault-identity" - ``` -- > [!NOTE] - > These variables will be re-used in later steps. This example assumes your Azure Cosmos DB account name is ``msdocs-cosmos-app``, your key vault name is ``msdocs-keyvault`` and your resource group name is ``msdocs-cosmos-keyvault-identity``. ---## Create a system-assigned managed identity in Azure Cosmos DB --First, create a system-assigned managed identity for the existing Azure Cosmos DB account. --> [!IMPORTANT] -> This how-to guide assumes that you are using a system-assigned managed identity. Many of the steps are similar when using a user-assigned managed identity. --1. Run [``az cosmosdb identity assign``](/cli/azure/cosmosdb/identity#az-cosmosdb-identity-assign) to create a new system-assigned managed identity. -- ```azurecli-interactive - az cosmosdb identity assign \ - --resource-group $resourceGroupName \ - --name $cosmosName - ``` --1. Retrieve the metadata of the system-assigned managed identity using [``az cosmosdb identity show``](/cli/azure/cosmosdb/identity#az-cosmosdb-identity-show), filter to just return the ``principalId`` property using the **query** parameter, and store the result in a shell variable named ``principal``. -- ```azurecli-interactive - principal=$( - az cosmosdb identity show \ - --resource-group $resourceGroupName \ - --name $cosmosName \ - --query principalId \ - --output tsv - ) -- echo $principal - ``` -- > [!NOTE] - > This variable will be re-used in a later step. --## Create an Azure Key Vault access policy --In this step, create an access policy in Azure Key Vault using the previously managed identity. --1. 
Use the [``az keyvault set-policy``](/cli/azure/keyvault#az-keyvault-set-policy) command to create an access policy in Azure Key Vault that gives the Azure Cosmos DB managed identity permission to access Key Vault. Specifically, the policy will use the **key-permissions** parameters to grant permissions to ``get``, ``list``, and ``import`` keys. -- ```azurecli-interactive - az keyvault set-policy \ - --name $keyVaultName \ - --object-id $principal \ - --key-permissions get list import - ``` --## Next steps --* To use customer-managed keys in Azure Key Vault with your Azure Cosmos DB account, see [configure customer-managed keys](how-to-setup-cmk.md#using-managed-identity) -* To use Azure Key Vault to manage secrets, see [secure credentials](store-credentials-key-vault.md). |
cosmos-db | Access Previews | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/access-previews.md | - Title: Request access to Azure Cosmos DB previews -description: Learn how to request access to Azure Cosmos DB previews --- Previously updated : 04/13/2022----# Access Azure Cosmos DB Preview Features ---## Steps to register for a preview feature from the portal --Azure Cosmos DB offers several preview features that you can request access to. Here are the steps to request access to these preview features. --1. Go to **Preview Features** area in your Azure subscription. -2. Under **Type**, select "Microsoft.DocumentDB". -3. Click on the feature you would like access to in the list of available preview features. -4. Click the **Register** button at the bottom of the page to join the preview. ---> [!TIP] -> If your request is stuck in the **Pending** state for an abnormal amount of time, [create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md). --## Next steps --- Learn [how to choose an API](choose-api.md) in Azure Cosmos DB-- [Get started with Azure Cosmos DB for NoSQL](nosql/quickstart-dotnet.md)-- [Get started with Azure Cosmos DB for MongoDB](mongodb/create-mongodb-nodejs.md)-- [Get started with Azure Cosmos DB for Cassandra](cassandr)-- [Get started with Azure Cosmos DB for Gremlin](gremlin/quickstart-dotnet.md)-- [Get started with Azure Cosmos DB for Table](table/quickstart-dotnet.md) |
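The portal steps recorded above can also be performed from the Azure CLI; this is a hedged sketch in which the feature name is a placeholder for whichever Microsoft.DocumentDB preview you want to join.

```azurecli-interactive
# Sketch only: register a Microsoft.DocumentDB preview feature, then check its registration state.
az feature register --namespace "Microsoft.DocumentDB" --name "<preview-feature-name>"
az feature show --namespace "Microsoft.DocumentDB" --name "<preview-feature-name>" --query properties.state
```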
cosmos-db | Ai Advantage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/ai-advantage.md | - Title: Try free with Azure AI Advantage- -description: Try Azure Cosmos DB free with the Azure AI Advantage offer. Innovate with a full, integrated stack purpose-built for AI-powered applications. ------ - ignite-2023 - Previously updated : 11/08/2023---# Try Azure Cosmos DB free with Azure AI Advantage ---Azure offers a full, integrated stack purpose-built for AI-powered applications. If you build your AI application stack on Azure using Azure Cosmos DB, your design can lead to solutions that get to market faster, experience lower latency, and have comprehensive built-in security. --There are many benefits when using Azure Cosmos DB and Azure AI together: --- Manage provisioned throughput to scale seamlessly as your app grows--- Rely on world-class infrastructure and security to grow your business while safeguarding your data--- Enhance the reliability of your generative AI applications by using the speed of Azure Cosmos DB to retrieve and process data--## The offer --The Azure AI Advantage offer is for existing Azure AI and GitHub Copilot customers who want to use Azure Cosmos DB as part of their solution stack. With this offer, you get: --- Free 40,000 [RU/s](request-units.md) of Azure Cosmos DB throughput (equivalent of up to $6,000) for 90 days.--- Funding to implement a new AI application using Azure Cosmos DB and/or Azure Kubernetes Service. For more information, speak to your Microsoft representative.--If you decide that Azure Cosmos DB is right for you, you can receive up to 63% discount on [Azure Cosmos DB prices through Reserved Capacity](reserved-capacity.md). --## Get started --Get started with this offer by ensuring that you have the prerequisite services before applying. --1. Make sure that you have an Azure account with an active subscription. If you don't already have an account, [create an account for free](https://azure.microsoft.com/free). --1. Ensure that you previously used one of the qualifying services in your subscription: -- - Azure AI Services -- - Azure OpenAI Service -- - Azure Machine Learning -- - Azure AI Search -- - GitHub Copilot --1. Create a new Azure Cosmos DB account using one of the following APIs: -- - API for NoSQL -- - API for MongoDB RU -- - API for Apache Cassandra -- - API for Apache Gremlin -- - API for Table -- > [!IMPORTANT] - > The Azure Cosmos DB account must have been created within 30 days of registering for the offer. --1. Register for the Azure AI Advantage offer: <https://aka.ms/AzureAIAdvantageSignupForm> --1. The team reviews your registration and follows up via e-mail. --## After the offer --After 90 days, your Azure Cosmos DB account will continue to run at [standard pricing rates](https://azure.microsoft.com/pricing/details/cosmos-db/). --## Related content --- [Build & modernize AI application reference architecture](https://github.com/Azure/Build-Modern-AI-Apps) |
cosmos-db | Ai Agents | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/ai-agents.md | - Title: AI agent -description: Learn about key concepts for agents and step through the implementation of an AI agent memory system. ----- Previously updated : 06/26/2024---# AI agent --AI agents are designed to perform specific tasks, answer questions, and automate processes for users. These agents vary widely in complexity. They range from simple chatbots, to copilots, to advanced AI assistants in the form of digital or robotic systems that can run complex workflows autonomously. --This article provides conceptual overviews and detailed implementation samples for AI agents. --## What are AI agents? --Unlike standalone large language models (LLMs) or rule-based software/hardware systems, AI agents have these common features: --- **Planning**: AI agents can plan and sequence actions to achieve specific goals. The integration of LLMs has revolutionized their planning capabilities.-- **Tool usage**: Advanced AI agents can use various tools, such as code execution, search, and computation capabilities, to perform tasks effectively. AI agents often use tools through function calling.-- **Perception**: AI agents can perceive and process information from their environment, to make them more interactive and context aware. This information includes visual, auditory, and other sensory data.-- **Memory**: AI agents have the ability to remember past interactions (tool usage and perception) and behaviors (tool usage and planning). They store these experiences and even perform self-reflection to inform future actions. This memory component allows for continuity and improvement in agent performance over time.--> [!NOTE] -> The usage of the term *memory* in the context of AI agents is different from the concept of computer memory (like volatile, nonvolatile, and persistent memory). --### Copilots --Copilots are a type of AI agent. They work alongside users rather than operating independently. Unlike fully automated agents, copilots provide suggestions and recommendations to assist users in completing tasks. --For instance, when a user is writing an email, a copilot might suggest phrases, sentences, or paragraphs. The user might also ask the copilot to find relevant information in other emails or files to support the suggestion (see [retrieval-augmented generation](vector-database.md#retrieval-augmented-generation)). The user can accept, reject, or edit the suggested passages. --### Autonomous agents --Autonomous agents can operate more independently. When you set up autonomous agents to assist with email composition, you could enable them to perform the following tasks: --- Consult existing emails, chats, files, and other internal and public information that's related to the subject matter.-- Perform qualitative or quantitative analysis on the collected information, and draw conclusions that are relevant to the email.-- Write the complete email based on the conclusions and incorporate supporting evidence.-- Attach relevant files to the email.-- Review the email to ensure that all the incorporated information is factually accurate and that the assertions are valid.-- Select the appropriate recipients for **To**, **Cc**, and **Bcc**, and look up their email addresses.-- Schedule an appropriate time to send the email.-- Perform follow-ups if responses are expected but not received.--You can configure the agents to perform each of the preceding tasks with or without human approval. 
--### Multi-agent systems --A popular strategy for achieving performant autonomous agents is the use of multi-agent systems. In multi-agent systems, multiple autonomous agents, whether in digital or robotic form, interact or work together to achieve individual or collective goals. Agents in the system can operate independently and possess their own knowledge or information. Each agent might also have the capability to perceive its environment, make decisions, and execute actions based on its objectives. --Multi-agent systems have these key characteristics: --- **Autonomous**: Each agent functions independently. It makes its own decisions without direct human intervention or control by other agents.-- **Interactive**: Agents communicate and collaborate with each other to share information, negotiate, and coordinate their actions. This interaction can occur through various protocols and communication channels.-- **Goal-oriented**: Agents in a multi-agent system are designed to achieve specific goals, which can be aligned with individual objectives or a shared objective among the agents.-- **Distributed**: Multi-agent systems operate in a distributed manner, with no single point of control. This distribution enhances the system's robustness, scalability, and resource efficiency.--A multi-agent system provides the following advantages over a copilot or a single instance of LLM inference: --- **Dynamic reasoning**: Compared to chain-of-thought or tree-of-thought prompting, multi-agent systems allow for dynamic navigation through various reasoning paths.-- **Sophisticated abilities**: Multi-agent systems can handle complex or large-scale problems by conducting thorough decision-making processes and distributing tasks among multiple agents.-- **Enhanced memory**: Multi-agent systems with memory can overcome the context windows of LLMs to enable better understanding and information retention.--## Implementation of AI agents --### Reasoning and planning --Complex reasoning and planning are the hallmark of advanced autonomous agents. Popular frameworks for autonomous agents incorporate one or more of the following methodologies (with links to arXiv archive pages) for reasoning and planning: --- [Self-Ask](https://arxiv.org/abs/2210.03350)-- Improve on chain of thought by having the model explicitly ask itself (and answer) follow-up questions before answering the initial question. --- [Reason and Act (ReAct)](https://arxiv.org/abs/2210.03629)-- Use LLMs to generate both reasoning traces and task-specific actions in an interleaved manner. Reasoning traces help the model induce, track, and update action plans, along with handling exceptions. Actions allow the model to connect with external sources, such as knowledge bases or environments, to gather additional information. --- [Plan and Solve](https://arxiv.org/abs/2305.04091)-- Devise a plan to divide the entire task into smaller subtasks, and then carry out the subtasks according to the plan. This approach mitigates the calculation errors, missing-step errors, and semantic misunderstanding errors that are often present in zero-shot chain-of-thought prompting. --- [Reflect/Self-critique](https://arxiv.org/abs/2303.11366)-- Use *reflexion* agents that verbally reflect on task feedback signals. These agents maintain their own reflective text in an episodic memory buffer to induce better decision-making in subsequent trials. --### Frameworks --Various frameworks and tools can facilitate the development and deployment of AI agents. 
--For tool usage and perception that don't require sophisticated planning and memory, some popular LLM orchestrator frameworks are LangChain, LlamaIndex, Prompt Flow, and Semantic Kernel. --For advanced and autonomous planning and execution workflows, [AutoGen](https://microsoft.github.io/autogen/) propelled the multi-agent wave that began in late 2022. OpenAI's [Assistants API](https://platform.openai.com/docs/assistants/overview) allows its users to create agents natively within the GPT ecosystem. [LangChain Agents](https://python.langchain.com/v0.1/docs/modules/agents/) and [LlamaIndex Agents](https://docs.llamaindex.ai/en/stable/use_cases/agents/) also emerged around the same time. --> [!TIP] -> The [implementation sample](#implementation-sample) later in this article shows how to build a simple multi-agent system by using one of the popular frameworks and a unified agent memory system. --### AI agent memory system --The prevalent practice for experimenting with AI-enhanced applications from 2022 through 2024 has been using standalone database management systems for various data workflows or types. For example, you can use an in-memory database for caching, a relational database for operational data (including tracing/activity logs and LLM conversation history), and a [pure vector database](vector-database.md#integrated-vector-database-vs-pure-vector-database) for embedding management. --However, this practice of using a complex web of standalone databases can hurt an AI agent's performance. Integrating all these disparate databases into a cohesive, interoperable, and resilient memory system for AI agents is its own challenge. --Also, many of the frequently used database services are not optimal for the speed and scalability that AI agent systems need. These databases' individual weaknesses are exacerbated in multi-agent systems. --#### In-memory databases --In-memory databases are excellent for speed but might struggle with the large-scale data persistence that AI agents need. --#### Relational databases --Relational databases are not ideal for the varied modalities and fluid schemas of data that agents handle. Relational databases require manual efforts and even downtime to manage provisioning, partitioning, and sharding. --#### Pure vector databases --Pure vector databases tend to be less effective for transactional operations, real-time updates, and distributed workloads. The popular pure vector databases nowadays typically offer: --- No guarantee on reads and writes.-- Limited ingestion throughput.-- Low availability (below 99.9%, or an annualized outage of 9 hours or more).-- One consistency level (eventual).-- A resource-intensive in-memory vector index.-- Limited options for multitenancy.-- Limited security.--## Characteristics of a robust AI agent memory system --Just as efficient database management systems are critical to the performance of software applications, it's critical to provide LLM-powered agents with relevant and useful information to guide their inference. Robust memory systems enable organizing and storing various kinds of information that the agents can retrieve at inference time. --Currently, LLM-powered applications often use [retrieval-augmented generation](vector-database.md#retrieval-augmented-generation) that uses basic semantic search or vector search to retrieve passages or documents. [Vector search](vector-database.md#vector-search) can be useful for finding general information. 
But vector search might not capture the specific context, structure, or relationships that are relevant for a particular task or domain. --For example, if the task is to write code, vector search might not be able to retrieve the syntax tree, file system layout, code summaries, or API signatures that are important for generating coherent and correct code. Similarly, if the task is to work with tabular data, vector search might not be able to retrieve the schema, the foreign keys, the stored procedures, or the reports that are useful for querying or analyzing the data. --Weaving together a web of standalone in-memory, relational, and vector databases (as described [earlier](#ai-agent-memory-system)) is not an optimal solution for the varied data types. This approach might work for prototypical agent systems. However, it adds complexity and performance bottlenecks that can hamper the performance of advanced autonomous agents. --A robust memory system should have the following characteristics. --### Multimodal --AI agent memory systems should provide collections that store metadata, relationships, entities, summaries, or other types of information that can be useful for various tasks and domains. These collections can be based on the structure and format of the data, such as documents, tables, or code. Or they can be based on the content and meaning of the data, such as concepts, associations, or procedural steps. --Memory systems aren't just critical to AI agents. They're also important for the humans who develop, maintain, and use these agents. --For example, humans might need to supervise agents' planning and execution workflows in near real time. While supervising, humans might interject with guidance or make in-line edits of agents' dialogues or monologues. Humans might also need to audit the reasoning and actions of agents to verify the validity of the final output. --Human/agent interactions are likely in natural or programming languages, whereas agents "think," "learn," and "remember" through embeddings. This difference poses another requirement on memory systems' consistency across data modalities. --### Operational --Memory systems should provide memory banks that store information that's relevant for the interaction with the user and the environment. Such information might include chat history, user preferences, sensory data, decisions made, facts learned, or other operational data that's updated with high frequency and at high volumes. --These memory banks can help the agents remember short-term and long-term information, avoid repeating or contradicting themselves, and maintain task coherence. These requirements must hold true even if the agents perform a multitude of unrelated tasks in succession. In advanced cases, agents might also test numerous branch plans that diverge or converge at different points. --### Sharable but also separable --At the macro level, memory systems should enable multiple AI agents to collaborate on a problem or process different aspects of the problem by providing shared memory that's accessible to all the agents. Shared memory can facilitate the exchange of information and the coordination of actions among the agents. --At the same time, the memory system must allow agents to preserve their own persona and characteristics, such as their unique collections of prompts and memories. --## Building a robust AI agent memory system --The preceding characteristics require AI agent memory systems to be highly scalable and swift. 
Painstakingly weaving together disparate in-memory, relational, and vector databases (as described [earlier](#ai-agent-memory-system)) might work for early-stage AI-enabled applications. However, this approach adds complexity and performance bottlenecks that can hamper the performance of advanced autonomous agents. --In place of all the standalone databases, Azure Cosmos DB can serve as a unified solution for AI agent memory systems. Its robustness successfully [enabled OpenAI's ChatGPT service](https://www.youtube.com/watch?v=6IIUtEFKJec&t) to scale dynamically with high reliability and low maintenance. Powered by an atom-record-sequence engine, it's the world's first globally distributed [NoSQL](distributed-nosql.md), [relational](distributed-relational.md), and [vector database](vector-database.md) service that offers a serverless mode. AI agents built on top of Azure Cosmos DB offer speed, scale, and simplicity. --### Speed --Azure Cosmos DB provides single-digit millisecond latency. This capability makes it suitable for processes that require rapid data access and management. These processes include caching (both traditional and [semantic caching](https://techcommunity.microsoft.com/t5/azure-architecture-blog/optimize-azure-openai-applications-with-semantic-caching/ba-p/4106867)), transactions, and operational workloads. --Low latency is crucial for AI agents that need to perform complex reasoning, make real-time decisions, and provide immediate responses. In addition, the service's [use of the DiskANN algorithm](nosql/vector-search.md#enroll-in-the-vector-search-preview-feature) provides accurate and fast vector search with minimal memory consumption. --### Scale --Azure Cosmos DB is engineered for global distribution and horizontal scalability. It offers support for multiple-region I/O and multitenancy. --The service helps ensure that memory systems can expand seamlessly and keep up with rapidly growing agents and associated data. The [availability guarantee in its service-level agreement (SLA)](https://www.microsoft.com/licensing/docs/view/Service-Level-Agreements-SLA-for-Online-Services) translates to less than 5 minutes of downtime per year. Pure vector database services, by contrast, come with 9 hours or more of downtime. This availability provides a solid foundation for mission-critical workloads. At the same time, the various service models in Azure Cosmos DB, like [Reserved Capacity](reserved-capacity.md) or Serverless, can help reduce financial costs. --### Simplicity --Azure Cosmos DB can simplify data management and architecture by integrating multiple database functionalities into a single, cohesive platform. --Its integrated vector database capabilities can store, index, and query embeddings alongside the corresponding data in natural or programming languages. This capability enables greater data consistency, scale, and performance. --Its flexibility supports the varied modalities and fluid schemas of the metadata, relationships, entities, summaries, chat history, user preferences, sensory data, decisions, facts learned, or other operational data involved in agent workflows. The database automatically indexes all data without requiring schema or index management, which helps AI agents perform complex queries quickly and efficiently. --Azure Cosmos DB is fully managed, which eliminates the overhead of database administration tasks like scaling, patching, and backups. 
Without this overhead, developers can focus on building and optimizing AI agents without worrying about the underlying data infrastructure. --### Advanced features --Azure Cosmos DB incorporates advanced features such as change feed, which allows tracking and responding to changes in data in real time. This capability is useful for AI agents that need to react to new information promptly. --Additionally, the built-in support for multi-master writes enables high availability and resilience to help ensure continuous operation of AI agents, even after regional failures. --The five available [consistency levels](consistency-levels.md) (from strong to eventual) can also cater to various distributed workloads, depending on the scenario requirements. --> [!TIP] -> You can choose from two Azure Cosmos DB APIs to build your AI agent memory system: -> -> - Azure Cosmos DB for NoSQL, which offers 99.999% availability guarantee and provides [three vector search algorithms](nosql/vector-search.md): IVF, HNSW, and DiskANN -> - vCore-based Azure Cosmos DB for MongoDB, which offers 99.995% availability guarantee and provides [two vector search algorithms](mongodb/vcore/vector-search.md): IVF and HNSW (DiskANN is upcoming) -> -> For information about the availability guarantees for these APIs, see the [service SLAs](https://www.microsoft.com/licensing/docs/view/Service-Level-Agreements-SLA-for-Online-Services). --## Implementation sample --This section explores the implementation of an autonomous agent to process traveler inquiries and bookings in a travel application for a cruise line. --Chatbots are a long-standing concept, but AI agents are advancing beyond basic human conversation to carry out tasks based on natural language. These tasks traditionally required coded logic. The AI travel agent in this implementation sample uses the LangChain Agent framework for agent planning, tool usage, and perception. --The AI travel agent's [unified memory system](#characteristics-of-a-robust-ai-agent-memory-system) uses the [vector database](vector-database.md) and document store capabilities of Azure Cosmos DB to address traveler inquiries and facilitate trip bookings. Using Azure Cosmos DB for this purpose helps ensure speed, scale, and simplicity, as described [earlier](#building-a-robust-ai-agent-memory-system). --The sample agent operates within a Python FastAPI back end. It supports user interactions through a React JavaScript user interface. --### Prerequisites --- An Azure subscription. If you don't have one, you can [try Azure Cosmos DB for free](try-free.md) for 30 days without creating an Azure account. The free trial doesn't require a credit card, and no commitment follows the trial period.-- An account for the OpenAI API or Azure OpenAI Service.-- A vCore cluster in Azure Cosmos DB for MongoDB. You can create one by following [this quickstart](mongodb/vcore/quickstart-portal.md).-- An integrated development environment, such as Visual Studio Code.-- Python 3.11.4 installed in the development environment.--### Download the project --All of the code and sample datasets are available in [this GitHub repository](https://github.com/jonathanscholtes/Travel-AI-Agent-React-FastAPI-and-Cosmos-DB-Vector-Store). 
The repository includes these folders: --- *loader*: This folder contains Python code for loading sample documents and vector embeddings in Azure Cosmos DB.-- *api*: This folder contains the Python FastAPI project for hosting the AI travel agent.-- *web*: This folder contains code for the React web interface.--### Load travel documents into Azure Cosmos DB --The GitHub repository contains a Python project in the *loader* directory. It's intended for loading the sample travel documents into Azure Cosmos DB. --#### Set up the environment --Set up your Python virtual environment in the *loader* directory by running the following command: --```python - python -m venv venv -``` --Activate your environment and install dependencies in the *loader* directory: --```python - venv\Scripts\activate - python -m pip install -r requirements.txt -``` --Create a file named *.env* in the *loader* directory, to store the following environment variables: --```python - OPENAI_API_KEY="<your OpenAI key>" - MONGO_CONNECTION_STRING="mongodb+srv:<your connection string from Azure Cosmos DB>" -``` --#### Load documents and vectors --The Python file *main.py* serves as the central entry point for loading data into Azure Cosmos DB. This code processes the sample travel data from the GitHub repository, including information about ships and destinations. The code also generates travel itinerary packages for each ship and destination, so that travelers can book them by using the AI agent. The CosmosDBLoader tool is responsible for creating collections, vector embeddings, and indexes in the Azure Cosmos DB instance. --Here are the contents of *main.py*: --```python -from cosmosdbloader import CosmosDBLoader -from itinerarybuilder import ItineraryBuilder -import json ---cosmosdb_loader = CosmosDBLoader(DB_Name='travel') --#read in ship data -with open('documents/ships.json') as file: - ship_json = json.load(file) --#read in destination data -with open('documents/destinations.json') as file: - destinations_json = json.load(file) --builder = ItineraryBuilder(ship_json['ships'],destinations_json['destinations']) --# Create five itinerary packages -itinerary = builder.build(5) --# Save itinerary packages to Cosmos DB -cosmosdb_loader.load_data(itinerary,'itinerary') --# Save destinations to Cosmos DB -cosmosdb_loader.load_data(destinations_json['destinations'],'destinations') --# Save ships to Cosmos DB, create vector store -collection = cosmosdb_loader.load_vectors(ship_json['ships'],'ships') --# Add text search index to ship name -collection.create_index([('name', 'text')]) -``` --Load the documents, load the vectors, and create indexes by running the following command from the *loader* directory: --```python - python main.py -``` --Here's the output of *main.py*: --```markdown build itinerary--load itinerary--load destinations--load vectors ships---``` --### Build the AI travel agent by using Python FastAPI --The AI travel agent is hosted in a back end API through Python FastAPI, which facilitates integration with the front-end user interface. The API project processes agent requests by [grounding](https://techcommunity.microsoft.com/t5/fasttrack-for-azure/grounding-llms/ba-p/3843857) the LLM prompts against the data layer, specifically the vectors and documents in Azure Cosmos DB. --The agent makes use of various tools, particularly the Python functions provided at the API service layer. This article focuses on the code necessary for AI agents within the API code. 
--The API project in the GitHub repository is structured as follows: --- *Data modeling components* use Pydantic models.-- *Web layer components* are responsible for routing requests and managing communication.-- *Service layer components* are responsible for primary business logic and interaction with the data layer, the LangChain Agent, and agent tools.-- *Data layer components* are responsible for interacting with Azure Cosmos DB for MongoDB document storage and vector search.--### Set up the environment for the API --We used Python version 3.11.4 for the development and testing of the API. --Set up your Python virtual environment in the *api* directory: --```python - python -m venv venv -``` --Activate your environment and install dependencies by using the *requirements* file in the *api* directory: --```python - venv\Scripts\activate - python -m pip install -r requirements.txt -``` --Create a file named *.env* in the *api* directory, to store your environment variables: --```python - OPENAI_API_KEY="<your Open AI key>" - MONGO_CONNECTION_STRING="mongodb+srv:<your connection string from Azure Cosmos DB>" -``` --Now that you've configured the environment and set up variables, run the following command from the *api* directory to initiate the server: --```python - python app.py -``` --The FastAPI server starts on the localhost loopback 127.0.0.1 port 8000 by default. You can access the Swagger documents by using the following localhost address: `http://127.0.0.1:8000/docs`. --### Use a session for the AI agent memory --It's imperative for the travel agent to be able to reference previously provided information within the ongoing conversation. This ability is commonly known as *memory* in the context of LLMs. --To achieve this objective, use the chat message history that's stored in the Azure Cosmos DB instance. The history for each chat session is stored through a session ID to ensure that only messages from the current conversation session are accessible. This necessity is the reason behind the existence of a `Get Session` method in the API. It's a placeholder method for managing web sessions to illustrate the use of chat message history. --Select **Try it out** for `/session/`. ---```python -{ - "session_id": "0505a645526f4d68a3603ef01efaab19" -} -``` --For the AI agent, you only need to simulate a session. The stubbed-out method merely returns a generated session ID for tracking message history. In a practical implementation, this session would be stored in Azure Cosmos DB and potentially in React `localStorage`. --Here are the contents of *web/session.py*: --```python - @router.get("/") - def get_session(): - return {'session_id':str(uuid.uuid4().hex)} -``` --### Start a conversation with the AI travel agent --Use the session ID that you obtained from the previous step to start a new dialogue with the AI agent, so you can validate its functionality. Conduct the test by submitting the following phrase: "I want to take a relaxing vacation." --Select **Try it out** for `/agent/agent_chat`. ---Use this example parameter: --```python -{ - "input": "I want to take a relaxing vacation.", - "session_id": "0505a645526f4d68a3603ef01efaab19" -} -``` --The initial execution results in a recommendation for the Tranquil Breeze Cruise and the Fantasy Seas Adventure Cruise, because the agent anticipates that they're the most relaxing cruises available through the vector search. 
These documents have the highest score for `similarity_search_with_score` called in the data layer of the API, `data.mongodb.travel.similarity_search()`. --The similarity search scores appear as output from the API for debugging purposes. Here's the output after a call to `data.mongodb.travel.similarity_search()`: --```markdown -0.8394561085977978 -0.8086545112328692 -2 -``` --> [!TIP] -> If documents are not being returned for vector search, modify the `similarity_search_with_score` limit or the score filter value as needed (`[doc for doc, score in docs if score >=.78]`) in `data.mongodb.travel.similarity_search()`. --Calling `agent_chat` for the first time creates a new collection named `history` in Azure Cosmos DB to store the conversation by session. This call enables the agent to access the stored chat message history as needed. Subsequent executions of `agent_chat` with the same parameters produce varying results, because it draws from memory. --### Walk through the AI agent --When you're integrating the AI agent into the API, the web search components are responsible for initiating all requests. The web search components are followed by the search service, and finally the data components. --In this specific case, you use a MongoDB data search that connects to Azure Cosmos DB. The layers facilitate the exchange of model components, with the AI agent and the AI agent tool code residing in the service layer. This approach enables the seamless interchangeability of data sources. It also extends the capabilities of the AI agent with additional, more intricate functionalities or tools. ---#### Service layer --The service layer forms the cornerstone of core business logic. In this particular scenario, the service layer plays a crucial role as the repository for the LangChain Agent code. It facilitates the seamless integration of user prompts with Azure Cosmos DB data, conversation memory, and agent functions for the AI agent. --The service layer employs a singleton pattern module for handling agent-related initializations in the *init.py* file. Here are the contents of *service/init.py*: --```python -from dotenv import load_dotenv -from os import environ -from langchain.globals import set_llm_cache -from langchain_openai import ChatOpenAI -from langchain_mongodb.chat_message_histories import MongoDBChatMessageHistory -from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder -from langchain_core.runnables.history import RunnableWithMessageHistory -from langchain.agents import AgentExecutor, create_openai_tools_agent -from service import TravelAgentTools as agent_tools --load_dotenv(override=False) ---chat : ChatOpenAI | None=None -agent_with_chat_history : RunnableWithMessageHistory | None=None --def LLM_init(): - global chat,agent_with_chat_history - chat = ChatOpenAI(model_name="gpt-3.5-turbo-16k",temperature=0) - tools = [agent_tools.vacation_lookup, agent_tools.itinerary_lookup, agent_tools.book_cruise ] -- prompt = ChatPromptTemplate.from_messages( - [ - ( - "system", - "You are a helpful and friendly travel assistant for a cruise company. Answer travel questions to the best of your ability providing only relevant information. In order to book a cruise you will need to capture the person's name.", - ), - MessagesPlaceholder(variable_name="chat_history"), - ("user", "Answer should be embedded in html tags. {input}"), - MessagesPlaceholder(variable_name="agent_scratchpad"), - ] - ) -- #Answer should be embedded in HTML tags. 
Only answer questions related to cruise travel, If you can not answer respond with \"I am here to assist with your travel questions.\". --- agent = create_openai_tools_agent(chat, tools, prompt) - agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True) -- agent_with_chat_history = RunnableWithMessageHistory( - agent_executor, - lambda session_id: MongoDBChatMessageHistory( database_name="travel", - collection_name="history", - connection_string=environ.get("MONGO_CONNECTION_STRING"), - session_id=session_id), - input_messages_key="input", - history_messages_key="chat_history", -) --LLM_init() -``` --The *init.py* file initiates the loading of environment variables from an *.env* file by using the `load_dotenv(override=False)` method. Then, a global variable named `agent_with_chat_history` is instantiated for the agent. This agent is intended for use by *TravelAgent.py*. --The `LLM_init()` method is invoked during module initialization to configure the AI agent for conversation via the API web layer. The OpenAI `chat` object is instantiated through the GPT-3.5 model and incorporates specific parameters such as model name and temperature. The `chat` object, tools list, and prompt template are combined to generate `AgentExecutor`, which operates as the AI travel agent. --The agent with history, `agent_with_chat_history`, is established through `RunnableWithMessageHistory` with chat history (`MongoDBChatMessageHistory`). This action enables it to maintain a complete conversation history via Azure Cosmos DB. --#### Prompt --The LLM prompt initially began with the simple statement "You are a helpful and friendly travel assistant for a cruise company." However, testing showed that you could obtain more consistent results by including the instruction "Answer travel questions to the best of your ability, providing only relevant information. To book a cruise, capturing the person's name is essential." The results appear in HTML format to enhance the visual appeal of the web interface. --#### Agent tools --[Tools](#what-are-ai-agents) are interfaces that an agent can use to interact with the world, often through function calling. --When you're creating an agent, you must furnish it with a set of tools that it can use. The `@tool` decorator offers the most straightforward approach to defining a custom tool. --By default, the decorator uses the function name as the tool name, although you can replace it by providing a string as the first argument. The decorator uses the function's docstring as the tool's description, so it requires the provisioning of a docstring. 
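For example, here's a minimal, hypothetical tool definition that overrides the default tool name and supplies the required docstring. It isn't part of the sample repository; the repository's actual tools follow in *TravelAgentTools.py*.

```python
# Hypothetical tool, shown only to illustrate the @tool decorator options.
from langchain_core.tools import tool


@tool("destination_weather")  # overrides the default name (the function name)
def weather_lookup(destination: str) -> str:
    """Look up the current weather for a cruise destination."""
    # A real implementation would call a weather service here.
    return f"The weather in {destination} is sunny."
```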
--Here are the contents of *service/TravelAgentTools.py*: --```python -from langchain_core.tools import tool -from langchain.docstore.document import Document -from data.mongodb import travel -from model.travel import Ship ---@tool -def vacation_lookup(input:str) -> list[Document]: - """find information on vacations and trips""" - ships: list[Ship] = travel.similarity_search(input) - content = "" -- for ship in ships: - content += f" Cruise ship {ship.name} description: {ship.description} with amenities {'/n-'.join(ship.amenities)} " -- return content --@tool -def itinerary_lookup(ship_name:str) -> str: - """find ship itinerary, cruise packages and destinations by ship name""" - it = travel.itnerary_search(ship_name) - results = "" -- for i in it: - results += f" Cruise Package {i.Name} room prices: {'/n-'.join(i.Rooms)} schedule: {'/n-'.join(i.Schedule)}" -- return results ---@tool -def book_cruise(package_name:str, passenger_name:str, room: str )-> str: - """book cruise using package name and passenger name and room """ - print(f"Package: {package_name} passenger: {passenger_name} room: {room}") -- # LLM defaults empty name to John Doe - if passenger_name == "John Doe": - return "In order to book a cruise I need to know your name." - else: - if room == '': - return "which room would you like to book" - return "Cruise has been booked, ref number is 343242" -``` --The *TravelAgentTools.py* file defines three tools: --- `vacation_lookup` conducts a vector search against Azure Cosmos DB. It uses `similarity_search` to retrieve relevant travel-related material.-- `itinerary_lookup` retrieves cruise package details and schedules for a specified cruise ship.-- `book_cruise` books a cruise package for a passenger.--Specific instructions ("In order to book a cruise I need to know your name") might be necessary to ensure the capture of the passenger's name and room number for booking the cruise package, even though you included such instructions in the LLM prompt. --#### AI agent --The fundamental concept that underlies agents is to use a language model for selecting a sequence of actions to execute. --Here are the contents of *service/TravelAgent.py*: --```python -from .init import agent_with_chat_history -from model.prompt import PromptResponse -import time -from dotenv import load_dotenv --load_dotenv(override=False) ---def agent_chat(input:str, session_id:str)->str: -- start_time = time.time() -- results=agent_with_chat_history.invoke( - {"input": input}, - config={"configurable": {"session_id": session_id}}, - ) -- return PromptResponse(text=results["output"],ResponseSeconds=(time.time() - start_time)) -``` --The *TravelAgent.py* file is straightforward, because `agent_with_chat_history` and its dependencies (tools, prompt, and LLM) are initialized and configured in the *init.py* file. This file calls the agent by using the input received from the user, along with the session ID for conversation memory. Afterward, `PromptResponse` (model/prompt) is returned with the agent's output and response time. --## AI agent integration with the React user interface --With the successful loading of the data and accessibility of the AI agent through the API, you can now complete the solution by establishing a web user interface (by using React) for your travel website. Using the capabilities of React helps illustrate the seamless integration of the AI agent into a travel site. This integration enhances the user experience with a conversational travel assistant for inquiries and bookings. 
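Before you build the React front end, you can optionally verify the API end to end with a short Python script. This is a sketch only; it assumes the FastAPI server from the previous section is running on the default localhost port and that the `requests` package is installed.

```python
# Quick end-to-end check of the session and agent_chat endpoints.
import requests

API_HOST = "http://127.0.0.1:8000"

# Obtain a session ID for conversation memory.
session_id = requests.get(f"{API_HOST}/session/").json()["session_id"]

# Send a prompt to the AI travel agent and print the HTML-formatted answer.
response = requests.post(
    f"{API_HOST}/agent/agent_chat",
    json={"input": "I want to take a relaxing vacation.", "session_id": session_id},
)
print(response.json()["text"])
```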
--### Set up the environment for React --Install Node.js and the dependencies before testing the React interface. --Run the following command from the *web* directory to perform a clean installation of project dependencies. The installation might take some time. --```javascript - npm ci -``` --Next, create a file named *.env* within the *web* directory to facilitate the storage of environment variables. Include the following details in the newly created *.env* file: --`REACT_APP_API_HOST=http://127.0.0.1:8000` --Now, run the following command from the *web* directory to initiate the React web user interface: --```javascript - npm start -``` --Running the previous command opens the React web application. --### Walk through the React web interface --The web project of the GitHub repository is a straightforward application to facilitate user interaction with the AI agent. The primary components required to converse with the agent are *TravelAgent.js* and *ChatLayout.js*. The *Main.js* file serves as the central module or user landing page. ---#### Main --The main component serves as the central manager of the application. It acts as the designated entry point for routing. Within the render function, it produces JSX code to delineate the main page layout. This layout encompasses placeholder elements for the application, such as logos and links, a section that houses the travel agent component, and a footer that contains a sample disclaimer about the application's nature. --Here are the contents of *main.js*: --```javascript - import React, { Component } from 'react' -import { Stack, Link, Paper } from '@mui/material' -import TravelAgent from './TripPlanning/TravelAgent' --import './Main.css' --class Main extends Component { - constructor() { - super() -- } -- render() { - return ( - <div className="Main"> - <div className="Main-Header"> - <Stack direction="row" spacing={5}> - <img src="/mainlogo.png" alt="Logo" height={'120px'} /> - <Link - href="#" - sx={{ color: 'white', fontWeight: 'bold', fontSize: 18 }} - underline="hover" - > - Ships - </Link> - <Link - href="#" - sx={{ color: 'white', fontWeight: 'bold', fontSize: 18 }} - underline="hover" - > - Destinations - </Link> - </Stack> - </div> - <div className="Main-Body"> - <div className="Main-Content"> - <Paper elevation={3} sx={{p:1}} > - <Stack - direction="row" - justifyContent="space-evenly" - alignItems="center" - spacing={2} - > - - <Link href="#"> - <img - src={require('./images/destinations.png')} width={'400px'} /> - </Link> - <TravelAgent ></TravelAgent> - <Link href="#"> - <img - src={require('./images/ships.png')} width={'400px'} /> - </Link> - - </Stack> - </Paper> - </div> - </div> - <div className="Main-Footer"> - <b>Disclaimer: Sample Application</b> - <br /> - Please note that this sample application is provided for demonstration - purposes only and should not be used in production environments - without proper validation and testing. - </div> - </div> - ) - } -} --export default Main -``` --#### Travel agent --The travel agent component has a straightforward purpose: capturing user inputs and displaying responses. It plays a key role in managing the integration with the back-end AI agent, primarily by capturing sessions and forwarding user prompts to the FastAPI service. The resulting responses are stored in an array for display, facilitated by the chat layout component. 
--Here are the contents of *TripPlanning/TravelAgent.js*: --```javascript -import React, { useState, useEffect } from 'react' -import { Button, Box, Link, Stack, TextField } from '@mui/material' -import SendIcon from '@mui/icons-material/Send' -import { Dialog, DialogContent } from '@mui/material' -import ChatLayout from './ChatLayout' -import './TravelAgent.css' --export default function TravelAgent() { - const [open, setOpen] = React.useState(false) - const [session, setSession] = useState('') - const [chatPrompt, setChatPrompt] = useState( - 'I want to take a relaxing vacation.', - ) - const [message, setMessage] = useState([ - { - message: 'Hello, how can I assist you today?', - direction: 'left', - bg: '#E7FAEC', - }, - ]) -- const handlePrompt = (prompt) => { - setChatPrompt('') - setMessage((message) => [ - ...message, - { message: prompt, direction: 'right', bg: '#E7F4FA' }, - ]) - console.log(session) - fetch(process.env.REACT_APP_API_HOST + '/agent/agent_chat', { - method: 'POST', - headers: { - 'Content-Type': 'application/json', - }, - body: JSON.stringify({ input: prompt, session_id: session }), - }) - .then((response) => response.json()) - .then((res) => { - setMessage((message) => [ - ...message, - { message: res.text, direction: 'left', bg: '#E7FAEC' }, - ]) - }) - } -- const handleSession = () => { - fetch(process.env.REACT_APP_API_HOST + '/session/') - .then((response) => response.json()) - .then((res) => { - setSession(res.session_id) - }) - } -- const handleClickOpen = () => { - setOpen(true) - } -- const handleClose = (value) => { - setOpen(false) - } -- useEffect(() => { - if (session === '') handleSession() - }, []) -- return ( - <Box> - <Dialog onClose={handleClose} open={open} maxWidth="md" fullWidth="true"> - <DialogContent> - <Stack> - <Box sx={{ height: '500px' }}> - <div className="AgentArea"> - <ChatLayout messages={message} /> - </div> - </Box> - <Stack direction="row" spacing={0}> - <TextField - sx={{ width: '80%' }} - variant="outlined" - label="Message" - helperText="Chat with AI Travel Agent" - defaultValue="I want to take a relaxing vacation." - value={chatPrompt} - onChange={(event) => setChatPrompt(event.target.value)} - ></TextField> - <Button - variant="contained" - endIcon={<SendIcon />} - sx={{ mb: 3, ml: 3, mt: 1 }} - onClick={(event) => handlePrompt(chatPrompt)} - > - Submit - </Button> - </Stack> - </Stack> - </DialogContent> - </Dialog> - <Link href="#" onClick={() => handleClickOpen()}> - <img src={require('.././images/planvoyage.png')} width={'400px'} /> - </Link> - </Box> - ) -} -``` --Select **Effortlessly plan your voyage** to open the travel assistant. --#### Chat layout --The chat layout component oversees the arrangement of the chat. It systematically processes the chat messages and implements the formatting specified in the `message` JSON object. --Here are the contents of *TripPlanning/ChatLayout.py*: --```javascript -import React from 'react' -import { Box, Stack } from '@mui/material' -import parse from 'html-react-parser' -import './ChatLayout.css' --export default function ChatLayout(messages) { - return ( - <Stack direction="column" spacing="1"> - {messages.messages.map((obj, i = 0) => ( - <div className="bubbleContainer" key={i}> - <Box - key={i++} - className="bubble" - sx={{ float: obj.direction, fontSize: '10pt', background: obj.bg }} - > - <div>{parse(obj.message)}</div> - </Box> - </div> - ))} - </Stack> - ) -} -``` --User prompts are on the right side and colored blue. 
Responses from the AI travel agent are on the left side and colored green. As the following image shows, the HTML-formatted responses are accounted for in the conversation. ---When your AI agent is ready to go into production, you can use semantic caching to improve query performance by 80% and to reduce LLM inference and API call costs. To implement semantic caching, see [this post on the Stochastic Coder blog](https://stochasticcoder.com/2024/03/22/improve-llm-performance-using-semantic-cache-with-cosmos-db/). ---## Related content --- [30-day free trial without an Azure subscription](https://azure.microsoft.com/try/cosmosdb/)-- [90-day free trial and up to $6,000 in throughput credits with Azure AI Advantage](ai-advantage.md)-- [Azure Cosmos DB lifetime free tier](free-tier.md) |
cosmos-db | Analytical Store Change Data Capture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-change-data-capture.md | - Title: Change data capture in analytical store- -description: Change data capture (CDC) in Azure Cosmos DB analytical store allows you to efficiently consume a continuous and incremental feed of changed data. ----- Previously updated : 11/28/2023---# Change Data Capture in Azure Cosmos DB analytical store ---Change data capture (CDC) in [Azure Cosmos DB analytical store](analytical-store-introduction.md) allows you to efficiently consume a continuous and incremental feed of changed (inserted, updated, and deleted) data from analytical store. Seamlessly integrated with Azure Synapse and Azure Data Factory, it provides you with a scalable no-code experience for high data volume. Because the change data capture feature is based on analytical store, it [doesn't consume provisioned RUs, doesn't affect your transactional workloads](analytical-store-introduction.md#decoupled-performance-for-analytical-workloads), provides lower latency, and has lower TCO. --The change data capture feature in Azure Cosmos DB analytical store can write to various sinks using an Azure Synapse or Azure Data Factory data flow. ---For more information on supported sink types in a mapping data flow, see [data flow supported sink types](../data-factory/data-flow-sink.md#supported-sinks). --In addition to providing an incremental data feed from analytical store to diverse targets, change data capture supports the following capabilities: --- Supports capturing deletes and intermediate updates-- Ability to filter the change feed for a specific type of operation (**Insert** | **Update** | **Delete** | **TTL**)-- Supports applying filters, projections, and transformations on the change feed via source query-- Multiple change feeds on the same container can be consumed simultaneously-- Each change in the container appears exactly once in the change data capture feed, and the checkpoints are managed internally for you-- Changes can be synchronized "from the beginning," "from a given timestamp," or "from now"-- There's no limitation around the fixed data retention period for which changes are available--## Efficient incremental data capture with internally managed checkpoints --Each change in a Cosmos DB container appears exactly once in the CDC feed, and the checkpoints are managed internally for you. This helps to address the following disadvantages of the common pattern of using custom checkpoints based on the "_ts" value: -- * The "_ts" filter is applied against the data files, which doesn't always guarantee a minimal data scan. The internally managed GLSN-based checkpoints in the new CDC capability ensure that incremental data identification is done based only on metadata, which guarantees minimal data scanning in each stream. --* The analytical store sync process doesn't guarantee "_ts"-based ordering, which means that an incremental record's "_ts" could be lower than the last checkpointed "_ts" and be missed in the incremental stream. The new CDC doesn't use "_ts" to identify incremental records, and thus guarantees that no incremental records are missed. --## Features --Change data capture in Azure Cosmos DB analytical store supports the following key features.
--### Capturing changes from the beginning --When the `Start from beginning` option is selected, the initial load includes a full snapshot of container data in the first run, and changed or incremental data is captured in subsequent runs. This capture is limited by the `analytical TTL` property, and documents removed from analytical store by TTL aren't included in the change feed. Example: Imagine a container with `analytical TTL` set to 31536000 seconds, which is equivalent to 1 year. If you create a CDC process for this container, only documents newer than 1 year are included in the initial load. --### Capturing changes from a given timestamp --When the `Start from timestamp` option is selected, the initial load processes the data from the given timestamp, and incremental or changed data is captured in subsequent runs. This process is also limited by the `analytical TTL` property. --### Capturing changes from now --When you choose to capture changes from now, past operations of the container aren't captured. ---### Capturing deletes, intermediate updates, and TTLs --The change data capture feature for the analytical store captures deletes, intermediate updates, and TTL operations. The captured deletes and updates can be applied on sinks that support delete and update operations. Because the {_rid} value uniquely identifies the records, specifying {_rid} as the key column on the sink side ensures that the update and delete operations are reflected on the sink. --TTL operations are considered deletes. Check the [source settings](get-started-change-data-capture.md) section for more details, including the support for intermediate updates and deletes in sinks. --### Filter the change feed for a specific type of operation --You can filter the change data capture feed for a specific type of operation. For example, you can selectively capture the insert and update operations only, thereby ignoring the user-delete and TTL-delete operations. --### Applying filters, projections, and transformations on the change feed via source query --You can optionally use a source query to specify filters, projections, and transformations, which are all pushed down to the columnar analytical store. Here's a sample source query that captures only incremental records with the filter `Category = 'Urban'`. This sample query projects only five fields and applies a simple transformation: --```sql -SELECT ProductId, Product, Segment, concat(Manufacturer, '-', Category) as ManufacturerCategory -FROM c -WHERE Category = 'Urban' -``` ---### Multiple CDC processes --You can create multiple processes to consume CDC in analytical store. This approach brings flexibility to support different scenarios and requirements. While one process might have no data transformations and multiple sinks, another one can have data flattening and one sink, and they can run in parallel. ---### Throughput isolation, lower latency, and lower TCO --Operations on Cosmos DB analytical store don't consume the provisioned RUs and so don't affect your transactional workloads. Change data capture with analytical store also has lower latency and lower TCO. The lower latency comes from the analytical store enabling better parallelism for data processing, which also reduces the overall TCO and helps you drive cost efficiencies. --## Scenarios --Here are common scenarios where you could use change data capture and the analytical store.
--### Consuming incremental data from Cosmos DB --You can use analytical store change data capture, if you're currently using or planning to use: --- Incremental data capture using Azure Data Factory Data Flows or Copy activity.-- One time batch processing using Azure Data Factory.-- Streaming Cosmos DB data- - The analytical store has up to 2-min latency to sync transactional store data. You can schedule Data Flows in Azure Data Factory every minute. - - If you need to stream without the above latency, we recommend using the change feed feature of the transactional store. -- Capturing deletes, incremental changes, applying filters on Cosmos DB Data.- - If you're using Azure Functions triggers or any other option with change feed and would like to capture deletes, incremental changes, apply transformations etc.; we recommend change data capture over analytical store. --### Incremental feed to analytical platform of your choice --Change data capture capability enables an end-to-end analytical solution providing you with the flexibility to use Azure Cosmos DB data with any of the supported sink types. For more information on supported sink types, see [data flow supported sink types](../data-factory/data-flow-sink.md#supported-sinks). Change data capture also enables you to bring Azure Cosmos DB data into a centralized data lake and join the data with data from other diverse sources. You can flatten the data, partition it, and apply more transformations either in Azure Synapse Analytics or Azure Data Factory. --## Change data capture on Azure Cosmos DB for MongoDB containers --The linked service interface for the API for MongoDB isn't available within Azure Data Factory data flows yet. You can use your API for MongoDB's account endpoint with the **Azure Cosmos DB for NoSQL** linked service interface as a work around until the Mongo linked service is directly supported. --In the interface for a new NoSQL linked service, select **Enter Manually** to provide the Azure Cosmos DB account information. Here, use the account's NoSQL document endpoint (Example: `https://<account-name>.documents.azure.com:443/`) instead of the Mongo DB endpoint (Example: `mongodb://<account-name>.mongo.cosmos.azure.com:10255/`) --## Next steps --> [!div class="nextstepaction"] -> [Get started with change data capture in the analytical store](get-started-change-data-capture.md) |
cosmos-db | Analytical Store Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-introduction.md | - Title: What is Azure Cosmos DB analytical store? -description: Learn about Azure Cosmos DB transactional (row-based) and analytical (column-based) store. Benefits of analytical store, performance impact for large-scale workloads, and auto sync of data from transactional store to analytical store. ----- Previously updated : 05/08/2024----# What is Azure Cosmos DB analytical store? --- > [!IMPORTANT] - > Mirroring in Microsoft Fabric is now available in preview for the NoSQL API. This feature provides all the capabilities of Azure Synapse Link with better analytical performance, the ability to unify your data estate with Fabric OneLake, and open access to your data in OneLake in Delta Parquet format. If you are considering Azure Synapse Link, we recommend that you try mirroring to assess overall fit for your organization. To get started with mirroring, see [this article](/fabric/database/mirrored-database/azure-cosmos-db?context=/azure/cosmos-db/context/context). --To get started with Azure Synapse Link, visit [“Getting started with Azure Synapse Link”](synapse-link.md). --Azure Cosmos DB analytical store is a fully isolated column store for enabling large-scale analytics against operational data in Azure Cosmos DB, without any impact on your transactional workloads. --Azure Cosmos DB transactional store is schema-agnostic, and it allows you to iterate on your transactional applications without having to deal with schema or index management. In contrast, Azure Cosmos DB analytical store is schematized to optimize for analytical query performance. This article describes the analytical store in detail. --## Challenges with large-scale analytics on operational data --The multi-model operational data in an Azure Cosmos DB container is internally stored in an indexed row-based "transactional store". The row store format is designed to allow fast transactional reads and writes with order-of-milliseconds response times, and operational queries. If your dataset grows large, complex analytical queries can be expensive in terms of provisioned throughput on the data stored in this format. High consumption of provisioned throughput, in turn, impacts the performance of transactional workloads that are used by your real-time applications and services. --Traditionally, to analyze large amounts of data, operational data is extracted from Azure Cosmos DB's transactional store and stored in a separate data layer. For example, the data is stored in a data warehouse or data lake in a suitable format. This data is later used for large-scale analytics and analyzed using compute engines such as Apache Spark clusters. The separation of analytical data from operational data results in delays for analysts who want to use the most recent data. --The ETL pipelines also become complex when handling updates to the operational data when compared to handling only newly ingested operational data. --## Column-oriented analytical store --Azure Cosmos DB analytical store addresses the complexity and latency challenges that occur with the traditional ETL pipelines. Azure Cosmos DB analytical store can automatically sync your operational data into a separate column store. The column store format is well suited to large-scale analytical queries, which improves the latency of those queries.
--Using Azure Synapse Link, you can now build no-ETL HTAP solutions by directly linking to Azure Cosmos DB analytical store from Azure Synapse Analytics. It enables you to run near real-time large-scale analytics on your operational data. --## Features of analytical store --When you enable analytical store on an Azure Cosmos DB container, a new column store is internally created based on the operational data in your container. This column store is persisted separately from the row-oriented transactional store for that container, in a storage account that is fully managed by Azure Cosmos DB, in an internal subscription. Customers don't need to spend time with storage administration. The inserts, updates, and deletes to your operational data are automatically synced to analytical store. You don't need the Change Feed or ETL to sync the data. --## Column store for analytical workloads on operational data --Analytical workloads typically involve aggregations and sequential scans of selected fields. The data in the analytical store is stored in column-major order, allowing values of each field to be serialized together, where applicable. This format reduces the IOPS required to scan or compute statistics over specific fields. It dramatically improves the query response times for scans over large data sets. --For example, if your operational tables are in the following format: ---The row store persists the above data in a serialized format, per row, on the disk. This format allows for faster transactional reads, writes, and operational queries, such as "Return information about Product 1". However, as the dataset grows large, running complex analytical queries on the data can be expensive. For example, if you want to get "the sales trends for a product under the category named 'Equipment' across different business units and months", you need to run a complex query. Large scans on this dataset can get expensive in terms of provisioned throughput and can also impact the performance of the transactional workloads powering your real-time applications and services. --Analytical store, which is a column store, is better suited for such queries because it serializes similar fields of data together and reduces the disk IOPS. --The following image shows transactional row store vs. analytical column store in Azure Cosmos DB: ---## Decoupled performance for analytical workloads --There's no impact on the performance of your transactional workloads due to analytical queries, as the analytical store is separate from the transactional store. Analytical store doesn't need separate request units (RUs) to be allocated. --## Auto-Sync --Auto-Sync refers to the fully managed capability of Azure Cosmos DB where the inserts, updates, and deletes to operational data are automatically synced from transactional store to analytical store in near real time. Auto-sync latency is usually within 2 minutes. In the case of a shared throughput database with a large number of containers, the auto-sync latency of individual containers could be higher and take up to 5 minutes. --At the end of each execution of the automatic sync process, your transactional data is immediately available for Azure Synapse Analytics runtimes: --* Azure Synapse Analytics Spark pools can read all data, including the most recent updates, through Spark tables, which are updated automatically, or via the `spark.read` command, which always reads the last state of the data.
--* Azure Synapse Analytics SQL Serverless pools can read all data, including the most recent updates, through views, which are updated automatically, or via `SELECT` together with the `OPENROWSET` commands, which always reads the latest status of the data. --> [!NOTE] -> Your transactional data will be synchronized to analytical store even if your transactional time-to-live (TTL) is smaller than 2 minutes. --> [!NOTE] -> Please note that if you delete your container, analytical store is also deleted. --## Scalability & elasticity --Azure Cosmos DB transactional store uses horizontal partitioning to elastically scale the storage and throughput without any downtime. Horizontal partitioning in the transactional store provides scalability & elasticity in auto-sync to ensure data is synced to the analytical store in near real time. The data sync happens regardless of the transactional traffic throughput, whether it's 1000 operations/sec or 1 million operations/sec, and it doesn't impact the provisioned throughput in the transactional store. --## <a id="analytical-schema"></a>Automatically handle schema updates --Azure Cosmos DB transactional store is schema-agnostic, and it allows you to iterate on your transactional applications without having to deal with schema or index management. In contrast to this, Azure Cosmos DB analytical store is schematized to optimize for analytical query performance. With the auto-sync capability, Azure Cosmos DB manages the schema inference over the latest updates from the transactional store. It also manages the schema representation in the analytical store out-of-the-box which, includes handling nested data types. --As your schema evolves, and new properties are added over time, the analytical store automatically presents a unionized schema across all historical schemas in the transactional store. --> [!NOTE] -> In the context of analytical store, we consider the following structures as property: -> * JSON "elements" or "string-value pairs separated by a `:` ". -> * JSON objects, delimited by `{` and `}`. -> * JSON arrays, delimited by `[` and `]`. ---### Schema constraints --The following constraints are applicable on the operational data in Azure Cosmos DB when you enable analytical store to automatically infer and represent the schema correctly: --* You can have a maximum of 1000 properties across all nested levels in the document schema and a maximum nesting depth of 127. - * Only the first 1000 properties are represented in the analytical store. - * Only the first 127 nested levels are represented in the analytical store. - * The first level of a JSON document is its `/` root level. - * Properties in the first level of the document will be represented as columns. ---* Sample scenarios: - * If your document's first level has 2000 properties, the sync process will represent the first 1000 of them. - * If your documents have five levels with 200 properties in each one, the sync process will represent all properties. - * If your documents have 10 levels with 400 properties in each one, the sync process will fully represent the two first levels and only half of the third level. --* The hypothetical document below contains four properties and three levels. - * The levels are `root`, `myArray`, and the nested structure within the `myArray`. - * The properties are `id`, `myArray`, `myArray.nested1`, and `myArray.nested2`. - * The analytical store representation will have two columns, `id`, and `myArray`. 
You can use Spark or T-SQL functions to also expose the nested structures as columns. ---```json -{ - "id": "1", - "myArray": [ - "string1", - "string2", - { - "nested1": "abc", - "nested2": "cde" - } - ] -} -``` --* While JSON documents (and Azure Cosmos DB collections/containers) are case-sensitive from the uniqueness perspective, analytical store isn't. -- * **In the same document:** Property names in the same level should be unique when compared case-insensitively. For example, the following JSON document has "Name" and "name" in the same level. While it's a valid JSON document, it doesn't satisfy the uniqueness constraint and hence won't be fully represented in the analytical store. In this example, "Name" and "name" are the same when compared in a case-insensitive manner. Only `"Name": "fred"` will be represented in analytical store, because it's the first occurrence; `"name": "john"` won't be represented at all. - - - ```json - {"id": 1, "Name": "fred", "name": "john"} - ``` - - * **In different documents:** Properties in the same level and with the same name, but in different cases, will be represented within the same column, using the name format of the first occurrence. For example, the following JSON documents have `"Name"` and `"name"` in the same level. Since the first document format is `"Name"`, this is what will be used to represent the property name in analytical store. In other words, the column name in analytical store will be `"Name"`. Both `"fred"` and `"john"` will be represented in the `"Name"` column. --- ```json - {"id": 1, "Name": "fred"} - {"id": 2, "name": "john"} - ``` --* The first document of the collection defines the initial analytical store schema. - * Documents with more properties than the initial schema will generate new columns in analytical store. - * Columns can't be removed. - * The deletion of all documents in a collection doesn't reset the analytical store schema. - * There's no schema versioning. The last version inferred from transactional store is what you'll see in analytical store. --* Currently, Azure Synapse Spark can't read properties that contain the special characters listed below in their names. Azure Synapse SQL serverless isn't affected. - * : - * ` - * , - * ; - * {} - * () - * \n - * \t - * = - * " --> [!NOTE] -> White spaces are also listed in the Spark error message returned when you reach this limitation. However, white spaces receive special treatment; see the items below for more details. - -* If your property names use the characters listed above, the alternatives are: - * Change your data model in advance to avoid these characters. - * Since schema reset currently isn't supported, you can change your application to add a redundant property with a similar name, avoiding these characters. - * Use Change Feed to create a materialized view of your container without these characters in property names. - * Use the `dropColumn` Spark option to ignore the affected columns and load all other columns into a DataFrame.
The syntax is: --```Python -# Removing one column: -df = spark.read\ - .format("cosmos.olap")\ - .option("spark.synapse.linkedService","<your-linked-service-name>")\ - .option("spark.synapse.container","<your-container-name>")\ - .option("spark.cosmos.dropColumn","FirstName,LastName")\ - .load() - -# Removing multiple columns: -df = spark.read\ - .format("cosmos.olap")\ - .option("spark.synapse.linkedService","<your-linked-service-name>")\ - .option("spark.synapse.container","<your-container-name>")\ - .option("spark.cosmos.dropColumn","FirstName,LastName;StreetName,StreetNumber")\ - .option("spark.cosmos.dropMultiColumnSeparator", ";")\ - .load() -``` --* Azure Synapse Spark now supports properties with white spaces in their names. For that, you need to use the `allowWhiteSpaceInFieldNames` Spark option to load the affected columns into a DataFrame, keeping the original name. The syntax is: --```Python -df = spark.read\ - .format("cosmos.olap")\ - .option("spark.synapse.linkedService","<your-linked-service-name>")\ - .option("spark.synapse.container","<your-container-name>")\ - .option("spark.cosmos.allowWhiteSpaceInFieldNames", "true")\ - .load() -``` --* The following BSON datatypes aren't supported and won't be represented in analytical store: - * Decimal128 - * Regular Expression - * DB Pointer - * JavaScript - * Symbol - * MinKey/MaxKey --* When using DateTime strings that follow the ISO 8601 UTC standard, expect the following behavior: - * Spark pools in Azure Synapse represent these columns as `string`. - * SQL serverless pools in Azure Synapse represent these columns as `varchar(8000)`. --* Properties with `UNIQUEIDENTIFIER (guid)` types are represented as `string` in analytical store and should be converted to `VARCHAR` in **SQL** or to `string` in **Spark** for correct visualization. --* SQL serverless pools in Azure Synapse support result sets with up to 1000 columns, and exposing nested columns also counts towards that limit. It is a good practice to consider this information in your transactional data architecture and modeling. --* If you rename a property, in one or many documents, it will be considered a new column. If you execute the same rename in all documents in the collection, all data will be migrated to the new column and the old column will be represented with `NULL` values. --### Schema representation --There are two methods of schema representation in the analytical store, valid for all containers in the database account. They have tradeoffs between the simplicity of query experience versus the convenience of a more inclusive columnar representation for polymorphic schemas. --* Well-defined schema representation, default option for API for NoSQL and Gremlin accounts. -* Full fidelity schema representation, default option for API for MongoDB accounts. ---#### Well-defined schema representation --The well-defined schema representation creates a simple tabular representation of the schema-agnostic data in the transactional store. The well-defined schema representation has the following considerations: --* The first document defines the base schema and properties must always have the same type across all documents. The only exceptions are: - * From `NULL` to any other data type. The first non-null occurrence defines the column data type. Any document not following the first non-null datatype won't be represented in analytical store. - * From `float` to `integer`. All documents are represented in analytical store. - * From `integer` to `float`. 
All documents are represented in analytical store. However, to read this data with Azure Synapse SQL serverless pools, you must use a WITH clause to convert the column to `varchar`. After this initial conversion, it's possible to convert it again to a number. Check the example below, where the initial value of **num** was an integer and the second one was a float. --```SQL -SELECT CAST (num as float) as num -FROM OPENROWSET(PROVIDER = 'CosmosDB', - CONNECTION = '<your-connection>', - OBJECT = 'IntToFloat', - SERVER_CREDENTIAL = 'your-credential' -) -WITH (num varchar(100)) AS [IntToFloat] -``` -- * Properties that don't follow the base schema data type won't be represented in analytical store. For example, consider the documents below: the first one defined the analytical store base schema. The second document, where `id` is `"2"`, **doesn't** have a well-defined schema since property `"code"` is a string and the first document has `"code"` as a number. In this case, the analytical store registers the data type of `"code"` as `integer` for the lifetime of the container. The second document will still be included in analytical store, but its `"code"` property won't. - - * `{"id": "1", "code":123}` - * `{"id": "2", "code": "123"}` - - > [!NOTE] - > The condition above doesn't apply for `NULL` properties. For example, `{"a":123}` and `{"a":NULL}` are still well-defined. --> [!NOTE] - > The condition above doesn't change if you update `"code"` of document `"1"` to a string in your transactional store. In analytical store, `"code"` will be kept as `integer` because schema reset currently isn't supported. --* Array types must contain a single repeated type. For example, `{"a": ["str",12]}` isn't a well-defined schema because the array contains a mix of integer and string types. --> [!NOTE] -> If the Azure Cosmos DB analytical store follows the well-defined schema representation and the specification above is violated by certain items, those items won't be included in the analytical store. --* Expect different behavior in regard to different types in well-defined schema: - * Spark pools in Azure Synapse represent these values as `undefined`. - * SQL serverless pools in Azure Synapse represent these values as `NULL`. --* Expect different behavior in regard to explicit `NULL` values: - * Spark pools in Azure Synapse read these values as `0` (zero), and as `undefined` as soon as the column has a non-null value. - * SQL serverless pools in Azure Synapse read these values as `NULL`. - -* Expect different behavior in regard to missing columns: - * Spark pools in Azure Synapse represent these columns as `undefined`. - * SQL serverless pools in Azure Synapse represent these columns as `NULL`. --##### Representation challenges workarounds --It's possible that an old document, with an incorrect schema, was used to create your container's analytical store base schema. Based on all the rules presented above, you might be receiving `NULL` for certain properties when querying your analytical store using Azure Synapse Link. Deleting or updating the problematic documents won't help because base schema reset isn't currently supported. The possible solutions are: -- * Migrate the data to a new container, making sure that all documents have the correct schema. - * Abandon the property with the wrong schema and add a new one, with another name, that has the correct schema in all documents. Example: You have billions of documents in the **Orders** container where the **status** property is a string.
But the first document in that container has **status** defined as an integer. So, one document will have **status** correctly represented and all other documents will have `NULL`. You can add the **status2** property to all documents and start to use it instead of the original property. --#### Full fidelity schema representation --The full fidelity schema representation is designed to handle the full breadth of polymorphic schemas in the schema-agnostic operational data. In this schema representation, no items are dropped from the analytical store even if the well-defined schema constraints (that is, no mixed data type fields and no mixed data type arrays) are violated. --This is achieved by translating the leaf properties of the operational data into the analytical store as JSON `key-value` pairs, where the datatype is the `key` and the property content is the `value`. This JSON object representation allows queries without ambiguity, and you can individually analyze each datatype. --In other words, in the full fidelity schema representation, each datatype of each property of each document will generate a `key-value` pair in a JSON object for that property. Each of them counts toward the 1,000 maximum properties limit. --For example, let's take the following sample document in the transactional store: --```json -{ - "name": "John Doe", - "age": 32, - "profession": "Doctor", - "address": { - "streetNo": 15850, - "streetName": "NE 40th St.", - "zip": 98052 - }, - "salary": 1000000 -} -``` --The nested object `address` is a property in the root level of the document and will be represented as a column. Each leaf property in the `address` object will be represented as a JSON object: `{"object":{"streetNo":{"int32":15850},"streetName":{"string":"NE 40th St."},"zip":{"int32":98052}}}`. --Unlike the well-defined schema representation, the full fidelity method allows variation in datatypes. If the next document in this collection from the example above has `streetNo` as a string, it will be represented in analytical store as `"streetNo":{"string":"15850"}`. In the well-defined schema method, it wouldn't be represented. ---##### Datatypes map for full fidelity schema --Here's a map of MongoDB data types and their representations in the analytical store in full fidelity schema representation. The map below isn't valid for NoSQL API accounts. --|Original data type |Suffix |Example | -|||| -| Double | ".float64" | 24.99| -| Array | ".array" | ["a", "b"]| -| Binary | ".binary" |0| -| Boolean | ".bool" |True| -| Int32 | ".int32" |123| -| Int64 | ".int64" |255486129307| -| NULL | ".NULL" | NULL| -| String| ".string" | "ABC"| -| Timestamp | ".timestamp" | Timestamp(0, 0)| -| ObjectId |".objectId" | ObjectId("5f3f7b59330ec25c132623a2")| -| Document |".object" | {"a": "a"}| --* Expect different behavior in regard to explicit `NULL` values: - * Spark pools in Azure Synapse will read these values as `0` (zero). - * SQL serverless pools in Azure Synapse will read these values as `NULL`. - -* Expect different behavior in regard to missing columns: - * Spark pools in Azure Synapse will represent these columns as `undefined`. - * SQL serverless pools in Azure Synapse will represent these columns as `NULL`. --* Expect different behavior in regard to `timestamp` values: - * Spark pools in Azure Synapse will read these values as `TimestampType`, `DateType`, or `Float`. It depends on the range and how the timestamp was generated.
- * SQL Serverless pools in Azure Synapse will read these values as `DATETIME2`, ranging from `0001-01-01` through `9999-12-31`. Values beyond this range aren't supported and will cause an execution failure for your queries. If this is your case, you can: - * Remove the column from the query. To keep the representation, you can create a new property that mirrors that column but within the supported range, and use it in your queries. - * Use [Change Data Capture from analytical store](analytical-store-change-data-capture.md), at no RU cost, to transform and load the data into a new format, within one of the supported sinks. - --##### Using full fidelity schema with Spark --Spark will manage each datatype as a column when loading into a `DataFrame`. Let's assume a collection with the documents below. --```json -{ - "_id" : "1" , - "item" : "Pizza", - "price" : 3.49, - "rating" : 3, - "timestamp" : 1604021952.6790195 -}, -{ - "_id" : "2" , - "item" : "Ice Cream", - "price" : 1.59, - "rating" : "4" , - "timestamp" : "2022-11-11 10:00 AM" -} -``` --While the first document has `rating` as a number and `timestamp` in UTC format, the second document has `rating` and `timestamp` as strings. Assuming that this collection was loaded into a `DataFrame` without any data transformation, the output of `df.printSchema()` is: --```JSON -root - |-- _rid: string (nullable = true) - |-- _ts: long (nullable = true) - |-- id: string (nullable = true) - |-- _etag: string (nullable = true) - |-- _id: struct (nullable = true) - | |-- objectId: string (nullable = true) - |-- item: struct (nullable = true) - | |-- string: string (nullable = true) - |-- price: struct (nullable = true) - | |-- float64: double (nullable = true) - |-- rating: struct (nullable = true) - | |-- int32: integer (nullable = true) - | |-- string: string (nullable = true) - |-- timestamp: struct (nullable = true) - | |-- float64: double (nullable = true) - | |-- string: string (nullable = true) - |-- _partitionKey: struct (nullable = true) - | |-- string: string (nullable = true) - ``` --In the well-defined schema representation, both `rating` and `timestamp` of the second document wouldn't be represented. In the full fidelity schema, you can use the following examples to individually access each value of each datatype. --In the example below, we can use `PySpark` to run an aggregation: --```PySpark -df.groupBy(df.item.string).sum().show() -``` --In the example below, we can use Spark SQL from PySpark to run another aggregation: --```PySQL -df.createOrReplaceTempView("Pizza") -sql_results = spark.sql("SELECT sum(price.float64),count(*) FROM Pizza where timestamp.string is not null and item.string = 'Pizza'") -sql_results.show() -``` --##### Using full fidelity schema with SQL --You can use the following syntax example, with the same documents as in the Spark example above: --```SQL -SELECT rating,timestamp_string,timestamp_utc -FROM OPENROWSET(PROVIDER = 'CosmosDB', - CONNECTION = 'Account=<your-database-account-name>;Database=<your-database-name>', - OBJECT = '<your-collection-name>', - SERVER_CREDENTIAL = '<your-synapse-sql-server-credential-name>') -WITH ( -rating integer '$.rating.int32', -timestamp varchar(50) '$.timestamp.string', -timestamp_utc float '$.timestamp.float64' -) as HTAP -WHERE timestamp is not null or timestamp_utc is not null -``` --You can implement transformations using `cast`, `convert`, or any other T-SQL function to manipulate your data. You can also hide complex datatype structures by using views.
--```SQL -create view MyView as -SELECT MyRating=rating,MyTimestamp = convert(varchar(50),timestamp_utc) -FROM OPENROWSET(PROVIDER = 'CosmosDB', - CONNECTION = 'Account=<your-database-account-name';Database=<your-database-name>', - OBJECT = '<your-collection-name>', - SERVER_CREDENTIAL = '<your-synapse-sql-server-credential-name>') -WITH ( -rating integer '$.rating.int32', -timestamp_utc float '$.timestamp.float64' -) as HTAP -WHERE timestamp_utc is not null -union all -SELECT MyRating=convert(integer,rating_string),MyTimestamp = timestamp_string -FROM OPENROWSET(PROVIDER = 'CosmosDB', - CONNECTION = 'Account=<your-database-account-name';Database=<your-database-name>', - OBJECT = '<your-collection-name>', - SERVER_CREDENTIAL = '<your-synapse-sql-server-credential-name>') -WITH ( -rating_string varchar(50) '$.rating.string', -timestamp_string varchar(50) '$.timestamp.string' -) as HTAP -WHERE timestamp_string is not null -``` ---##### Working with MongoDB `_id` field --MongoDB `_id` field is fundamental to every collection in MongoDB and originally has a hexadecimal representation. As you can see in the table above, full fidelity schema will preserve its characteristics, creating a challenge for its visualization in Azure Synapse Analytics. For correct visualization, you must convert the `_id` datatype as below: --###### Working with MongoDB `_id` field in Spark --The example below works on Spark 2.x and 3.x versions: --```Scala -val df = spark.read.format("cosmos.olap").option("spark.synapse.linkedService", "xxxx").option("spark.cosmos.container", "xxxx").load() --val convertObjectId = udf((bytes: Array[Byte]) => { - val builder = new StringBuilder -- for (b <- bytes) { - builder.append(String.format("%02x", Byte.box(b))) - } - builder.toString -} - ) --val dfConverted = df.withColumn("objectId", col("_id.objectId")).withColumn("convertedObjectId", convertObjectId(col("_id.objectId"))).select("id", "objectId", "convertedObjectId") -display(dfConverted) -``` --###### Working with MongoDB `_id` field in SQL --```SQL -SELECT TOP 100 id=CAST(_id as VARBINARY(1000)) -FROM OPENROWSET('CosmosDB', -                'Your-account;Database=your-database;Key=your-key', -                HTAP) WITH (_id VARCHAR(1000)) as HTAP -``` --##### Working with MongoDB `id` field --The `id` property in MongoDB containers is automatically overridden with the Base64 representation of the "_id" property both in analytical store. The "id" field is intended for internal use by MongoDB applications. Currently, the only workaround is to rename the "id" property to something other than "id". ---#### Full fidelity schema for API for NoSQL or Gremlin accounts --It's possible to use full fidelity Schema for API for NoSQL accounts, instead of the default option, by setting the schema type when enabling Synapse Link on an Azure Cosmos DB account for the first time. Here are the considerations about changing the default schema representation type: --* Currently, if you enable Synapse Link in your NoSQL API account using the Azure portal, it will be enabled as well-defined schema. -* Currently, if you want to use full fidelity schema with NoSQL or Gremlin API accounts, you have to set it at account level in the same CLI or PowerShell command that will enable Synapse Link at account level. -* Currently Azure Cosmos DB for MongoDB isn't compatible with this possibility of changing the schema representation. All MongoDB accounts have full fidelity schema representation type. 
-* Full Fidelity schema data types map mentioned above isn't valid for NoSQL API accounts that use JSON datatypes. As an example, `float` and `integer` values are represented as `num` in analytical store. -* It's not possible to reset the schema representation type, from well-defined to full fidelity or vice-versa. -* Currently, containers schemas in analytical store are defined when the container is created, even if Synapse Link has not been enabled in the database account. - * Containers or graphs created before Synapse Link was enabled with full fidelity schema at account level will have well-defined schema. - * Containers or graphs created after Synapse Link was enabled with full fidelity schema at account level will have full fidelity schema. - -The schema representation type decision must be made at the same time that Synapse Link is enabled on the account, using Azure CLI or PowerShell. - - With the Azure CLI: - ```cli - az cosmosdb create --name MyCosmosDBDatabaseAccount --resource-group MyResourceGroup --subscription MySubscription --analytical-storage-schema-type "FullFidelity" --enable-analytical-storage true - ``` - -> [!NOTE] -> In the command above, replace `create` with `update` for existing accounts. - - With the PowerShell: - ```PowerShell - New-AzCosmosDBAccount -ResourceGroupName MyResourceGroup -Name MyCosmosDBDatabaseAccount -EnableAnalyticalStorage true -AnalyticalStorageSchemaType "FullFidelity" - ``` - -> [!NOTE] -> In the command above, replace `New-AzCosmosDBAccount` with `Update-AzCosmosDBAccount` for existing accounts. -> -## <a id="analytical-ttl"></a> Analytical Time-to-Live (TTL) --Analytical TTL (ATTL) indicates how long data should be retained in your analytical store, for a container. --Analytical store is enabled when ATTL is set with a value other than `NULL` and `0`. When enabled, inserts, updates, deletes to operational data are automatically synced from transactional store to analytical store, irrespective of the transactional TTL (TTTL) configuration. The retention of this transactional data in analytical store can be controlled at container level by the `AnalyticalStoreTimeToLiveInSeconds` property. --The possible ATTL configurations are: --* If the value is set to `0` or set to `NULL`: the analytical store is disabled and no data is replicated from transactional store to analytical store --* If the value is set to `-1`: the analytical store retains all historical data, irrespective of the retention of the data in the transactional store. This setting indicates that the analytical store has infinite retention of your operational data --* If the value is set to any positive integer `n` number: items will expire from the analytical store `n` seconds after their last modified time in the transactional store. This setting can be leveraged if you want to retain your operational data for a limited period of time in the analytical store, irrespective of the retention of the data in the transactional store --Some points to consider: --* After the analytical store is enabled with an ATTL value, it can be updated to a different valid value later. -* While TTTL can be set at the container or item level, ATTL can only be set at the container level currently. -* You can achieve longer retention of your operational data in the analytical store by setting ATTL >= TTTL at the container level. -* The analytical store can be made to mirror the transactional store by setting ATTL = TTTL. 
-* If you have ATTL bigger than TTTL, at some point in time you'll have data that only exists in analytical store. This data is read only. -* Currently we don't delete any data from analytical store. If you set your ATTL to any positive integer, the data won't be included in your queries and you won't be billed for it. But if you change ATTL back to `-1`, all the data will show up again, you will start to be billed for all the data volume. --How to enable analytical store on a container: --* From the Azure portal, the ATTL option, when turned on, is set to the default value of -1. You can change this value to 'n' seconds, by navigating to container settings under Data Explorer. - -* From the Azure Management SDK, Azure Cosmos DB SDKs, PowerShell, or Azure CLI, the ATTL option can be enabled by setting it to either -1 or 'n' seconds. --To learn more, see [how to configure analytical TTL on a container](configure-synapse-link.md). --## Cost-effective analytics on historical data --Data tiering refers to the separation of data between storage infrastructures optimized for different scenarios. Thereby improving the overall performance and cost-effectiveness of the end-to-end data stack. With analytical store, Azure Cosmos DB now supports automatic tiering of data from the transactional store to analytical store with different data layouts. With analytical store optimized in terms of storage cost compared to the transactional store, allows you to retain much longer horizons of operational data for historical analysis. --After the analytical store is enabled, based on the data retention needs of the transactional workloads, you can configure `transactional TTL` property to have records automatically deleted from the transactional store after a certain time period. Similarly, the `analytical TTL` allows you to manage the lifecycle of data retained in the analytical store, independent from the transactional store. By enabling analytical store and configuring transactional and analytical `TTL` properties, you can seamlessly tier and define the data retention period for the two stores. --> [!NOTE] -> When `analytical TTL` is set to a value larger than `transactional TTL` value, your container will have data that only exists in analytical store. This data is read only and currently we don't support document level `TTL` in analytical store. If your container data may need an update or a delete at some point in time in the future, don't use `analytical TTL` bigger than `transactional TTL`. This capability is recommended for data that won't need updates or deletes in the future. --> [!NOTE] -> If your scenario doesn't demand physical deletes, you can adopt a logical delete/update approach. Insert in transactional store another version of the same document that only exists in analytical store but needs a logical delete/update. Maybe with a flag indicating that it's a delete or an update of an expired document. Both versions of the same document will co-exist in analytical store, and your application should only consider the last one. ---## Resilience --Analytical store relies on Azure Storage and offers the following protection against physical failure: -- * By default, Azure Cosmos DB database accounts allocate analytical store in Locally Redundant Storage (LRS) accounts. LRS provides at least 99.999999999% (11 nines) durability of objects over a given year. - * If any geo-region of the database account is configured for zone-redundancy, it is allocated in Zone-redundant Storage (ZRS) accounts. 
You need to enable Availability Zones on a region of their Azure Cosmos DB database account to have analytical data of that region stored in Zone-redundant Storage. ZRS offers durability for storage resources of at least 99.9999999999% (12 9's) over a given year. --For more information about Azure Storage durability, see [this link.](/azure/storage/common/storage-redundancy) --## Backup --Although analytical store has built-in protection against physical failures, backup can be necessary for accidental deletes or updates in transactional store. In those cases, you can restore a container and use the restored container to backfill the data in the original container, or fully rebuild analytical store if necessary. --> [!NOTE] -> Currently analytical store isn't backed up, therefore it can't be restored. Your backup policy can't be planned relying on that. --Synapse Link, and analytical store by consequence, has different compatibility levels with Azure Cosmos DB backup modes: --* Periodic backup mode is fully compatible with Synapse Link and these 2 features can be used in the same database account. -* Synapse Link for database accounts using continuous backup mode is GA. -* Continuous backup mode for Synapse Link enabled accounts is in public preview. Currently, you can't migrate to continuous backup if you disabled Synapse Link on any of your collections in a Cosmos DB account. --### Backup policies --There are two possible backup policies and to understand how to use them, the following details about Azure Cosmos DB backups are very important: -- * The original container is restored without analytical store in both backup modes. - * Azure Cosmos DB doesn't support containers overwrite from a restore. --Now let's see how to use backup and restores from the analytical store perspective. -- #### Restoring a container with TTTL >= ATTL - - When `transactional TTL` is equal or bigger than `analytical TTL`, all data in analytical store still exists in transactional store. In case of a restore, you have two possible situations: - * To use the restored container as a replacement for the original container. To rebuild analytical store, just enable Synapse Link at account level and container level. - * To use the restored container as a data source to backfill or update the data in the original container. In this case, analytical store will automatically reflect the data operations. - - #### Restoring a container with TTTL < ATTL - -When `transactional TTL` is smaller than `analytical TTL`, some data only exists in analytical store and won't be in the restored container. Again, you have two possible situations: - * To use the restored container as a replacement for the original container. In this case, when you enable Synapse Link at container level, only the data that was in transactional store will be included in the new analytical store. But please note that the analytical store of the original container remains available for queries as long as the original container exists. You may want to change your application to query both. - * To use the restored container as a data source to backfill or update the data in the original container: - * Analytical store will automatically reflect the data operations for the data that is in transactional store. - * If you re-insert data that was previously removed from transactional store due to `transactional TTL`, this data will be duplicated in analytical store. --Example: -- * Container `OnlineOrders` has TTTL set to one month and ATTL set for one year. 
- * When you restore it to `OnlineOrdersNew` and turn on analytical store to rebuild it, there will be only one month of data in both transactional and analytical store. - * Original container `OnlineOrders` isn't deleted and its analytical store is still available. - * New data is only ingested into `OnlineOrdersNew`. - * Analytical queries will do a UNION ALL from analytical stores while the original data is still relevant. --If you want to delete the original container but don't want to lose its analytical store data, you can persist the analytical store of the original container in another Azure data service. Synapse Analytics has the capability to perform joins between data stored in different locations. An example: A Synapse Analytics query joins analytical store data with external tables located in Azure Blob Storage, Azure Data Lake Store, etc. --It's important to note that the data in the analytical store has a different schema than what exists in the transactional store. While you can generate snapshots of your analytical store data, and export it to any Azure Data service, at no RUs costs, we can't guarantee the use of this snapshot to back feed the transactional store. This process isn't supported. ---## Global distribution --If you have a globally distributed Azure Cosmos DB account, after you enable analytical store for a container, it will be available in all regions of that account. Any changes to operational data are globally replicated in all regions. You can run analytical queries effectively against the nearest regional copy of your data in Azure Cosmos DB. --## Partitioning --Analytical store partitioning is completely independent of partitioning in the transactional store. By default, data in analytical store isn't partitioned. If your analytical queries have frequently used filters, you have the option to partition based on these fields for better query performance. To learn more, see [introduction to custom partitioning](custom-partitioning-analytical-store.md) and [how to configure custom partitioning](configure-custom-partitioning.md). --## Security --* **Authentication with the analytical store** is the same as the transactional store for a given database. You can use primary, secondary, or read-only keys for authentication. You can leverage linked service in Synapse Studio to prevent pasting the Azure Cosmos DB keys in the Spark notebooks. For Azure Synapse SQL serverless, you can use SQL credentials to also prevent pasting the Azure Cosmos DB keys in the SQL notebooks. The Access to these Linked Services or to these SQL credentials are available to anyone who has access to the workspace. Please note that the Azure Cosmos DB read only key can also be used. --* **Network isolation using private endpoints** - You can control network access to the data in the transactional and analytical stores independently. Network isolation is done using separate managed private endpoints for each store, within managed virtual networks in Azure Synapse workspaces. To learn more, see how to [Configure private endpoints for analytical store](analytical-store-private-endpoints.md) article. --* **Data encryption at rest** - Your analytical store encryption is enabled by default. --* **Data encryption with customer-managed keys** - You can seamlessly encrypt the data across transactional and analytical stores using the same customer-managed keys in an automatic and transparent manner. 
Azure Synapse Link only supports configuring customer-managed keys using your Azure Cosmos DB account's managed identity. You must configure your account's managed identity in your Azure Key Vault access policy before [enabling Azure Synapse Link](configure-synapse-link.md#enable-synapse-link) on your account. To learn more, see how to [Configure customer-managed keys using Azure Cosmos DB accounts' managed identities](how-to-setup-cmk.md#using-managed-identity) article. --> [!NOTE] -> If you change your database account from First Party to System or User Assigned Identy, and enable Azure Synapse Link in your database account, you won't be able to return to First Party identity since you can't disable Synapse Link from your database account. --## Support for multiple Azure Synapse Analytics runtimes --The analytical store is optimized to provide scalability, elasticity, and performance for analytical workloads without any dependency on the compute run-times. The storage technology is self-managed to optimize your analytics workloads without manual efforts. --Data in Azure Cosmos DB analytical store can be queried simultaneously from the different analytics runtimes supported by Azure Synapse Analytics. Azure Synapse Analytics supports Apache Spark and serverless SQL pool with Azure Cosmos DB analytical store. --> [!NOTE] -> You can only read from analytical store using Azure Synapse Analytics runtimes. And the opposite is also true, Azure Synapse Analytics runtimes can only read from analytical store. Only the auto-sync process can change data in analytical store. You can write data back to Azure Cosmos DB transactional store using Azure Synapse Analytics Spark pool, using the built-in Azure Cosmos DB OLTP SDK. --## <a id="analytical-store-pricing"></a> Pricing --Analytical store follows a consumption-based pricing model where you're charged for: --* Storage: the volume of the data retained in the analytical store every month including historical data as defined by analytical TTL. --* Analytical write operations: the fully managed synchronization of operational data updates to the analytical store from the transactional store (auto-sync) --* Analytical read operations: the read operations performed against the analytical store from Azure Synapse Analytics Spark pool and serverless SQL pool run times. --Analytical store pricing is separate from the transaction store pricing model. There's no concept of provisioned RUs in the analytical store. See [Azure Cosmos DB pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) for full details on the pricing model for analytical store. --Data in the analytics store can only be accessed through Azure Synapse Link, which is done in the Azure Synapse Analytics runtimes: Azure Synapse Apache Spark pools and Azure Synapse serverless SQL pools. See [Azure Synapse Analytics pricing page](https://azure.microsoft.com/pricing/details/synapse-analytics/) for full details on the pricing model to access data in analytical store. --In order to get a high-level cost estimate to enable analytical store on an Azure Cosmos DB container, from the analytical store perspective, you can use the [Azure Cosmos DB Capacity planner](https://cosmos.azure.com/capacitycalculator/) and get an estimate of your analytical storage and write operations costs. --Analytical store read operations estimates aren't included in the Azure Cosmos DB cost calculator since they are a function of your analytical workload. 
But as a high-level estimate, a scan of 1 TB of data in analytical store typically results in 130,000 analytical read operations, at a cost of $0.065. If you use Azure Synapse serverless SQL pools to perform this 1 TB scan, it costs $5.00 according to the [Azure Synapse Analytics pricing page](https://azure.microsoft.com/pricing/details/synapse-analytics/). The total cost for this 1 TB scan would therefore be $5.065 (a sketch of this arithmetic follows the links below). --While the above estimate is for scanning 1 TB of data in analytical store, applying filters reduces the volume of data scanned, which in turn determines the exact number of analytical read operations under the consumption-based pricing model. A proof of concept around the analytical workload provides a finer estimate of analytical read operations. This estimate doesn't include the cost of Azure Synapse Analytics. ---## Next steps --To learn more, see the following docs: --* [Azure Synapse Link for Azure Cosmos DB](synapse-link.md) --* Check out the training module on how to [Design hybrid transactional and analytical processing using Azure Synapse Analytics](/training/modules/design-hybrid-transactional-analytical-processing-using-azure-synapse-analytics/) --* [Get started with Azure Synapse Link for Azure Cosmos DB](configure-synapse-link.md) --* [Frequently asked questions about Synapse Link for Azure Cosmos DB](synapse-link-frequently-asked-questions.yml) --* [Azure Synapse Link for Azure Cosmos DB Use cases](synapse-link-use-cases.md) |
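As referenced above, here is a minimal Python sketch of that cost arithmetic. The rates are taken from the example figures quoted in this article and are assumptions for illustration; always check the Azure Cosmos DB and Azure Synapse Analytics pricing pages for current prices.

```python
# Rough cost estimate for scanning analytical store data, using the example
# figures quoted above (illustrative only; check the pricing pages for current rates).

ANALYTICAL_READ_OPS_PER_TB = 130_000   # approximate analytical read operations per 1 TB scanned
COST_PER_10K_READ_OPS = 0.005          # assumed example rate (USD per 10,000 operations), derived from $0.065 / 130,000 ops
SERVERLESS_SQL_COST_PER_TB = 5.00      # Azure Synapse serverless SQL pool example rate, USD per TB processed

def estimated_scan_cost(terabytes_scanned: float) -> float:
    """Return the estimated total USD cost of scanning the given data volume."""
    read_ops = terabytes_scanned * ANALYTICAL_READ_OPS_PER_TB
    analytical_read_cost = (read_ops / 10_000) * COST_PER_10K_READ_OPS
    sql_pool_cost = terabytes_scanned * SERVERLESS_SQL_COST_PER_TB
    return analytical_read_cost + sql_pool_cost

print(estimated_scan_cost(1.0))  # ~5.065 for a full 1 TB scan
```

Filters that reduce the scanned volume reduce both terms proportionally, which is why a proof of concept gives a much finer estimate than this back-of-the-envelope calculation.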
cosmos-db | Analytical Store Private Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-private-endpoints.md | - Title: Configure private endpoints for Azure Cosmos DB analytical store. -description: Learn how to set up managed private endpoints for Azure Cosmos DB analytical store to restrict network access. --- Previously updated : 09/29/2022----# Configure Azure Private Link for Azure Cosmos DB analytical store --In this article, you will learn how to set up managed private endpoints for Azure Cosmos DB analytical store. If you are using the transactional store, see [Private endpoints for the transactional store](how-to-configure-private-endpoints.md) article. Using [managed private endpoints](../synapse-analytics/security/synapse-workspace-managed-private-endpoints.md), you can restrict network access of your Azure Cosmos DB analytical store, to a Managed Virtual Network associated with your Azure Synapse workspace. Managed private endpoints establish a private link to your analytical store. --> [!NOTE] -> If you are using Private DNS Zones for Azure Cosmos DB and wish to create a Synapse managed private endpoint to the analytical store sub-resource, you must first create a DNS zone for the analytical store (`privatelink.analytics.cosmos.azure.com`) linked to your Azure Cosmos DB's virtual network. --## Enable a private endpoint for the analytical store --### Set up Azure Synapse Analytics workspace with a managed virtual network and data-exfiltration --[Create a workspace in Azure Synapse Analytics with data-exfiltration enabled.](../synapse-analytics/security/how-to-create-a-workspace-with-data-exfiltration-protection.md) With [data-exfiltration protection](../synapse-analytics/security/workspace-data-exfiltration-protection.md), you can ensure that malicious users cannot copy or transfer data from your Azure resources to locations outside your organizationΓÇÖs scope. --The following access restrictions are applicable when data-exfiltration protection is turned on for an Azure Synapse Analytics workspace: --* If you are using Azure Spark for Azure Synapse Analytics, access is only allowed to the approved managed private endpoints for Azure Cosmos DB analytical store. --* If you are using Synapse serverless SQL pools, you can query any Azure Cosmos DB account using Azure Synapse Link. However, write requests that [create external tables as select (CETAS)](../synapse-analytics/sql/develop-tables-cetas.md) are only allowed to the approved manage private endpoints in the workspace virtual network. --> [!NOTE] -> You cannot change managed virtual network and data-exfiltration configuration after the workspace is created. --### Add a managed private endpoint for Azure Cosmos DB analytical store --1. Sign in to the [Azure portal](https://portal.azure.com). --1. From the Azure portal, navigate to your Synapse Analytics workspace and open the **Overview** pane. --1. Launch Synapse Studio by navigating to **Getting Started** pane and select **Open** under **Open Synapse Studio**. --1. In the Synapse Studio, open the **Manage** tab. --1. Navigate to **Managed private endpoints** and select **New** -- :::image type="content" source="./media/analytical-store-private-endpoints/create-new-private-endpoint.png" alt-text="Create a new private endpoint for analytical store." border="true"::: --1. Select **Azure Cosmos DB (API for NoSQL)** account type > **Continue**. 
-- :::image type="content" source="./media/analytical-store-private-endpoints/select-private-endpoint.png" alt-text="Select Azure Cosmos DB API for NoSQL to create a private endpoint." border="true"::: --1. Fill out the **New managed private endpoint** form with the following details: -- * **Name** - Name for your managed private endpoint. This name cannot be updated after it's created. - * **Description** - Provide a friendly description to identify your private endpoint. - * **Azure subscription** - Select an Azure Cosmos DB account from the list of available accounts in your Azure subscriptions. - * **Azure Cosmos DB account name** - Select an existing Azure Cosmos DB account of type SQL or MongoDB. - * **Target sub-resource** - Select one of the following options: - **Analytical**: If you want to add the private endpoint for Azure Cosmos DB analytical store. - **NoSQL** (or **MongoDB**): If you want to add OLTP or transactional account endpoint. -- > [!NOTE] - > You can add both transactional store and analytical store private endpoints to the same Azure Cosmos DB account in an Azure Synapse Analytics workspace. If you only want to run analytical queries, you may only want to map the analytical private endpoint. -- :::image type="content" source="./media/analytical-store-private-endpoints/choose-analytical-private-endpoint.png" alt-text="Choose analytical for the target subresource." border="true"::: --1. After creating, go to the private endpoint name and select **Manage approvals in Azure portal**. --1. Navigate to your Azure Cosmos DB account, select the private endpoint, and select **Approve**. --1. Navigate back to Synapse Analytics workspace and click **Refresh** on the **Managed private endpoints** pane. Verify that private endpoint is in **Approved** state. -- :::image type="content" source="./media/analytical-store-private-endpoints/approved-private-endpoint.png" alt-text="Verify that the private endpoint is approved." border="true"::: --## Use Apache Spark for Azure Synapse Analytics --If you created an Azure Synapse workspace with data-exfiltration protection turned on, the outbound access from Synapse Spark to Azure Cosmos DB accounts will be blocked, by default. Also, if the Azure Cosmos DB already has an existing private endpoint, Synapse Spark will be blocked from accessing it. --To allow access to Azure Cosmos DB data: --* If you are using Azure Synapse Link to query Azure Cosmos DB data, add a managed **analytical** private endpoint for the Azure Cosmos DB account. --* If you are using batch writes/reads and/or streaming writes/reads to transactional store, add a managed *SQL* or *MongoDB* private endpoint for the Azure Cosmos DB account. In addition, you should also set *connectionMode* to *Gateway* as shown in the following code snippet: -- ```python - # Write a Spark DataFrame into an Azure Cosmos DB container - # To select a preferred lis of regions in a multi-region account, add .option("spark.cosmos.preferredRegions", "<Region1>, <Region2>") - - YOURDATAFRAME.write\ - .format("cosmos.oltp")\ - .option("spark.synapse.linkedService", "<your-Cosmos-DB-linked-service-name>")\ - .option("spark.cosmos.container","<your-Cosmos-DB-container-name>")\ - .option("spark.cosmos.write.upsertEnabled", "true")\ - .option("spark.cosmos.connection.mode", "Gateway")\ - .mode('append')\ - .save() - - ``` --## Using Synapse serverless SQL pools --Synapse serverless SQL pools use multi-tenant capabilities that are not deployed into managed virtual network. 
If the Azure Cosmos DB account has an existing private endpoint, Synapse serverless SQL pool will be blocked from accessing the account, due to network isolation checks on the Azure Cosmos DB account. --To configure network isolation for this account from a Synapse workspace: --1. Allow the Synapse workspace to access the Azure Cosmos DB account by specifying `NetworkAclBypassResourceId` setting on the account. -- **Using PowerShell** -- ```powershell-interactive - Update-AzCosmosDBAccount -Name MyCosmosDBDatabaseAccount -ResourceGroupName MyResourceGroup -NetworkAclBypass AzureServices -NetworkAclBypassResourceId "/subscriptions/subId/resourceGroups/rgName/providers/Microsoft.Synapse/workspaces/wsName" - ``` -- **Using Azure CLI** -- ```azurecli-interactive - az cosmosdb update --name MyCosmosDBDatabaseAccount --resource-group MyResourceGroup --network-acl-bypass AzureServices --network-acl-bypass-resource-ids "/subscriptions/subId/resourceGroups/rgName/providers/Microsoft.Synapse/workspaces/wsName" - ``` -- > [!NOTE] - > Azure Cosmos DB account and Azure Synapse Analytics workspace should be under same Microsoft Entra tenant. --2. You can now access the account from serverless SQL pools, using T-SQL queries over Azure Synapse Link. However, to ensure network isolation for the data in analytical store, you must add an **analytical** managed private endpoint for this account. Otherwise, the data in the analytical store will not be blocked from public access. --> [!IMPORTANT] -> If you are using Azure Synapse Link and need network isolation for your data in analytical store, you must map the Azure Cosmos DB account into Synapse workspace using **Analytical** managed private endpoint. --## Next steps --* Get started with [querying analytical store with Azure Synapse Spark 3](../synapse-analytics/synapse-link/how-to-query-analytical-store-spark-3.md?toc=/azure/cosmos-db/toc.json&bc=/azure/cosmos-db/breadcrumb/toc.json) -* Get started with [querying analytical store with Azure Synapse Spark 2](../synapse-analytics/synapse-link/how-to-query-analytical-store-spark.md?toc=/azure/cosmos-db/toc.json&bc=/azure/cosmos-db/breadcrumb/toc.json) -* Get started with [querying analytical store with Azure Synapse serverless SQL pools](../synapse-analytics/sql/query-cosmos-db-analytical-store.md?toc=/azure/cosmos-db/toc.json&bc=/azure/cosmos-db/breadcrumb/toc.json) |
cosmos-db | Analytics And Business Intelligence Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytics-and-business-intelligence-overview.md | - Title: Analytics and BI- -description: Review Azure Cosmos DB options to enable large-scale analytics and BI reporting on your operational data. ---- Previously updated : 07/01/2024---# Analytics and Business Intelligence (BI) on your Azure Cosmos DB data --Azure Cosmos DB offers various options to enable large-scale analytics and BI reporting on your operational data. --To get meaningful insights on your Azure Cosmos DB data, you may need to query across multiple partitions, collections, or databases. In some cases, you might combine this data with other data sources in your organization such as Azure SQL Database, Azure Data Lake Storage Gen2 etc. You might also query with aggregate functions such as sum, count etc. Such queries need heavy computational power, which likely consumes more request units (RUs) and as a result, these queries might potentially affect your mission critical workload performance. --To isolate transactional workloads from the performance impact of complex analytical queries, database data is ingested nightly to a central location using complex Extract-Transform-Load (ETL) pipelines. Such ETL-based analytics are complex, costly with delayed insights on business data. --Azure Cosmos DB addresses these challenges by providing zero ETL, cost-effective analytics offerings. --## Zero ETL, near real-time analytics on Azure Cosmos DB -Azure Cosmos DB offers zero ETL, near real-time analytics on your data without affecting the performance of your transactional workloads or request units (RUs). These offerings remove the need for complex ETL pipelines, making your Azure Cosmos DB data seamlessly available to analytics engines. With reduced latency to insights, you can provide enhanced customer experience and react more quickly to changes in market conditions or business environment. Here are some sample [scenarios](synapse-link-use-cases.md) you can achieve with quick insights into your data. - - You can enable zero-ETL analytics and BI reporting on Azure Cosmos DB using the following options: --* Mirroring your data into Microsoft Fabric -* Enabling Azure Synapse Link to access data from Azure Synapse Analytics - --### Option 1: Mirroring your Azure Cosmos DB data into Microsoft Fabric --Mirroring enables you to seamlessly bring your Azure Cosmos DB database data into Microsoft Fabric. With zero ETL, you can get quick, rich business insights on your Azure Cosmos DB data using FabricΓÇÖs built-in analytics, BI, and AI capabilities. --Your Cosmos DB operational data is incrementally replicated into Fabric OneLake in near real-time. Data in OneLake is stored in open-source Delta Parquet format and made available to all analytical engines in Fabric. With open access, you can use it with various Azure services such as Azure Databricks, Azure HDInsight, and more. OneLake also helps unify your data estate for your analytical needs. Mirrored data can be joined with any other data in OneLake, such as Lakehouses, Warehouses or shortcuts. You can also join Azure Cosmos DB data with other mirrored database sources such as Azure SQL Database, Snowflake. -You can query across Azure Cosmos DB collections or databases mirrored into OneLake. --With Mirroring in Fabric, you don't need to piece together different services from multiple vendors. 
Instead, you can enjoy a highly integrated, end-to-end, and easy-to-use product that is designed to simplify your analytics needs. -You can use T-SQL to run complex aggregate queries and Spark for data exploration. You can seamlessly access the data in notebooks, use data science to build machine learning models, and build Power BI reports using Direct Lake powered by rich Copilot integration. ---If you're looking for analytics on your operational data in Azure Cosmos DB, mirroring provides: -* Zero ETL, cost-effective near real-time analytics on Azure Cosmos DB data without affecting your request unit (RU) consumption -* Ease of bringing data across various sources into Fabric OneLake. -* Improved query performance of SQL engine handling delta tables, with V-order optimizations -* Improved cold start time for Spark engine with deep integration with ML/notebooks -* One-click integration with Power BI with Direct Lake and Copilot -* Richer app integration to access queries and views with GraphQL -* Open access to and from other services such as Azure Databricks --To get started with mirroring, visit ["Get started with mirroring tutorial"](/fabric/database/mirrored-database/azure-cosmos-db-tutorial?context=/azure/cosmos-db/context/context). ---### Option 2: Azure Synapse Link to access data from Azure Synapse Analytics -Azure Synapse Link for Azure Cosmos DB creates a tight seamless integration between Azure Cosmos DB and Azure Synapse Analytics, enabling zero ETL, near real-time analytics on your operational data. -Transactional data is seamlessly synced to Analytical store, which stores the data in columnar format optimized for analytics. --Azure Synapse Analytics can access this data in Analytical store, without further movement, using Azure Synapse Link. Business analysts, data engineers, and data scientists can now use Synapse Spark or Synapse SQL interchangeably to run near real time business intelligence, analytics, and machine learning pipelines. --The following image shows the Azure Synapse Link integration with Azure Cosmos DB and Azure Synapse Analytics: --- > [!IMPORTANT] - > Mirroring in Microsoft Fabric is now available in preview for NoSql API. This feature provides all the capabilities of Azure Synapse Link with better analytical performance, ability to unify your data estate with Fabric OneLake and open access to your data in OneLake with Delta Parquet format. If you are considering Azure Synapse Link, we recommend that you try mirroring to assess overall fit for your organization. To get started with mirroring, click [here](/fabric/database/mirrored-database/azure-cosmos-db?context=/azure/cosmos-db/context/context). --To get started with Azure Synapse Link, visit ["Getting started with Azure Synapse Link"](synapse-link.md). ---## Real-time analytics and BI on Azure Cosmos DB: Other options -There are a few other options to enable real-time analytics on Azure Cosmos DB data: -* Using [change feed](nosql/changefeed-ecommerce-solution.md) -* Using [Spark connector directly on Azure Cosmos DB](nosql/tutorial-spark-connector.md) -* Using Power BI connector directly on Azure Cosmos DB --While these options are included for completeness and work well with single partition queries in real-time, these methods have the following challenges for analytical queries: -* Performance impact on your workload: -- Analytical queries tend to be complex and consume significant compute capacity. 
When these queries are run against your Azure Cosmos DB data directly, you might experience performance degradation on your transactional queries. -* Cost impact: - - When analytical queries are run directly against your database or collections, they increase the request units you need to provision, because analytical queries tend to be complex and need more compute power. If you run aggregate queries regularly, the increased RU usage will likely lead to a significant cost impact over time. --Instead of these options, we recommend that you use Mirroring in Microsoft Fabric or Azure Synapse Link, which provide zero ETL analytics without affecting transactional workload performance or request units (a brief sketch contrasting the two query paths follows the related content links below). --## Related content --* [Mirroring Azure Cosmos DB overview](/fabric/database/mirrored-database/azure-cosmos-db?context=/azure/cosmos-db/context/context) --* [Getting started with mirroring](/fabric/database/mirrored-database/azure-cosmos-db-tutorial?context=/azure/cosmos-db/context/context) --* [Azure Synapse Link for Azure Cosmos DB](synapse-link.md) --* [Working with Azure Synapse Link for Azure Cosmos DB](configure-synapse-link.md) -- |
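As referenced above, here is a minimal PySpark sketch contrasting the two query paths from a Synapse Spark pool. It is a sketch only: the linked service and container names are placeholders, and the option names follow the patterns used elsewhere in these articles.

```python
# Contrast of the two query paths from a Synapse Spark pool (the `spark`
# session is available in a Synapse notebook). Names below are placeholders.

# 1) Direct read from the transactional store: convenient for point lookups,
#    but it consumes request units (RUs) and competes with your workload.
oltp_df = spark.read.format("cosmos.oltp") \
    .option("spark.synapse.linkedService", "<your-Cosmos-DB-linked-service-name>") \
    .option("spark.cosmos.container", "<your-Cosmos-DB-container-name>") \
    .load()

# 2) Zero-ETL read from the analytical store through Azure Synapse Link:
#    no RU consumption, suited to scans and aggregations.
olap_df = spark.read.format("cosmos.olap") \
    .option("spark.synapse.linkedService", "<your-Cosmos-DB-linked-service-name>") \
    .option("spark.cosmos.container", "<your-Cosmos-DB-container-name>") \
    .load()

# Run heavy aggregations against the analytical path.
print(olap_df.count())
```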
cosmos-db | Analytics And Business Intelligence Use Cases | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytics-and-business-intelligence-use-cases.md | - Title: Near real-time analytics use cases for Azure Cosmos DB -description: Learn how real-time analytics is used in Supply chain analytics, forecasting, reporting, real-time personalization, and IOT predictive maintenance. ---- Previously updated : 06/25/2024----# Azure Cosmos DB: No-ETL analytics use cases --Azure Cosmos DB provides various analytics options for no-ETL, near real-time analytics over operational data. You can enable analytics on your Azure Cosmos DB data using following options: -* Mirroring Azure Cosmos DB in Microsoft Fabric -* Azure Synapse Link for Azure Cosmos DB --To learn more about these options, see ["Analytics and BI on your Azure Cosmos DB data."](analytics-and-business-intelligence-overview.md) --> [!IMPORTANT] -> Mirroring Azure Cosmos DB in Microsoft Fabric is now available in preview for NoSql API. This feature provides all the capabilities of Azure Synapse Link with better analytical performance, ability to unify your data estate with Fabric OneLake and open access to your data in OneLake with Delta Parquet format. If you are considering Azure Synapse Link, we recommend that you try mirroring to assess overall fit for your organization. To get started with mirroring, click [here](/fabric/database/mirrored-database/azure-cosmos-db?context=/azure/cosmos-db/context/context). --No-ETL, near real-time analytics can open up various possibilities for your businesses. Here are three sample scenarios: --* Supply chain analytics, forecasting & reporting -* Real-time personalization -* Predictive maintenance, anomaly detection in IOT scenarios --## Supply chain analytics, forecasting & reporting --Research studies show that embedding big data analytics in supply chain operations leads to improvements in order-to-cycle delivery times and supply chain efficiency. --Manufacturers are onboarding to cloud-native technologies to break out of constraints of legacy Enterprise Resource Planning (ERP) and Supply Chain Management (SCM) systems. With supply chains generating increasing volumes of operational data every minute (order, shipment, transaction data), manufacturers need an operational database. This operational database should scale to handle the data volumes as well as an analytical platform to get to a level of real-time contextual intelligence to stay ahead of the curve. --The following architecture shows the power of using Azure Cosmos DB as the cloud-native operational database in supply chain analytics: ---Based on previous architecture, you can achieve the following use cases: --* **Prepare & train predictive pipeline:** Generate insights over the operational data across the supply chain using machine learning translates. This way you can lower inventory, operations costs, and reduce the order-to-delivery times for customers. -- Mirroring and Synapse Link allow you to analyze the changing operational data in Azure Cosmos DB without any manual ETL processes. These offerings save you from additional cost, latency, and operational complexity. They enable data engineers and data scientists to build robust predictive pipelines: -- * Query operational data from Azure Cosmos DB by using native integration with Apache Spark pools in Microsoft Fabric or Azure Synapse Analytics. You can query the data in an interactive notebook or scheduled remote jobs without complex data engineering. 
-- * Build Machine Learning (ML) models with Spark ML algorithms and Azure Machine Learning (AML) integration in Microsoft Fabric or Azure Synapse Analytics. -- * Write back the results after model inference into Azure Cosmos DB for operational near-real-time scoring. --* **Operational reporting:** Supply chain teams need flexible and custom reports over real-time, accurate operational data. These reports are required to obtain a snapshot view of supply chain effectiveness, profitability, and productivity. It allows data analysts and other key stakeholders to constantly reevaluate the business and identify areas to tweak to reduce operational costs. -- Mirroring and Synapse Link for Azure Cosmos DB enable rich business intelligence (BI)/reporting scenarios: -- * Query operational data from Azure Cosmos DB by using native integration with full expressiveness of T-SQL language. -- * Model and publish auto refreshing BI dashboards over Azure Cosmos DB through Power BI integrated in Microsoft Fabric or Azure Synapse Analytics. --The following is some guidance for data integration for batch & streaming data into Azure Cosmos DB: --* **Batch data integration & orchestration:** With supply chains getting more complex, supply chain data platforms need to integrate with variety of data sources and formats. Microsoft Fabric and Azure Synapse come built-in with the same data integration engine and experiences as Azure Data Factory. This integration allows data engineers to create rich data pipelines without a separate orchestration engine: -- * Move data from 85+ supported data sources to [Azure Cosmos DB with Azure Data Factory](../data-factory/connector-azure-cosmos-db.md). -- * Write code-free ETL pipelines to Azure Cosmos DB including [relational-to-hierarchical and hierarchical-to-hierarchical mappings with mapping data flows](../data-factory/how-to-sqldb-to-cosmosdb.md). --* **Streaming data integration & processing:** With the growth of Industrial IoT (sensors tracking assets from 'floor-to-store', connected logistics fleets, etc.), there is an explosion of real-time data being generated in a streaming fashion that needs to be integrated with traditional slow moving data for generating insights. Azure Stream Analytics is a recommended service for streaming ETL and processing on Azure with a [wide range of scenarios](../stream-analytics/streaming-technologies.md). Azure Stream Analytics supports [Azure Cosmos DB as a native data sink](../stream-analytics/stream-analytics-documentdb-output.md). --## Real-time personalization --Retailers today must build secure and scalable e-commerce solutions that meet the demands of both customers and business. These e-commerce solutions need to engage customers through customized products and offers, process transactions quickly and securely, and focus on fulfillment and customer service. Azure Cosmos DB along with the latest Synapse Link for Azure Cosmos DB allows retailers to generate personalized recommendations for customers in real time. They use low-latency and tunable consistency settings for immediate insights as shown in the following architecture: ---* **Prepare & train predictive pipeline:** You can generate insights over the operational data across your business units or customer segments using Fabric or Synapse Spark and machine learning models. This translates to personalized delivery to target customer segments, predictive end-user experiences, and targeted marketing to fit your end-user requirements. 
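For example, a data scientist might assemble per-customer features for a personalization model directly from the analytical store, without consuming transactional request units. The following PySpark sketch is illustrative only: the linked service, the `Orders` container, and column names such as `customerId`, `category`, and `price` are assumptions, not part of this article.

```python
from pyspark.sql import functions as F

# Load operational order data from the analytical store (zero ETL, no RU impact).
# Linked service, container, and column names are illustrative placeholders.
orders = spark.read.format("cosmos.olap") \
    .option("spark.synapse.linkedService", "<your-Cosmos-DB-linked-service-name>") \
    .option("spark.cosmos.container", "Orders") \
    .load()

# Build simple per-customer features to feed a recommendation or propensity model.
features = orders.groupBy("customerId", "category") \
    .agg(F.count("*").alias("purchaseCount"),
         F.sum("price").alias("totalSpend"))

features.show(10)
```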
-## IOT predictive maintenance --Industrial IOT innovations have drastically reduced downtimes of machinery and increased overall efficiency across all fields of industry. One such innovation is predictive maintenance analytics for machinery at the edge of the cloud. --The following architecture uses cloud-native HTAP capabilities for IoT predictive maintenance: ---* **Prepare & train predictive pipeline:** The historical operational data from IoT device sensors could be used to train predictive models such as anomaly detectors. These anomaly detectors are then deployed back to the edge for real-time monitoring. Such a virtuous loop allows for continuous retraining of the predictive models. A hedged Spark sketch of this preparation step follows the related content links below. --* **Operational reporting:** With the growth of digital twin initiatives, companies are collecting vast amounts of operational data from a large number of sensors to build a digital copy of each machine. This data powers BI needs to understand trends over historical data in addition to recent hot data. --## Related content ---* [Mirroring Azure Cosmos DB overview](/fabric/database/mirrored-database/azure-cosmos-db?context=/azure/cosmos-db/context/context) --* [Getting started with mirroring](/fabric/database/mirrored-database/azure-cosmos-db-tutorial?context=/azure/cosmos-db/context/context) - -* [Azure Synapse Link for Azure Cosmos DB](synapse-link.md) --* [Working with Azure Synapse Link for Azure Cosmos DB](configure-synapse-link.md) - |
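As referenced in the "Prepare & train predictive pipeline" step above, the following hedged PySpark sketch derives per-device statistics from historical telemetry in the analytical store and flags outlier readings as candidate anomalies, a crude stand-in for a trained anomaly detector. The linked service, the `SensorReadings` container, and the `deviceId` and `temperature` columns are assumptions for illustration.

```python
from pyspark.sql import functions as F

# Historical IoT telemetry from the analytical store; names are placeholders.
telemetry = spark.read.format("cosmos.olap") \
    .option("spark.synapse.linkedService", "<your-Cosmos-DB-linked-service-name>") \
    .option("spark.cosmos.container", "SensorReadings") \
    .load()

# Per-device baseline statistics over the full history.
baseline = telemetry.groupBy("deviceId") \
    .agg(F.avg("temperature").alias("meanTemp"),
         F.stddev("temperature").alias("stdTemp"))

# Flag readings more than three standard deviations from the device mean
# as candidate anomalies for a proper model to learn from.
flagged = telemetry.join(baseline, "deviceId") \
    .withColumn("isAnomaly",
                F.abs(F.col("temperature") - F.col("meanTemp")) > 3 * F.col("stdTemp"))

flagged.filter("isAnomaly").show(10)
```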
cosmos-db | Attachments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/attachments.md | - Title: Azure Cosmos DB Attachments -description: This article presents an overview of Azure Cosmos DB Attachments. ----- Previously updated : 08/07/2020----# Azure Cosmos DB Attachments --Azure Cosmos DB attachments are special items that contain references to an associated metadata with an external blob or media file. --Azure Cosmos DB supports two types of attachments: --* **Unmanaged Attachments** are a wrapper around a URI reference to a blob that is stored in an external service (for example, Azure Storage, OneDrive, etc.). This approach is similar to storing a URI property in a standard Azure Cosmos DB item. -* **Managed Attachments** are blobs managed and stored internally by Azure Cosmos DB and exposed via a system-generated mediaLink. ---> [!NOTE] -> Attachments are a legacy feature. Their support is scoped to offer continued functionality if you are already using this feature. -> -> Instead of using attachments, we recommend you to use Azure Blob Storage as a purpose-built blob storage service to store blob data. You can continue to store metadata related to blobs, along with reference URI links, in Azure Cosmos DB as item properties. Storing this data in Azure Cosmos DB provides the ability to query metadata and links to blobs stored in Azure Blob Storage. -> -> Microsoft is committed to provide a minimum 36-month notice prior to fully deprecating attachments ΓÇô which will be announced at a further date. --## Known limitations --Azure Cosmos DBΓÇÖs managed attachments are distinct from its support for standard items ΓÇô for which it offers unlimited scalability, global distribution, and integration with other Azure services. --- Attachments aren't supported in all versions of the Azure Cosmos DB SDKs.-- Managed attachments are limited to 2 GB of storage per database account.-- Managed attachments aren't compatible with Azure Cosmos DBΓÇÖs global distribution, and they aren't replicated across regions.--> [!NOTE] -> Azure Cosmos DB for MongoDB version 3.2 utilizes managed attachments for GridFS and these are subject to the same limitations. -> -> We recommend developers using the MongoDB GridFS feature set to upgrade to Azure Cosmos DB for MongoDB version 3.6 or higher, which is decoupled from attachments and provides a better experience. Alternatively, developers using the MongoDB GridFS feature set should also consider using Azure Blob Storage - which is purpose-built for storing blob content and offers expanded functionality at lower cost compared to GridFS. --## Migrating Attachments to Azure Blob Storage --We recommend migrating Azure Cosmos DB attachments to Azure Blob Storage by following these steps: --1. Copy attachments data from your source Azure Cosmos DB container to your target Azure Blob Storage container. -2. Validate the uploaded blob data in the target Azure Blob Storage container. -3. If applicable, add URI references to the blobs contained in Azure Blob Storage as string properties within your Azure Cosmos DB dataset. -4. Refactor your application code to read and write blobs from the new Azure Blob Storage container. --The following code sample shows how to copy attachments from Azure Cosmos DB to Azure Blob storage as part of a migration flow by using Azure Cosmos DB's .NET SDK v2 and Azure Blob Storage .NET SDK v12. 
Make sure to replace the `<placeholder values>` for the source Azure Cosmos DB account and target Azure Blob storage container. --```csharp --using System; -using System.IO; -using System.Threading.Tasks; -using Microsoft.Azure.Documents; -using Microsoft.Azure.Documents.Client; -using Azure.Storage.Blobs; -using Azure.Storage.Blobs.Models; --namespace attachments -{ - class Program - { - private static string cosmosAccount = "<Your_Azure_Cosmos_account_URI>"; - private static string cosmosKey = "<Your_Azure_Cosmos_account_PRIMARY_KEY>"; - private static string cosmosDatabaseName = "<Your_Azure_Cosmos_database>"; - private static string cosmosCollectionName = "<Your_Azure_Cosmos_collection>"; - private static string storageConnectionString = "<Your_Azure_Storage_connection_string>"; - private static string storageContainerName = "<Your_Azure_Storage_container_name>"; - private static DocumentClient cosmosClient = new DocumentClient(new Uri(cosmosAccount), cosmosKey); - private static BlobServiceClient storageClient = new BlobServiceClient(storageConnectionString); - private static BlobContainerClient storageContainerClient = storageClient.GetBlobContainerClient(storageContainerName); -- static void Main(string[] args) - { - CopyAttachmentsToBlobsAsync().Wait(); - } -- private async static Task CopyAttachmentsToBlobsAsync() - { - Console.WriteLine("Copying Azure Cosmos DB Attachments to Azure Blob Storage ..."); -- int totalCount = 0; - string docContinuation = null; -- // Iterate through each item (document in v2) in the Azure Cosmos DB container (collection in v2) to look for attachments. - do - { - FeedResponse<dynamic> response = await cosmosClient.ReadDocumentFeedAsync( - UriFactory.CreateDocumentCollectionUri(cosmosDatabaseName, cosmosCollectionName), - new FeedOptions - { - MaxItemCount = -1, - RequestContinuation = docContinuation - }); - docContinuation = response.ResponseContinuation; -- foreach (Document document in response) - { - string attachmentContinuation = null; - PartitionKey docPartitionKey = new PartitionKey(document.Id); -- // Iterate through each attachment within the item (if any). - do - { - FeedResponse<Attachment> attachments = await cosmosClient.ReadAttachmentFeedAsync( - document.SelfLink, - new FeedOptions - { - PartitionKey = docPartitionKey, - RequestContinuation = attachmentContinuation - } - ); - attachmentContinuation = attachments.ResponseContinuation; -- foreach (var attachment in attachments) - { - // Download the attachment in to local memory. - MediaResponse content = await cosmosClient.ReadMediaAsync(attachment.MediaLink); -- byte[] buffer = new byte[content.ContentLength]; - await content.Media.ReadAsync(buffer, 0, buffer.Length); -- // Upload the locally buffered attachment to blob storage - string blobId = String.Concat(document.Id, "-", attachment.Id); -- Azure.Response<BlobContentInfo> uploadedBob = await storageContainerClient.GetBlobClient(blobId).UploadAsync( - new MemoryStream(buffer, writable: false), - true - ); -- Console.WriteLine("Copied attachment ... Item Id: {0} , Attachment Id: {1}, Blob Id: {2}", document.Id, attachment.Id, blobId); - totalCount++; -- // Clean up attachment from Azure Cosmos DB. - // Warning: please verify you've succesfully migrated attachments to blog storage prior to cleaning up Azure Cosmos DB. - // await cosmosClient.DeleteAttachmentAsync( - // attachment.SelfLink, - // new RequestOptions { PartitionKey = docPartitionKey } - // ); -- // Console.WriteLine("Cleaned up attachment ... 
Document Id: {0} , Attachment Id: {1}", document.Id, attachment.Id); - } -- } while (!string.IsNullOrEmpty(attachmentContinuation)); - } - } - while (!string.IsNullOrEmpty(docContinuation)); -- Console.WriteLine("Finished copying {0} attachments to blob storage", totalCount); - } - } -} --``` --## Next steps --- Get started with [Azure Blob storage](../storage/blobs/storage-quickstart-blobs-dotnet.md)-- Get references for using attachments via [Azure Cosmos DB's .NET SDK v2](/dotnet/api/microsoft.azure.documents.attachment)-- Get references for using attachments via [Azure Cosmos DB's Java SDK v2](/java/api/com.microsoft.azure.documentdb.attachment)-- Get references for using attachments via [Azure Cosmos DB's REST API](/rest/api/cosmos-db/attachments) |
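To complement the .NET migration sample above, the following hedged Python sketch shows the recommended post-migration pattern (step 3 of the migration flow): upload the blob to Azure Blob Storage and keep only a metadata item with a URI reference in Azure Cosmos DB. It assumes the `azure-storage-blob` and `azure-cosmos` packages; all account, container, and file names are placeholders.

```python
# Post-migration pattern: blob content lives in Azure Blob Storage, while
# Azure Cosmos DB keeps a queryable metadata item with a URI reference.
# Assumes the azure-storage-blob and azure-cosmos packages; names are placeholders.
from azure.storage.blob import BlobServiceClient
from azure.cosmos import CosmosClient

blob_service = BlobServiceClient.from_connection_string("<Your_Azure_Storage_connection_string>")
blob_client = blob_service.get_blob_client(container="<Your_Azure_Storage_container_name>",
                                           blob="invoice-1234.pdf")

with open("invoice-1234.pdf", "rb") as data:
    blob_client.upload_blob(data, overwrite=True)

cosmos = CosmosClient("<Your_Azure_Cosmos_account_URI>",
                      credential="<Your_Azure_Cosmos_account_PRIMARY_KEY>")
container = cosmos.get_database_client("<Your_Azure_Cosmos_database>") \
                  .get_container_client("<Your_Azure_Cosmos_collection>")

# Store only the metadata plus the blob URI; the blob itself stays in Blob Storage.
container.upsert_item({
    "id": "invoice-1234",
    "fileName": "invoice-1234.pdf",
    "blobUri": blob_client.url
})
```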
cosmos-db | Audit Control Plane Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/audit-control-plane-logs.md | - Title: How to audit Azure Cosmos DB control plane operations -description: Learn how to audit the control plane operations such as add a region, update throughput, region failover, add a VNet etc. in Azure Cosmos DB ---- Previously updated : 08/13/2021---# How to audit Azure Cosmos DB control plane operations --Control Plane in Azure Cosmos DB is a RESTful service that enables you to perform a diverse set of operations on the Azure Cosmos DB account. It exposes a public resource model (for example: database, account) and various operations to the end users to perform actions on the resource model. The control plane operations include changes to the Azure Cosmos DB account or container. For example, operations such as create an Azure Cosmos DB account, add a region, update throughput, region failover, add a VNet etc. are some of the control plane operations. This article explains how to audit the control plane operations in Azure Cosmos DB. You can run the control plane operations on Azure Cosmos DB accounts by using Azure CLI, PowerShell or Azure portal, whereas for containers, use Azure CLI or PowerShell. --The following are some example scenarios where auditing control plane operations is helpful: --* You want to get an alert when the firewall rules for your Azure Cosmos DB account are modified. The alert is required to find unauthorized modifications to rules that govern the network security of your Azure Cosmos DB account and take quick action. --* You want to get an alert if a new region is added or removed from your Azure Cosmos DB account. Adding or removing regions has implications on billing and data sovereignty requirements. This alert will help you detect an accidental addition or removal of region on your account. --* You want to get more details from the diagnostic logs on what has changed. For example, a VNet was changed. --## Disable key based metadata write access --Before you audit the control plane operations in Azure Cosmos DB, disable the key-based metadata write access on your account. When key based metadata write access is disabled, clients connecting to the Azure Cosmos DB account through account keys are prevented from accessing the account. You can disable write access by setting the `disableKeyBasedMetadataWriteAccess` property to true. After you set this property, changes to any resource can happen from a user with the proper Azure role and credentials. To learn more on how to set this property, see the [Preventing changes from SDKs](role-based-access-control.md#prevent-sdk-changes) article. --After the `disableKeyBasedMetadataWriteAccess` is turned on, if the SDK based clients run create or update operations, an error *"Operation 'POST' on resource 'ContainerNameorDatabaseName' is not allowed through Azure Cosmos DB endpoint* is returned. You have to turn on access to such operations for your account, or perform the create/update operations through Azure Resource Manager, Azure CLI or Azure PowerShell. To switch back, set the disableKeyBasedMetadataWriteAccess to **false** by using Azure CLI as described in the [Preventing changes from Azure Cosmos DB SDK](role-based-access-control.md#prevent-sdk-changes) article. Make sure to change the value of `disableKeyBasedMetadataWriteAccess` to false instead of true. 
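The linked article shows the Azure CLI command for this property. If you manage accounts from Python instead, the management SDK exposes the same setting; the following is a hedged sketch assuming the `azure-mgmt-cosmosdb` and `azure-identity` packages (method names vary between SDK versions, and the Azure CLI approach above remains the documented path).

```python
# Hedged sketch: set disableKeyBasedMetadataWriteAccess with the Python
# management SDK (azure-mgmt-cosmosdb). Method names can differ between
# SDK versions; all identifiers below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.cosmosdb import CosmosDBManagementClient
from azure.mgmt.cosmosdb.models import DatabaseAccountUpdateParameters

client = CosmosDBManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.database_accounts.begin_update(
    "<resource-group>",
    "<cosmos-account-name>",
    DatabaseAccountUpdateParameters(
        disable_key_based_metadata_write_access=True  # set False to switch back
    ),
)
poller.result()  # wait for the account update to complete
```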
--Consider the following points when turning off the metadata write access: --* Evaluate and ensure that your applications do not make metadata calls that change the above resources (for example, create collection, update throughput, …) by using the SDK or account keys. --* When `disableKeyBasedMetadataWriteAccess` is set to true, the metadata operations issued by the SDK are blocked. Alternatively, you can use the Azure portal, Azure CLI, Azure PowerShell, or Azure Resource Manager template deployments to perform these operations. --## Enable diagnostic logs for control plane operations --You can enable diagnostic logs for control plane operations by using the Azure portal. After enabling, the diagnostic logs record each operation as a pair of start and complete events with relevant details. For example, the *RegionFailoverStart* and *RegionFailoverComplete* events together capture a region failover. --Use the following steps to enable logging on control plane operations: --1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your Azure Cosmos DB account. --1. Open the **Diagnostic settings** pane and provide a **Name** for the logs to create. --1. Select **ControlPlaneRequests** for log type and select the **Send to Log Analytics** option. --1. Optionally, send the diagnostic logs to Azure Storage, Azure Event Hubs, Azure Monitor, or a third party. --You can also store the logs in a storage account or stream to an event hub. This article shows how to send logs to Log Analytics and then query them. After you enable logging, it takes a few minutes for the diagnostic logs to take effect. All the control plane operations performed after that point can be tracked. The following screenshot shows how to enable control plane logs: ---## View the control plane operations --After you turn on logging, use the following steps to track down operations for a specific account: --1. Sign in to the [Azure portal](https://portal.azure.com). --1. Open the **Monitor** tab from the left-hand navigation and then select the **Logs** pane. It opens a UI where you can easily run queries with that specific account in scope. Run the following query to view control plane logs: -- ```kusto - AzureDiagnostics - | where ResourceProvider=="MICROSOFT.DOCUMENTDB" and Category=="ControlPlaneRequests" - | where TimeGenerated >= ago(1h) - ``` -- The following screenshots capture logs when a consistency level is changed for an Azure Cosmos DB account. The `activityId_g` value from the results is different from the activity ID of an operation: -- :::image type="content" source="./media/audit-control-plane-logs/add-ip-filter-logs.png" alt-text="Control plane logs when a VNet is added"::: -- The following screenshots capture logs when the keyspace or a table of a Cassandra account is created and when the throughput is updated. The control plane logs for create and update operations on the database and the container are logged separately, as shown in the following screenshot: -- :::image type="content" source="./media/audit-control-plane-logs/throughput-update-logs.png" alt-text="Control plane logs when throughput is updated"::: --## Identify the identity associated with a specific operation --If you want to debug further, you can identify a specific operation in the **Activity log** by using the `activityId_g` or by the timestamp of the operation. The timestamp is used for some Resource Manager clients that don't explicitly pass the activity ID. 
The Activity log gives details about the identity with which the operation was initiated. The following screenshot shows how to use the `activityId_g` to find the operations associated with it in the Activity log: ---## Control plane operations for Azure Cosmos DB account --The following are the control plane operations available at the account level. Most of the operations are tracked at account level. These operations are available as metrics in Azure monitor: --* Region added -* Region removed -* Account deleted -* Region failed over -* Account created -* Virtual network deleted -* Account network settings updated -* Account replication settings updated -* Account keys updated -* Account backup settings updated -* Account diagnostic settings updated --## Control plane operations for database or containers --The following are the control plane operations available at the database and container level. These operations are available as metrics in Azure monitor: --* SQL Database Created -* SQL Database Updated -* SQL Database Throughput Updated -* SQL Database Deleted -* SQL Container Created -* SQL Container Updated -* SQL Container Throughput Updated -* SQL Container Deleted -* Cassandra Keyspace Created -* Cassandra Keyspace Updated -* Cassandra Keyspace Throughput Updated -* Cassandra Keyspace Deleted -* Cassandra Table Created -* Cassandra Table Updated -* Cassandra Table Throughput Updated -* Cassandra Table Deleted -* Gremlin Database Created -* Gremlin Database Updated -* Gremlin Database Throughput Updated -* Gremlin Database Deleted -* Gremlin Graph Created -* Gremlin Graph Updated -* Gremlin Graph Throughput Updated -* Gremlin Graph Deleted -* Mongo Database Created -* Mongo Database Updated -* Mongo Database Throughput Updated -* Mongo Database Deleted -* Mongo Collection Created -* Mongo Collection Updated -* Mongo Collection Throughput Updated -* Mongo Collection Deleted -* AzureTable Table Created -* AzureTable Table Updated -* AzureTable Table Throughput Updated -* AzureTable Table Deleted --## Diagnostic log operations --The following are the operation names in diagnostic logs for different operations: --* RegionAddStart, RegionAddComplete -* RegionRemoveStart, RegionRemoveComplete -* AccountDeleteStart, AccountDeleteComplete -* RegionFailoverStart, RegionFailoverComplete -* AccountCreateStart, AccountCreateComplete -* AccountUpdateStart, AccountUpdateComplete -* VirtualNetworkDeleteStart, VirtualNetworkDeleteComplete -* DiagnosticLogUpdateStart, DiagnosticLogUpdateComplete --For API-specific operations, the operation is named with the following format: --* ApiKind + ApiKindResourceType + OperationType -* ApiKind + ApiKindResourceType + "Throughput" + operationType --**Example** --* CassandraKeyspacesCreate -* CassandraKeyspacesUpdate -* CassandraKeyspacesThroughputUpdate -* SqlContainersUpdate --The *ResourceDetails* property contains the entire resource body as a request payload and it contains all the properties requested to update --## Diagnostic log queries for control plane operations --The following are some examples to get diagnostic logs for control plane operations: --```kusto -AzureDiagnostics  -| where Category startswith "ControlPlane" -| where OperationName contains "Update" -| project httpstatusCode_s, statusCode_s, OperationName, resourceDetails_s, activityId_g -``` --```kusto -AzureDiagnostics  -| where Category =="ControlPlaneRequests" -| where TimeGenerated >= todatetime('2020-05-14T17:37:09.563Z') -| project TimeGenerated, OperationName, apiKind_s, 
apiKindResourceType_s, operationType_s, resourceDetails_s -``` --```kusto -AzureDiagnostics -| where Category == "ControlPlaneRequests" -| where OperationName startswith "SqlContainersUpdate" -``` --```kusto -AzureDiagnostics -| where Category == "ControlPlaneRequests" -| where OperationName startswith "SqlContainersThroughputUpdate" -``` --Query to get the activityId and the caller who initiated the container delete operation: --```kusto -(AzureDiagnostics -| where Category == "ControlPlaneRequests" -| where OperationName == "SqlContainersDelete" -| where TimeGenerated >= todatetime('9/3/2020, 5:30:29.300 PM') -| summarize by activityId_g ) -| join ( -AzureActivity -| parse HTTPRequest with * "clientRequestId\": \"" activityId_g "\"" * -| summarize by Caller, HTTPRequest, activityId_g) -on activityId_g -| project Caller, activityId_g -``` --Query to get index or ttl updates. You can then compare the output of this query with an earlier update to see the change in index or ttl. --```Kusto -AzureDiagnostics -| where Category =="ControlPlaneRequests" -| where OperationName == "SqlContainersUpdate" -| project resourceDetails_s -``` --**output:** --```json -{id:skewed,indexingPolicy:{automatic:true,indexingMode:consistent,includedPaths:[{path:/*,indexes:[]}],excludedPaths:[{path:/_etag/?}],compositeIndexes:[],spatialIndexes:[]},partitionKey:{paths:[/pk],kind:Hash},defaultTtl:1000000,uniqueKeyPolicy:{uniqueKeys:[]},conflictResolutionPolicy:{mode:LastWriterWins,conflictResolutionPath:/_ts,conflictResolutionProcedure:} -``` --## Next steps --* [Prevent Azure Cosmos DB resources from being deleted or changed](resource-locks.md) -* [Explore Azure Monitor for Azure Cosmos DB](insights-overview.md?toc=/azure/cosmos-db/toc.json&bc=/azure/cosmos-db/breadcrumb/toc.json) -* [Monitor and debug with metrics in Azure Cosmos DB](use-metrics.md) |
cosmos-db | Audit Restore Continuous | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/audit-restore-continuous.md | - Title: Auditing the point in time restore action for continuous backup mode in Azure Cosmos DB -description: This article provides details available to audit Azure Cosmos DB's point in time restore feature in continuous backup mode. --- Previously updated : 04/18/2022-----# Audit the point-in-time restore action for continuous backup mode in Azure Cosmos DB --Azure Cosmos DB provides you with a list of all point-in-time restores for continuous mode that were performed on an Azure Cosmos DB account using [activity logs](monitor.md#activity-log). Activity logs can be viewed for any Azure Cosmos DB account from the **Activity Logs** page in the Azure portal. The activity log shows all the operations that were triggered on the specific account. When a point-in-time restore is triggered, it shows up as `Restore Database Account` operation on the source account as well as the target account. The activity log for the source account can be used to audit restore events, and the activity logs on the target account can be used to get the updates about the progress of the restore. --## Audit the restores that were triggered on a live database account --When a restore is triggered on a source account, a log is emitted with the status *Started*. And when the restore succeeds or fails, a new log is emitted with the status *Succeeded* or *Failed* respectively. --To get the list of just the restore operations that were triggered on a specific account, you can open the Activity Log of the source account, and search for **Restore database account** in the search bar with the required **Timespan** filter. The `UserPrincipalName` of the user that triggered the restore can be found from the `Event initiated by` column. ---The parameters of the restore request can be found by clicking on the event and selecting the JSON tab: ---## Audit the restores that were triggered on a deleted database account --For the accounts that were already deleted, there would not be any database account page. Instead, the Activity Log in the subscription page can be used to get the restores that were triggered on a deleted account. Once the Activity Log page is opened, a new filter can be added to narrow down the results specific to the resource group the account existed in, or even using the database account name in the Resource filter. The Resource for the activity log is the database account on which the restore was triggered. ---The activity logs can also be accessed using Azure CLI or Azure PowerShell. For more information on activity logs, review [Azure Activity log - Azure Monitor](../azure-monitor/essentials/activity-log.md). --## Track the progress of the restore operation --Azure Cosmos DB allows you to track the progress of the restore using the activity logs of the restored database account. Once the restore is triggered, you will see a notification with the title **Restore Account**. ---The account status would be *Creating*, but it would have an Activity Log page. A new log event will appear after the restore of each collection. Note that there can be a delay of 5-10 minutes to see the log event after the actual restore of the collection is complete. -- ## Next steps - - * Learn more about [continuous backup](continuous-backup-restore-introduction.md) mode. 
- * Provision an account with continuous backup by using the [Azure portal](provision-account-continuous-backup.md#provision-portal), [PowerShell](provision-account-continuous-backup.md#provision-powershell), the [Azure CLI](provision-account-continuous-backup.md#provision-cli), or [Azure Resource Manager](provision-account-continuous-backup.md#provision-arm-template). - * [Manage permissions](continuous-backup-restore-permissions.md) required to restore data with continuous backup mode. - * Learn about the [resource model of continuous backup mode](continuous-backup-restore-resource-model.md). - * Explore the [Frequently asked questions for continuous mode](continuous-backup-restore-frequently-asked-questions.yml). |
cosmos-db | Automated Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/automated-recommendations.md | - Title: Automated performance, cost, security recommendations for Azure Cosmos DB -description: Learn how to view customized performance, cost, security, and other recommendations for Azure Cosmos DB based on your workload patterns. ---- Previously updated : 08/26/2021----# Automated recommendations for Azure Cosmos DB --All cloud services, including Azure Cosmos DB, get frequent updates with new features, capabilities, and improvements. It's important for your application to keep up with the latest performance and security updates. The Azure portal offers customized recommendations that enable you to maximize the performance of your application. Azure Cosmos DB's advisory engine continuously analyzes the usage history of your Azure Cosmos DB resources and provides recommendations based on your workload patterns. These recommendations correspond to areas like partitioning, indexing, network, and security. These customized recommendations help you to improve the performance of your application. --## View recommendations --You can view recommendations for Azure Cosmos DB in the following ways: --- One way to view the recommendations is within the notifications tab. If there are new recommendations, you will see a message bar. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your Azure Cosmos DB account. Within your Azure Cosmos DB account, open the **Notifications** pane and then select the **Recommendations** tab. You can select the message and view recommendations. -- :::image type="content" source="./media/automated-recommendations/cosmos-db-pane-recommendations.png" alt-text="View recommendations from Azure Cosmos DB pane"::: --- You can also find the recommendations through [Azure Advisor](../advisor/advisor-overview.md), categorized into buckets such as cost, security, reliability, performance, and operational excellence. You can select specific subscriptions and filter by the resource type, which is **Azure Cosmos DB accounts**. When you select a specific recommendation, it displays the actions you can take to benefit your workloads.-- :::image type="content" source="./media/automated-recommendations/advisor-pane-recommendations.png" alt-text="View recommendations from Azure Advisor pane"::: --Not all recommendations shown in the Azure Cosmos DB pane are available in Azure Advisor, and vice versa. That's because, based on the type of recommendation, they fit in either the Azure Advisor pane, the Azure Cosmos DB pane, or both. --Currently, Azure Cosmos DB supports recommendations in the following areas. Each of these recommendations includes a link to the relevant section of the documentation, so it's easy for you to take the next steps. --## SDK usage recommendations --In this category, the advisor detects the usage of an old version of SDKs and recommends that you upgrade to a newer version to leverage the latest bug fixes and performance improvements. Currently, the following SDK-specific recommendations are available: --|Name |Description | -||| -| Old Spark connector | Detects the usage of old versions of the Spark connector and recommends upgrading. | -| Old .NET SDK | Detects the usage of old versions of the .NET SDK and recommends upgrading. | -| Old Java SDK | Detects the usage of old versions of the Java connector and recommends upgrading. 
| --## Indexing recommendations --In this category, the advisor detects the indexing mode, indexing policy, and indexed paths, and recommends changes if the current configuration impacts query performance. Currently, the following indexing-specific recommendations are available: --|Name |Description | -||| -| Lazy indexing | Detects usage of lazy indexing mode and recommends using consistent indexing mode instead. The purpose of Azure Cosmos DB's lazy indexing mode is limited and can impact the freshness of query results in some situations, so consistent indexing mode is recommended. | -| Default indexing policy with many indexed paths | Detects containers running on default indexing with many indexed paths and recommends customizing the indexing policy.| -| ORDER BY queries with high RU/s charge| Detects containers issuing ORDER BY queries with high RU/s charge and recommends exploring composite indexes for one container per account that issues the highest number of these queries in a 24 hour period.| -| MongoDB 3.6 accounts with no index and high RU/s consumption| Detects containers on Azure Cosmos DB's API for MongoDB version 3.6 that issue queries with a high RU/s charge and recommends adding indexes.| --## Cost optimization recommendations --In this category, the advisor detects the RU/s usage and determines that you can optimize the price by making some changes to your resources or by leveraging a different pricing model. Currently, the following cost optimization-specific recommendations are available: --|Name |Description | -||| -| Reserved capacity | Detects your RU/s utilization and recommends reserved instances to users who can benefit from it. | -| Inactive containers | Detects the containers that haven't been used for more than 30 days and recommends reducing the throughput for such containers or deleting them.| -| New subscriptions with high throughput | Detects new subscriptions with accounts spending unusually high RU/s per day and provides them a notification. This notification is specifically to bring awareness to new customers that Azure Cosmos DB operates on a provisioned throughput-based model and not a consumption-based model. | -| Enable autoscale | Detects if your databases and containers currently using manual throughput would see cost savings by enabling autoscale. | -| Use manual throughput instead of autoscale | Detects if your databases and containers currently using autoscale throughput would see cost savings by switching to manual throughput. | --## Migration recommendations --In this category, the advisor detects that you are using legacy features and recommends migrating so that you can leverage Azure Cosmos DB's massive scalability and other benefits. Currently, the following migration-specific recommendations are available: --|Name |Description | -||| -| Non-partitioned containers | Detects fixed-size containers that are approaching their max storage limit and recommends migrating them to partitioned containers.| --## Query usage recommendations --In this category, the advisor detects the query execution and identifies that the query performance can be tuned with some changes. Currently, the following query usage recommendations are available: --|Name |Description | -||| -| Queries with fixed page size | Detects queries issued with a fixed page size and recommends using -1 (no limit on the page size) instead of defining a specific value. This option reduces the number of network round trips required to retrieve all results. (See the sketch after the next steps below.) 
| --## Next steps --* [Tuning query performance in Azure Cosmos DB](nosql/query-metrics.md) -* [Troubleshoot query issues](troubleshoot-query-performance.md) when using Azure Cosmos DB -* Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. - * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md) - * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md) |
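For the *Queries with fixed page size* recommendation described earlier in this article, the following minimal C# sketch (using the .NET SDK v3, with a hypothetical container and query) shows what passing -1 for `MaxItemCount` looks like; it's an illustrative sketch rather than the advisor's own guidance:

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class PageSizeExample
{
    // 'container' is assumed to be an existing Container for your data.
    public static async Task<int> CountOpenItemsAsync(Container container)
    {
        QueryDefinition query = new QueryDefinition(
                "SELECT * FROM c WHERE c.status = @status")
            .WithParameter("@status", "open"); // hypothetical filter

        // MaxItemCount = -1 lets the service choose the page size, which reduces
        // the number of network round trips compared to a small fixed value.
        QueryRequestOptions options = new QueryRequestOptions { MaxItemCount = -1 };

        int count = 0;
        using FeedIterator<dynamic> iterator =
            container.GetItemQueryIterator<dynamic>(query, requestOptions: options);

        while (iterator.HasMoreResults)
        {
            FeedResponse<dynamic> page = await iterator.ReadNextAsync();
            count += page.Count;
        }

        return count;
    }
}
```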
cosmos-db | Autoscale Per Partition Region | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/autoscale-per-partition-region.md | - Title: Per-region and per-partition autoscale (preview)- -description: Configure autoscale in Azure Cosmos DB for uneven workload patterns by customizing autoscale for specific regions or partitions. ------ - ignite-2023 - Previously updated : 05/01/2024-# CustomerIntent: As a database administrator, I want to fine-tune autoscale for specific regions or partitions so that I can balance an uneven workload. ---# Per-region and per-partition autoscale (preview) --By default, Azure Cosmos DB autoscale scales workloads based on the most active region and partition. For nonuniform workloads that have different workload patterns across regions and partitions, this scaling can cause unnecessary scale-ups. With this improvement to autoscale, also known as "dynamic scaling," the per region and per partition autoscale feature now allows your workloads' regions and partitions to scale independently based on usage. --> [!IMPORTANT] -> By default, this feature is only available for Azure Cosmos DB accounts created after **November 15, 2023**. For customers who can significantly benefit from dynamic scaling, Azure Cosmos DB is progressively enabling the feature in stages for existing accounts and providing GA support, ahead of broader GA. Customers in this cohort will be notified by email before the enablement. This update won't impact your accounts' performance or availability, and won't cause downtime or data movement. Please contact your Microsoft representative for questions. --This feature is recommended for autoscale workloads that are nonuniform across regions and partitions. This feature allows you to save costs if you often experience hot partitions and/or have multiple regions. When enabled, this feature applies to all autoscale resources in the account. --## Use cases --- Database workloads that have a highly trafficked primary region and a secondary passive region for disaster recovery.- - By enabling autoscale per region and partition, you can now save on costs as the secondary region independently and automatically scales down while idle. The secondary region also automatically scales up as it becomes active and while handling write replication from the primary region. -- Multi-region database workloads.- - These workloads often observe uneven distribution of requests across regions due to natural traffic growth and dips throughout the day. For example, a database might be active during business hours across globally distributed time zones. --## Example --For example, if we have a collection with **1000** RU/s and **2** partitions, each partition can go up to **500** RU/s. For one hour of activity, the utilization would look like this: --| Region | Partition | Throughput | Utilization | Notes | -| | | | | | -| Write | P1 | <= 500 RU/s | 100% | 500 RU/s consisting of 50 RU/s used for write operations and 450 RU/s for read operations. | -| Write | P2 | <= 200 RU/s | 40% | 200 RU/s consisting of all read operations. | -| Read | P1 | <= 150 RU/s | 30% | 150 RU/s consisting of 50 RU/s used for writes replicated from the write region. 100 RU/s are used for read operations in this region. | -| Read | P2 | <= 50 RU/s | 10% | | --Because all partitions are scaled uniformly based on the hottest partition, both the write and read regions are scaled to 1000 RU/s, making the total RU/s as much as **2000 RU/s**. 
--With per-partition or per-region scaling, you can optimize your throughput. The total consumption would be **900 RU/s** as each partition or region's throughput is scaled independently and measured per hour using the same scenario. --## Get started --This feature is available for new Azure Cosmos DB accounts. To enable this feature, follow these steps: --1. Navigate to your Azure Cosmos DB account in the [Azure portal](https://portal.azure.com). -1. Navigate to the **Features** page. -1. Locate and enable the **Per Region and Per Partition Autoscale** feature. -- :::image type="content" source="media/autoscale-per-partition-region/enable-feature.png" lightbox="media/autoscale-per-partition-region/enable-feature.png" alt-text="Screenshot of the 'Per Region and Per Partition Autoscale' feature in the Azure portal."::: -- > [!IMPORTANT] - > The feature is enabled at the account level, so all containers within the account will automatically have this capability applied. The feature is available for both shared throughput databases and containers with dedicated throughput. Provisioned throughput accounts must switch over to autoscale and then enable this feature, if interested. --1. Use [Azure Monitor metrics](monitor-reference.md#supported-metrics-for-microsoftdocumentdbdatabaseaccounts) to analyze how the new autoscaling is applied across partitions and regions. Filter to your desired database account and container, then filter or split by the `PhysicalPartitionID` metric. This metric shows all partitions across their various regions. -- Then, use `NormalizedRUConsumption` to see which partitions and regions scale independently. You can use the `ProvisionedThroughput` metric to see what throughput value is emitted to our billing service. --## Requirements/Limitations --Accounts must be created after 11/15/2023 to enable this feature. |
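To make the arithmetic in the example earlier in this article concrete, here's a small C# sketch that hard-codes the utilization numbers from that table and contrasts the two scaling models. It's a back-of-the-envelope illustration only, not the billing formula:

```csharp
using System;
using System.Linq;

// Per-hour RU/s actually needed by each (region, partition) pair,
// taken from the example table: Write/P1, Write/P2, Read/P1, Read/P2.
double[] observedRus = { 500, 200, 150, 50 };
double maxAutoscaleRus = 500; // each partition can scale up to 500 RU/s (1000 RU/s / 2 partitions)

// Default behavior: every partition in every region scales to the hottest partition.
double uniformBilledRus = observedRus.Length * Math.Min(observedRus.Max(), maxAutoscaleRus);

// Per-region and per-partition autoscale: each (region, partition) scales independently.
double independentBilledRus = observedRus.Sum();

Console.WriteLine($"Uniform scaling:     {uniformBilledRus} RU/s");     // 2000 RU/s
Console.WriteLine($"Independent scaling: {independentBilledRus} RU/s"); // 900 RU/s
```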
cosmos-db | Bulk Executor Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/bulk-executor-overview.md | - Title: Azure Cosmos DB bulk executor library overview -description: Perform bulk operations in Azure Cosmos DB through bulk import and bulk update APIs offered by the bulk executor library. --- Previously updated : 3/30/2023-----# Azure Cosmos DB bulk executor library overview --Azure Cosmos DB is a fast, flexible, and globally distributed database service that elastically scales out to support: --* Large read and write throughput, on the order of millions of operations per second. -* Storing high volumes of transactional and operational data, on the order of hundreds of terabytes or even more, with predictable millisecond latency. --The bulk executor library helps you use this massive throughput and storage. The bulk executor library allows you to perform bulk operations in Azure Cosmos DB through bulk import and bulk update APIs. You can read more about the features of the bulk executor library in the following sections. --> [!NOTE] -> Currently, the bulk executor library supports import and update operations. The library is supported for Azure Cosmos DB for NoSQL and Gremlin accounts only. --> [!IMPORTANT] -> The bulk executor library is not currently supported on [serverless](serverless.md) accounts. On .NET, we recommend that you use the [bulk support](https://devblogs.microsoft.com/cosmosdb/introducing-bulk-support-in-the-net-sdk/) available in the V3 version of the SDK (a short sketch of this pattern appears at the end of this article). --## Key features of the bulk executor library --* Using the bulk executor library significantly reduces the client-side compute resources needed to saturate the throughput allocated to a container. A single-threaded application that writes data using the bulk import API achieves 10 times greater write throughput when compared to a multi-threaded application that writes data in parallel while it saturates the client machine's CPU. --* The bulk executor library abstracts away the tedious tasks of writing application logic to handle rate limiting of requests, request timeouts, and other transient exceptions. It efficiently handles them within the library. --* It provides a simplified mechanism for applications to perform bulk operations to scale out. A single bulk executor instance that runs on an Azure virtual machine can consume greater than 500 K RU/s. You can achieve a higher throughput rate by adding more instances on individual client virtual machines. --* The bulk executor library can bulk import more than a terabyte of data within an hour by using a scale-out architecture. --* It can bulk update existing data in Azure Cosmos DB containers as patches. --## How does the bulk executor operate? --When a bulk operation to import or update documents is triggered with a batch of entities, they're initially shuffled into buckets that correspond to their Azure Cosmos DB partition key range. Within each bucket that corresponds to a partition key range, they're broken down into mini-batches. --Each mini-batch acts as a payload that is committed on the server side. The bulk executor library has built-in optimizations for concurrent execution of the mini-batches both within and across partition key ranges. --The following diagram illustrates how bulk executor batches data into different partition keys: ---The bulk executor library makes sure to maximally utilize the throughput allocated to a collection. 
It uses an [AIMD-style congestion control mechanism](https://tools.ietf.org/html/rfc5681) for each Azure Cosmos DB partition key range to efficiently handle rate limiting and timeouts. --For more information about sample applications that consume the bulk executor library, see [Use the bulk executor .NET library to perform bulk operations in Azure Cosmos DB](nosql/bulk-executor-dotnet.md) and [Perform bulk operations on Azure Cosmos DB data](bulk-executor-java.md). --For reference information, see [.NET bulk executor library](nosql/sdk-dotnet-bulk-executor-v2.md) and [Java bulk executor library](nosql/sdk-java-bulk-executor-v2.md). --## Next steps - -* [Azure Cosmos DB Spark connector](./nosql/quickstart-spark.md) -* [Azure Cosmos DB connector](../data-factory/connector-azure-cosmos-db.md) |
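As noted in the important callout earlier in this article, .NET applications should prefer the bulk support built into the V3 SDK over the bulk executor library. The following is a minimal, hedged C# sketch of that pattern (the database, container, and item type are hypothetical):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public record SalesOrder(string id, string customerId); // hypothetical item type

public static class BulkImport
{
    public static async Task ImportAsync(string endpoint, string key, IReadOnlyList<SalesOrder> orders)
    {
        // AllowBulkExecution turns on the SDK's bulk mode: concurrent point operations
        // are grouped into batches per partition key range behind the scenes.
        CosmosClientOptions options = new CosmosClientOptions { AllowBulkExecution = true };
        using CosmosClient client = new CosmosClient(endpoint, key, options);

        Container container = client.GetContainer("SalesDb", "Orders"); // hypothetical names

        List<Task> tasks = new List<Task>();
        foreach (SalesOrder order in orders)
        {
            tasks.Add(container.CreateItemAsync(order, new PartitionKey(order.customerId)));
        }

        // The SDK dispatches these as bulk batches; await them all.
        await Task.WhenAll(tasks);
    }
}
```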
cosmos-db | Burst Capacity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/burst-capacity.md | - Title: Burst capacity- -description: Use your database or container's idle throughput capacity to handle spikes of traffic with burst capacity in Azure Cosmos DB. ------ Previously updated : 05/23/2023---# Burst capacity in Azure Cosmos DB ---Azure Cosmos DB burst capacity allows you to take advantage of your database or container's idle throughput capacity to handle spikes of traffic. With burst capacity, each physical partition can accumulate up to 5 minutes of idle capacity, which can be consumed at a rate up to 3000 RU/s. Requests that would have otherwise been rate limited can now be served with burst capacity while it's available. --Burst capacity applies only to Azure Cosmos DB accounts using provisioned throughput (manual and autoscale) and doesn't apply to serverless containers. The feature is configured at the Azure Cosmos DB account level and automatically applies to all databases and containers in the account that have physical partitions with less than 3000 RU/s of provisioned throughput. Resources that have greater than or equal to 3000 RU/s per physical partition can't benefit from or use burst capacity. --## How burst capacity works --> [!NOTE] -> The current implementation of burst capacity is subject to change in the future. Usage of burst capacity is subject to system resource availability and is **not guaranteed**. Azure Cosmos DB may also use burst capacity for background maintenance tasks. If your workload requires consistent throughput beyond what you have provisioned, it's recommended to provision your RU/s accordingly without relying on burst capacity. Before enabling burst capacity, it is also recommended to evaluate if your partition layout can be [merged](merge.md) to permanently give more RU/s per physical partition without relying on burst capacity. --Let's take an example of a physical partition that has 100 RU/s of provisioned throughput and is idle for 5 minutes. With burst capacity, it can accumulate a maximum of 100 RU/s * 300 seconds = 30,000 RU of burst capacity. The capacity can be consumed at a maximum rate of 3000 RU/s, so if there's a sudden spike in request volume, the partition can burst up to 3000 RU/s for up to 30,000 RU / 3000 RU/s = 10 seconds. Without burst capacity, any requests that are consumed beyond the provisioned 100 RU/s would have been rate limited (429). --After the 10 seconds is over, the burst capacity has been used up. If the workload continues to exceed the provisioned 100 RU/s, any requests that are consumed beyond the provisioned 100 RU/s would now be rate limited (429). The maximum amount of burst capacity a physical partition can accumulate at any point in time is equal to 300 seconds * the provisioned RU/s of the physical partition. (A small sketch of this arithmetic appears at the end of this article.) --## Getting started --To get started using burst capacity, navigate to the **Features** page in your Azure Cosmos DB account. Select and enable the **Burst Capacity** feature. --Once you've enabled the feature, it takes 15-20 minutes to take effect. ---## Requirements --To enable burst capacity, your Azure Cosmos DB account must meet all the following criteria: --- Your Azure Cosmos DB account is using provisioned throughput (manual or autoscale). 
Burst capacity doesn't apply to serverless accounts.-- Your Azure Cosmos DB account is using API for NoSQL, Cassandra, Gremlin, MongoDB, or Table.--## Next steps --- See the FAQ on [burst capacity.](burst-capacity-faq.yml)-- Learn more about [provisioned throughput.](set-throughput.md)-- Learn more about [request units.](request-units.md) |
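To make the worked example in the *How burst capacity works* section concrete, the following small C# sketch reproduces the article's simplified arithmetic. The numbers are the example's own assumptions, and, as the note above says, actual burst behavior is not guaranteed:

```csharp
using System;

// Numbers from the worked example above; these are assumptions you can change.
double provisionedRus = 100;  // RU/s provisioned on the physical partition
double idleSeconds = 300;     // partition was idle for 5 minutes (capped at 300 s)
double maxBurstRate = 3000;   // burst capacity is consumed at up to 3000 RU/s

// Accumulated burst capacity is capped at 300 seconds of the provisioned RU/s.
double accumulatedRu = provisionedRus * Math.Min(idleSeconds, 300);

// Using the article's simplified math, a sustained spike at the maximum burst
// rate exhausts the accumulated capacity in accumulatedRu / maxBurstRate seconds.
double burstSeconds = accumulatedRu / maxBurstRate;

Console.WriteLine($"Accumulated burst capacity: {accumulatedRu} RU");            // 30,000 RU
Console.WriteLine($"Burst duration at {maxBurstRate} RU/s: {burstSeconds} s");   // 10 s
```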
cosmos-db | Access Data Spring Data App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/access-data-spring-data-app.md | - Title: How to use Spring Data API for Apache Cassandra with Azure Cosmos DB for Apache Cassandra -description: Learn how to use Spring Data API for Apache Cassandra with Azure Cosmos DB for Apache Cassandra. ----- Previously updated : 07/17/2021---# How to use Spring Data API for Apache Cassandra with Azure Cosmos DB for Apache Cassandra --This article demonstrates creating a sample application that uses [Spring Data] to store and retrieve information using the [Azure Cosmos DB for Apache Cassandra](/azure/cosmos-db/cassandra-introduction). --## Prerequisites --The following prerequisites are required in order to complete the steps in this article: --* An Azure subscription; if you don't already have an Azure subscription, you can activate your [MSDN subscriber benefits] or sign up for a [free Azure account]. -* A supported Java Development Kit (JDK). For more information about the JDKs available for use when developing on Azure, see [Java support on Azure and Azure Stack](/azure/developer/java/fundamentals/java-support-on-azure). -* [Apache Maven](http://maven.apache.org/), version 3.0 or later. -* [Curl](https://curl.haxx.se/) or similar HTTP utility to test functionality. -* A [Git](https://git-scm.com/downloads) client. --> [!NOTE] -> The samples mentioned below implement custom extensions for a better experience when using Azure Cosmos DB for Apache Cassandra. They include custom retry and load balancing policies, as well as implementing recommended connection settings. For a more extensive exploration of how the custom policies are used, see Java samples for [version 3](https://github.com/Azure-Samples/azure-cosmos-cassandra-extensions-java-sample) and [version 4](https://github.com/Azure-Samples/azure-cosmos-cassandra-extensions-java-sample-v4). --## Create an Azure Cosmos DB for Apache Cassandra account ---## Configure the sample application --The following procedure configures the test application. --1. Open a command shell and clone either of the following examples: -- For Java [version 3 driver](https://github.com/datastax/java-driver/tree/3.x) and corresponding Spring version: -- ```shell - git clone https://github.com/Azure-Samples/spring-data-cassandra-on-azure-extension-v3.git - ``` - - For Java [version 4 driver](https://github.com/datastax/java-driver/tree/4.x) and corresponding Spring version: -- ```shell - git clone https://github.com/Azure-Samples/spring-data-cassandra-on-azure-extension-v4.git - ``` -- > [!NOTE] - > Although the usage described below is identical for both Java version 3 and version 4 samples above, the way in which they have been implemented in order to include custom retry and load balancing policies is different. We recommend reviewing the code to understand how to implement custom policies if you are making changes to an existing Spring Java application. --1. Locate the *application.properties* file in the *resources* directory of the sample project, or create the file if it does not already exist. --1. 
Open the *application.properties* file in a text editor, and add or configure the following lines in the file, and replace the sample values with the appropriate values from earlier: -- ```yaml - spring.data.cassandra.contact-points=<Account Name>.cassandra.cosmos.azure.com - spring.data.cassandra.port=10350 - spring.data.cassandra.username=<Account Name> - spring.data.cassandra.password=******** - ``` -- Where: -- | Parameter | Description | - ||| - | `spring.data.cassandra.contact-points` | Specifies the **Contact Point** from earlier in this article. | - | `spring.data.cassandra.port` | Specifies the **Port** from earlier in this article. | - | `spring.data.cassandra.username` | Specifies your **Username** from earlier in this article. | - | `spring.data.cassandra.password` | Specifies your **Primary Password** from earlier in this article. | --1. Save and close the *application.properties* file. --## Package and test the sample application --Browse to the directory that contains the .pom file to build and test the application. --1. Build the sample application with Maven; for example: -- ```shell - mvn clean package - ``` --1. Start the sample application; for example: -- ```shell - java -jar target/spring-data-cassandra-on-azure-0.1.0-SNAPSHOT.jar - ``` --1. Create new records using `curl` from a command prompt like the following examples: -- ```shell - curl -s -d "{\"name\":\"dog\",\"species\":\"canine\"}" -H "Content-Type: application/json" -X POST http://localhost:8080/pets -- curl -s -d "{\"name\":\"cat\",\"species\":\"feline\"}" -H "Content-Type: application/json" -X POST http://localhost:8080/pets - ``` -- Your application should return values like the following: -- ```shell - Added Pet{id=60fa8cb0-0423-11e9-9a70-39311962166b, name='dog', species='canine'}. -- Added Pet{id=72c1c9e0-0423-11e9-9a70-39311962166b, name='cat', species='feline'}. - ``` --1. Retrieve all of the existing records using `curl` from a command prompt like the following examples: -- ```shell - curl -s http://localhost:8080/pets - ``` -- Your application should return values like the following: -- ```json - [{"id":"60fa8cb0-0423-11e9-9a70-39311962166b","name":"dog","species":"canine"},{"id":"72c1c9e0-0423-11e9-9a70-39311962166b","name":"cat","species":"feline"}] - ``` --## Clean up resources ---## Next steps --To learn more about Spring and Azure, continue to the Spring on Azure documentation center. --> [!div class="nextstepaction"] -> [Spring on Azure](../../index.yml) --### Additional Resources --For more information about using Azure with Java, see the [Azure for Java Developers] and the [Working with Azure DevOps and Java]. 
--[Azure for Java Developers]: ../index.yml -[free Azure account]: https://azure.microsoft.com/pricing/free-trial/ -[Working with Azure DevOps and Java]: /azure/devops/ -[MSDN subscriber benefits]: https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/ -[Spring Boot]: http://projects.spring.io/spring-boot/ -[Spring Data]: https://spring.io/projects/spring-data -[Spring Initializr]: https://start.spring.io/ -[Spring Framework]: https://spring.io/ --[COSMOSDB01]: media/access-data-spring-data-app/create-cosmos-db-01.png -[COSMOSDB02]: media/access-data-spring-data-app/create-cosmos-db-02.png -[COSMOSDB03]: media/access-data-spring-data-app/create-cosmos-db-03.png -[COSMOSDB04]: media/access-data-spring-data-app/create-cosmos-db-04.png -[COSMOSDB05]: media/access-data-spring-data-app/create-cosmos-db-05.png -[COSMOSDB05-1]: media/access-data-spring-data-app/create-cosmos-db-05-1.png -[COSMOSDB06]: media/access-data-spring-data-app/create-cosmos-db-06.png |
cosmos-db | Adoption | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/adoption.md | - Title: How to adapt to Azure Cosmos DB for Apache Cassandra from Apache Cassandra -description: Learn best practices and ways to successfully use the Azure Cosmos DB for Apache Cassandra with Apache Cassandra applications. ----- Previously updated : 03/24/2022-----# How to adapt to Azure Cosmos DB for Apache Cassandra if you are coming from Apache Cassandra ---The Azure Cosmos DB for Apache Cassandra provides wire protocol compatibility with existing Cassandra SDKs and tools. You can run applications that are designed to connect to Apache Cassandra by using the API for Cassandra with minimal changes. --When you use the API for Cassandra, it's important to be aware of differences between Apache Cassandra and Azure Cosmos DB. If you're familiar with native [Apache Cassandra](https://cassandra.apache.org/), this article can help you begin to use the Azure Cosmos DB for Apache Cassandra. --## Feature support --The API for Cassandra supports a large number of Apache Cassandra features. Some features aren't supported or they have limitations. Before you migrate, be sure that the [Azure Cosmos DB for Apache Cassandra features](support.md) you need are supported. --## Replication --When you plan for replication, it's important to look at both migration and consistency. --Although you can communicate with the API for Cassandra through the Cassandra Query Language (CQL) binary protocol v4 wire protocol, Azure Cosmos DB implements its own internal replication protocol. You can't use the Cassandra gossip protocol for live migration or replication. For more information, see [Live-migrate from Apache Cassandra to the API for Cassandra by using dual writes](migrate-data-dual-write-proxy.md). --For information about offline migration, see [Migrate data from Cassandra to an Azure Cosmos DB for Apache Cassandra account by using Azure Databricks](migrate-data-databricks.md). --Although the approaches to replication consistency in Apache Cassandra and Azure Cosmos DB are similar, it's important to understand how they're different. A [mapping document](consistency-mapping.md) compares Apache Cassandra and Azure Cosmos DB approaches to replication consistency. However, we highly recommend that you specifically review [Azure Cosmos DB consistency settings](../consistency-levels.md) or watch a brief [video guide to understanding consistency settings in the Azure Cosmos DB platform](https://aka.ms/docs.consistency-levels). --## Recommended client configurations --When you use the API for Cassandra, you don't need to make substantial code changes to existing applications that run Apache Cassandra. We recommend some approaches and configuration settings for the API for Cassandra in Azure Cosmos DB. Review the blog post [API for Cassandra recommendations for Java](https://devblogs.microsoft.com/cosmosdb/cassandra-api-java/). --## Code samples --The API for Cassandra is designed to work with your existing application code. If you encounter connectivity-related errors, use the [quickstart samples](manage-data-java-v4-sdk.md) as a starting point to discover minor setup changes you might need to make in your existing code. --We also have more in-depth samples for [Java v3](https://github.com/Azure-Samples/azure-cosmos-cassandra-extensions-java-sample) and [Java v4](https://github.com/Azure-Samples/azure-cosmos-cassandra-extensions-java-sample-v4) drivers. 
These code samples implement custom [extensions](https://github.com/Azure/azure-cosmos-cassandra-extensions/tree/release/java-driver-4/1.0.0), which in turn implement recommended client configurations. --You also can use samples for [Java Spring Boot (v3 driver)](https://github.com/Azure-Samples/spring-data-cassandra-on-azure-extension-v3) and [Java Spring Boot (v4 driver)](https://github.com/Azure-Samples/spring-data-cassandra-on-azure-extension-v4.git). --## Storage --The API for Cassandra is backed by Azure Cosmos DB, which is a document-oriented NoSQL database engine. Azure Cosmos DB maintains metadata, which might result in a change in the amount of physical storage required for a specific workload. --The difference in storage requirements between native Apache Cassandra and Azure Cosmos DB is most noticeable in small row sizes. In some cases, the difference might be offset because Azure Cosmos DB doesn't implement compaction or tombstones. This factor depends significantly on the workload. If you're uncertain about storage requirements, we recommend that you first create a proof of concept. --## Multi-region deployments --Native Apache Cassandra is a multi-master system by default. Apache Cassandra doesn't have an option for single-master with multi-region replication for reads only. The concept of application-level failover to another region for writes is redundant in Apache Cassandra. All nodes are independent, and there's no single point of failure. However, Azure Cosmos DB provides the out-of-box ability to configure either single-master or multi-master regions for writes. --An advantage of having a single-master region for writes is avoiding cross-region conflict scenarios. It gives you the option to maintain strong consistency across multiple regions while maintaining a level of high availability. --> [!NOTE] -> Strong consistency across regions and a Recovery Point Objective (RPO) of zero isn't possible for native Apache Cassandra because all nodes are capable of serving writes. You can configure Azure Cosmos DB for strong consistency across regions in a *single write region* configuration. However, like with native Apache Cassandra, you can't configure an Azure Cosmos DB account that's configured with multiple write regions for strong consistency. A distributed system can't provide an RPO of zero *and* a Recovery Time Objective (RTO) of zero. --For more information, see [Load balancing policy](https://devblogs.microsoft.com/cosmosdb/cassandra-api-java/#load-balancing-policy) in our [API for Cassandra recommendations for Java blog](https://devblogs.microsoft.com/cosmosdb/cassandra-api-java). Also, see [Failover scenarios](https://github.com/Azure-Samples/azure-cosmos-cassandra-extensions-java-sample-v4#failover-scenarios) in our official [code sample for the Cassandra Java v4 driver](https://github.com/Azure-Samples/azure-cosmos-cassandra-extensions-java-sample-v4). --## Request units --One of the major differences between running a native Apache Cassandra cluster and provisioning an Azure Cosmos DB account is how database capacity is provisioned. In traditional databases, capacity is expressed in terms of CPU cores, RAM, and IOPS. Azure Cosmos DB is a multi-tenant platform-as-a-service database. Capacity is expressed by using a single normalized metric called [request units](../request-units.md). Every request sent to the database has a request unit cost (RU cost). Each request can be profiled to determine its cost. 
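Since profiling each request's RU cost is recommended above, here is a minimal C# sketch using the DataStax C# driver against the API for Cassandra. It assumes the service returns the charge in the response's custom payload under a `RequestCharge` key encoded as a big-endian double, and it uses a hypothetical keyspace and table; confirm the details against the "Find the request unit charge" article linked above:

```csharp
using System;
using System.Linq;
using Cassandra;

public static class RequestChargeExample
{
    // 'session' is assumed to be an already connected ISession for the account.
    public static double PrintRequestCharge(ISession session)
    {
        // Hypothetical keyspace and table.
        RowSet rowSet = session.Execute("SELECT * FROM uprofile.user LIMIT 10");

        // Assumption: the API for Cassandra returns the RU charge in the incoming
        // custom payload under the "RequestCharge" key as a big-endian double.
        byte[] rawCharge = rowSet.Info.IncomingPayload["RequestCharge"];
        double requestCharge = BitConverter.ToDouble(rawCharge.Reverse().ToArray(), 0);

        Console.WriteLine($"Query consumed {requestCharge} RU");
        return requestCharge;
    }
}
```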
--The benefit of using request units as a metric is that database capacity can be provisioned deterministically for highly predictable performance and efficiency. After you profile the cost of each request, you can use request units to directly associate the number of requests sent to the database with the capacity you need to provision. The challenge with this way of provisioning capacity is that you need to have a solid understanding of the throughput characteristics of your workload. --We highly recommend that you profile your requests. Use that information to help you estimate the number of request units you'll need to provision. Here are some articles that might help you make the estimate: --- [Request units in Azure Cosmos DB](../request-units.md)-- [Find the request unit charge for operations executed in the Azure Cosmos DB for Apache Cassandra](find-request-unit-charge.md)-- [Optimize provisioned throughput cost in Azure Cosmos DB](../optimize-cost-throughput.md)--## Capacity provisioning models --In traditional database provisioning, a fixed capacity is provisioned up front to handle the anticipated throughput. Azure Cosmos DB offers a capacity-based model called [provisioned throughput](../set-throughput.md). As a multi-tenant service, Azure Cosmos DB also offers *consumption-based* models in [autoscale](../provision-throughput-autoscale.md) mode and [serverless](../serverless.md) mode. The extent to which a workload might benefit from either of these consumption-based provisioning models depends on the predictability of throughput for the workload. --In general, steady-state workloads that have predictable throughput benefit most from provisioned throughput. Workloads that have large periods of dormancy benefit from serverless mode. Workloads that have a continuous level of minimal throughput, but with unpredictable spikes, benefit most from autoscale mode. We recommend that you review the following articles for a clear understanding of the best capacity model for your throughput needs: --- [Introduction to provisioned throughput in Azure Cosmos DB](../set-throughput.md)-- [Create Azure Cosmos DB containers and databases with autoscale throughput](../provision-throughput-autoscale.md)-- [Azure Cosmos DB serverless](../serverless.md)--## Partitioning --Partitioning in Azure Cosmos DB is similar to partitioning in Apache Cassandra. One of the main differences is that Azure Cosmos DB is more optimized for *horizontal scale*. In Azure Cosmos DB, limits are placed on the amount of *vertical throughput* capacity that's available in any physical partition. The effect of this optimization is most noticeable when an existing data model has significant throughput skew. --Take steps to ensure that your partition key design results in a relatively uniform distribution of requests. For more information about how logical and physical partitioning work and limits on throughput capacity per partition, see [Partitioning in the Azure Cosmos DB for Apache Cassandra](partitioning.md). --## Scaling --In native Apache Cassandra, increasing capacity and scale involves adding new nodes to a cluster and ensuring that the nodes are properly added to the Cassandra ring. In Azure Cosmos DB, adding nodes is transparent and automatic. Scaling is a function of how many [request units](../request-units.md) are provisioned for your keyspace or table. Scaling in physical machines occurs when either physical storage or required throughput reaches limits allowed for a logical or a physical partition. 
For more information, see [Partitioning in the Azure Cosmos DB for Apache Cassandra](partitioning.md). --## Rate limiting --A challenge of provisioning [request units](../request-units.md), particularly if you're using [provisioned throughput](../set-throughput.md), is rate limiting. Azure Cosmos DB returns rate-limited errors if clients consume more resources than the amount you provisioned. The API for Cassandra in Azure Cosmos DB translates these exceptions to overloaded errors on the Cassandra native protocol. For information about how to avoid rate limiting in your application, see [Prevent rate-limiting errors for Azure Cosmos DB for Apache Cassandra operations](prevent-rate-limiting-errors.md). --## Apache Spark connector --Many Apache Cassandra users use the Apache Spark Cassandra connector to query their data for analytical and data movement needs. You can connect to the API for Cassandra the same way and by using the same connector. Before you connect to the API for Cassandra, review [Connect to the Azure Cosmos DB for Apache Cassandra from Spark](connect-spark-configuration.md). In particular, see the section [Optimize Spark connector throughput configuration](connect-spark-configuration.md#optimizing-spark-connector-throughput-configuration). --## Troubleshoot common issues --For solutions to common issues, see [Troubleshoot common issues in the Azure Cosmos DB for Apache Cassandra](troubleshoot-common-issues.md). --## Next steps --- Learn about [partitioning and horizontal scaling in Azure Cosmos DB](../partitioning-overview.md).-- Learn about [provisioned throughput in Azure Cosmos DB](../request-units.md).-- Learn about [global distribution in Azure Cosmos DB](../distribute-data-globally.md). |
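Because the API for Cassandra surfaces rate limiting as overloaded errors on the native protocol (see the *Rate limiting* section above), client code typically retries them with a backoff. The following minimal C# sketch assumes the DataStax C# driver maps these errors to `OverloadedException`; the retry count and fixed backoff are arbitrary placeholders, and the extension libraries referenced in the linked article are the recommended production approach:

```csharp
using System;
using System.Threading.Tasks;
using Cassandra;

public static class RateLimitRetry
{
    // Executes a statement, retrying a few times when the service reports it is overloaded (rate limited).
    public static async Task<RowSet> ExecuteWithRetryAsync(ISession session, IStatement statement, int maxRetries = 5)
    {
        for (int attempt = 0; ; attempt++)
        {
            try
            {
                return await session.ExecuteAsync(statement);
            }
            catch (OverloadedException) when (attempt < maxRetries)
            {
                // Simple fixed backoff; a production app should use the retry policies
                // from the Azure Cosmos DB Cassandra extensions instead.
                await Task.Delay(TimeSpan.FromMilliseconds(500 * (attempt + 1)));
            }
        }
    }
}
```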
cosmos-db | Change Feed | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/change-feed.md | - Title: Change feed in the Azure Cosmos DB for Apache Cassandra -description: Learn how to use change feed in the Azure Cosmos DB for Apache Cassandra to get the changes made to your data. --- Previously updated : 11/25/2019-----# Change feed in the Azure Cosmos DB for Apache Cassandra --[Change feed](../change-feed.md) support in the Azure Cosmos DB for Apache Cassandra is available through the query predicates in the Cassandra Query Language (CQL). Using these predicate conditions, you can query the change feed API. Applications can get the changes made to a table using the primary key (also known as the partition key) as is required in CQL. You can then take further actions based on the results. Changes to the rows in the table are captured in the order of their modification time and the sort order per partition key. --The following example shows how to get a change feed on all the rows in an API for Cassandra keyspace table using .NET. The predicate COSMOS_CHANGEFEED_START_TIME() is used directly within CQL to query items in the change feed from a specified start time (in this case, the current datetime). You can download the full sample for C# [here](https://github.com/azure-samples/azure-cosmos-db-cassandra-change-feed) and for Java [here](https://github.com/Azure-Samples/cosmos-changefeed-cassandra-java). --In each iteration, the query resumes at the last point changes were read, using the paging state. We can see a continuous stream of new changes to the table in the keyspace. We will see changes to rows that are inserted or updated. Watching for delete operations using change feed in the API for Cassandra is currently not supported. --> [!NOTE] -> Reusing a token after dropping a collection and then recreating it with the same name results in an error. -> We advise you to set the pageState to null when creating a new collection and reusing the collection name. 
--# [Java](#tab/java) --```java - Session cassandraSession = utils.getSession(); -- try { - DateTimeFormatter dtf = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss"); - LocalDateTime now = LocalDateTime.now().minusHours(6).minusMinutes(30); - String query="SELECT * FROM uprofile.user where COSMOS_CHANGEFEED_START_TIME()='" - + dtf.format(now)+ "'"; - - byte[] token=null; - System.out.println(query); - while(true) - { - SimpleStatement st=new SimpleStatement(query); - st.setFetchSize(100); - if(token!=null) - st.setPagingStateUnsafe(token); - - ResultSet result=cassandraSession.execute(st) ; - token=result.getExecutionInfo().getPagingState().toBytes(); - - for(Row row:result) - { - System.out.println(row.getString("user_name")); - } - } - } finally { - utils.close(); - LOGGER.info("Please delete your table after verifying the presence of the data in portal or from CQL"); - } -``` --# [C#](#tab/csharp) --```C# - //set initial start time for pulling the change feed - DateTime timeBegin = DateTime.UtcNow; -- //initialise variable to store the continuation token - byte[] pageState = null; - while (true) - { - try - { -- //Return the latest change for all rows in 'user' table - IStatement changeFeedQueryStatement = new SimpleStatement( - $"SELECT * FROM uprofile.user where COSMOS_CHANGEFEED_START_TIME() = '{timeBegin.ToString("yyyy-MM-ddTHH:mm:ss.fffZ", CultureInfo.InvariantCulture)}'"); - if (pageState != null) - { - changeFeedQueryStatement = changeFeedQueryStatement.SetPagingState(pageState); - } - Console.WriteLine("getting records from change feed at last page state...."); - RowSet rowSet = session.Execute(changeFeedQueryStatement); -- //store the continuation token here - pageState = rowSet.PagingState; -- List<Row> rowList = rowSet.ToList(); - if (rowList.Count != 0) - { - for (int i = 0; i < rowList.Count; i++) - { - string value = rowList[i].GetValue<string>("user_name"); - int key = rowList[i].GetValue<int>("user_id"); - // do something with the data - e.g. compute, forward to another event, function, etc. |