Updates from: 08/01/2024 01:19:00
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Configure Security Analytics Sentinel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-security-analytics-sentinel.md
Previously updated : 03/06/2023 Last updated : 07/31/2024 #Customer intent: As an IT professional, I want to gather logs and audit data using Microsoft Sentinel and Azure Monitor to secure applications that use Azure Active Directory B2C.
active-directory-b2c Partner Whoiam Rampart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-whoiam-rampart.md
Previously updated : 05/02/2023 Last updated : 07/31/2024
active-directory-b2c Tenant Management Directory Quota https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tenant-management-directory-quota.md
Previously updated : 06/15/2023 Last updated : 07/31/2024
# Manage directory size quota of your Azure Active Directory B2C tenant
-It's important that you monitor how you use your Azure AD B2C directory quota. Directory quota has a given size that is expressed in number of objects. These objects include user accounts, app registrations, groups, etc. When the number of objects in your tenant reach quota size, the directory will generate an error when trying to create a new object.
+It's important that you monitor how you use your Azure AD B2C directory quota. The directory quota has a size that's expressed in the number of objects. These objects include user accounts, app registrations, and groups. When the number of objects in your tenant reaches the quota size, the directory generates an error when you try to create a new object.
## Monitor directory quota usage in your Azure AD B2C tenant
The response from the API call looks similar to the following JSON:
- The attribute `used` is the number of objects you already have in the directory.
-If your tenant usage is higher that 80%, you can remove inactive users or request for a quota increase.
+If your tenant usage is higher than 80%, you can remove inactive users or request a quota size increase.
-## Request increase directory quota size
+## Increase directory quota size
-You can request to increase the quota size by [contacting support](find-help-open-support-ticket.md)
+You can request to increase the quota size by [contacting support](find-help-open-support-ticket.md).
active-directory-b2c Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/whats-new-docs.md
Title: "What's new in Azure Active Directory business-to-customer (B2C)" description: "New and updated documentation for the Azure Active Directory business-to-customer (B2C)." Previously updated : 07/01/2024 Last updated : 07/31/2024
Welcome to what's new in Azure Active Directory B2C documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the B2C service, see [What's new in Microsoft Entra ID](../active-directory/fundamentals/whats-new.md), [Azure AD B2C developer release notes](custom-policy-developer-notes.md) and [What's new in Microsoft Entra External ID](/entra/external-id/whats-new-docs).
+## July 2024
+
+### Updated articles
+
+- [Developer notes for Azure Active Directory B2C](custom-policy-developer-notes.md) - Updated Twitter to X
+- [Custom email verification with SendGrid](custom-email-sendgrid.md) - Updated the localization script
## June 2024

### Updated articles
Welcome to what's new in Azure Active Directory B2C documentation. This article
- [Set up sign-up and sign-in with a LinkedIn account using Azure Active Directory B2C](identity-provider-linkedin.md) - Updated LinkedIn instructions
- [Page layout versions](page-layout.md) - Updated page layout versions
-## February 2024
-
-### New articles
-
-- [Enable CAPTCHA in Azure Active Directory B2C](add-captcha.md)
-- [Define a CAPTCHA technical profile in an Azure Active Directory B2C custom policy](captcha-technical-profile.md)
-- [Verify CAPTCHA challenge string using CAPTCHA display control](display-control-captcha.md)
-
-### Updated articles
-
-- [Enable custom domains in Azure Active Directory B2C](custom-domain.md) - Updated steps to block the default B2C domain
-- [Manage Azure AD B2C custom policies with Microsoft Graph PowerShell](manage-custom-policies-powershell.md) - Microsoft Graph PowerShell updates
-- [Localization string IDs](localization-string-ids.md) - CAPTCHA updates
-- [Page layout versions](page-layout.md) - CAPTCHA updates
-
ai-services Jailbreak Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/concepts/jailbreak-detection.md
This shield aims to safeguard against attacks that use information not directly
### Language availability
-Currently, the Prompt Shields API supports the English language. While our API doesn't restrict the submission of non-English content, we can't guarantee the same level of quality and accuracy in the analysis of such content. We recommend users to primarily submit content in English to ensure the most reliable and accurate results from the API.
+Prompt Shields have been specifically trained and tested on the following languages: Chinese, English, French, German, Italian, Japanese, and Portuguese. The feature can work in many other languages, but the quality might vary. In all cases, you should do your own testing to ensure that it works for your application.
### Text length limitations
ai-services Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/conversational-language-understanding/concepts/best-practices.md
Title: Conversational language understanding best practices
-description: Apply best practices when using conversational language understanding
+description: Learn how to apply best practices when you use conversational language understanding.
#
# Best practices for conversational language understanding
-Use the following guidelines to create the best possible projects in conversational language understanding.
+Use the following guidelines to create the best possible projects in conversational language understanding.
## Choose a consistent schema
-Schema is the definition of your intents and entities. There are different approaches you could take when defining what you should create as an intent versus an entity. There are some questions you need to ask yourself:
+The schema is the definition of your intents and entities. There are different approaches you can take when you decide what to create as an intent versus an entity. Ask yourself these questions:
- What actions or queries am I trying to capture from my user?
- What pieces of information are relevant in each action?
-You can typically think of actions and queries as _intents_, while the information required to fulfill those queries as _entities_.
+You can typically think of actions and queries as _intents_ and the information required to fulfill those queries as _entities_.
-For example, assume you want your customers to cancel subscriptions for various products that you offer through your chatbot. You can create a _Cancel_ intent with various examples like _"Cancel the Contoso service,"_ or _"stop charging me for the Fabrikam subscription."_ The user's intent here is to _cancel,_ the _Contoso service_ or _Fabrikam subscription_ are the subscriptions they would like to cancel. Therefore, you can create an entity for _subscriptions_. You can then model your entire project to capture actions as intents and use entities to fill in those actions. This allows you to cancel anything you define as an entity, such as other products. You can then have intents for signing up, renewing, upgrading, etc. that all make use of the _subscriptions_ and other entities.
+For example, assume that you want your customers to cancel subscriptions for various products that you offer through your chatbot. You can create a _cancel_ intent with various examples like "Cancel the Contoso service" or "Stop charging me for the Fabrikam subscription." The user's intent here is to _cancel_, and the _Contoso service_ or _Fabrikam subscription_ are the subscriptions they want to cancel.
-The above schema design makes it easy for you to extend existing capabilities (canceling, upgrading, signing up) to new targets by creating a new entity.
+To proceed, you create an entity for _subscriptions_. Then you can model your entire project to capture actions as intents and use entities to fill in those actions. This approach allows you to cancel anything you define as an entity, such as other products. You can then have intents for signing up, renewing, and upgrading that all make use of the _subscriptions_ and other entities.
-Another approach is to model the _information_ as intents and _actions_ as entities. Let's take the same example, allowing your customers to cancel subscriptions through your chatbot. You can create an intent for each subscription available, such as _Contoso_ with utterances like _"cancel Contoso,"_ _"stop charging me for contoso services,"_ _"Cancel the Contoso subscription."_ You would then create an entity to capture the action, _cancel._ You can define different entities for each action or consolidate actions as one entity with a list component to differentiate between actions with different keys.
+The preceding schema design makes it easy for you to extend existing capabilities (canceling, upgrading, or signing up) to new targets by creating a new entity.
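As an illustration of this first approach, the following minimal sketch shows how a labeled utterance could look in a project's import JSON. The file layout and key names (such as `assets`, `dataset`, `offset`, and `length`) are assumptions based on the conversational language understanding project export format and might differ in your API version.

```json
{
  "assets": {
    "projectKind": "Conversation",
    "intents": [ { "category": "Cancel" } ],
    "entities": [ { "category": "Subscription" } ],
    "utterances": [
      {
        "text": "Cancel the Contoso service",
        "language": "en-us",
        "intent": "Cancel",
        "dataset": "Train",
        "entities": [
          { "category": "Subscription", "offset": 11, "length": 15 }
        ]
      }
    ]
  }
}
```

Because the product is captured by the _subscriptions_ entity rather than baked into the intent, the same _cancel_ intent can later cover new products without schema changes.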
+
+Another approach is to model the _information_ as intents and the _actions_ as entities. Let's take the same example of allowing your customers to cancel subscriptions through your chatbot.
+
+You can create an intent for each subscription available, such as _Contoso_, with utterances like "Cancel Contoso," "Stop charging me for Contoso services," and "Cancel the Contoso subscription." You then create an entity to capture the _cancel_ action. You can define different entities for each action or consolidate actions as one entity with a list component to differentiate between actions with different keys.
This schema design makes it easy for you to extend new actions to existing targets by adding new action entities or entity components.
-Make sure to avoid trying to funnel all the concepts into just intents, for example don't try to create a _Cancel Contoso_ intent that only has the purpose of that one specific action. Intents and entities should work together to capture all the required information from the customer.
+Make sure to avoid trying to funnel all the concepts into intents. For example, don't try to create a _Cancel Contoso_ intent that only has the purpose of that one specific action. Intents and entities should work together to capture all the required information from the customer.
-You also want to avoid mixing different schema designs. Do not build half of your application with actions as intents and the other half with information as intents. Ensure it is consistent to get the possible results.
+You also want to avoid mixing different schema designs. Don't build half of your application with actions as intents and the other half with information as intents. To get the best possible results, ensure that your schema design is consistent.
[!INCLUDE [Balance training data](../includes/balance-training-data.md)]
You also want to avoid mixing different schema designs. Do not build half of you
## Use standard training before advanced training
-[Standard training](../how-to/train-model.md#training-modes) is free and faster than Advanced training, making it useful to quickly understand the effect of changing your training set or schema while building the model. Once you're satisfied with the schema, consider using advanced training to get the best AIQ out of your model.
+[Standard training](../how-to/train-model.md#training-modes) is free and faster than advanced training. It can help you quickly understand the effect of changing your training set or schema while you build the model. After you're satisfied with the schema, consider using advanced training to get the best AIQ out of your model.
## Use the evaluation feature
-
-When you build an app, it's often helpful to catch errors early. It's usually a good practice to add a test set when building the app, as training and evaluation results are very useful in identifying errors or issues in your schema.
-
-## Machine-learning components and composition
-
-See [Component types](./entity-components.md#component-types).
-## Using the "none" score Threshold
+When you build an app, it's often helpful to catch errors early. It's usually a good practice to add a test set when you build the app. Training and evaluation results are useful in identifying errors or issues in your schema.
-If you see too many false positives, such as out-of-context utterances being marked as valid intents, See [confidence threshold](./none-intent.md) for information on how it affects inference.
-
-* Non machine-learned entity components like lists and regex are by definition not contextual. If you see list or regex entities in unintended places, try labeling the list synonyms as the machine-learned component.
+## Machine-learning components and composition
-* For entities, you can use learned component as the 'Required' component, to restrict when a composed entity should fire.
+For more information, see [Component types](./entity-components.md#component-types).
-For example, suppose you have an entity called "*ticket quantity*" that attempts to extract the number of tickets you want to reserve for booking flights, for utterances such as "*Book two tickets tomorrow to Cairo.*"
+## Use the None score threshold
+If you see too many false positives, such as out-of-context utterances being marked as valid intents, see [Confidence threshold](./none-intent.md) for information on how it affects inference.
-Typically, you would add a prebuilt component for `Quantity.Number` that already extracts all numbers in utterances. However if your entity was only defined with the prebuilt component, it would also extract other numbers as part of the *ticket quantity* entity, such as "*Book two tickets tomorrow to Cairo at 3 PM.*"
+* Non-machine-learned entity components, like lists and regex, are by definition not contextual. If you see list or regex entities in unintended places, try labeling the list synonyms as the machine-learned component.
+* For entities, you can use the learned component as the required component to restrict when a composed entity should fire.
-To resolve this, you would label a learned component in your training data for all the numbers that are meant to be a *ticket quantity*. The entity now has two components:
-* The prebuilt component that can interpret all numbers, and
-* The learned component that predicts where the *ticket quantity* is in a sentence.
+For example, suppose you have an entity called **Ticket Quantity** that attempts to extract the number of tickets you want to reserve for booking flights, for utterances such as "Book two tickets tomorrow to Cairo."
-If you require the learned component, make sure that *ticket quantity* is only returned when the learned component predicts it in the right context. If you also require the prebuilt component, you can then guarantee that the returned *ticket quantity* entity is both a number and in the correct position.
+Typically, you add a prebuilt component for `Quantity.Number` that already extracts all numbers in utterances. However, if your entity was only defined with the prebuilt component, it also extracts other numbers as part of the **Ticket Quantity** entity, such as "Book two tickets tomorrow to Cairo at 3 PM."
+To resolve this issue, you label a learned component in your training data for all the numbers that are meant to be a ticket quantity. The entity now has two components:
-## Addressing model inconsistencies
+* The prebuilt component that can interpret all numbers.
+* The learned component that predicts where the ticket quantity is located in a sentence.
-If your model is overly sensitive to small grammatical changes, like casing or diacritics, you can systematically manipulate your dataset directly in the Language Studio. To use these features, click on the Settings tab on the left toolbar and locate the **Advanced project settings** section.
+If you require the learned component, make sure that **Ticket Quantity** is only returned when the learned component predicts it in the right context. If you also require the prebuilt component, you can then guarantee that the returned **Ticket Quantity** entity is both a number and in the correct position.
+## Address model inconsistencies
-First, you can ***Enable data transformation for casing***, which normalizes the casing of utterances when training, testing, and implementing your model. If you've migrated from LUIS, you might recognize that LUIS did this normalization by default. To access this feature via the API, set the `"normalizeCasing"` parameter to `true`. See an example below:
+If your model is overly sensitive to small grammatical changes, like casing or diacritics, you can systematically manipulate your dataset directly in Language Studio. To use these features, select the **Settings** tab on the left pane and locate the **Advanced project settings** section.
+First, you can enable the setting for **Enable data transformation for casing**, which normalizes the casing of utterances when training, testing, and implementing your model. If you migrated from LUIS, you might recognize that LUIS did this normalization by default. To access this feature via the API, set the `normalizeCasing` parameter to `true`. See the following example:
```json {
First, you can ***Enable data transformation for casing***, which normalizes the
} ... ```
-Second, you can also leverage the **Advanced project settings** to ***Enable data augmentation for diacritics*** to generate variations of your training data for possible diacritic variations used in natural language. This feature is available for all languages, but it is especially useful for Germanic and Slavic languages, where users often write words using classic English characters instead of the correct characters. For example, the phrase "Navigate to the sports channel" in French is "Accédez à la chaîne sportive". When this feature is enabled, the phrase "Accedez a la chaine sportive" (without diacritic characters) is also included in the training dataset. If you enable this feature, please note that the utterance count of your training set will increase, and you may need to adjust your training data size accordingly. The current maximum utterance count after augmentation is 25,000. To access this feature via the API, set the `"augmentDiacritics"` parameter to `true`. See an example below:
+
+Second, you can also enable the setting for **Enable data augmentation for diacritics** to generate variations of your training data for possible diacritic variations used in natural language. This feature is available for all languages. It's especially useful for Germanic and Slavic languages, where users often write words by using classic English characters instead of the correct characters. For example, the phrase "Navigate to the sports channel" in French is "Accédez à la chaîne sportive." When this feature is enabled, the phrase "Accedez a la chaine sportive" (without diacritic characters) is also included in the training dataset.
+
+If you enable this feature, the utterance count of your training set increases. For this reason, you might need to adjust your training data size accordingly. The current maximum utterance count after augmentation is 25,000. To access this feature via the API, set the `augmentDiacritics` parameter to `true`. See the following example:
```json {
Second, you can also leverage the **Advanced project settings** to ***Enable dat
... ```
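As a minimal sketch, the two settings might sit together in the project configuration like the following example. Only the `normalizeCasing` and `augmentDiacritics` parameter names come from this article; the surrounding `settings` object is an assumption, and its location can vary by API version.

```json
{
  "settings": {
    "normalizeCasing": true,
    "augmentDiacritics": true
  }
}
```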
-## Addressing model overconfidence
+## Address model overconfidence
-Customers can use the LoraNorm recipe version in case the model is being incorrectly overconfident. An example of this can be like the below (note that the model predicts the incorrect intent with 100% confidence). This makes the confidence threshold project setting unusable.
+Customers can use the LoraNorm recipe version if the model is incorrectly overconfident. An example of this behavior is the following scenario, where the model predicts the incorrect intent with 100% confidence. This score makes the confidence threshold project setting unusable.
| Text | Predicted intent | Confidence score |
|-|-|-|
-| "*Who built the Eiffel Tower?*" | `Sports` | 1.00 |
-| "*Do I look good to you today?*" | `QueryWeather` | 1.00 |
-| "*I hope you have a good evening.*" | `Alarm` | 1.00 |
+| "Who built the Eiffel Tower?" | `Sports` | 1.00 |
+| "Do I look good to you today?" | `QueryWeather` | 1.00 |
+| "I hope you have a good evening." | `Alarm` | 1.00 |
-To address this, use the `2023-04-15` configuration version that normalizes confidence scores. The confidence threshold project setting can then be adjusted to achieve the desired result.
+To address this scenario, use the `2023-04-15` configuration version that normalizes confidence scores. The confidence threshold project setting can then be adjusted to achieve the desired result.
```console curl --location 'https://<your-resource>.cognitiveservices.azure.com/language/authoring/analyze-conversations/projects/<your-project>/:train?api-version=2022-10-01-preview' \
curl --location 'https://<your-resource>.cognitiveservices.azure.com/language/au
} ```
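For reference, a minimal sketch of a request body that pins this configuration version might look like the following example. The `trainingConfigVersion` value comes from this article; the other field names and values are typical of a `:train` request and are assumptions that can vary by API version.

```json
{
  "modelLabel": "<your-model-label>",
  "trainingMode": "standard",
  "trainingConfigVersion": "2023-04-15",
  "evaluationOptions": {
    "kind": "percentage",
    "trainingSplitPercentage": 80,
    "testingSplitPercentage": 20
  }
}
```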
-Once the request is sent, you can track the progress of the training job in Language Studio as usual.
+After the request is sent, you can track the progress of the training job in Language Studio as usual.
> [!NOTE]
-> You have to retrain your model after updating the `confidenceThreshold` project setting. Afterwards, you'll need to republish the app for the new threshold to take effect.
+> You have to retrain your model after you update the `confidenceThreshold` project setting. Afterward, you need to republish the app for the new threshold to take effect.
### Normalization in model version 2023-04-15
-Model version 2023-04-15, conversational language understanding provides normalization in the inference layer that doesn't affect training.
+With model version 2023-04-15, conversational language understanding provides normalization in the inference layer that doesn't affect training.
-The normalization layer normalizes the classification confidence scores to a confined range. The range selected currently is from `[-a,a]` where "a" is the square root of the number of intents. As a result, the normalization depends on the number of intents in the app. If there's a very low number of intents, the normalization layer has a very small range to work with. With a fairly large number of intents, the normalization is more effective.
+The normalization layer normalizes the classification confidence scores to a confined range. The range selected currently is from `[-a,a]` where "a" is the square root of the number of intents. As a result, the normalization depends on the number of intents in the app. If the number of intents is low, the normalization layer has a small range to work with. With a large number of intents, the normalization is more effective. For example, an app with 16 intents normalizes scores into the range `[-4, 4]`.
-If this normalization doesn't seem to help intents that are out of scope to the extent that the confidence threshold can be used to filter out of scope utterances, it might be related to the number of intents in the app. Consider adding more intents to the app, or if you're using an orchestrated architecture, consider merging apps that belong to the same domain together.
+If this normalization doesn't seem to help intents that are out of scope to the extent that the confidence threshold can be used to filter out-of-scope utterances, it might be related to the number of intents in the app. Consider adding more intents to the app. Or, if you're using an orchestrated architecture, consider merging apps that belong to the same domain together.
-## Debugging composed entities
+## Debug composed entities
-Entities are functions that emit spans in your input with an associated type. The function is defined by one or more components. You can mark components as needed, and you can decide whether to enable the *combine components* setting. When you combine components, all spans that overlap will be merged into a single span. If the setting isn't used, each individual component span will be emitted.
-
-To better understand how individual components are performing, you can disable the setting and set each component to "not required". This lets you inspect the individual spans that are emitted, and experiment with removing components so that only problematic components are generated.
+Entities are functions that emit spans in your input with an associated type. One or more components define the function. You can mark components as needed, and you can decide whether to enable the **Combine components** setting. When you combine components, all spans that overlap are merged into a single span. If the setting isn't used, each individual component span is emitted.
+
+To better understand how individual components are performing, you can disable the setting and set each component to **Not required**. This setting lets you inspect the individual spans that are emitted and experiment with removing components so that only problematic components are generated.
-## Evaluate a model using multiple test sets
+## Evaluate a model by using multiple test sets
-Data in a conversational language understanding project can have two data sets. A "testing" set, and a "training" set. If you want to use multiple test sets to evaluate your model, you can:
+Data in a conversational language understanding project can have two datasets: a testing set and a training set. If you want to use multiple test sets to evaluate your model, you can:
* Give your test sets different names (for example, "test1" and "test2").
* Export your project to get a JSON file with its parameters and configuration.
-* Use the JSON to import a new project, and rename your second desired test set to "test".
-* Train the model to run the evaluation using your second test set.
+* Use the JSON to import a new project. Rename your second desired test set to "test" (see the sketch after this list).
+* Train the model to run the evaluation by using your second test set.
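As a sketch of the rename step, an utterance entry in the exported JSON might be tagged with a dataset name like the following example. The `dataset` field name is an assumption based on the project export format; the idea is to change entries tagged with your second set's name (for example, "test2") to "test" before you import.

```json
{
  "text": "Stop charging me for the Fabrikam subscription",
  "language": "en-us",
  "intent": "Cancel",
  "dataset": "test"
}
```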
## Custom parameters for target apps and child apps
-If you're using [orchestrated apps](./app-architecture.md), you may want to send custom parameter overrides for various child apps. The `targetProjectParameters` field allows users to send a dictionary representing the parameters for each target project. For example, consider an orchestrator app named `Orchestrator` orchestrating between a conversational language understanding app named `CLU1` and a custom question answering app named `CQA1`. If you want to send a parameter named "top" to the question answering app, you can use the above parameter.
+If you're using [orchestrated apps](./app-architecture.md), you might want to send custom parameter overrides for various child apps. The `targetProjectParameters` field allows users to send a dictionary representing the parameters for each target project. For example, consider an orchestrator app named `Orchestrator` orchestrating between a conversational language understanding app named `CLU1` and a custom question answering app named `CQA1`. If you want to send a parameter named "top" to the question answering app, you can pass it through the `targetProjectParameters` field, as shown in the following example.
```console curl --request POST \
curl --request POST \
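The following minimal sketch shows where `targetProjectParameters` might sit in the prediction request body for this scenario. The `targetProjectKind` and `callingOptions` field names are assumptions based on the runtime API and can differ by API version; the deployment name is a placeholder.

```json
{
  "kind": "Conversation",
  "analysisInput": {
    "conversationItem": {
      "id": "1",
      "participantId": "1",
      "text": "How do I cancel my subscription?"
    }
  },
  "parameters": {
    "projectName": "Orchestrator",
    "deploymentName": "<your-deployment>",
    "targetProjectParameters": {
      "CQA1": {
        "targetProjectKind": "QuestionAnswering",
        "callingOptions": {
          "top": 3
        }
      }
    }
  }
}
```

If `top` behaves as it does in custom question answering, it controls how many candidate answers `CQA1` returns; adjust the value for your scenario.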
## Copy projects across language resources
-Often you can copy conversational language understanding projects from one resource to another using the **copy** button in Azure Language Studio. However in some cases, it might be easier to copy projects using the API.
+Often you can copy conversational language understanding projects from one resource to another by using the **Copy** button in Language Studio. In some cases, it might be easier to copy projects by using the API.
-First, identify the:
- * source project name
- * target project name
- * source language resource
- * target language resource, which is where you want to copy it to.
+First, identify the:
+
+ * Source project name.
+ * Target project name.
+ * Source language resource.
+ * Target language resource, which is where you want to copy it to.
-Call the API to authorize the copy action, and get the `accessTokens` for the actual copy operation later.
+Call the API to authorize the copy action and get `accessTokens` for the actual copy operation later.
```console curl --request POST \
curl --request POST \
--data '{"projectKind":"Conversation","allowOverwrite":false}' ```
-Call the API to complete the copy operation. Use the response you got earlier as the payload.
+Call the API to complete the copy operation. Use the response you got earlier as the payload.
```console curl --request POST \
curl --request POST \
}' ```
+## Address out-of-domain utterances
-## Addressing out of domain utterances
-
-Customers can use the new recipe version '2024-06-01-preview' in case the model has poor AIQ on out of domain utterances. An example of this with the default recipe can be like the below where the model has 3 intents Sports, QueryWeather and Alarm. The test utterances are out of domain utterances and the model classifies them as InDomain with a relatively high confidence score.
+Customers can use the new recipe version `2024-06-01-preview` if the model has poor AIQ on out-of-domain utterances. With the default recipe, this scenario can look like the following example, where the model has three intents: `Sports`, `QueryWeather`, and `Alarm`. The test utterances are out of domain, and the model classifies them as `InDomain` with a relatively high confidence score.
| Text | Predicted intent | Confidence score |
|-|-|-|
-| "*Who built the Eiffel Tower?*" | `Sports` | 0.90 |
-| "*Do I look good to you today?*" | `QueryWeather` | 1.00 |
-| "*I hope you have a good evening.*" | `Alarm` | 0.80 |
+| "Who built the Eiffel Tower?" | `Sports` | 0.90 |
+| "Do I look good to you today?" | `QueryWeather` | 1.00 |
+| "I hope you have a good evening." | `Alarm` | 0.80 |
-To address this, use the `2024-06-01-preview` configuration version that is built specifically to address this issue while also maintaining reasonably good quality on In Domain utterances.
+To address this scenario, use the `2024-06-01-preview` configuration version that's built specifically to address this issue while also maintaining reasonably good quality on `InDomain` utterances.
```console curl --location 'https://<your-resource>.cognitiveservices.azure.com/language/authoring/analyze-conversations/projects/<your-project>/:train?api-version=2022-10-01-preview' \
curl --location 'https://<your-resource>.cognitiveservices.azure.com/language/au
} ```
-Once the request is sent, you can track the progress of the training job in Language Studio as usual.
+After the request is sent, you can track the progress of the training job in Language Studio as usual.
Caveats:
-- The None Score threshold for the app (confidence threshold below which the topIntent is marked as None) when using this recipe should be set to 0. This is because this new recipe attributes a certain portion of the in domain probabilities to out of domain so that the model isn't incorrectly overconfident about in domain utterances. As a result, users may see slightly reduced confidence scores for in domain utterances as compared to the prod recipe.
-- This recipe isn't recommended for apps with just two (2) intents, such as IntentA and None, for example.
-- This recipe isn't recommended for apps with low number of utterances per intent. A minimum of 25 utterances per intent is highly recommended.
+
+- The None score threshold for the app (confidence threshold below which `topIntent` is marked as `None`) when you use this recipe should be set to 0. This setting is used because this new recipe attributes a certain portion of the in-domain probabilities to out of domain so that the model isn't incorrectly overconfident about in-domain utterances. As a result, users might see slightly reduced confidence scores for in-domain utterances as compared to the prod recipe. (See the sketch after this list.)
+- We don't recommend this recipe for apps with only two intents, such as `IntentA` and `None`, for example.
+- We don't recommend this recipe for apps with a low number of utterances per intent. We highly recommend a minimum of 25 utterances per intent.
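As a minimal sketch of the first caveat, setting the None score threshold to 0 might look like the following example in the project configuration. Only the `confidenceThreshold` parameter name comes from this article; the surrounding `settings` object is an assumption.

```json
{
  "settings": {
    "confidenceThreshold": 0
  }
}
```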
ai-services Entity Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/conversational-language-understanding/concepts/entity-components.md
Title: Entity components in Conversational Language Understanding
+ Title: Entity components in conversational language understanding
-description: Learn how Conversational Language Understanding extracts entities from text
+description: Learn how conversational language understanding extracts entities from text.
#
# Entity components
-In Conversational Language Understanding, entities are relevant pieces of information that are extracted from your utterances. An entity can be extracted by different methods. They can be learned through context, matched from a list, or detected by a prebuilt recognized entity. Every entity in your project is composed of one or more of these methods, which are defined as your entity's components. When an entity is defined by more than one component, their predictions can overlap. You can determine the behavior of an entity prediction when its components overlap by using a fixed set of options in the **Entity options**.
+In conversational language understanding, entities are relevant pieces of information that are extracted from your utterances. An entity can be extracted by different methods. They can be learned through context, matched from a list, or detected by a prebuilt recognized entity. Every entity in your project is composed of one or more of these methods, which are defined as your entity's components.
+
+When an entity is defined by more than one component, their predictions can overlap. You can determine the behavior of an entity prediction when its components overlap by using a fixed set of options in the *entity options*.
## Component types
-An entity component determines a way you can extract the entity. An entity can contain one component, which would determine the only method that would be used to extract the entity, or multiple components to expand the ways in which the entity is defined and extracted.
+An entity component determines a way that you can extract the entity. An entity can contain one component, which determines the only method to be used to extract the entity. An entity can also contain multiple components to expand the ways in which the entity is defined and extracted.
### Learned component
-The learned component uses the entity tags you label your utterances with to train a machine learned model. The model learns to predict where the entity is, based on the context within the utterance. Your labels provide examples of where the entity is expected to be present in an utterance, based on the meaning of the words around it and as the words that were labeled. This component is only defined if you add labels by tagging utterances for the entity. If you do not tag any utterances with the entity, it will not have a learned component.
+The learned component uses the entity tags you label your utterances with to train a machine-learned model. The model learns to predict where the entity is based on the context within the utterance. Your labels provide examples of where the entity is expected to be present in an utterance, based on the meaning of the words around it and as the words that were labeled.
+This component is only defined if you add labels by tagging utterances for the entity. If you don't tag any utterances with the entity, it doesn't have a learned component.
-### List component
-The list component represents a fixed, closed set of related words along with their synonyms. The component performs an exact text match against the list of values you provide as synonyms. Each synonym belongs to a "list key", which can be used as the normalized, standard value for the synonym that will return in the output if the list component is matched. List keys are **not** used for matching.
+### List component
-In multilingual projects, you can specify a different set of synonyms for each language. While using the prediction API, you can specify the language in the input request, which will only match the synonyms associated to that language.
+The list component represents a fixed, closed set of related words along with their synonyms. The component performs an exact text match against the list of values you provide as synonyms. Each synonym belongs to a *list key*, which can be used as the normalized, standard value for the synonym that returns in the output if the list component is matched. List keys *aren't* used for matching.
+In multilingual projects, you can specify a different set of synonyms for each language. When you use the prediction API, you can specify the language in the input request, which only matches the synonyms associated to that language.
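As a minimal sketch, a list component with per-language synonyms might look like the following in a project's entity definition. The `sublists`, `listKey`, and `synonyms` key names are assumptions based on the project export format and might differ in your API version.

```json
{
  "category": "Software",
  "list": {
    "sublists": [
      {
        "listKey": "Proseware OS",
        "synonyms": [
          { "language": "en-us", "values": [ "proseware os", "proseware operating system" ] },
          { "language": "fr", "values": [ "système d'exploitation proseware" ] }
        ]
      }
    ]
  }
}
```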
### Prebuilt component
-The prebuilt component allows you to select from a library of common types such as numbers, datetimes, and names. When added, a prebuilt component is automatically detected. You can have up to five prebuilt components per entity. See [the list of supported prebuilt components](../prebuilt-component-reference.md) for more information.
-
+The prebuilt component allows you to select from a library of common types such as numbers, datetimes, and names. When added, a prebuilt component is automatically detected. You can have up to five prebuilt components per entity. For more information, see [the list of supported prebuilt components](../prebuilt-component-reference.md).
### Regex component
-The regex component matches regular expressions to capture consistent patterns. When added, any text that matches the regular expression will be extracted. You can have multiple regular expressions within the same entity, each with a different key identifier. A matched expression will return the key as part of the prediction response.
+The regex component matches regular expressions to capture consistent patterns. When added, any text that matches the regular expression is extracted. You can have multiple regular expressions within the same entity, each with a different key identifier. A matched expression returns the key as part of the prediction response.
-In multilingual projects, you can specify a different expression for each language. While using the prediction API, you can specify the language in the input request, which will only match the regular expression associated to that language.
-
+In multilingual projects, you can specify a different expression for each language. When you use the prediction API, you can specify the language in the input request, which only matches the regular expression associated to that language.
## Entity options
-When multiple components are defined for an entity, their predictions may overlap. When an overlap occurs, each entity's final prediction is determined by one of the following options.
+When multiple components are defined for an entity, their predictions might overlap. When an overlap occurs, each entity's final prediction is determined by one of the following options.
### Combine components

Combine components as one entity when they overlap by taking the union of all the components.
-Use this to combine all components when they overlap. When components are combined, you get all the extra information that's tied to a list or prebuilt component when they are present.
+Use this option to combine all components when they overlap. When components are combined, you get all the extra information that's tied to a list or prebuilt component when they're present.
#### Example
-Suppose you have an entity called Software that has a list component, which contains "Proseware OS" as an entry. In your utterance data, you have "I want to buy Proseware OS 9" with "Proseware OS 9" tagged as Software:
-
+Suppose you have an entity called **Software** that has a list component, which contains "Proseware OS" as an entry. In your utterance data, you have "I want to buy Proseware OS 9" with "Proseware OS 9" tagged as **Software**:
-By using combine components, the entity will return with the full context as "Proseware OS 9" along with the key from the list component:
+By using combined components, the entity returns with the full context as "Proseware OS 9" along with the key from the list component:
-Suppose you had the same utterance but only "OS 9" was predicted by the learned component:
+Suppose you had the same utterance, but only "OS 9" was predicted by the learned component:
-With combine components, the entity will still return as "Proseware OS 9" with the key from the list component:
+With combined components, the entity still returns as "Proseware OS 9" with the key from the list component:
-### Do not combine components
+### Don't combine components
-Each overlapping component will return as a separate instance of the entity. Apply your own logic after prediction with this option.
+Each overlapping component returns as a separate instance of the entity. Apply your own logic after prediction with this option.
#### Example
-Suppose you have an entity called Software that has a list component, which contains "Proseware Desktop" as an entry. In your utterance data, you have "I want to buy Proseware Desktop Pro" with "Proseware Desktop Pro" tagged as Software:
+Suppose you have an entity called **Software** that has a list component, which contains "Proseware Desktop" as an entry. In your utterance data, you have "I want to buy Proseware Desktop Pro" with "Proseware Desktop Pro" tagged as **Software**:
-When you do not combine components, the entity will return twice:
+When you don't combine components, the entity returns twice:
### Required components
-An entity can sometimes be defined by multiple components but requires one or more of them to be present. Every component can be set as **required**, which means the entity will **not** be returned if that component wasn't present. For example, if you have an entity with a list component and a required learned component, it is guaranteed that any returned entity includes a learned component; if it doesn't, the entity will not be returned.
+Sometimes an entity can be defined by multiple components but requires one or more of them to be present. Every component can be set as *required*, which means the entity *won't* be returned if that component wasn't present. For example, if you have an entity with a list component and a required learned component, it's guaranteed that any returned entity includes a learned component. If it doesn't, the entity isn't returned.
-Required components are most frequently used with learned components, as they can restrict the other component types to a specific context, which is commonly associated to **roles**. You can also require all components to make sure that every component is present for an entity.
+Required components are most frequently used with learned components because they can restrict the other component types to a specific context, which is commonly associated to *roles*. You can also require all components to make sure that every component is present for an entity.
-In the Language Studio, every component in an entity has a toggle next to it that allows you to set it as required.
+In Language Studio, every component in an entity has a toggle next to it that allows you to set it as required.
#### Example
-Suppose you have an entity called **Ticket Quantity** that attempts to extract the number of tickets you want to reserve for flights, for utterances such as _"Book **two** tickets tomorrow to Cairo"_.
+Suppose you have an entity called **Ticket Quantity** that attempts to extract the number of tickets you want to reserve for flights, for utterances such as "Book **two** tickets tomorrow to Cairo."
-Typically, you would add a prebuilt component for _Quantity.Number_ that already extracts all numbers. However if your entity was only defined with the prebuilt, it would also extract other numbers as part of the **Ticket Quantity** entity, such as _"Book **two** tickets tomorrow to Cairo at **3** PM"_.
+Typically, you add a prebuilt component for `Quantity.Number` that already extracts all numbers. If your entity was only defined with the prebuilt component, it also extracts other numbers as part of the **Ticket Quantity** entity, such as "Book **two** tickets tomorrow to Cairo at **3** PM."
-To resolve this, you would label a learned component in your training data for all the numbers that are meant to be **Ticket Quantity**. The entity now has 2 components, the prebuilt that knows all numbers, and the learned one that predicts where the Ticket Quantity is in a sentence. If you require the learned component, you make sure that Ticket Quantity only returns when the learned component predicts it in the right context. If you also require the prebuilt component, you can then guarantee that the returned Ticket Quantity entity is both a number and in the correct position.
+To resolve this scenario, you label a learned component in your training data for all the numbers that are meant to be **Ticket Quantity**. The entity now has two components: the prebuilt component that knows all numbers, and the learned one that predicts where the ticket quantity is in a sentence. If you require the learned component, you make sure that **Ticket Quantity** only returns when the learned component predicts it in the right context. If you also require the prebuilt component, you can then guarantee that the returned **Ticket Quantity** entity is both a number and in the correct position.
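A minimal sketch of how such an entity definition might appear in a project's assets follows. The `prebuilts`, `requiredComponents`, and `compositionSetting` key names and values are assumptions based on the project export format and might differ in your project.

```json
{
  "category": "TicketQuantity",
  "compositionSetting": "combineComponents",
  "prebuilts": [
    { "category": "Quantity.Number" }
  ],
  "requiredComponents": [ "learned", "prebuilts" ]
}
```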
+## Use components and options
-## How to use components and options
+Components give you the flexibility to define your entity in more than one way. When you combine components, you make sure that each component is represented and you reduce the number of entities returned in your predictions.
-Components give you the flexibility to define your entity in more than one way. When you combine components, you make sure that each component is represented and you reduce the number of entities returned in your predictions.
+A common practice is to extend a prebuilt component with a list of values that the prebuilt might not support. For example, if you have an **Organization** entity, which has a `General.Organization` prebuilt component added to it, the entity might not predict all the organizations specific to your domain. You can use a list component to extend the values of the **Organization** entity and extend the prebuilt component with your own organizations.
-A common practice is to extend a prebuilt component with a list of values that the prebuilt might not support. For example, if you have an **Organization** entity, which has a _General.Organization_ prebuilt component added to it, the entity may not predict all the organizations specific to your domain. You can use a list component to extend the values of the Organization entity and thereby extending the prebuilt with your own organizations.
+Other times, you might be interested in extracting an entity through context, such as a **Product** in a retail project. You label the learned component of the product to learn _where_ a product is based on its position within the sentence. You might also have a list of products that you already know beforehand that you want to always extract. Combining both components in one entity allows you to get both options for the entity.
-Other times you may be interested in extracting an entity through context such as a **Product** in a retail project. You would label for the learned component of the product to learn _where_ a product is based on its position within the sentence. You may also have a list of products that you already know before hand that you'd like to always extract. Combining both components in one entity allows you to get both options for the entity.
-
-When you do not combine components, you allow every component to act as an independent entity extractor. One way of using this option is to separate the entities extracted from a list to the ones extracted through the learned or prebuilt components to handle and treat them differently.
+When you don't combine components, you allow every component to act as an independent entity extractor. One way of using this option is to separate the entities extracted from a list to the ones extracted through the learned or prebuilt components to handle and treat them differently.
> [!NOTE]
-> Previously during the public preview of the service, there were 4 available options: **Longest overlap**, **Exact overlap**, **Union overlap**, and **Return all separately**. **Longest overlap** and **exact overlap** are deprecated and will only be supported for projects that previously had those options selected. **Union overlap** has been renamed to **Combine components**, while **Return all separately** has been renamed to **Do not combine components**.
-
+> Previously during the public preview of the service, there were four available options: **Longest overlap**, **Exact overlap**, **Union overlap**, and **Return all separately**. **Longest overlap** and **Exact overlap** are deprecated and are only supported for projects that previously had those options selected. **Union overlap** has been renamed to **Combine components**, while **Return all separately** has been renamed to **Do not combine components**.
-## Next steps
+## Related content
-[Supported prebuilt components](../prebuilt-component-reference.md)
+- [Supported prebuilt components](../prebuilt-component-reference.md)
ai-services Multiple Languages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/conversational-language-understanding/concepts/multiple-languages.md
Title: Multilingual projects
-description: Learn about which how to make use of multilingual projects in conversational language understanding
+description: Learn about how to make use of multilingual projects in conversational language understanding.
#
# Multilingual projects
-Conversational language understanding makes it easy for you to extend your project to several languages at once. When you enable multiple languages in projects, you'll be able to add language specific utterances and synonyms to your project, and get multilingual predictions for your intents and entities.
+Conversational language understanding makes it easy for you to extend your project to several languages at once. When you enable multiple languages in projects, you can add language-specific utterances and synonyms to your project. You can get multilingual predictions for your intents and entities.
## Multilingual intent and learned entity components
-When you enable multiple languages in a project, you can train the project primarily in one language and immediately get predictions in others.
+When you enable multiple languages in a project, you can train the project primarily in one language and immediately get predictions in other languages.
-For example, you can train your project entirely with English utterances, and query it in: French, German, Mandarin, Japanese, Korean, and others. Conversational language understanding makes it easy for you to scale your projects to multiple languages by using multilingual technology to train your models.
+For example, you can train your project entirely with English utterances and query it in French, German, Mandarin, Japanese, Korean, and others. Conversational language understanding makes it easy for you to scale your projects to multiple languages by using multilingual technology to train your models.
-Whenever you identify that a particular language is not performing as well as other languages, you can add utterances for that language in your project. In the [tag utterances](../how-to/tag-utterances.md) page in Language Studio, you can select the language of the utterance you're adding. When you introduce examples for that language to the model, it is introduced to more of the syntax of that language, and learns to predict it better.
+Whenever you identify that a particular language isn't performing as well as other languages, you can add utterances for that language in your project. In the [tag utterances](../how-to/tag-utterances.md) page in Language Studio, you can select the language of the utterance you're adding. When you introduce examples for that language to the model, it's introduced to more of the syntax of that language and learns to predict it better.
-You aren't expected to add the same amount of utterances for every language. You should build the majority of your project in one language, and only add a few utterances in languages you observe aren't performing well. If you create a project that is primarily in English, and start testing it in French, German, and Spanish, you might observe that German doesn't perform as well as the other two languages. In that case, consider adding 5% of your original English examples in German, train a new model and test in German again. You should see better results for German queries. The more utterances you add, the more likely the results are going to get better.
+You aren't expected to add the same number of utterances for every language. You should build most of your project in one language and only add a few utterances in languages that you observe aren't performing well. If you create a project that's primarily in English and start testing it in French, German, and Spanish, you might observe that German doesn't perform as well as the other two languages. In that case, consider adding 5% of your original English examples in German, train a new model, and test in German again. You should see better results for German queries. The more utterances you add, the more likely it is that the results will improve.
-When you add data in another language, you shouldn't expect it to negatively affect other languages.
+When you add data in another language, you shouldn't expect it to negatively affect other languages.
## List and prebuilt components in multiple languages
-Projects with multiple languages enabled will allow you to specify synonyms **per language** for every list key. Depending on the language you query your project with, you will only get matches for the list component with synonyms of that language. When you query your project, you can specify the language in the request body:
+Projects with multiple languages enabled allow you to specify synonyms *per language* for every list key. Depending on the language you query your project with, you only get matches for the list component with synonyms of that language. When you query your project, you can specify the language in the request body:
```json "query": "{query}" "language": "{language code}" ```
-If you do not provide a language, it will fall back to the default language of your project. See the [language support](../language-support.md) article for a list of different language codes.
+If you don't provide a language, it falls back to the default language of your project. For a list of different language codes, see [Language support](../language-support.md).
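For example, a French query might specify the language like the following sketch. The query text is only an illustration; "fr" is assumed to be the code listed for French in the language support article.

```json
"query": "Accédez à la chaîne sportive"
"language": "fr"
```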
-Prebuilt components are similar, where you should expect to get predictions for prebuilt components that are available in specific languages. The request's language again determines which components are attempting to be predicted. See the [prebuilt components](../prebuilt-component-reference.md) reference article for the language support of each prebuilt component.
+Prebuilt components are similar, where you should expect to get predictions for prebuilt components that are available in specific languages. The request's language again determines which components are attempting to be predicted. For information on the language support of each prebuilt component, see the [Supported prebuilt entity components](../prebuilt-component-reference.md).
-## Next steps
+## Related content
* [Tag utterances](../how-to/tag-utterances.md)
* [Train a model](../how-to/train-model.md)
ai-services Model Retirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/model-retirements.md
description: Learn about the model deprecations and retirements in Azure OpenAI. Previously updated : 07/18/2024 Last updated : 07/30/2024
These models are currently available for use in Azure OpenAI Service.
| `gpt-35-turbo` | 0125 | No earlier than Feb 22, 2025 |
| `gpt-4`<br>`gpt-4-32k` | 0314 | **Deprecation:** October 1, 2024 <br> **Retirement:** June 6, 2025 |
| `gpt-4`<br>`gpt-4-32k` | 0613 | **Deprecation:** October 1, 2024 <br> **Retirement:** June 6, 2025 |
-| `gpt-4` | 1106-preview | To be upgraded to `gpt-4` Version: `turbo-2024-04-09`, starting on August 15, 2024, or later **<sup>1</sup>** |
-| `gpt-4` | 0125-preview |To be upgraded to `gpt-4` Version: `turbo-2024-04-09`, starting on August 15, 2024, or later **<sup>1</sup>** |
-| `gpt-4` | vision-preview | To be upgraded to `gpt-4` Version: `turbo-2024-04-09`, starting on August 15, 2024, or later **<sup>1</sup>** |
+| `gpt-4` | 1106-preview | To be upgraded to `gpt-4` Version: `turbo-2024-04-09`, starting on November 15, 2024, or later **<sup>1</sup>** |
+| `gpt-4` | 0125-preview |To be upgraded to `gpt-4` Version: `turbo-2024-04-09`, starting on November 15, 2024, or later **<sup>1</sup>** |
+| `gpt-4` | vision-preview | To be upgraded to `gpt-4` Version: `turbo-2024-04-09`, starting on November 15, 2024, or later **<sup>1</sup>** |
| `gpt-3.5-turbo-instruct` | 0914 | No earlier than Sep 14, 2025 |
| `text-embedding-ada-002` | 2 | No earlier than April 3, 2025 |
| `text-embedding-ada-002` | 1 | No earlier than April 3, 2025 |
If you're an existing customer looking for information about these models, see [
## Retirement and deprecation history
-## July 18, 2024
+### July 30, 2024
+
+* Updated `gpt-4` preview model upgrade date to November 15, 2024 or later for the following versions:
+ * 1106-preview
+ * 0125-preview
+ * vision-preview
+
+### July 18, 2024
* Updated `gpt-4` 0613 deprecation date to October 1, 2024 and the retirement date to June 6, 2025.
-## June 19, 2024
+### June 19, 2024
* Updated `gpt-35-turbo` 0301 retirement date to no earlier than October 1, 2024.
* Updated `gpt-35-turbo` & `gpt-35-turbo-16k` 0613 retirement date to October 1, 2024.
ai-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/models.md
description: Learn about the different model capabilities that are available with Azure OpenAI. Previously updated : 07/18/2024 Last updated : 07/31/2024
Azure OpenAI Service is powered by a diverse set of models with different capabi
| Models | Description |
|--|--|
-| [GPT-4o & GPT-4 Turbo](#gpt-4o-and-gpt-4-turbo) | The latest most capable Azure OpenAI models with multimodal versions, which can accept both text and images as input. |
+| [GPT-4o & GPT-4o mini & GPT-4 Turbo](#gpt-4o-and-gpt-4-turbo) | The latest most capable Azure OpenAI models with multimodal versions, which can accept both text and images as input. |
| [GPT-4](#gpt-4) | A set of models that improve on GPT-3.5 and can understand and generate natural language and code. |
| [GPT-3.5](#gpt-35) | A set of models that improve on GPT-3 and can understand and generate natural language and code. |
| [Embeddings](#embeddings-models) | A set of models that can convert text into numerical vector form to facilitate text similarity. |
Azure OpenAI Service is powered by a diverse set of models with different capabi
GPT-4o integrates text and images in a single model, enabling it to handle multiple data types simultaneously. This multimodal approach enhances accuracy and responsiveness in human-computer interactions. GPT-4o matches GPT-4 Turbo in English text and coding tasks while offering superior performance in non-English languages and vision tasks, setting new benchmarks for AI capabilities.
-### Early access playground
+### How do I access the GPT-4o and GPT-4o mini models?
-Existing Azure OpenAI customers can test out the **NEW GPT-4o mini** model in the **Azure OpenAI Studio Early Access Playground (Preview)**.
-
-To test the latest model:
-
-> [!NOTE]
-> The GPT-4o mini early access playground is currently only available for resources in **West US3** and **East US**, and is limited to 10 requests every five minutes per subscription. Azure OpenAI content filters are enabled at the default configuration and cannot be modified. GPT-4o mini is a preview model and is currently not available for deployment/direct API access.
-
-1. Navigate to Azure OpenAI Studio at https://oai.azure.com/ and sign-in with credentials that have access to your OpenAI resources.
-2. Select an Azure OpenAI resource in the **West US3** or **East US** regions. If you don't have a resource in one of these regions you will need to [create a resource](../how-to/create-resource.md).
-3. From the main [Azure OpenAI Studio](https://oai.azure.com/) page select the **Early Access Playground (Preview)** button from under the **Get started** section. (This button will only be visible when a resource in **West US3** or **East US** is selected.)
-4. Now you can start asking the model questions just as you would before in the existing [chat playground](../chatgpt-quickstart.md).
-
-### How do I access the GPT-4o model?
-
-GPT-4o is available for **standard** and **global-standard** model deployment.
+GPT-4o and GPT-4o mini are available for **standard** and **global-standard** model deployment.
You need to [create](../how-to/create-resource.md) or use an existing resource in a [supported standard](#gpt-4-and-gpt-4-turbo-model-availability) or [global standard](#global-standard-model-availability) region where the model is available.
-When your resource is created, you can [deploy](../how-to/create-resource.md#deploy-a-model) the GPT-4o model. If you are performing a programmatic deployment, the **model** name is `gpt-4o`, and the **version** is `2024-05-13`.
+When your resource is created, you can [deploy](../how-to/create-resource.md#deploy-a-model) the GPT-4o models. If you are performing a programmatic deployment, the **model** names are:
+
+- `gpt-4o`, **Version** `2024-05-13`
+- `gpt-4o-mini` **Version** `2024-07-18`
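As an illustration, the following Python sketch calls a `gpt-4o-mini` deployment with the OpenAI Python client. The endpoint, key, API version, and deployment name are placeholders; it assumes you named the deployment `gpt-4o-mini`.

```python
from openai import AzureOpenAI

# Placeholder endpoint, key, and API version; replace with your own values.
client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-key>",
    api_version="2024-06-01",
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # the deployment name you chose, not the underlying model name
    messages=[{"role": "user", "content": "Summarize the benefits of small language models."}],
)
print(response.choices[0].message.content)
```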
### GPT-4 Turbo
See [model versions](../concepts/model-versions.md) to learn about how Azure Ope
| Model ID | Description | Max Request (tokens) | Training Data (up to) |
| --- | :--- | :--- | :---: |
-|`gpt-4o` (2024-05-13) <br> **GPT-4o (Omni)** | **Latest GA model** <br> - Text, image processing <br> - JSON Mode <br> - parallel function calling <br> - Enhanced accuracy and responsiveness <br> - Parity with English text and coding tasks compared to GPT-4 Turbo with Vision <br> - Superior performance in non-English languages and in vision tasks <br> - **Does not support enhancements** |Input: 128,000 <br> Output: 4,096| Oct 2023 |
+|`gpt-4o-mini` (2024-07-18) <br> **GPT-4o mini** | **Latest small GA model** <br> - Fast, inexpensive, capable model ideal for replacing GPT-3.5 Turbo series models. <br> - Text, image processing <br>- JSON Mode <br> - parallel function calling <br> - **Does not support enhancements** | Input: 128,000 <br> Output: 16,384 | Oct 2023 |
+|`gpt-4o` (2024-05-13) <br> **GPT-4o (Omni)** | **Latest large GA model** <br> - Text, image processing <br> - JSON Mode <br> - parallel function calling <br> - Enhanced accuracy and responsiveness <br> - Parity with English text and coding tasks compared to GPT-4 Turbo with Vision <br> - Superior performance in non-English languages and in vision tasks <br> - **Does not support enhancements** |Input: 128,000 <br> Output: 4,096| Oct 2023 |
| `gpt-4` (turbo-2024-04-09) <br>**GPT-4 Turbo with Vision** | **New GA model** <br> - Replacement for all previous GPT-4 preview models (`vision-preview`, `1106-Preview`, `0125-Preview`). <br> - [**Feature availability**](#gpt-4o-and-gpt-4-turbo) is currently different depending on method of input, and deployment type. <br> - **Does not support enhancements**. | Input: 128,000 <br> Output: 4,096 | Dec 2023 |
| `gpt-4` (0125-Preview)*<br>**GPT-4 Turbo Preview** | **Preview Model** <br> -Replaces 1106-Preview <br>- Better code generation performance <br> - Reduces cases where the model doesn't complete a task <br> - JSON Mode <br> - parallel function calling <br> - reproducible output (preview) | Input: 128,000 <br> Output: 4,096 | Dec 2023 |
| `gpt-4` (vision-preview)<br>**GPT-4 Turbo with Vision Preview** | **Preview model** <br> - Accepts text and image input. <br> - Supports enhancements <br> - JSON Mode <br> - parallel function calling <br> - reproducible output (preview) | Input: 128,000 <br> Output: 4,096 | Apr 2023 |
For more information on Provisioned deployments, see our [Provisioned guidance](
### Global standard model availability
-**Supported models:**
--- `gpt-4o` **Version:** `2024-05-13`
+`gpt-4o` **Version:** `2024-05-13`
**Supported regions:**
For more information on Provisioned deployments, see our [Provisioned guidance](
- westus
- westus3
+`gpt-4o-mini` **Version:** `2024-07-18`
+
+**Supported regions:**
+
+- eastus
+
### GPT-4 and GPT-4 Turbo model availability

#### Public cloud regions

[!INCLUDE [GPT-4](../includes/model-matrix/standard-gpt-4.md)]

#### Select customer access

In addition to the regions above, which are available to all Azure OpenAI customers, some select pre-existing customers have been granted access to versions of GPT-4 in additional regions:
These models can only be used with Embedding API requests.
| `gpt-35-turbo` (0613) | East US2 <br> North Central US <br> Sweden Central <br> Switzerland West | 4,096 | Sep 2021 |
| `gpt-35-turbo` (1106) | East US2 <br> North Central US <br> Sweden Central <br> Switzerland West | Input: 16,385<br> Output: 4,096 | Sep 2021|
| `gpt-35-turbo` (0125) | East US2 <br> North Central US <br> Sweden Central <br> Switzerland West | 16,385 | Sep 2021 |
-| `gpt-4` (0613) <sup>**1**<sup> | North Central US <br> Sweden Central | 8192 | Sep 2021 |
+| `gpt-4` (0613) <sup>**1**</sup> | North Central US <br> Sweden Central | 8192 | Sep 2021 |
-**<sup>1<sup>** GPT-4 fine-tuning is currently in public preview. See our [GPT-4 fine-tuning safety evaluation guidance](/azure/ai-services/openai/how-to/fine-tuning?tabs=turbo%2Cpython-new&pivots=programming-language-python#safety-evaluation-gpt-4-fine-tuningpublic-preview) for more information.
+**<sup>1</sup>** GPT-4 fine-tuning is currently in public preview. See our [GPT-4 fine-tuning safety evaluation guidance](/azure/ai-services/openai/how-to/fine-tuning?tabs=turbo%2Cpython-new&pivots=programming-language-python#safety-evaluation-gpt-4-fine-tuningpublic-preview) for more information.
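For orientation, here's a minimal sketch of starting a fine-tuning job from Python with the OpenAI client. It assumes a resource in one of the regions listed above and a training file already uploaded; the file ID, endpoint, and base model identifier are placeholders, and the exact model identifier can differ, so check the fine-tuning guide for your region.

```python
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-key>",
    api_version="2024-06-01",
)

# Placeholder file ID from a prior upload with purpose="fine-tune".
job = client.fine_tuning.jobs.create(
    training_file="<training-file-id>",
    model="gpt-35-turbo-0613",  # or a GPT-4 base model where the preview is available
)
print(job.id, job.status)
```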
### Whisper models
ai-services Fine Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/fine-tuning.md
Previously updated : 05/16/2024 Last updated : 07/25/2024 zone_pivot_groups: openai-fine-tuning-new
ai-services Function Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/function-calling.md
At a high level, you can break down working with functions into three steps:
* `gpt-4` (vision-preview)
* `gpt-4` (2024-04-09)
* `gpt-4o` (2024-05-13)
+* `gpt-4o-mini` (2024-07-18)
Support for parallel function was first added in API version [`2023-12-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
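To show what a tools request looks like for these models, here's a minimal Python sketch. The deployment name, endpoint, key, and the `get_weather` function are placeholders for illustration.

```python
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-key>",
    api_version="2024-06-01",
)

# Hypothetical tool definition used only to illustrate the request shape.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="<your-deployment-name>",
    messages=[{"role": "user", "content": "What's the weather in Paris and in Rome?"}],
    tools=tools,
    tool_choice="auto",  # with parallel function calling, several tool_calls can be returned at once
)
print(response.choices[0].message.tool_calls)
```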
ai-services Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/quotas-limits.md
- ignite-2023 - references_regions Previously updated : 07/24/2024 Last updated : 07/31/2024
The following sections provide you with a quick guide to the default quotas and
## gpt-4o rate limits
-`gpt-4o` introduces rate limit tiers with higher limits for certain customer types.
+`gpt-4o` and `gpt-4o-mini` have rate limit tiers with higher limits for certain customer types.
### gpt-4o global standard
-|Tier| Quota Limit in tokens per minute (TPM) | Requests per minute |
-||::|::|
-|Enterprise agreement | 30 M | 180 K |
-|Default | 450 K | 2.7 K |
+| Model|Tier| Quota Limit in tokens per minute (TPM) | Requests per minute |
+|||::|::|
+|`gpt-4o`|Enterprise agreement | 30 M | 180 K |
+|`gpt-4o-mini` | Enterprise agreement | 50 M | 300 K |
+|`gpt-4o` |Default | 450 K | 2.7 K |
+|`gpt-4o-mini` | Default | 2 M | 12 K |
M = million | K = thousand

### gpt-4o standard
-|Tier| Quota Limit in tokens per minute (TPM) | Requests per minute |
-||::|::|
-|Enterprise agreement | 1 M | 6 K |
-|Default | 150 K | 900 |
+| Model|Tier| Quota Limit in tokens per minute (TPM) | Requests per minute |
+|||::|::|
+|`gpt-4o`|Enterprise agreement | 1 M | 6 K |
+|`gpt-4o-mini` | Enterprise agreement | 2 M | 12 K |
+|`gpt-4o`|Default | 150 K | 900 |
+|`gpt-4o-mini` | Default | 450 K | 2.7 K |
M = million | K = thousand
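If a request exceeds your deployment's TPM or RPM limit, the service returns a 429 response. The following Python sketch shows one way to back off and retry; the endpoint, key, and deployment name are placeholders, and production code should respect the `Retry-After` header.

```python
import time
from openai import AzureOpenAI, RateLimitError

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-key>",
    api_version="2024-06-01",
)

for attempt in range(5):
    try:
        response = client.chat.completions.create(
            model="<your-deployment-name>",
            messages=[{"role": "user", "content": "Hello"}],
        )
        print(response.choices[0].message.content)
        break
    except RateLimitError:
        # Exponential backoff: 1, 2, 4, 8, 16 seconds.
        time.sleep(2 ** attempt)
```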
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whats-new.md
- ignite-2023 - references_regions Previously updated : 07/18/2024 Last updated : 07/31/2024 recommendations: false
This article provides a summary of the latest releases and major documentation u
## July 2024
-### GPT-4o mini preview model available for early access
+### GPT-4o mini model available for deployment
-GPT-4o mini is the latest model from OpenAI [launched on July 18, 2024](https://openai.com/index/gpt-4o-mini-advancing-cost-efficient-intelligence/).
+GPT-4o mini is the latest Azure OpenAI model first [announced on July 18, 2024](https://azure.microsoft.com/blog/openais-fastest-model-gpt-4o-mini-is-now-available-on-azure-ai/):
-From OpenAI:
+*"GPT-4o mini allows customers to deliver stunning applications at a lower cost with blazing speed. GPT-4o mini is significantly smarter than GPT-3.5 TurboΓÇöscoring 82% on Measuring Massive Multitask Language Understanding (MMLU) compared to 70%ΓÇöand is more than 60% cheaper.1 The model delivers an expanded 128K context window and integrates the improved multilingual capabilities of GPT-4o, bringing greater quality to languages from around the world."*
-*"GPT-4o mini surpasses GPT-3.5 Turbo and other small models on academic benchmarks across both textual intelligence and multimodal reasoning, and supports the same range of languages as GPT-4o. It also demonstrates strong performance in function calling, which can enable developers to build applications that fetch data or take actions with external systems, and improved long-context performance compared to GPT-3.5 Turbo."*
+The model is currently available for both [standard and global standard deployment](./how-to/deployment-types.md) in the East US region.
-To start testing out the model today in Azure OpenAI, see the [**Azure OpenAI Studio early access playground**](./concepts/models.md#early-access-playground).
+For information on model quota, consult the [quota and limits page](./quotas-limits.md), and for the latest info on model availability, refer to the [models page](./concepts/models.md).
### New Responsible AI default content filtering policy
ai-services How To Pronunciation Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-pronunciation-assessment.md
zone_pivot_groups: programming-languages-ai-services
In this article, you learn how to evaluate pronunciation with speech to text through the Speech SDK. Pronunciation assessment evaluates speech pronunciation and gives speakers feedback on the accuracy and fluency of spoken audio.
+> [!NOTE]
+> Pronunciation assessment uses a specific version of the speech to text model, different from the standard speech to text model, to ensure consistent and accurate pronunciation assessment.
+
## Use pronunciation assessment in streaming mode

Pronunciation assessment supports uninterrupted streaming mode. The recording time can be unlimited through the Speech SDK. As long as you don't stop recording, the evaluation process doesn't finish and you can pause and resume evaluation conveniently.
For how to use Pronunciation Assessment in streaming mode in your own applicatio
::: zone-end
+### Continuous recognition
++
+If your audio file exceeds 30 seconds, use continuous mode for processing. The sample code for continuous mode can be found on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_recognition_samples.cs) under the function `PronunciationAssessmentContinuousWithFile`.
+++
+If your audio file exceeds 30 seconds, use continuous mode for processing.
+++
+If your audio file exceeds 30 seconds, use continuous mode for processing. The sample code for continuous mode can be found on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/java/jre/console/src/com/microsoft/cognitiveservices/speech/samples/console/SpeechRecognitionSamples.java) under the function `pronunciationAssessmentContinuousWithFile`.
+++
+If your audio file exceeds 30 seconds, use continuous mode for processing. The sample code for continuous mode can be found on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/261160e26dfcae4c3aee93308d58d74e36739b6f/samples/python/console/speech_sample.py) under the function `pronunciation_assessment_continuous_from_file`.
+++
+If your audio file exceeds 30 seconds, use continuous mode for processing. The sample code for continuous mode can be found on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/261160e26dfcae4c3aee93308d58d74e36739b6f/samples/js/node/pronunciationAssessmentContinue.js).
+++
+If your audio file exceeds 30 seconds, use continuous mode for processing. The sample code for continuous mode can be found on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/objective-c/ios/speech-samples/speech-samples/ViewController.m) under the function `pronunciationAssessFromFile`.
+++
+If your audio file exceeds 30 seconds, use continuous mode for processing. The sample code for continuous mode can be found on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/swift/ios/speech-samples/speech-samples/ViewController.swift) under the function `continuousPronunciationAssessment`.
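For reference, here's a condensed Python sketch of continuous pronunciation assessment, loosely following the samples linked above. The key, region, file name, and reference text are placeholders, and error handling is omitted.

```python
import time
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")
audio_config = speechsdk.audio.AudioConfig(filename="<audio-longer-than-30s>.wav")

pron_config = speechsdk.PronunciationAssessmentConfig(
    reference_text="<the text the speaker reads>",
    grading_system=speechsdk.PronunciationAssessmentGradingSystem.HundredMark,
    granularity=speechsdk.PronunciationAssessmentGranularity.Phoneme,
)

recognizer = speechsdk.SpeechRecognizer(
    speech_config=speech_config, language="en-US", audio_config=audio_config
)
pron_config.apply_to(recognizer)

done = False

def on_recognized(evt):
    # Each recognized chunk carries its own pronunciation assessment scores.
    result = speechsdk.PronunciationAssessmentResult(evt.result)
    print(evt.result.text, result.accuracy_score, result.fluency_score)

def on_stop(evt):
    global done
    done = True

recognizer.recognized.connect(on_recognized)
recognizer.session_stopped.connect(on_stop)
recognizer.canceled.connect(on_stop)

recognizer.start_continuous_recognition()
while not done:
    time.sleep(0.5)
recognizer.stop_continuous_recognition()
```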
+++++

## Set configuration parameters

::: zone pivot="programming-language-go"
This table lists some of the optional methods you can set for the `Pronunciation
> Content and prosody assessments are only available in the [en-US](./language-support.md?tabs=pronunciation-assessment) locale.
>
> To explore the content and prosody assessments, upgrade to the SDK version 1.35.0 or later.
+>
+> There is no length limit for the topic parameter.
| Method | Description |
|--|-|
You can get pronunciation assessment scores for:
- Syllable groups
- Phonemes in [SAPI](/previous-versions/windows/desktop/ee431828(v=vs.85)#american-english-phoneme-table) or [IPA](https://en.wikipedia.org/wiki/IPA) format
-### Supported features per locale
+## Supported features per locale
The following table summarizes which features each locale supports. For more specifics, see the following sections. If the locales you require aren't listed in the following table for the supported feature, fill out this [intake form](https://aka.ms/speechpa/intake) for further assistance.
pronunciationAssessmentConfig?.phonemeAlphabet = "IPA"
::: zone-end
-## Assess spoken phonemes
+### Assess spoken phonemes
With spoken phonemes, you can get confidence scores that indicate how likely the spoken phonemes matched the expected phonemes.
pronunciationAssessmentConfig?.nbestPhonemeCount = 5
::: zone-end
+## Pronunciation score calculation
+
+Pronunciation scores are calculated by weighting accuracy, prosody, fluency, and completeness scores based on specific formulas for reading and speaking scenarios.
+
+When you sort the available scores for accuracy, prosody, fluency, and completeness from low to high and label them s0 (lowest) through s3 (highest), the pronunciation score is calculated as follows:
+
+For the reading scenario:
+ - With prosody score: PronScore = 0.4 * s0 + 0.2 * s1 + 0.2 * s2 + 0.2 * s3
+ - Without prosody score: PronScore = 0.6 * s0 + 0.2 * s1 + 0.2 * s2
+
+For the speaking scenario (the completeness score isn't applicable):
+ - With prosody score: PronScore = 0.6 * s0 + 0.2 * s1 + 0.2 * s2
+ - Without prosody score: PronScore = 0.6 * s0 + 0.4 * s1
+
+These formulas provide a weighted calculation based on the importance of each score, ensuring a comprehensive evaluation of pronunciation.
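To make the weighting concrete, here's a small Python helper that applies the formulas above; the function name and the `reading` flag are illustrative.

```python
def pronunciation_score(scores: list[float], reading: bool = True) -> float:
    """Weight the available scores (accuracy, prosody, fluency, completeness)."""
    s = sorted(scores)  # s[0] is the lowest available score
    if reading:
        if len(s) == 4:  # with prosody
            return 0.4 * s[0] + 0.2 * s[1] + 0.2 * s[2] + 0.2 * s[3]
        if len(s) == 3:  # without prosody
            return 0.6 * s[0] + 0.2 * s[1] + 0.2 * s[2]
    else:  # speaking scenario; completeness isn't applicable
        if len(s) == 3:  # with prosody
            return 0.6 * s[0] + 0.2 * s[1] + 0.2 * s[2]
        if len(s) == 2:  # without prosody
            return 0.6 * s[0] + 0.4 * s[1]
    raise ValueError("Unexpected number of scores for this scenario")

# Reading scenario with accuracy=80, prosody=90, fluency=85, completeness=100:
# 0.4*80 + 0.2*85 + 0.2*90 + 0.2*100 = 87.0
print(pronunciation_score([80, 90, 85, 100]))
```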
+
## Related content

- Learn about quality [benchmark](https://aka.ms/pronunciationassessment/techblog).
ai-services Language Learning With Pronunciation Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/language-learning-with-pronunciation-assessment.md
+
+ Title: Interactive language learning with pronunciation assessment
+description: Interactive language learning with pronunciation assessment gives you instant feedback on pronunciation, fluency, prosody, grammar, and vocabulary through interactive chats.
++++ Last updated : 8/1/2024+++
+# Interactive language learning with pronunciation assessment
++
+Learning a new language is an exciting journey. Interactive language learning can make your learning experience more engaging and effective. With pronunciation assessment, you get instant feedback on pronunciation accuracy, fluency, prosody, grammar, and vocabulary through interactive chats.
+
+> [!NOTE]
+> The language learning feature currently supports only `en-US`. For available regions, refer to [available regions for pronunciation assessment](regions.md#speech-service). If you turn on the **Avatar** button to interact with a text to speech avatar, refer to the available [regions](regions.md#speech-service) for text to speech avatar.
+>
+> If you have any feedback on the language learning feature, fill out [this form](https://aka.ms/speechpa/intake).
+
+## Common use cases
+
+Here are some common scenarios where you can make use of the language learning feature to improve your language skills:
+
+- **Assess pronunciations:** Practice your pronunciation and receive scores with detailed feedback to identify areas for improvement.
+- **Improve speaking skills:** Engage in conversations with a native speaker (or a simulated one) to enhance your speaking skills and build confidence.
+- **Learn new vocabulary:** Expand your vocabulary and work on advanced pronunciation by interacting with AI-driven language models.
+
+## Getting started
+
+In this section, you can learn how to immerse yourself in dynamic conversations with a GPT-powered voice assistant to enhance your speaking skills.
+
+To get started with language learning through chatting, follow these steps:
+
+1. Go to **Language learning** in the [Speech Studio](https://aka.ms/speechstudio).
+
+1. Decide on a scenario or context in which you'd like to interact with the voice assistant. This can be a casual conversation, a specific topic, or a language learning exercise.
+
+ :::image type="content" source="media/pronunciation-assessment/language-learning.png" alt-text="Screenshot of choosing chatting scenario to interact with the voice assistant." lightbox="media/pronunciation-assessment/language-learning.png":::
+
+ If you want to interact with an avatar, toggle the **Avatar** button in the upper right corner to **On**.
+
+1. Press the microphone icon to start speaking naturally, as if you were talking to a real person.
+
+ :::image type="content" source="media/pronunciation-assessment/language-learning-selecting-mic-icon.png" alt-text="Screenshot of selecting the microphone icon to interact with the voice assistant." lightbox="media/pronunciation-assessment/language-learning-selecting-mic-icon.png":::
+
+ For accurate vocabulary and grammar scores, speak at least 3 sentences before assessment.
+
+1. Press the stop button or the **Assess my response** button to finish speaking. This action triggers the assessment process.
+
+ :::image type="content" source="media/pronunciation-assessment/language-learning-assess-response.png" alt-text="Screenshot of selecting the stop button to assess your response." lightbox="media/pronunciation-assessment/language-learning-assess-response.png":::
+
+1. Wait for a moment, and you can get a detailed assessment report.
+
+ :::image type="content" source="media/pronunciation-assessment/language-learning-assess-report.png" alt-text="Screenshot of a detailed assessment report.":::
+
+ The assessment report may include feedback on:
+ - **Accuracy:** Accuracy indicates how closely the phonemes match a native speaker's pronunciation.
+ - **Fluency:** Fluency indicates how closely the speech matches a native speaker's use of silent breaks between words.
+ - **Prosody:** Prosody indicates the nature of the given speech, including stress, intonation, speaking speed, and rhythm.
+ - **Grammar:** Grammar considers lexical accuracy, grammatical accuracy, and diversity of sentence structures, providing a more comprehensive evaluation of language proficiency.
+ - **Vocabulary:** Vocabulary evaluates the speaker's effective usage of words and their appropriateness within the given context to express ideas accurately, as well as the level of lexical complexity.
+
+ When recording your speech for pronunciation assessment, ensure your recording time falls within the recommended range of 20 seconds (equivalent to more than 50 words) to 10 minutes per session. This time range is optimal for evaluating the content of your speech accurately. Whether you have a short and focused conversation or a more extended dialogue, as long as the total recorded time falls within this range, you'll receive comprehensive feedback on your pronunciation, fluency, and content.
+
+ To get feedback on how to improve for each aspect of the assessment, select **Get feedback on how to improve**.
+
+ :::image type="content" source="media/pronunciation-assessment/language-learning-feedback-improve.png" alt-text="Screenshot of selecting the button to get feedback on how to improve for each aspect of the assessment.":::
+
+ When you have completed the conversation, you can also download your chat audio. You can clear the current conversation by selecting **Clear chat**.
+
+## Next steps
+
+- Use [pronunciation assessment with the Speech SDK](how-to-pronunciation-assessment.md)
+- Try [pronunciation assessment in the studio](pronunciation-assessment-tool.md).
ai-services Personal Voice How To Use https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/personal-voice-how-to-use.md
Here's example SSML in a request for text to speech with the voice name and the
You can use the SSML via the [Speech SDK](./get-started-text-to-speech.md) or [REST API](rest-text-to-speech.md).

* **Real-time speech synthesis**: Use the [Speech SDK](./get-started-text-to-speech.md) or [REST API](rest-text-to-speech.md) to convert text to speech.
- * When you use Speech SDK, don't set Endpoint Id, just like prebuild voice.
+ * When you use the Speech SDK, don't set an endpoint ID, just as you would for a prebuilt voice.
 * When you use the REST API, use the prebuilt neural voices endpoint.
+## Supported and unsupported SSML elements for personal voice
+
+For detailed information on the supported and unsupported SSML elements for Phoenix and Dragon models, refer to the following table. For instructions on how to use SSML elements, refer to the [SSML document structure and events](speech-synthesis-markup-structure.md).
+
+| Element | Description | Supported in Phoenix | Supported in Dragon |
+|-|--|-||
+| `<voice>` | Specifies the voice and optional effects (`eq_car` and `eq_telecomhp8k`). | Yes | Yes |
+| `<mstts:express-as>` | Specifies speaking styles and roles. | No | No |
+| `<mstts:ttsembedding>` | Specifies the `speakerProfileId` property for a personal voice. | Yes | No |
+| `<lang xml:lang>` | Specifies the speaking language. | Yes | Yes |
+| `<prosody>` | Adjusts pitch, contour, range, rate, and volume. | | |
+|&nbsp;&nbsp;&nbsp;`pitch` | Indicates the baseline pitch for the text. | No | No |
+| &nbsp;&nbsp;&nbsp;`contour`| Represents changes in pitch. | No | No |
+| &nbsp;&nbsp;&nbsp;`range` | Represents the range of pitch for the text. | No | No |
+| &nbsp;&nbsp;&nbsp;`rate` | Indicates the speaking rate of the text. | Yes | Yes |
+| &nbsp;&nbsp;&nbsp;`volume`| Indicates the volume level of the speaking voice. | No | No |
+| `<emphasis>` | Adds or removes word-level stress for the text. | No | No |
+| `<audio>` | Embeds prerecorded audio into an SSML document. | Yes | No |
+| `<mstts:audioduration>` | Specifies the duration of the output audio. | No | No |
+| `<mstts:backgroundaudio>`| Adds background audio to your SSML documents or mixes an audio file with text to speech. | Yes | No |
+| `<phoneme>` | Specifies phonetic pronunciation in SSML documents. | | |
+| &nbsp;&nbsp;&nbsp;`ipa` | One of the phonetic alphabets. | Yes | No |
+| &nbsp;&nbsp;&nbsp;`sapi` | One of the phonetic alphabets. | No | No |
+| &nbsp;&nbsp;&nbsp;`ups` | One of the phonetic alphabets. | Yes | No |
+| &nbsp;&nbsp;&nbsp;`x-sampa`| One of the phonetic alphabets. | Yes | No |
+| `<lexicon>` | Defines how multiple entities are read in SSML. | Yes | Yes (only supports `alias`) |
+| `<say-as>` | Indicates the content type, such as number or date, of the element's text. | Yes | Yes |
+| `<sub>` | Indicates that the alias attribute's text value should be pronounced instead of the element's enclosed text. | Yes | Yes |
+| `<math>` | Uses MathML as input text to properly pronounce mathematical notations in the output audio. | Yes | No |
+| `<bookmark>` | Gets the offset of each marker in the audio stream. | Yes | No |
+| `<break>` | Overrides the default behavior of breaks or pauses between words. | Yes | Yes |
+| `<mstts:silence>` | Inserts pauses before or after text, or between two adjacent sentences. | Yes | No |
+| `<mstts:viseme>` | Defines the position of the face and mouth while a person is speaking. | Yes | No |
+| `<p>` | Denotes paragraphs in SSML documents. | Yes | Yes |
+| `<s>` | Denotes sentences in SSML documents. | Yes | Yes |
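To see how these elements combine in practice, here's a minimal Python sketch that synthesizes speech with a personal voice profile, using only elements marked as supported above. The key, region, base voice name, and speaker profile ID are placeholders.

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")
# Don't set an endpoint ID for personal voice, just as with prebuilt voices.
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config, audio_config=None)

ssml = """
<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis'
       xmlns:mstts='http://www.w3.org/2001/mstts' xml:lang='en-US'>
  <voice name='<your-base-model-voice-name>'>
    <mstts:ttsembedding speakerProfileId='<your-speaker-profile-id>'>
      <prosody rate='+10%'>
        Hello, this is my personal voice.
        <break time='500ms'/>
        Nice to meet you.
      </prosody>
    </mstts:ttsembedding>
  </voice>
</speak>
"""

result = synthesizer.speak_ssml_async(ssml).get()
print(result.reason)
```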
+
## Reference documentation

> [!div class="nextstepaction"]
ai-services Speech Synthesis Markup Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-synthesis-markup-voice.md
The following table describes the usage of the `prosody` element's attributes:
| Attribute | Description | Required or optional |
| - | - | - |
| `contour` | Contour represents changes in pitch. These changes are represented as an array of targets at specified time positions in the speech output. Sets of parameter pairs define each target. For example: <br/><br/>`<prosody contour="(0%,+20Hz) (10%,-2st) (40%,+10Hz)">`<br/><br/>The first value in each set of parameters specifies the location of the pitch change as a percentage of the duration of the text. The second value specifies the amount to raise or lower the pitch by using a relative value or an enumeration value for pitch (see `pitch`). | Optional |
-| `pitch` | Indicates the baseline pitch for the text. Pitch changes can be applied at the sentence level. The pitch changes should be within 0.5 to 1.5 times the original audio. You can express the pitch as:<ul><li>An absolute value: Expressed as a number followed by "Hz" (Hertz). For example, `<prosody pitch="600Hz">some text</prosody>`.</li><li>A relative value:<ul><li>As a relative number: Expressed as a number preceded by "+" or "-" and followed by "Hz" or "st" that specifies an amount to change the pitch. For example: `<prosody pitch="+80Hz">some text</prosody>` or `<prosody pitch="-2st">some text</prosody>`. The "st" indicates the change unit is semitone, which is half of a tone (a half step) on the standard diatonic scale.<li>As a percentage: Expressed as a number preceded by "+" (optionally) or "-" and followed by "%", indicating the relative change. For example: `<prosody pitch="50%">some text</prosody>` or `<prosody pitch="-50%">some text</prosody>`.</li></ul></li><li>A constant value:<ul><li>x-low</li><li>low</li><li>medium</li><li>high</li><li>x-high</li><li>default</li></ul></li></ul> | Optional |
+| `pitch` | Indicates the baseline pitch for the text. Pitch changes can be applied at the sentence level. The pitch changes should be within 0.5 to 1.5 times the original audio. You can express the pitch as:<ul><li>An absolute value: Expressed as a number followed by "Hz" (Hertz). For example, `<prosody pitch="600Hz">some text</prosody>`.</li><li>A relative value:<ul><li>As a relative number: Expressed as a number preceded by "+" or "-" and followed by "Hz" or "st" that specifies an amount to change the pitch. For example: `<prosody pitch="+80Hz">some text</prosody>` or `<prosody pitch="-2st">some text</prosody>`. The "st" indicates the change unit is semitone, which is half of a tone (a half step) on the standard diatonic scale.<li>As a percentage: Expressed as a number preceded by "+" (optionally) or "-" and followed by "%", indicating the relative change. For example: `<prosody pitch="50%">some text</prosody>` or `<prosody pitch="-50%">some text</prosody>`.</li></ul></li><li>A constant value:<ul><li>`x-low` (equivalently 0.55,-45%)</li><li>`low` (equivalently 0.8, -20%)</li><li>`medium` (equivalently 1, default value)</li><li>`high` (equivalently 1.2, +20%)</li><li>`x-high` (equivalently 1.45, +45%)</li></ul></li></ul> | Optional |
| `range`| A value that represents the range of pitch for the text. You can express `range` by using the same absolute values, relative values, or enumeration values used to describe `pitch`.| Optional |
-| `rate` | Indicates the speaking rate of the text. Speaking rate can be applied at the word or sentence level. The rate changes should be within `0.5` to `2` times the original audio. You can express `rate` as:<ul><li>A relative value: <ul><li>As a relative number: Expressed as a number that acts as a multiplier of the default. For example, a value of `1` results in no change in the original rate. A value of `0.5` results in a halving of the original rate. A value of `2` results in twice the original rate.</li><li>As a percentage: Expressed as a number preceded by "+" (optionally) or "-" and followed by "%", indicating the relative change. For example: `<prosody rate="50%">some text</prosody>` or `<prosody rate="-50%">some text</prosody>`.</li></ul><li>A constant value:<ul><li>x-slow</li><li>slow</li><li>medium</li><li>fast</li><li>x-fast</li><li>default</li></ul></li></ul> | Optional |
-| `volume` | Indicates the volume level of the speaking voice. Volume changes can be applied at the sentence level. You can express the volume as:<ul><li>An absolute value: Expressed as a number in the range of `0.0` to `100.0`, from *quietest* to *loudest*, such as `75`. The default value is `100.0`.</li><li>A relative value: <ul><li>As a relative number: Expressed as a number preceded by "+" or "-" that specifies an amount to change the volume. Examples are `+10` or `-5.5`.</li><li>As a percentage: Expressed as a number preceded by "+" (optionally) or "-" and followed by "%", indicating the relative change. For example: `<prosody volume="50%">some text</prosody>` or `<prosody volume="+3%">some text</prosody>`.</li></ul><li>A constant value:<ul><li>silent</li><li>x-soft</li><li>soft</li><li>medium</li><li>loud</li><li>x-loud</li><li>default</li></ul></li></ul> | Optional |
+| `rate` | Indicates the speaking rate of the text. Speaking rate can be applied at the word or sentence level. The rate changes should be within `0.5` to `2` times the original audio. You can express `rate` as:<ul><li>A relative value: <ul><li>As a relative number: Expressed as a number that acts as a multiplier of the default. For example, a value of `1` results in no change in the original rate. A value of `0.5` results in a halving of the original rate. A value of `2` results in twice the original rate.</li><li>As a percentage: Expressed as a number preceded by "+" (optionally) or "-" and followed by "%", indicating the relative change. For example: `<prosody rate="50%">some text</prosody>` or `<prosody rate="-50%">some text</prosody>`.</li></ul><li>A constant value:<ul><li>`x-slow` (equivalently 0.5, -50%)</li><li>`slow` (equivalently 0.64, -46%)</li><li>`medium` (equivalently 1, default value)</li><li>`fast` (equivalently 1.55, +55%)</li><li>`x-fast` (equivalently 2, +100%)</li></ul></li></ul> | Optional |
+| `volume` | Indicates the volume level of the speaking voice. Volume changes can be applied at the sentence level. You can express the volume as:<ul><li>An absolute value: Expressed as a number in the range of `0.0` to `100.0`, from *quietest* to *loudest*, such as `75`. The default value is `100.0`.</li><li>A relative value: <ul><li>As a relative number: Expressed as a number preceded by "+" or "-" that specifies an amount to change the volume. Examples are `+10` or `-5.5`.</li><li>As a percentage: Expressed as a number preceded by "+" (optionally) or "-" and followed by "%", indicating the relative change. For example: `<prosody volume="50%">some text</prosody>` or `<prosody volume="+3%">some text</prosody>`.</li></ul><li>A constant value:<ul><li>`silent` (equivalently 0)</li><li>`x-soft` (equivalently 0.2)</li><li>`soft` (equivalently 0.4)</li><li>`medium` (equivalently 0.6)</li><li>`loud` (equivalently 0.8)</li><li>`x-loud` (equivalently 1, default value)</li></ul></li></ul> | Optional |
### Prosody examples
The following table describes the `emphasis` element's attributes:
| Attribute | Description | Required or optional |
| - | - | - |
-| `level` | Indicates the strength of emphasis to be applied:<ul><li>`reduced`</li><li>`none`</li><li>`moderate`</li><li>`strong`</li></ul>.<br>When the `level` attribute isn't specified, the default level is `moderate`. For details on each attribute, see [emphasis element](https://www.w3.org/TR/speech-synthesis11/#S3.2.2). | Optional |
+| `level` | Indicates the strength of emphasis to be applied:<ul><li>`reduced`</li><li>`none`</li><li>`moderate`</li><li>`strong`</li></ul><br>When the `level` attribute isn't specified, the default level is `moderate`. For details on each attribute, see [emphasis element](https://www.w3.org/TR/speech-synthesis11/#S3.2.2). | Optional |
### Emphasis examples
ai-studio Data Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/data-add.md
- build-2024 Last updated 5/21/2024---++ # How to add and manage data in your Azure AI Studio project
ai-studio Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/whats-new.md
- Title: What's new in Azure AI Studio?-
-description: This article provides you with information about new releases and features.
-
-keywords: Release notes
-- Previously updated : 5/21/2024-----
-# What's new in Azure AI Studio?
-
-Azure AI Studio is updated on an ongoing basis. To stay up-to-date with recent developments, this article provides you with information about new releases and features.
-
-## May 2024
-
-### Azure AI Studio (GA)
-
-Azure AI Studio is now generally available. Azure AI Studio is a unified platform that brings together various Azure AI capabilities that were previously available as standalone Azure services. Azure AI Studio provides a seamless experience for developers, data scientists, and AI engineers to build, deploy, and manage AI models and applications. With Azure AI Studio, you can access a wide range of AI capabilities, including language models, speech, vision, and more, all in one place.
-
-> [!NOTE]
-> Some features are still in public preview and might not be available in all regions. Please refer to the feature level documentation for more information.
-
-### New UI
-
-We've updated the AI Studio navigation experience to help you work more efficiently and seamlessly move through the platform. Get to know the new navigation below:
-
-#### Quickly transition between hubs and projects
-
-Easily navigate between the global, hub, and project scopes.
-- Go back to the previous scope at any time by using the back button at the top of the navigation. -- Tools and resources change dynamically based on whether you are working at the global, hub, or project level. --
-#### Navigate with breadcrumbs
-
-We have added breadcrumbs to prevent you from getting lost in the product.
-- Breadcrumbs are consistently shown on the top navigation, regardless of what page you are on. -- Use these breadcrumbs to quickly move through the platform. --
-#### Customize your navigation
-
-The new navigation can be modified and customized to fit your needs.
-- Collapse and expand groupings as needed to easily access the tools you need the most. -- Collapse the navigation at any time to save screen space. All tools and capabilities will still be available. --
-#### Easily switch between your recent hubs and projects
-
-Switch between recently used hubs and projects at any time using the picker at the top of the navigation.
-- While in a hub, use the picker to access and switch to any of your recently used hubs. -- While in a project, use the picker to access and switch to any of your recently used projects. --
-### View and track your evaluators in a centralized way
-
-Evaluator is a new asset in Azure AI Studio. You can define a new evaluator in SDK and use it to run evaluation that generates scores of one or more metrics. You can view and manage both Microsoft curated evaluators and your own customized evaluators in the evaluator library. For more information, see [Evaluate with the prompt flow SDK](./how-to/develop/flow-evaluate-sdk.md).
-
-### Perform continuous monitoring for generative AI applications
-
-Azure AI Monitoring for Generative AI Applications enables you to continuously track the overall health of your production Prompt Flow deployments. With this feature, you can monitor the quality of LLM responses in addition to obtaining full visibility into the performance of your application, thus, helping you maintain trust and compliance. For more information, see [Monitor quality and safety of deployed prompt flow applications](./how-to/monitor-quality-safety.md).
-
-#### View embeddings benchmarks
-
-You can now compare benchmarks across embeddings models. For more information, see [Explore model benchmarks in Azure AI Studio](./how-to/model-benchmarks.md).
-
-### Fine-tune and deploy Azure OpenAI models
-
-Learn how to customize Azure OpenAI models with fine-tuning. You can train models on more examples and get higher quality results. For more information, see [Fine-tune and deploy Azure OpenAI models](../ai-services/openai/how-to/fine-tuning.md?context=/azure/ai-studio/context/context) and [Deploy Azure OpenAI models](./how-to/deploy-models-openai.md).
-
-### Service-side encryption of metadata
-
-We release simplified management when using customer-managed key encryption for workspaces, with less resources hosted in your Azure subscription. This reduces operational cost, and mitigates policy conflicts compared to the current offering.
-
-### Azure AI model Inference API
-
-The Azure AI Model Inference is an API that exposes a common set of capabilities for foundational models and that can be used by developers to consume predictions from a diverse set of models in a uniform and consistent way. Developers can talk with different models deployed in Azure AI without changing the underlying code they're using. For more information, see [Azure AI Model Inference API](./reference/reference-model-inference-api.md).
-
-### Perform tracing and debugging for GenAI applicationsΓÇ»
-
-Tracing is essential for providing detailed visibility into the performance and behavior of GenAI applications' inner workings. It plays a vital role in enhancing the debugging process, increasing observability, and promoting optimization.
-With this new capability, you can now efficiently monitor and rectify issues in your GenAI application during testing, fostering a more collaborative and efficient development process.
-
-### Use evaluators in the prompt flow SDK
-
-Evaluators in the prompt flow SDK offer a streamlined, code-based experience for evaluating and improving your generative AI apps. You can now easily use Microsoft curated quality and safety evaluators or define custom evaluators tailored to assess generative AI systems for the specific metrics you value. For more information about evaluators via the prompt flow SDK, see [Evaluate with the prompt flow SDK](./how-to/develop/flow-evaluate-sdk.md).
-
-Microsoft curated evaluators are also available in the AI Studio evaluator library, where you can view and manage them. However, custom evaluators are currently only available in the prompt flow SDK. For more information about evaluators in AI Studio, see [How to evaluate generative AI apps with Azure AI Studio](./how-to/evaluate-generative-ai-app.md#view-and-manage-the-evaluators-in-the-evaluator-library).
-
-### Use Prompty for engineering and sharing prompts
-
-Prompty is a new prompt template part of the prompt flow SDK that can be run standalone and integrated into your code. You can download a Prompty from the AI Studio playground, continue iterating on it in your local development environment, and check it into your git repo to share and collaborate on prompts with others. The Prompty format is supported in Semantic Kernel, C#, and LangChain as a community extension.
-
-### Mistral Small
-
-Mistral Small is available in the Azure AI model catalog. Mistral Small is Mistral AI's smallest proprietary Large Language Model (LLM). It can be used on any language-based task that requires high efficiency and low latency. Developers can access Mistral Small through Models as a Service (MaaS), enabling seamless API-based interactions.
-
-Mistral Small is:
--- A small model optimized for low latency: Efficient for high volume and low latency workloads. Mistral Small is Mistral's smallest proprietary model.-- Specialized in RAG: Crucial information isn't lost in the middle of long context windows. Supports up to 32K tokens.-- Strong in coding: Code generation, review, and comments with support for all mainstream coding languages.-- Multi-lingual by design: Best-in-class performance in French, German, Spanish, and Italian - in addition to English. Dozens of other languages are supported.-- Efficient guardrails baked in the model, with another safety layer with safe prompt option.-
-For more information about Phi-3, see the [blog announcement](https://techcommunity.microsoft.com/t5/ai-machine-learning-blog/introducing-mistral-small-empowering-developers-with-efficient/ba-p/4127678).
-
-## April 2024
-
-### Phi-3
-
-The Phi-3 family of models developed by Microsoft is available in the Azure AI model catalog. Phi-3 models are the most capable and cost-effective small language models (SLMs) available, outperforming models of the same size and next size up across various language, reasoning, coding, and math benchmarks. This release expands the selection of high-quality models for customers, offering more practical choices as they compose and build generative AI applications.
--- Phi-3-mini is available in two context-length variantsΓÇö4K and 128K tokens. It's the first model in its class to support a context window of up to 128K tokens, with little effect on quality.-- It's instruction-tuned, meaning that it's trained to follow different types of instructions reflecting how people normally communicate. This ensures the model is ready to use out-of-the-box.-- It's available on Azure AI to take advantage of the deploy > evaluate > fine-tune toolchain, and is available on Ollama for developers to run locally on their laptops.-- It has been optimized for ONNX Runtime with support for Windows DirectML along with cross-platform support across graphics processing unit (GPU), CPU, and even mobile hardware.-- It's also available as an NVIDIA NIM microservice with a standard API interface that can be deployed anywhere. And has been optimized for NVIDIA GPUs. -
-For more information about Phi-3, see the [blog announcement](https://azure.microsoft.com/blog/introducing-phi-3-redefining-whats-possible-with-slms/).
-
-### Meta Llama 3
-
-In collaboration with Meta, Meta Llama 3 models are available in the Azure AI model catalog.
--- Meta-Llama-3-8B pretrained and instruction fine-tuned models are recommended for scenarios with limited computational resources, offering faster training times and suitability for edge devices. It's appropriate for use cases like text summarization, classification, sentiment analysis, and translation.-- Meta-Llama-3-70B pretrained and instruction fine-tuned models are geared towards content creation and conversational AI, providing deeper language understanding for more nuanced tasks, like R&D and enterprise applications requiring nuanced text summarization, classification, language modeling, dialog systems, code generation and instruction following.-
-## February 2024
-
-### Azure AI Studio hub
-
-Azure AI resource is renamed hub. For additional information about the hub, check out [the hub documentation](./concepts/ai-resources.md).
-
-## January 2024
-
-### Benchmarks
-
-New models, datasets, and metrics are released for benchmarks. For additional information about the benchmarks experience, check out [the model catalog documentation](./how-to/model-catalog-overview.md).
-
-Added models:
-- `microsoft-phi-2`-- `mistralai-mistral-7b-instruct-v01`-- `mistralai-mistral-7b-v01`-- `codellama-13b-hf`-- `codellama-13b-instruct-hf`-- `codellama-13b-python-hf`-- `codellama-34b-hf`-- `codellama-34b-instruct-hf`-- `codellama-34b-python-hf`-- `codellama-7b-hf`-- `codellama-7b-instruct-hf`-- `codellama-7b-python-hf`-
-Added datasets:
-- `truthfulqa_generation`-- `truthfulqa_mc1`-
-Added metrics:
-- `Coherence`-- `Fluency`-- `GPTSimilarity`-
-## November 2023
-
-### Benchmarks
-
-Benchmarks are released as public preview in Azure AI Studio. For additional information about the Benchmarks experience, check out [Model benchmarks](how-to/model-benchmarks.md).
-
-Added models:
-- `gpt-35-turbo-0301`-- `gpt-4-0314`-- `gpt-4-32k-0314`-- `llama-2-13b-chat`-- `llama-2-13b`-- `llama-2-70b-chat`-- `llama-2-70b`-- `llama-2-7b-chat`-- `llama-2-7b`-
-Added datasets:
-- `boolq`-- `gsm8k`-- `hellaswag`-- `human_eval`-- `mmlu_humanities`-- `mmlu_other`-- `mmlu_social_sciences`-- `mmlu_stem`-- `openbookqa`-- `piqa`-- `social_iqa`-- `winogrande`-
-Added tasks:
-- `Question Answering`-- `Text Generation`-
-Added metrics:
-- `Accuracy`-
-## Related content
--- Learn more about the [Azure AI Studio](./what-is-ai-studio.md).-- Learn about [what's new in Azure OpenAI Service](../ai-services/openai/whats-new.md).
aks Gpu Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/gpu-cluster.md
The NVIDIA GPU Operator automates the management of all NVIDIA software componen
> [!WARNING] > We don't recommend manually installing the NVIDIA device plugin daemon set with clusters using the AKS GPU image.
+> [!NOTE]
+> There might be additional considerations to take into account when you use the NVIDIA GPU Operator and deploy on Spot instances. For more information, see <https://github.com/NVIDIA/gpu-operator/issues/577>.
++
### Use the AKS GPU image (preview)

AKS provides a fully configured AKS image containing the [NVIDIA device plugin for Kubernetes][nvidia-github]. The AKS GPU image is currently only supported for Ubuntu 18.04.
api-center Add Metadata Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/add-metadata-properties.md
Title: Tutorial - Define custom metadata for API governance description: In this tutorial, define custom metadata in your API center. Use custom and built-in metadata to organize and govern your APIs. -+ Last updated 04/19/2024
api-center Check Minimal Api Permissions Dev Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/check-minimal-api-permissions-dev-proxy.md
Title: Check app's API calls for minimal permissions with Dev Proxy description: Learn how to use Dev Proxy to check if your app is calling APIs using minimal permissions defined in Azure API Center. -+ Last updated 07/17/2024
api-center Configure Environments Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/configure-environments-deployments.md
Title: Tutorial - Add environments and deployments for APIs description: In this tutorial, augment the API inventory in your API center by adding information about API environments and deployments. -+ Last updated 04/22/2024
api-center Discover Shadow Apis Dev Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/discover-shadow-apis-dev-proxy.md
Title: Discover shadow APIs using Dev Proxy description: Learn how to discover shadow APIs in your apps using Dev Proxy and onboard them to API Center. -+ Last updated 07/15/2024
api-center Enable Api Analysis Linting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/enable-api-analysis-linting.md
Title: Perform API linting and analysis - Azure API Center description: Configure linting of API definitions in your API center to analyze compliance of APIs with the organization's API style guide.-+ Last updated 06/29/2024
api-center Enable Api Center Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/enable-api-center-portal.md
Title: Self-host the API Center portal description: How to self-host the API Center portal, a customer-managed website that enables discovery of the API inventory in your Azure API center. -+ Last updated 04/29/2024
api-center Find Nonproduction Api Requests Dev Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/find-nonproduction-api-requests-dev-proxy.md
Title: Find nonproduction API requests with Dev Proxy description: Learn how to check if your app is using production-level APIs defined in Azure API Center using Dev Proxy. -+ Last updated 07/17/2024
api-center Import Api Management Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/import-api-management-apis.md
Title: Import APIs from Azure API Management - Azure API Center description: Add APIs to your Azure API center inventory from your API Management instance. -+ Last updated 06/28/2024
api-center Key Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/key-concepts.md
Title: Azure API Center - Key concepts description: Key concepts of Azure API Center. API Center inventories an organization's APIs for discovery, reuse, and governance at scale. -+ Last updated 04/23/2024
api-center Manage Apis Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/manage-apis-azure-cli.md
Title: Manage API inventory in Azure API Center - Azure CLI description: Use the Azure CLI to create and update APIs, API versions, and API definitions in your Azure API center. -+ Last updated 06/28/2024
api-center Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/metadata.md
Title: Use metadata to organize and govern APIs description: Learn about metadata in Azure API Center. Use built in and custom metadata to organize your inventory and enforce governance standards. -+ Last updated 04/19/2024
api-center Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/overview.md
Title: Azure API Center - Overview
description: Introduction to key scenarios and capabilities of Azure API Center. API Center inventories an organization's APIs for discovery, reuse, and governance at scale. -+ Last updated 04/15/2024
api-center Register Apis Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/register-apis-github-actions.md
Title: Register APIs using GitHub Actions - Azure API Center description: Learn how to automate the registration of APIs in your API center using a CI/CD workflow based on GitHub Actions.-+ Last updated 07/24/2024
api-center Register Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/register-apis.md
Title: Tutorial - Start your API inventory description: In this tutorial, start the API inventory in your API center by registering APIs using the Azure portal. -+ Last updated 04/19/2024
api-center Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/resources.md
Title: Azure API Center - Code samples and labs
description: Find code samples, reference implementations, labs, and deployment templates to create, populate, and govern your Azure API center. -+ Last updated 06/11/2024
api-center Set Up Api Center Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/set-up-api-center-arm-template.md
Title: Quickstart - Create your Azure API center - ARM template description: In this quickstart, use an Azure Resource Manager template to set up an API center for API discovery, reuse, and governance. -+ Last updated 05/13/2024
api-center Set Up Api Center Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/set-up-api-center-azure-cli.md
Title: Quickstart - Create your Azure API center - Azure CLI description: In this quickstart, use the Azure CLI to set up an API center for API discovery, reuse, and governance. -+ Last updated 06/27/2024
api-center Set Up Api Center Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/set-up-api-center-bicep.md
Title: Quickstart - Create your Azure API center - Bicep description: In this quickstart, use Bicep to set up an API center for API discovery, reuse, and governance. -+ Last updated 05/13/2024
api-center Set Up Api Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/set-up-api-center.md
Title: Quickstart - Create your Azure API center - portal description: In this quickstart, use the Azure portal to set up an API center for API discovery, reuse, and governance. -+ Last updated 04/19/2024
api-center Use Vscode Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/use-vscode-extension.md
Title: Interact with API inventory using VS Code extension description: Build, discover, try, and consume APIs from your Azure API center using the Azure API Center extension for Visual Studio Code. -+ Last updated 07/15/2024
api-management Api Management Howto Deploy Multi Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-deploy-multi-region.md
description: Learn how to deploy a Premium tier Azure API Management instance to
Previously updated : 05/15/2024 Last updated : 07/29/2024
This section provides considerations for multi-region deployments when the API M
### IP addresses
-* A public virtual IP address is created in every region added with a virtual network. For virtual networks in either [external mode](api-management-using-with-vnet.md) or [internal mode](api-management-using-with-internal-vnet.md), this public IP address is required for management traffic on port `3443`.
+* A public virtual IP address is created in every region added with a virtual network. For virtual networks in either [external mode](api-management-using-with-vnet.md) or [internal mode](api-management-using-with-internal-vnet.md), this public IP address is used for management traffic on port `3443`.
* **External VNet mode** - The public IP addresses are also required to route public HTTP traffic to the API gateways.
This section provides considerations for multi-region deployments when the API M
* **External VNet mode** - Routing of public HTTP traffic to the regional gateways is handled automatically, in the same way it is for a non-networked API Management instance.
-* **Internal VNet mode** - Private HTTP traffic isn't routed or load-balanced to the regional gateways by default. Users own the routing and are responsible for bringing their own solution to manage routing and private load balancing across multiple regions. Example solutions include Azure Application Gateway and Azure Traffic Manager.
+* **Internal VNet mode** - Private HTTP traffic isn't routed or load-balanced to the regional gateways by default. Users own the routing and are responsible for bringing their own solution to manage routing and private load balancing across multiple regions.
## Next steps
app-service Language Support Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/language-support-policy.md
To learn more about specific timelines for the language support policy, see the
- [PHP](https://aka.ms/phprelease) - [Go](https://aka.ms/gorelease)
+## Support status
+
+App Service supports languages on both Linux and Windows operating systems. See the following resources for the OS support status of each language:
+
+- [.NET](https://github.com/Azure/app-service-linux-docs/blob/master/Runtime_Support/dot_net_core.md#support-timeline)
+- [Java](#jdk-versions-and-maintenance)
+- [Node](https://github.com/Azure/app-service-linux-docs/blob/master/Runtime_Support/node_support.md#support-timeline)
+- [Python](https://github.com/Azure/app-service-linux-docs/blob/master/Runtime_Support/python_support.md#support-timeline)
+- [PHP](https://github.com/Azure/app-service-linux-docs/blob/master/Runtime_Support/php_support.md#support-timeline)
++ ## Configure language versions To learn more about how to update language versions for your App Service applications, see the following resources: - [.NET](https://github.com/Azure/app-service-linux-docs/blob/master/Runtime_Support/dot_net_core.md#how-to-update-your-app-to-target-a-different-version-of-net-or-net-core)-- [Node](https://github.com/Azure/app-service-linux-docs/blob/master/Runtime_Support/node_support.md#node-on-linux-app-service) - [Java](https://github.com/Azure/app-service-linux-docs/blob/master/Runtime_Support/java_support.md#java-on-app-service)
+- [Node](https://github.com/Azure/app-service-linux-docs/blob/master/Runtime_Support/node_support.md#node-on-linux-app-service)
- [Python](https://github.com/Azure/app-service-linux-docs/blob/master/Runtime_Support/python_support.md#how-to-update-your-app-to-target-a-different-version-of-python) - [PHP](https://github.com/Azure/app-service-linux-docs/blob/master/Runtime_Support/php_support.md#how-to-update-your-app-to-target-a-different-version-of-php)
app-service Scenario Secure App Access Microsoft Graph As App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scenario-secure-app-access-microsoft-graph-as-app.md
-+ Last updated 04/05/2023
app-service Scenario Secure App Access Microsoft Graph As User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scenario-secure-app-access-microsoft-graph-as-user.md
-+ Last updated 09/15/2023
app-service Scenario Secure App Access Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scenario-secure-app-access-storage.md
description: In this tutorial, you learn how to access Azure Storage for a .NET
-+ Last updated 07/31/2023
app-service Scenario Secure App Authentication App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scenario-secure-app-authentication-app-service.md
-+ Last updated 05/16/2024
app-service Scenario Secure App Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scenario-secure-app-overview.md
-+ Last updated 12/10/2021
app-service Tutorial Connect App Access Microsoft Graph As App Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-app-access-microsoft-graph-as-app-javascript.md
-+ Last updated 03/14/2023
app-service Tutorial Connect App Access Microsoft Graph As User Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-app-access-microsoft-graph-as-user-javascript.md
-+ Last updated 03/08/2022
app-service Tutorial Connect App Access Storage Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-app-access-storage-javascript.md
description: In this tutorial, you learn how to access Azure Storage for a JavaS
-+ Last updated 07/31/2023
automation Automation Tutorial Runbook Textual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/learn/automation-tutorial-runbook-textual.md
Start by creating a simple [PowerShell Workflow runbook](../automation-runbook-t
You can either type code directly into the runbook, or you can select cmdlets, runbooks, and assets from the Library control and add them to the runbook with any related parameters. For this tutorial, you type code directly into the runbook.
-Your runbook is currently empty with only the required `Workflow` keyword, the name of the runbook, and the braces that encase the entire workflow.
+Your runbook is currently empty with only the required `workflow` keyword, the name of the runbook, and the braces that encase the entire workflow.
```powershell
-Workflow MyFirstRunbook-Workflow
+workflow MyFirstRunbook-Workflow
{ } ```
Workflow MyFirstRunbook-Workflow
1. You can use the `Parallel` keyword to create a script block with multiple commands that will run concurrently. Enter the following code *between* the braces: ```powershell
- Parallel {
- Write-Output "Parallel"
- Get-Date
- Start-Sleep -s 3
- Get-Date
- }
-
- Write-Output " `r`n"
- Write-Output "Non-Parallel"
- Get-Date
- Start-Sleep -s 3
- Get-Date
+ parallel
+ {
+ Write-Output "Parallel"
+ Get-Date
+ Start-Sleep -Seconds 3
+ Get-Date
+ }
+
+ Write-Output " `r`n"
+ Write-Output "Non-Parallel"
+ Get-Date
+ Start-Sleep -Seconds 3
+ Get-Date
``` 1. Save the runbook by selecting **Save**.
You've tested and published your runbook, but so far it doesn't do anything usef
```powershell workflow MyFirstRunbook-Workflow {
- $resourceGroup = "resourceGroupName"
-
- # Ensures you do not inherit an AzContext in your runbook
- Disable-AzContextAutosave -Scope Process
-
- # Connect to Azure with system-assigned managed identity
- Connect-AzAccount -Identity
-
- # set and store context
- $AzureContext = Set-AzContext –SubscriptionId "<SubscriptionID>"
+ $resourceGroup = "resourceGroupName"
+
+ # Ensures you do not inherit an AzContext in your runbook
+ Disable-AzContextAutosave -Scope Process
+
+ # Connect to Azure with system-assigned managed identity
+ Connect-AzAccount -Identity
+
+ # set and store context
+ $AzureContext = Set-AzContext -SubscriptionId "<SubscriptionID>"
} ```
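For example, a cmdlet placed inside the workflow body after the `Set-AzContext` call can consume the stored context through the common `-DefaultProfile` parameter; a minimal sketch that lists the VMs in the resource group defined above:

```powershell
    # Inside the workflow body, after Set-AzContext:
    # list the VMs in the resource group by using the stored context.
    Get-AzVM -ResourceGroupName $resourceGroup -DefaultProfile $AzureContext
```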
You can use the `ForEach -Parallel` construct to process commands for each item
```powershell workflow MyFirstRunbook-Workflow {
- Param(
- [string]$resourceGroup,
- [string[]]$VMs,
- [string]$action
- )
-
- # Ensures you do not inherit an AzContext in your runbook
- Disable-AzContextAutosave -Scope Process
-
- # Connect to Azure with system-assigned managed identity
- Connect-AzAccount -Identity
-
- # set and store context
- $AzureContext = Set-AzContext –SubscriptionId "<SubscriptionID>"
-
- # Start or stop VMs in parallel
- if ($action -eq "Start") {
- ForEach -Parallel ($vm in $VMs)
- {
- Start-AzVM -Name $vm -ResourceGroupName $resourceGroup -DefaultProfile $AzureContext
- }
- }
- elseif ($action -eq "Stop") {
- ForEach -Parallel ($vm in $VMs)
- {
- Stop-AzVM -Name $vm -ResourceGroupName $resourceGroup -DefaultProfile $AzureContext -Force
- }
- }
- else {
- Write-Output "`r`n Action not allowed. Please enter 'stop' or 'start'."
- }
- }
+ param
+ (
+ [string]$resourceGroup,
+ [string[]]$VMs,
+ [string]$action
+ )
+
+ # Ensures you do not inherit an AzContext in your runbook
+ Disable-AzContextAutosave -Scope Process
+
+ # Connect to Azure with system-assigned managed identity
+ Connect-AzAccount -Identity
+
+ # set and store context
+ $AzureContext = Set-AzContext -SubscriptionId "<SubscriptionID>"
+
+ # Start or stop VMs in parallel
+ if ($action -eq "Start")
+ {
+ ForEach -Parallel ($vm in $VMs)
+ {
+ Start-AzVM -Name $vm -ResourceGroupName $resourceGroup -DefaultProfile $AzureContext
+ }
+ }
+ elseif ($action -eq "Stop")
+ {
+ ForEach -Parallel ($vm in $VMs)
+ {
+ Stop-AzVM -Name $vm -ResourceGroupName $resourceGroup -DefaultProfile $AzureContext -Force
+ }
+ }
+ else
+ {
+ Write-Output "`r`n Action not allowed. Please enter 'stop' or 'start'."
+ }
+ }
``` 1. If you want the runbook to execute with the system-assigned managed identity, leave the code as-is. If you prefer to use a user-assigned managed identity, then:
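Once the runbook is published, you can also start it with its parameters from an Az PowerShell session; a minimal sketch, where the Automation account, resource group, and VM names are placeholders:

```powershell
# Parameter names match the param block of MyFirstRunbook-Workflow.
$params = @{
    resourceGroup = "resourceGroupName"
    VMs           = @("vm1", "vm2")
    action        = "Stop"
}

Start-AzAutomationRunbook `
    -AutomationAccountName "MyAutomationAccount" `
    -ResourceGroupName "resourceGroupName" `
    -Name "MyFirstRunbook-Workflow" `
    -Parameters $params
```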
automation Update Agent Issues Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/update-agent-issues-linux.md
Last updated 11/01/2021
+ # Troubleshoot Linux update agent issues
automation Update Agent Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/update-agent-issues.md
Last updated 01/25/2020 + # Troubleshoot Windows update agent issues
automation Update Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/update-management.md
Last updated 06/29/2024 + # Troubleshoot Update Management issues
automation Configure Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/configure-alerts.md
Last updated 07/15/2024 + # How to create alerts for Update Management
automation Configure Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/configure-groups.md
Last updated 07/15/2024 + # Use dynamic groups with Update Management
automation Configure Wuagent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/configure-wuagent.md
Last updated 07/15/2024 + # Configure Windows Update settings for Azure Automation Update Management
automation Deploy Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/deploy-updates.md
Last updated 07/15/2024 + # How to deploy updates and review results
automation Enable From Automation Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/enable-from-automation-account.md
Last updated 07/15/2024 + # Enable Update Management from an Automation account
automation Enable From Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/enable-from-portal.md
Last updated 07/15/2024 + # Enable Update Management from the Azure portal
automation Enable From Runbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/enable-from-runbook.md
Last updated 07/15/2024 + # Enable Update Management from a runbook
automation Enable From Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/enable-from-template.md
Last updated 07/15/2024+ # Enable Update Management using Azure Resource Manager template
automation Enable From Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/enable-from-vm.md
Last updated 07/15/2024 + # Enable Update Management for an Azure VM
automation Manage Updates For Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/manage-updates-for-vm.md
Last updated 07/15/2024+ # Manage updates and patches for your VMs
automation Mecmintegration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/mecmintegration.md
Last updated 07/15/2024 + # Integrate Update Management with Microsoft Configuration Manager
automation Operating System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/operating-system-requirements.md
Last updated 07/15/2024 + # Operating systems supported by Update Management
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/overview.md
Last updated 07/15/2024 + # Update Management overview
automation Plan Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/plan-deployment.md
Last updated 07/15/2024 + # Plan your Update Management deployment
automation Pre Post Scripts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/pre-post-scripts.md
Last updated 07/15/2024 + # Manage pre-scripts and post-scripts
automation Query Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/query-logs.md
Last updated 07/15/2024 + # Query Update Management logs
automation View Update Assessments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/view-update-assessments.md
Last updated 07/15/2024 + # View update assessments in Update Management
avere-vfxt Avere Vfxt Add Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/avere-vfxt-add-storage.md
Title: Configure Avere vFXT storage - Azure description: Learn how to add a back-end storage system for a cluster in Avere vFXT for Azure. If you created an Azure Blob container with the cluster, it is ready to use. -+ Last updated 01/13/2020
avere-vfxt Avere Vfxt Additional Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/avere-vfxt-additional-resources.md
Title: Additional links about Avere vFXT for Azure description: Use these resources for additional information about Avere vFXT for Azure, including Avere cluster documentation and vFXT management documentation. -+ Last updated 01/13/2020
avere-vfxt Avere Vfxt Cluster Gui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/avere-vfxt-cluster-gui.md
Title: Access the Avere vFXT control panel - Azure description: How to connect to the vFXT cluster and the browser-based Avere Control Panel to configure the Avere vFXT -+ Last updated 12/14/2019
avere-vfxt Avere Vfxt Configure Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/avere-vfxt-configure-dns.md
Title: Avere vFXT DNS - Azure description: Configuring a DNS server for round-robin load balancing with Avere vFXT for Azure -+ Last updated 10/07/2021
avere-vfxt Avere Vfxt Data Ingest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/avere-vfxt-data-ingest.md
Title: Moving data to Avere vFXT for Azure description: How to add data to a new storage volume for use with the Avere vFXT for Azure -+ Last updated 12/16/2019
avere-vfxt Avere Vfxt Demo Links https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/avere-vfxt-demo-links.md
Title: Avere vFXT for Azure demo projects description: "These samples show key features and use cases for Avere vFXT for Azure: video rendering, high-performance computing, vFXT performance, and client setup." -+ Last updated 12/19/2019
avere-vfxt Avere Vfxt Deploy Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/avere-vfxt-deploy-overview.md
Title: Deployment overview - Avere vFXT for Azure description: Learn how to deploy an Avere vFXT for Azure cluster with this overview. Related articles have specific deployment instructions. -+ Last updated 01/13/2020
avere-vfxt Avere Vfxt Deploy Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/avere-vfxt-deploy-plan.md
Title: Plan your Avere vFXT system - Azure description: Plan an Avere vFXT for Azure cluster that is right for your needs. Learn questions to ask before going to the Azure Marketplace or creating virtual machines. -+ Last updated 01/21/2020
avere-vfxt Avere Vfxt Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/avere-vfxt-deploy.md
Title: Deploy Avere vFXT for Azure description: Learn how to use the deployment wizard available from the Azure Marketplace to deploy a cluster with Avere vFXT for Azure. -+ Last updated 01/13/2020
avere-vfxt Avere Vfxt Enable Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/avere-vfxt-enable-support.md
Title: Enable support for Avere vFXT - Azure description: Learn how to enable automatic upload of support data about your cluster from Avere vFXT for Azure to help Support provide customer service. -+ Last updated 12/14/2019
avere-vfxt Avere Vfxt Manage Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/avere-vfxt-manage-cluster.md
Title: Manage the Avere vFXT cluster - Azure description: How to manage Avere cluster - add or remove nodes, reboot, stop, or destroy the vFXT cluster -+ Last updated 01/13/2020
avere-vfxt Avere Vfxt Mount Clients https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/avere-vfxt-mount-clients.md
Title: Mount the Avere vFXT - Azure description: Learn how to connect clients to your vFXT cluster in Avere vFXT for Azure and how to load-balance client traffic among your cluster nodes. -+ Last updated 12/16/2019
avere-vfxt Avere Vfxt Non Owner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/avere-vfxt-non-owner.md
Title: Avere vFXT non-owner workaround - Azure description: Workaround to allow users without subscription owner permission to deploy Avere vFXT for Azure -+ Last updated 12/19/2019
avere-vfxt Avere Vfxt Open Ticket https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/avere-vfxt-open-ticket.md
Title: How to get support for Avere vFXT for Azure description: Learn how to address issues that may arise while deploying or using Avere vFXT for Azure by creating a support ticket through the Azure portal. -+ Last updated 01/13/2020
avere-vfxt Avere Vfxt Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/avere-vfxt-overview.md
Title: Avere vFXT for Azure description: Learn about Avere vFXT for Azure, a cloud-based filesystem caching solution for data-intensive high-performance computing tasks. -+ Last updated 03/15/2024
avere-vfxt Avere Vfxt Prereqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/avere-vfxt-prereqs.md
Title: Avere vFXT prerequisites - Azure description: Learn about tasks to perform before you create a cluster in Avere vFXT for Azure, including dealing with subscriptions, quotas, and storage service endpoints. -+ Last updated 01/21/2020
avere-vfxt Avere Vfxt Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/avere-vfxt-tuning.md
Title: Avere vFXT cluster tuning - Azure description: Learn about some of the custom tuning for vFXT clusters in Avere vFXT for Azure that you can do, working with a support representative. -+ Last updated 12/19/2019
avere-vfxt Avere Vfxt Whitepapers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/avere-vfxt-whitepapers.md
Title: Whitepapers and case studies - Avere vFXT for Azure description: Links to downloadable whitepapers, case studies, and other articles that illustrate Avere vFXT for Azure and how it can be used. -+
avere-vfxt Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/disaster-recovery.md
Title: Disaster recovery guidance for Avere vFXT for Azure description: How to protect data in Avere vFXT for Azure from accidental deletion or outages -+ Last updated 12/10/2019
azure-arc Billing Extended Security Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/billing-extended-security-updates.md
Licenses that are provisioned after the End of Support (EOS) date of October 10,
If you deactivate and then later reactivate a license, you're billed for the window during which the license was deactivated. It isn't possible to evade charges by deactivating a license before a critical security patch and reactivating it shortly before.
-If the region or the tenant of an ESU license is changed, this will be subject to back-billing charges.
+If the region or the tenant of an ESU license is changed, this is subject to back-billing charges.
> [!NOTE] > The back-billing cost appears as a separate line item in invoicing. If you acquired a discount for your core WS2012 ESUs enabled by Azure Arc, the same discount may or may not apply to back-billing. You should verify that the same discounting, if applicable, has been applied to back-billing charges as well. >
-Please note that estimates in the Azure Cost Management forecast may not accurately project monthly costs. Due to the episodic nature of back-billing charges, the projection of monthly costs may appear as overestimated during initial months.
+Note that estimates in the Azure Cost Management forecast may not accurately project monthly costs. Due to the episodic nature of back-billing charges, the projection of monthly costs may appear as overestimated during initial months.
## Billing associated with modifications to an Azure Arc ESU license
Please note that estimates in the Azure Cost Management forecast may not accurat
> If you previously provisioned a Datacenter Virtual Core license, it will be charged with and offer the virtualization benefits associated with the pricing of a Datacenter edition license. > -- **Core modification:** If cores are added to an existing ESU license, they're subject to back-billing (that is, charges for the time elapsed since EOS) and regularly billed from the calendar month in which they were added. If cores are reduced or decremented to an existing ESU license, the billing rate will reflect the reduced number of cores within 5 business days of the change.
+- **Core modification:** If cores are added to an existing ESU license, they're subject to back-billing (that is, charges for the time elapsed since EOS) and regularly billed from the calendar month in which they were added. If cores are reduced or removed from an existing ESU license, the billing rate reflects the reduced number of cores within 5 days of the change.
- **Activation:** Licenses are billed for their number and edition of cores from the point at which they're activated. The activated license doesn't need to be linked to any Azure Arc-enabled servers to initiate billing. Activation and reactivation are subject to back-billing. Note that licenses that were activated but not linked to any servers may be back-billed if they weren't billed upon creation. Customers are responsible for deletion of any activated but unlinked ESU licenses.
Please note that estimates in the Azure Cost Management forecast may not accurat
## Services included with WS2012 ESUs enabled by Azure Arc
-Purchase of Windows Server 2012/R2 ESUs enabled by Azure Arc provides you with the benefit of access to additional Azure management services at no additional cost for enrolled servers. See [Access to Azure services](prepare-extended-security-updates.md#access-to-azure-services) to learn more.
+Purchase of Windows Server 2012/R2 ESUs enabled by Azure Arc provides you with the benefit of access to more Azure management services at no additional cost for enrolled servers. See [Access to Azure services](prepare-extended-security-updates.md#access-to-azure-services) to learn more.
Azure Arc-enabled servers allow you the flexibility to evaluate and operationalize Azure's robust security, monitoring, and governance capabilities for your non-Azure infrastructure, delivering key value beyond the observability, ease of enrollment, and financial flexibility of WS2012 ESUs enabled by Azure Arc. ## Additional notes -- You'll be billed if you connect an activated Azure Arc ESU license to environments like Azure Stack HCI or Azure VMware Solution. These environments are eligible for free Windows Server 2012 ESUs enabled by Azure Arc and should not be activated through Azure Arc.
+- You'll be billed if you connect an activated Azure Arc ESU license to environments like Azure Stack HCI or Azure VMware Solution. These environments are eligible for free Windows Server 2012 ESUs enabled by Azure Arc and shouldn't be activated through Azure Arc.
- You'll be billed for all of the cores provisioned in the license. If you provision licenses for free ESU usage like Visual Studio Development environments, you shouldn't provision additional cores for the scope of licensing applied to non-paid ESU coverage.
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/overview.md
Azure Arc-enabled SCVMM doesn't store/process customer data outside the region t
## Next steps
-[Create an Azure Arc VM](create-virtual-machine.md).
+
+- Plan your Arc-enabled SCVMM deployment by reviewing the [support matrix](support-matrix-for-system-center-virtual-machine-manager.md).
+- Once ready, [connect your SCVMM management server to Azure Arc using the onboarding script](quickstart-connect-system-center-virtual-machine-manager-to-arc.md).
azure-functions Opentelemetry Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/opentelemetry-howto.md
Java worker optimizations aren't yet available for OpenTelemetry, so there's not
npm install @opentelemetry/api npm install @opentelemetry/auto-instrumentations-node npm install @azure/monitor-opentelemetry-exporter
+ npm install @azure/functions-opentelemetry-instrumentation
``` ### [OTLP Exporter](#tab/otlp-export)
Java worker optimizations aren't yet available for OpenTelemetry, so there's not
npm install @opentelemetry/api npm install @opentelemetry/auto-instrumentations-node npm install @opentelemetry/exporter-logs-otlp-http
+ npm install @azure/functions-opentelemetry-instrumentation
```
azure-government Documentation Government Csp List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-csp-list.md
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[Perizer Corp.](https://perizer.com)| |[Perrygo Consulting Group, LLC](https://perrygo.com)| |Phacil (By Light) |
-|[Pharicode LLC](https://pharicode.com)|
+|[Pharicode LLC](https://glidefast.com/)|
|Philistin & Heller Group, Inc.| |[Picis Envision](https://www.picis.com/en/)| |[Pinao Consulting LLC](https://www.pcg-msp.com)|
azure-monitor Diagnostics Extension Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/diagnostics-extension-overview.md
Azure Diagnostics extension is an [agent in Azure Monitor](../agents/agents-over
Use Azure Diagnostics extension if you need to: -- Send data to Azure Storage for archiving or to analyze it with tools such as [Azure Storage Explorer](../../vs-azure-tools-storage-manage-with-storage-explorer.md).-- Send data to [Azure Monitor Metrics](../essentials/data-platform-metrics.md) to analyze it with [metrics explorer](../essentials/metrics-getting-started.md) and to take advantage of features such as near-real-time [metric alerts](../alerts/alerts-metric-overview.md) and [autoscale](../autoscale/autoscale-overview.md) (Windows only).-- Send data to third-party tools by using [Azure Event Hubs](./diagnostics-extension-stream-event-hubs.md).-- Collect [boot diagnostics](/troubleshoot/azure/virtual-machines/boot-diagnostics) to investigate VM boot issues.
+* Send data to Azure Storage for archiving or to analyze it with tools such as [Azure Storage Explorer](../../vs-azure-tools-storage-manage-with-storage-explorer.md).
+* Send data to [Azure Monitor Metrics](../essentials/data-platform-metrics.md) to analyze it with [metrics explorer](../essentials/metrics-getting-started.md) and to take advantage of features such as near-real-time [metric alerts](../alerts/alerts-metric-overview.md) and [autoscale](../autoscale/autoscale-overview.md) (Windows only).
+* Send data to third-party tools by using [Azure Event Hubs](./diagnostics-extension-stream-event-hubs.md).
+* Collect [boot diagnostics](/troubleshoot/azure/virtual-machines/boot-diagnostics) to investigate VM boot issues.
Limitations of Azure Diagnostics extension: -- It can only be used with Azure resources.-- It has limited ability to send data to Azure Monitor Logs.
+* It can only be used with Azure resources.
+* It has limited ability to send data to Azure Monitor Logs.
## Comparison to Log Analytics agent
The Log Analytics agent in Azure Monitor can also be used to collect monitoring
The key differences to consider are: -- Azure Diagnostics Extension can be used only with Azure virtual machines. The Log Analytics agent can be used with virtual machines in Azure, other clouds, and on-premises.-- Azure Diagnostics extension sends data to Azure Storage, [Azure Monitor Metrics](../essentials/data-platform-metrics.md) (Windows only) and Azure Event Hubs. The Log Analytics agent collects data to [Azure Monitor Logs](../logs/data-platform-logs.md).-- The Log Analytics agent is required for retired [solutions](/previous-versions/azure/azure-monitor/insights/solutions), [VM insights](../vm/vminsights-overview.md), and other services such as [Microsoft Defender for Cloud](../../security-center/index.yml).
+* Azure Diagnostics Extension can be used only with Azure virtual machines. The Log Analytics agent can be used with virtual machines in Azure, other clouds, and on-premises.
+* Azure Diagnostics extension sends data to Azure Storage, [Azure Monitor Metrics](../essentials/data-platform-metrics.md) (Windows only) and Azure Event Hubs. The Log Analytics agent collects data to [Azure Monitor Logs](../logs/data-platform-logs.md).
+* The Log Analytics agent is required for retired [solutions](/previous-versions/azure/azure-monitor/insights/solutions), [VM insights](../vm/vminsights-overview.md), and other services such as [Microsoft Defender for Cloud](../../security-center/index.yml).
## Costs
The following tables list the data that can be collected by the Windows and Linu
### Windows diagnostics extension (WAD)
-| Data source | Description |
-| | |
-| Windows event logs | Events from Windows event log. |
-| Performance counters | Numerical values measuring performance of different aspects of operating system and workloads. |
-| IIS logs | Usage information for IIS websites running on the guest operating system. |
-| Application logs | Trace messages written by your application. |
-| .NET EventSource logs |Code writing events using the .NET [EventSource](/dotnet/api/system.diagnostics.tracing.eventsource) class. |
-| [Manifest-based ETW logs](/windows/desktop/etw/about-event-tracing) |Event tracing for Windows events generated by any process. |
-| Crash dumps (logs) | Information about the state of the process if an application crashes. |
-| File-based logs | Logs created by your application or service. |
-| Agent diagnostic logs | Information about Azure Diagnostics itself. |
+| Data source | Description |
+|-|-|
+| Windows event logs | Events from Windows event log. |
+| Performance counters | Numerical values measuring performance of different aspects of operating system and workloads. |
+| IIS logs | Usage information for IIS websites running on the guest operating system. |
+| Application logs | Trace messages written by your application. |
+| .NET EventSource logs | Code writing events using the .NET [EventSource](/dotnet/api/system.diagnostics.tracing.eventsource) class. |
+| [Manifest-based ETW logs](/windows/desktop/etw/about-event-tracing) | Event tracing for Windows events generated by any process. |
+| Crash dumps (logs) | Information about the state of the process if an application crashes. |
+| File-based logs | Logs created by your application or service. |
+| Agent diagnostic logs | Information about Azure Diagnostics itself. |
### Linux diagnostics extension (LAD)
-| Data source | Description |
-| | |
-| Syslog | Events sent to the Linux event logging system |
-| Performance counters | Numerical values measuring performance of different aspects of operating system and workloads |
-| Log files | Entries sent to a file-based log |
+| Data source | Description |
+|-|--|
+| Syslog | Events sent to the Linux event logging system |
+| Performance counters | Numerical values measuring performance of different aspects of operating system and workloads |
+| Log files | Entries sent to a file-based log |
## Data destinations
Configure one or more *data sinks* to send data to other destinations. The follo
### Windows diagnostics extension (WAD)
-| Destination | Description |
-|:|:|
-| Azure Monitor Metrics | Collect performance data to Azure Monitor Metrics. See [Send Guest OS metrics to the Azure Monitor metric database](../essentials/collect-custom-metrics-guestos-resource-manager-vm.md). |
-| Event hubs | Use Azure Event Hubs to send data outside of Azure. See [Streaming Azure Diagnostics data to Azure Event Hubs](diagnostics-extension-stream-event-hubs.md). |
-| Azure Storage blobs | Write data to blobs in Azure Storage in addition to tables. |
-| Application Insights | Collect data from applications running in your VM to Application Insights to integrate with other application monitoring. See [Send diagnostic data to Application Insights](diagnostics-extension-to-application-insights.md). |
+| Destination | Description |
+|:-|:--|
+| Azure Monitor Metrics | Collect performance data to Azure Monitor Metrics. See [Send Guest OS metrics to the Azure Monitor metric database](../essentials/collect-custom-metrics-guestos-resource-manager-vm.md). |
+| Event hubs | Use Azure Event Hubs to send data outside of Azure. See [Streaming Azure Diagnostics data to Azure Event Hubs](diagnostics-extension-stream-event-hubs.md). |
+| Azure Storage blobs | Write data to blobs in Azure Storage in addition to tables. |
+| Application Insights | Collect data from applications running in your VM to Application Insights to integrate with other application monitoring. See [Send diagnostic data to Application Insights](diagnostics-extension-to-application-insights.md). |
-You can also collect WAD data from storage into a Log Analytics workspace to analyze it with Azure Monitor Logs, although the Log Analytics agent is typically used for this functionality. It can send data directly to a Log Analytics workspace and supports solutions and insights that provide more functionality. See [Collect Azure diagnostic logs from Azure Storage](../agents/diagnostics-extension-logs.md).
+You can also collect WAD data from storage into a Log Analytics workspace to analyze it with Azure Monitor Logs, although the Log Analytics agent is typically used for this functionality. It can send data directly to a Log Analytics workspace and supports solutions and insights that provide more functionality. See [Collect Azure diagnostic logs from Azure Storage](diagnostics-extension-logs.md).
### Linux diagnostics extension (LAD) LAD writes data to tables in Azure Storage. It supports the sinks in the following table.
-| Destination | Description |
-|:|:|
-| Event hubs | Use Azure Event Hubs to send data outside of Azure. |
-| Azure Storage blobs | Write data to blobs in Azure Storage in addition to tables. |
-| Azure Monitor Metrics | Install the Telegraf agent in addition to LAD. See [Collect custom metrics for a Linux VM with the InfluxData Telegraf agent](../essentials/collect-custom-metrics-linux-telegraf.md).
+| Destination | Description |
+|:-|:-|
+| Event hubs | Use Azure Event Hubs to send data outside of Azure. |
+| Azure Storage blobs | Write data to blobs in Azure Storage in addition to tables. |
+| Azure Monitor Metrics | Install the Telegraf agent in addition to LAD. See [Collect custom metrics for a Linux VM with the InfluxData Telegraf agent](../essentials/collect-custom-metrics-linux-telegraf.md). |
## Installation and configuration
You can also install and configure both the Windows and Linux diagnostics extens
See the following articles for information on installing and configuring the diagnostics extension for Windows and Linux: -- [Install and configure Azure Diagnostics extension for Windows](diagnostics-extension-windows-install.md)-- [Use Linux diagnostics extension to monitor metrics and logs](../../virtual-machines/extensions/diagnostics-linux.md)
+* [Install and configure Azure Diagnostics extension for Windows](diagnostics-extension-windows-install.md)
+* [Use Linux diagnostics extension to monitor metrics and logs](../../virtual-machines/extensions/diagnostics-linux.md)
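One common route for the Windows case is Az PowerShell; a minimal sketch, where the resource group, VM name, and path to the diagnostics configuration file are placeholders:

```powershell
# Install or update the Windows diagnostics extension (WAD) on an existing VM.
# The configuration file defines the performance counters, logs, and sinks to collect.
Set-AzVMDiagnosticsExtension `
    -ResourceGroupName "myResourceGroup" `
    -VMName "myWindowsVM" `
    -DiagnosticsConfigurationPath ".\wadcfg.xml"
```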
+
+## Supported operating systems
+
+The following tables list the operating systems that are supported by WAD and LAD. See the documentation for each agent for unique considerations and for the installation process. See Telegraf documentation for its supported operating systems. All operating systems are assumed to be x64. x86 is not supported for any operating system.
+
+### Windows
+
+| Operating system | Support |
+|:-|:-:|
+| Windows Server 2022 | ❌ |
+| Windows Server 2022 Core | ❌ |
+| Windows Server 2019 | ✅ |
+| Windows Server 2019 Core | ❌ |
+| Windows Server 2016 | ✅ |
+| Windows Server 2016 Core | ✅ |
+| Windows Server 2012 R2 | ✅ |
+| Windows Server 2012 | ✅ |
+| Windows 11 Client & Pro | ❌ |
+| Windows 11 Enterprise (including multi-session) | ❌ |
+| Windows 10 1803 (RS4) and higher | ❌ |
+| Windows 10 Enterprise (including multi-session) and Pro (Server scenarios only) | ✅ |
+
+### Linux
+
+| Operating system | Support |
+|:-|:-:|
+| CentOS Linux 9 | ❌ |
+| CentOS Linux 8 | ❌ |
+| CentOS Linux 7 | ✅ |
+| Debian 12 | ❌ |
+| Debian 11 | ❌ |
+| Debian 10 | ❌ |
+| Debian 9 | ✅ |
+| Debian 8 | ❌ |
+| Oracle Linux 9 | ❌ |
+| Oracle Linux 8 | ❌ |
+| Oracle Linux 7 | ✅ |
+| Oracle Linux 6.4+ | ✅ |
+| Red Hat Enterprise Linux Server 9 | ❌ |
+| Red Hat Enterprise Linux Server 8\* | ✅ |
+| Red Hat Enterprise Linux Server 7 | ✅ |
+| SUSE Linux Enterprise Server 15 | ❌ |
+| SUSE Linux Enterprise Server 12 | ✅ |
+| Ubuntu 22.04 LTS | ❌ |
+| Ubuntu 20.04 LTS | ✅ |
+| Ubuntu 18.04 LTS | ✅ |
+| Ubuntu 16.04 LTS | ✅ |
+| Ubuntu 14.04 LTS | ✅ |
+
+\* Requires Python 2 to be installed on the machine and aliased to the python command.
## Other documentation
See the following articles for more information.
### Azure Cloud Services (classic) web and worker roles -- [Introduction to Azure Cloud Services monitoring](../../cloud-services/cloud-services-how-to-monitor.md)-- [Enabling Azure Diagnostics in Azure Cloud Services](../../cloud-services/cloud-services-dotnet-diagnostics.md)-- [Application Insights for Azure Cloud Services](../app/azure-web-apps-net-core.md)<br>-- [Trace the flow of an Azure Cloud Services application with Azure Diagnostics](../../cloud-services/cloud-services-dotnet-diagnostics-trace-flow.md)
+* [Introduction to Azure Cloud Services monitoring](../../cloud-services/cloud-services-how-to-monitor.md)
+* [Enabling Azure Diagnostics in Azure Cloud Services](../../cloud-services/cloud-services-dotnet-diagnostics.md)
+* [Application Insights for Azure Cloud Services](../app/azure-web-apps-net-core.md)
+* [Trace the flow of an Azure Cloud Services application with Azure Diagnostics](../../cloud-services/cloud-services-dotnet-diagnostics-trace-flow.md)
### Azure Service Fabric
azure-monitor Monitor Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/monitor-kubernetes.md
Previously updated : 09/14/2023 Last updated : 07/30/2024 # Monitor Kubernetes clusters using Azure services and cloud native tools
The *platform engineer*, also known as the cluster administrator, is responsible
:::image type="content" source="media/monitor-kubernetes/layers-platform-engineer.png" alt-text="Diagram of layers of Kubernetes environment for platform engineer." lightbox="media/monitor-kubernetes/layers-platform-engineer.png" border="false":::
-Large organizations may also have a *fleet architect*, which is similar to the platform engineer but is responsible for multiple clusters. They need visibility across the entire environment and must perform administrative tasks at scale. At scale recommendations are included in the guidance below. See [What is Azure Kubernetes Fleet Manager (preview)?](../../kubernetes-fleet/overview.md) for details on creating a Fleet resource for multi-cluster and at-scale scenarios.
+Large organizations may also have a *fleet architect*, which is similar to the platform engineer but is responsible for multiple clusters. They need visibility across the entire environment and must perform administrative tasks at scale. At scale recommendations are included in the guidance below. See [What is Azure Kubernetes Fleet Manager?](../../kubernetes-fleet/overview.md) for details on creating a Fleet resource for multi-cluster and at-scale scenarios.
### Azure services for platform engineer
The following table lists the Azure services for the platform engineer to monito
| Service | Description | |:|:|
-| [Container Insights](container-insights-overview.md) | Azure service for AKS and Azure Arc-enabled Kubernetes clusters that use a containerized version of the [Azure Monitor agent](../agents/agents-overview.md) to collect stdout/stderr logs, performance metrics, and Kubernetes events from each node in your cluster. It also collects metrics from the Kubernetes control plane and stores them in the workspace. You can view the data in the Azure portal or query it using [Log Analytics](../logs/log-analytics-overview.md). |
+| [Container Insights](container-insights-overview.md) | Azure service for AKS and Azure Arc-enabled Kubernetes clusters that use a containerized version of the [Azure Monitor agent](../agents/agents-overview.md) to collect stdout/stderr logs, performance metrics, and Kubernetes events from each node in your cluster. You can view the data in the Azure portal or query it using [Log Analytics](../logs/log-analytics-overview.md). Configure the [Prometheus experience](./container-insights-experience-v2.md) to use Container insights views with Prometheus data. |
| [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md) | [Prometheus](https://prometheus.io) is a cloud-native metrics solution from the Cloud Native Compute Foundation and the most common tool used for collecting and analyzing metric data from Kubernetes clusters. Azure Monitor managed service for Prometheus is a fully managed solution that's compatible with the Prometheus query language (PromQL) and Prometheus alerts and integrates with Azure Managed Grafana for visualization. This service supports your investment in open source tools without the complexity of managing your own Prometheus environment. | | [Azure Arc-enabled Kubernetes](container-insights-enable-arc-enabled-clusters.md) | Allows you to attach to Kubernetes clusters running in other clouds so that you can manage and configure them in Azure. With the Arc agent installed, you can monitor AKS and hybrid clusters together using the same methods and tools, including Container insights and Prometheus. |
-| [Azure Managed Grafana](../../managed-grafan) | Fully managed implementation of [Grafana](https://grafana.com/), which is an open-source data visualization platform commonly used to present Prometheus and other data. Multiple predefined Grafana dashboards are available for monitoring Kubernetes and full-stack troubleshooting. |
+| [Azure Managed Grafana](../../managed-grafan) | Fully managed implementation of [Grafana](https://grafana.com/), which is an open-source data visualization platform commonly used to present Prometheus and other data. Multiple predefined Grafana dashboards are available for monitoring Kubernetes and full-stack troubleshooting. You may choose to use Grafana for performance monitoring of your cluster, or you can use Container insights by enabling the [Prometheus experience in Container insights](./container-insights-experience-v2.md). |
### Configure monitoring for platform engineer The sections below identify the steps for complete monitoring of your Kubernetes environment using the Azure services in the above table. Functionality and integration options are provided for each to help you determine where you may need to modify this configuration to meet your particular requirements.
+Onboarding Container insights and Managed Prometheus can be part of the same experience as described in [Enable monitoring for Kubernetes clusters](../containers/kubernetes-monitoring-enable.md). The following sections describe each separately so you can consider all of your onboarding and configuration options for each.
#### Enable scraping of Prometheus metrics
Enable scraping of Prometheus metrics by Azure Monitor managed service for Prome
- Select the option **Enable Prometheus metrics** when you [create an AKS cluster](../../aks/learn/quick-kubernetes-deploy-portal.md). - Select the option **Enable Prometheus metrics** when you enable Container insights on an existing [AKS cluster](container-insights-enable-aks.md) or [Azure Arc-enabled Kubernetes cluster](container-insights-enable-arc-enabled-clusters.md).-- Enable for an existing [AKS cluster](../containers/kubernetes-monitoring-enable.md#enable-prometheus-and-grafana) or [Arc-enabled Kubernetes cluster (preview)](../containers/kubernetes-monitoring-enable.md#enable-prometheus-and-grafana).
+- Enable for an existing [AKS cluster](../containers/kubernetes-monitoring-enable.md#enable-prometheus-and-grafana) or [Arc-enabled Kubernetes cluster](../containers/kubernetes-monitoring-enable.md#enable-prometheus-and-grafana).
If you already have a Prometheus environment that you want to use for your AKS clusters, then enable Azure Monitor managed service for Prometheus and then use remote-write to send data to your existing Prometheus environment. You can also [use remote-write to send data from your existing self-managed Prometheus environment to Azure Monitor managed service for Prometheus](../essentials/prometheus-remote-write.md).
See [Default Prometheus metrics configuration in Azure Monitor](../essentials/pr
#### Enable Grafana for analysis of Prometheus data
+> [!NOTE]
+> Use Grafana for monitoring your Kubernetes environment if you have an existing investment in Grafana or if you prefer to use Grafana dashboards instead of Container insights to analyze your Prometheus data. If you don't want to use Grafana, then enable the [Prometheus experience in Container insights](./container-insights-experience-v2.md) so that you can use Container insights views with your Prometheus data.
+ [Create an instance of Managed Grafana](../../managed-grafan#use-out-of-the-box-dashboards) are available for monitoring Kubernetes clusters including several that present similar information as Container insights views. If you have an existing Grafana environment, then you can continue to use it and add Azure Monitor managed service for [Prometheus as a data source](https://grafana.com/docs/grafana/latest/datasources/prometheus/). You can also [add the Azure Monitor data source to Grafana](https://grafana.com/docs/grafana/latest/datasources/azure-monitor/) to use data collected by Container insights in custom Grafana dashboards. Perform this configuration if you want to focus on Grafana dashboards rather than using the Container insights views and reports.
See [Enable Container insights](../containers/container-insights-onboard.md) for
Once Container insights is enabled for a cluster, perform the following actions to optimize your installation. -- Container insights collects many of the same metric values as [Prometheus](#enable-scraping-of-prometheus-metrics). You can disable collection of these metrics by configuring Container insights to only collect **Logs and events** as described in [Enable cost optimization settings in Container insights](../containers/container-insights-cost-config.md#enable-cost-settings). This configuration disables the Container insights experience in the Azure portal, but you can use Grafana to visualize Prometheus metrics and Log Analytics to analyze log data collected by Container insights.-- Reduce your cost for Container insights data ingestion by reducing the amount of data that's collected.
+- Enable the [Prometheus experience in Container insights](./container-insights-experience-v2.md) so that you can use Container insights views with your Prometheus data.
- To improve your query experience with data collected by Container insights and to reduce collection costs, [enable the ContainerLogV2 schema](container-insights-logs-schema.md) for each cluster. If you only use logs for occasional troubleshooting, then consider configuring this table as [basic logs](../logs/logs-table-plans.md).
+- Use cost presets described in [Enable cost optimization settings in Container insights](../containers/container-insights-cost-config.md#enable-cost-settings) to reduce your cost for Container insights data ingestion by reducing the amount of data that's collected. Disable collection of metrics by configuring Container insights to only collect **Logs and events**, since Container insights collects many of the same metric values as [Prometheus](#enable-scraping-of-prometheus-metrics).
If you have an existing solution for collection of logs, then follow the guidance for that tool, or enable Container insights and use the [data export feature of Log Analytics workspace](../logs/logs-data-export.md) to send data to [Azure Event Hubs](../../event-hubs/event-hubs-about.md) and forward it to an alternate system.
azure-monitor Prometheus Metrics Scrape Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-scrape-configuration.md
If you are using `basic_auth` setting in your prometheus configuration, please f
1. Create a secret in the **kube-system** namespace named **ama-metrics-mtls-secret**
-The value for password1 is **base64encoded**
+The value for password1 is **base64 encoded**.
+ The key *password1* can be anything, but just needs to match your scrapeconfig *password_file* filepath. ```yaml
data:
``` The **ama-metrics-mtls-secret** secret is mounted onto the ama-metrics containers at the path **/etc/prometheus/certs/** and is made available to the process that scrapes Prometheus metrics. The key (for example, password1) in the above example becomes the file name, the value is base64 decoded and written as the contents of that file within the container, and the Prometheus scraper uses the contents of this file as the password for scraping the endpoint.
-2. In the configmap for the custom scrape configuration use the following setting -
+2. In the configmap for the custom scrape configuration, use the following setting. The `username` field should contain the actual username string. The `password_file` field should contain the path to a file that contains the password.
+ ```yaml basic_auth:
- username: admin
+ username: <username string>
password_file: /etc/prometheus/certs/password1 ```
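Because the secret's `data` values must be base64 encoded, a minimal PowerShell sketch for producing the encoded value (the password string is a placeholder):

```powershell
# Base64 encode the scrape password before placing it under data.password1
# in the ama-metrics-mtls-secret secret.
[Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes("<scrape-password>"))

# Alternatively, kubectl encodes the value for you:
# kubectl create secret generic ama-metrics-mtls-secret --namespace kube-system --from-literal=password1=<scrape-password>
```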
azure-monitor Activity Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/activity-log.md
For details on how to create a diagnostic setting, see [Create diagnostic settin
> [!TIP] > * Sending logs to a Log Analytics workspace is free of charge for the default retention period. > * Send to Azure Monitor Logs for more complex querying and alerting and for longer retention of up to 12 years.
-> * Logs exported to a Log Analytics workspace can be [shown in Power BI](https://learn.microsoft.com/power-bi/transform-model/log-analytics/desktop-log-analytics-overview)
+> * Logs exported to a Log Analytics workspace can be [shown in Power BI](/power-bi/transform-model/log-analytics/desktop-log-analytics-overview)
> * [Insights](./activity-log-insights.md) are provided for Activity Logs exported to Log Analytics. > [!NOTE]
azure-netapp-files Cross Region Replication Requirements Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-requirements-considerations.md
This article describes requirements and considerations about [using the volume c
* The replication destination volume is read-only until you [fail over to the destination region](cross-region-replication-manage-disaster-recovery.md#fail-over-to-destination-volume) to enable the destination volume for read and write. >[!IMPORTANT] >Failover is a manual process. When you need to activate the destination volume (for example, when you want to fail over to the destination region), you need to break replication peering then mount the destination volume. For more information, see [fail over to the destination volume](cross-region-replication-manage-disaster-recovery.md#fail-over-to-destination-volume)
+ >[!IMPORTANT]
+ > A volume with an active backup policy enabled can't be the destination volume in a reverse resync operation. You must suspend the backup policy on the volume before starting the reverse resync, then resume it when the reverse resync completes.
* Azure NetApp Files replication doesn't currently support multiple subscriptions; all replications must be performed under a single subscription. * See [resource limits](azure-netapp-files-resource-limits.md) for the maximum number of cross-region replication destination volumes. You can open a support ticket to [request a limit increase](azure-netapp-files-resource-limits.md#request-limit-increase) in the default quota of replication destination volumes (per subscription in a region). * There can be a delay up to five minutes for the interface to reflect a newly added snapshot on the source volume.
azure-resource-manager Azure Services Resource Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-services-resource-providers.md
The resource providers for database services are:
| Resource provider namespace | Azure service | | | - |
-| Microsoft.AzureData | SQL Server registry |
| Microsoft.Cache | [Azure Cache for Redis](../../azure-cache-for-redis/index.yml) | | Microsoft.DBforMariaDB | [Azure Database for MariaDB](../../mariadb/index.yml) | | Microsoft.DBforMySQL | [Azure Database for MySQL](../../mysql/index.yml) |
The resource providers for database services are:
| Microsoft.DocumentDB | [Azure Cosmos DB](../../cosmos-db/index.yml) | | Microsoft.Sql | [Azure SQL Database](/azure/azure-sql/database/index)<br /> [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/index) <br />[Azure Synapse Analytics](/azure/sql-data-warehouse/) | | Microsoft.SqlVirtualMachine | [SQL Server on Azure Virtual Machines](/azure/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview) |
+| Microsoft.AzureData | [SQL Server enabled by Azure Arc](/sql/sql-server/azure-arc/overview) |
## Developer tools resource providers
The resource providers for hybrid services are:
| Resource provider namespace | Azure service | | | - |
-| Microsoft.AzureArcData | Azure Arc-enabled data services |
+| Microsoft.AzureArcData | [Azure Arc-enabled data services](/azure/azure-arc/data/overview) |
| Microsoft.AzureStackHCI | [Azure Stack HCI](/azure-stack/hci/overview) | | Microsoft.HybridCompute | [Azure Arc-enabled servers](../../azure-arc/servers/index.yml) | | Microsoft.Kubernetes | [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/index.yml) |
azure-sql-edge Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/connect.md
To connect to an Azure SQL Edge Database Engine from a network machine, you need
} ``` -- **SA password for the Azure SQL Edge instance**: This is the value specified for the `SA_PASSWORD` environment variable during deployment of Azure SQL Edge.
+- **SA password for the Azure SQL Edge instance**: This is the value specified for the `MSSQL_SA_PASSWORD` environment variable during deployment of Azure SQL Edge.
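For reference, a minimal single-container deployment that sets this variable might look like the following sketch; the password, container name, and flags shown are illustrative and should be adjusted for your environment.

```bash
# Illustrative Azure SQL Edge deployment; MSSQL_SA_PASSWORD supplies the SA password used later to connect
docker run --cap-add SYS_PTRACE \
  -e "ACCEPT_EULA=1" \
  -e "MSSQL_SA_PASSWORD=<your-strong-password>" \
  -p 1433:1433 \
  --name azuresqledge \
  -d mcr.microsoft.com/azure-sql-edge
```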
## Connect to the Database Engine from within the container
azure-sql-edge Onnx Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/onnx-overview.md
Last updated 09/14/2023-+ keywords: deploy SQL Edge
azure-vmware Azure Vmware Solution Horizon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-horizon.md
To understand the Azure virtual machine sizes that are required for the Horizon
## Next steps
-To learn more about VMware Horizon on Azure VMware Solution, read the [VMware Horizon FAQ](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/products/horizon/vmw-horizon-on-microsoft-azure-vmware-solution-faq.pdf).
+To learn more about VMware Horizon on Azure VMware Solution, read the [VMware Horizon FAQ](https://www.vmware.com/docs/vmw-horizon-faqs).
backup Backup Support Matrix Iaas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-iaas.md
Managed disks | Supported.
Encrypted disks | Supported.<br/><br/> Azure VMs enabled with Azure Disk Encryption can be backed up (with or without the Microsoft Entra app).<br/><br/> Encrypted VMs can't be recovered at the file or folder level. You must recover the entire VM.<br/><br/> You can enable encryption on VMs that Azure Backup is already protecting. <br><br> You can back up and restore disks encrypted via platform-managed keys or customer-managed keys. You can also assign a disk-encryption set while restoring in the same region. That is, providing a disk-encryption set while performing cross-region restore is currently not supported. However, you can assign the disk-encryption set to the restored disk after the restore is complete. Disks with a write accelerator enabled | Azure VMs with disk backup for a write accelerator became available in all Azure public regions on May 18, 2022. If disk backup for a write accelerator is not required as part of VM backup, you can choose to remove it by using the [selective disk feature](selective-disk-backup-restore.md). <br><br>**Important** <br> Virtual machines with write accelerator disks need internet connectivity for a successful backup, even though those disks are excluded from the backup. Disks enabled for access with a private endpoint | Supported.
+Disks with both public and private access disabled | Supported.
Backup and restore of deduplicated VMs or disks | Azure Backup doesn't support deduplication. For more information, see [this article](./backup-support-matrix.md#disk-deduplication-support). <br/> <br/> Azure Backup doesn't deduplicate across VMs in the Recovery Services vault. <br/> <br/> If there are VMs in a deduplication state during restore, the files can't be restored because the vault doesn't understand the format. However, you can successfully perform the full VM restore. Adding a disk to a protected VM | Supported. Resizing a disk on a protected VM | Supported.
backup Blob Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/blob-backup-support-matrix.md
Title: Support matrix for Azure Blobs backup description: Provides a summary of support settings and limitations when backing up Azure Blobs. Previously updated : 07/24/2024 Last updated : 07/31/2024
Operational backup for blobs is available in all public cloud regions, except Fr
# [Vaulted backup](#tab/vaulted-backup)
-Vaulted backup for blobs is currently available in all public regions **except** South Africa West, Sweden Central, Sweden South, Israel Central, Poland Central, India Central, Italy North and Malaysia South.
+Vaulted backup for blobs is available in all public regions.
chaos-studio Chaos Studio Tutorial Aks Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-aks-cli.md
az rest --method get --uri https://management.azure.com/subscriptions/$SUBSCRIPT
```azurecli-interactive
-az role assignment create --role "Azure Kubernetes Service Cluster Admin Role" --assignee-object-id $EXPERIMENT_PRINCIPAL_ID --assignee-principal-type "ServicePrincipal" --scope subscriptions/$SUBSCRIPTION_ID/resourceGroups/$resourceGroupName/providers/Microsoft.ContainerService/managedClusters/$AKS_CLUSTER_NAME
+az role assignment create --role "Azure Kubernetes Service Cluster Admin Role" --assignee-principal-type "ServicePrincipal" --assignee-object-id $EXPERIMENT_PRINCIPAL_ID --scope subscriptions/$SUBSCRIPTION_ID/resourceGroups/$resourceGroupName/providers/Microsoft.ContainerService/managedClusters/$AKS_CLUSTER_NAME
``` ## Run your experiment
chaos-studio Experiment Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/experiment-examples.md
Last updated 05/07/2024 -+
Here's an example of where you would copy and paste the Azure portal parameter i
[![Screenshot that shows Azure portal parameter location.](images/azure-portal-parameter-examples.png)](images/azure-portal-parameter-examples.png#lightbox)
+To save one of the "experiment.json" examples shown below, type *nano experiment.json* in Azure Cloud Shell, copy and paste any of the following experiment examples into the file, save it (Ctrl+O), exit nano (Ctrl+X), and then run the following command:
+ ```AzCLI
+az rest --method put --uri https://management.azure.com/subscriptions/6b052e15-03d3-4f17-b2e1-be7f07588291/resourceGroups/exampleRG/providers/Microsoft.Chaos/experiments/exampleExperiment?api-version=2024-01-01 --body @experiment.json
+```
+> [!NOTE]
+> This is the generic command you would use to create any experiment from the Azure CLI.
+ > [!NOTE]
-> Make sure your experiment has permission to operate on **ALL** resources within the experiment. These examples exclusively use **System-assigned managed identity**, but we also support User-assigned managed identity. For more information, see [Experiment permissions](chaos-studio-permissions-security.md).
+> Make sure your experiment has permission to operate on **ALL** resources within the experiment. These examples exclusively use **System-assigned managed identity**, but we also support User-assigned managed identity. For more information, see [Experiment permissions](chaos-studio-permissions-security.md). These experiments will **NOT** run without granting the experiment permission to run on the target resources.
><br> ><br> >View all available role assignments [here](chaos-studio-fault-providers.md) to determine which permissions are required for your target resources. ++
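As a sketch of what granting that permission can look like for an AKS target (mirroring the role assignment shown in the AKS tutorial earlier in this digest), the role, IDs, and names below are placeholders that depend on your target resource type:

```azurecli
az role assignment create \
  --role "Azure Kubernetes Service Cluster Admin Role" \
  --assignee-object-id <experiment-principal-id> \
  --assignee-principal-type "ServicePrincipal" \
  --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ContainerService/managedClusters/<aks-cluster-name>
```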
-Azure Kubernetes Service (AKS) Network Delay
+Azure Kubernetes Service (AKS) - Network Delay
+**Experiment Description** This experiment delays network communication by 200ms.
-### [Azure CLI](#tab/azure-CLI)
-```AzCLI
-PUT https://management.azure.com/subscriptions/6b052e15-03d3-4f17-b2e1-be7f07588291/resourceGroups/exampleRG/providers/Microsoft.Chaos/experiments/exampleExperiment?api-version=2024-01-01
+### [Azure CLI Experiment.JSON](#tab/azure-CLI)
+```AzCLI
{ "identity": { "type": "SystemAssigned",
- "principalId": "35g5795t-8sd4-5b99-a7c8-d5asdh9as7",
- "tenantId": "asd79ash-7daa-95hs-0as8-f3md812e3md"
}, "tags": {}, "location": "westus", "properties": {
- "provisioningState": "Succeeded",
"selectors": [ { "type": "List",
PUT https://management.azure.com/subscriptions/6b052e15-03d3-4f17-b2e1-be7f07588
``` - ### [Azure portal parameters](#tab/azure-portal) ```Azure portal {"action":"delay","mode":"all","selector":{"namespaces":["default"]},"delay":{"latency":"200ms","correlation":"100","jitter":"0ms"}} ```
-Azure Kubernetes Service (AKS) Pod Failure
+Azure Kubernetes Service (AKS) - Pod Failure
+**Experiment Description** This experiment takes down all pods in the cluster for 10 minutes.
-### [Azure CLI](#tab/azure-CLI)
+### [Azure CLI Experiment.JSON](#tab/azure-CLI)
```AzCLI
-PUT https://management.azure.com/subscriptions/6b052e15-03d3-4f17-b2e1-be7f07588291/resourceGroups/exampleRG/providers/Microsoft.Chaos/experiments/exampleExperiment?api-version=2024-01-01
- { "identity": { "type": "SystemAssigned",
- "principalId": "35g5795t-8sd4-5b99-a7c8-d5asdh9as7",
- "tenantId": "asd79ash-7daa-95hs-0as8-f3md812e3md"
}, "tags": {}, "location": "westus", "properties": {
- "provisioningState": "Succeeded",
"selectors": [ { "type": "List",
PUT https://management.azure.com/subscriptions/6b052e15-03d3-4f17-b2e1-be7f07588
{"action":"pod-failure","mode":"all","duration":"600s","selector":{"namespaces":["autoinstrumentationdemo"]}} ```
-Azure Kubernetes Service (AKS) Memory Stress
+Azure Kubernetes Service (AKS) - Memory Stress
+**Experiment Description** This experiment stresses the memory of all targeted AKS pods, using four workers to consume up to 95% of memory, for 10 minutes.
### [Azure CLI](#tab/azure-CLI) ```AzCLI
-PUT https://management.azure.com/subscriptions/6b052e15-03d3-4f17-b2e1-be7f07588291/resourceGroups/exampleRG/providers/Microsoft.Chaos/experiments/exampleExperiment?api-version=2024-01-01
- { "identity": { "type": "SystemAssigned",
- "principalId": "35g5795t-8sd4-5b99-a7c8-d5asdh9as7",
- "tenantId": "asd79ash-7daa-95hs-0as8-f3md812e3md"
}, "tags": {}, "location": "westus", "properties": {
- "provisioningState": "Succeeded",
"selectors": [ { "type": "List",
PUT https://management.azure.com/subscriptions/6b052e15-03d3-4f17-b2e1-be7f07588
```Azure portal {"mode":"all","selector":{"namespaces":["autoinstrumentationdemo"]},"stressors":{"memory":{"workers":4,"size":"95%"}}} ```
-
++
+Azure Kubernetes Service (AKS) - CPU Stress
+
+**Experiment Description** This experiment stresses the CPU of all targeted pods in the AKS cluster, using four workers at 95% load, for 10 minutes.
+
+### [Azure CLI](#tab/azure-CLI)
+```AzCLI
+{
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ "tags": {},
+ "location": "westus",
+ "properties": {
+ "selectors": [
+ {
+ "type": "List",
+ "targets": [
+ {
+ "id": "/subscriptions/123hdq8-123d-89d7-5670-123123/resourceGroups/aks_memory_stress_experiment/providers/Microsoft.ContainerService/managedClusters/nikhilAKScluster/providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh",
+ "type": "ChaosTarget"
+ }
+ ],
+ "id": "Selector1"
+ }
+ ],
+ "steps": [
+ {
+ "name": "AKS CPU stress",
+ "branches": [
+ {
+ "name": "AKS CPU stress",
+ "actions": [
+ {
+ "type": "continuous",
+ "selectorId": "Selector1",
+ "duration": "PT10M",
+ "parameters": [
+ {
+ "key": "jsonSpec",
+ "value": "{\"mode\":\"all\",\"selector\":{\"namespaces\":[\"autoinstrumentationdemo\"]},\"stressors\":{\"cpu\":{\"workers\":4,\"load\":95}}}"
+ }
+ ],
+ "name": "urn:csci:microsoft:azureKubernetesServiceChaosMesh:stressChaos/2.1"
+ }
+ ]
+ }
+ ]
+ }
+ ]
+ }
+}
+```
+
+### [Azure portal parameters](#tab/azure-portal)
+
+```Azure portal
+{"mode":"all","selector":{"namespaces":["autoinstrumentationdemo"]},"stressors":{"cpu":{"workers":4,"load":95}}}
+```
+
+Azure Kubernetes Service (AKS) - Network Emulation
+
+**Experiment Description** This experiment applies a network emulation to all pods in the specified namespace, adding a latency of 100ms and a packet loss of 0.1% for 5 minutes.
++
+### [Azure CLI Experiment.JSON](#tab/azure-CLI)
+```AzCLI
+{
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ "tags": {},
+ "location": "westus",
+ "properties": {
+ "selectors": [
+ {
+ "type": "List",
+ "targets": [
+ {
+ "id": "/subscriptions/123hdq8-123d-89d7-5670-123123/resourceGroups/aks_network_emulation_experiment/providers/Microsoft.ContainerService/managedClusters/nikhilAKScluster/providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh",
+ "type": "ChaosTarget"
+ }
+ ],
+ "id": "Selector1"
+ }
+ ],
+ "steps": [
+ {
+ "name": "AKS network emulation",
+ "branches": [
+ {
+ "name": "AKS network emulation",
+ "actions": [
+ {
+ "type": "continuous",
+ "selectorId": "Selector1",
+ "duration": "PT5M",
+ "parameters": [
+ {
+ "key": "jsonSpec",
+ "value": "{\"action\":\"netem\",\"mode\":\"all\",\"selector\":{\"namespaces\":[\"default\"]},\"netem\":{\"latency\":\"100ms\",\"loss\":\"0.1\",\"correlation\":\"25\"}}"
+ }
+ ],
+ "name": "urn:csci:microsoft:azureKubernetesServiceChaosMesh:networkChaos/2.1"
+ }
+ ]
+ }
+ ]
+ }
+ ]
+ }
+}
+
+```
+
+### [Azure portal parameters](#tab/azure-portal)
+
+```Azure portal
+{"action":"netem","mode":"all","selector":{"namespaces":["default"]},"netem":{"latency":"100ms","loss":"0.1","correlation":"25"}}
+```
+
+Azure Kubernetes Service (AKS) - Network Partition
+
+**Experiment Description** This experiment partitions the network for all pods in the specified namespace, simulating a network split in the 'to' direction for 5 minutes.
++
+### [Azure CLI Experiment.JSON](#tab/azure-CLI)
+```AzCLI
+{
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ "tags": {},
+ "location": "westus",
+ "properties": {
+ "selectors": [
+ {
+ "type": "List",
+ "targets": [
+ {
+ "id": "/subscriptions/123hdq8-123d-89d7-5670-123123/resourceGroups/aks_partition_experiment/providers/Microsoft.ContainerService/managedClusters/nikhilAKScluster/providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh",
+ "type": "ChaosTarget"
+ }
+ ],
+ "id": "Selector1"
+ }
+ ],
+ "steps": [
+ {
+ "name": "AKS network partition",
+ "branches": [
+ {
+ "name": "AKS network partition",
+ "actions": [
+ {
+ "type": "continuous",
+ "selectorId": "Selector1",
+ "duration": "PT5M",
+ "parameters": [
+ {
+ "key": "jsonSpec",
+ "value": "{\"action\":\"partition\",\"mode\":\"all\",\"selector\":{\"namespaces\":[\"default\"]},\"partition\":{\"direction\":\"to\"}}"
+ }
+ ],
+ "name": "urn:csci:microsoft:azureKubernetesServiceChaosMesh:networkChaos/2.1"
+ }
+ ]
+ }
+ ]
+ }
+ ]
+ }
+}
+
+```
+
+### [Azure portal parameters](#tab/azure-portal)
+
+```Azure portal
+{"action":"partition","mode":"all","selector":{"namespaces":["default"]},"partition":{"direction":"to"}}
+```
+
+Azure Kubernetes Service (AKS) - Network Bandwidth Limitation
+
+**Experiment Description** This experiment limits the network bandwidth for all pods in the specified namespace to 1mbps, with additional parameters for limit, buffer, peak rate, and minimum burst, for 5 minutes.
++
+### [Azure CLI Experiment.JSON](#tab/azure-CLI)
+```AzCLI
+{
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ "tags": {},
+ "location": "westus",
+ "properties": {
+ "selectors": [
+ {
+ "type": "List",
+ "targets": [
+ {
+ "id": "/subscriptions/123hdq8-123d-89d7-5670-123123/resourceGroups/aks_bandwidth_experiment/providers/Microsoft.ContainerService/managedClusters/nikhilAKScluster/providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh",
+ "type": "ChaosTarget"
+ }
+ ],
+ "id": "Selector1"
+ }
+ ],
+ "steps": [
+ {
+ "name": "AKS network bandwidth",
+ "branches": [
+ {
+ "name": "AKS network bandwidth",
+ "actions": [
+ {
+ "type": "continuous",
+ "selectorId": "Selector1",
+ "duration": "PT5M",
+ "parameters": [
+ {
+ "key": "jsonSpec",
+ "value": "{\"action\":\"bandwidth\",\"mode\":\"all\",\"selector\":{\"namespaces\":[\"default\"]},\"bandwidth\":{\"rate\":\"1mbps\",\"limit\":\"50mb\",\"buffer\":\"10kb\",\"peakrate\":\"1mbps\",\"minburst\":\"0\"}}"
+ }
+ ],
+ "name": "urn:csci:microsoft:azureKubernetesServiceChaosMesh:networkChaos/2.1"
+ }
+ ]
+ }
+ ]
+ }
+ ]
+ }
+}
+
+```
+
+### [Azure portal parameters](#tab/azure-portal)
+
+```Azure portal
+{"action":"bandwidth","mode":"all","selector":{"namespaces":["default"]},"bandwidth":{"rate":"1mbps","limit":"50mb","buffer":"10kb","peakrate":"1mbps","minburst":"0"}}
+```
+
+Azure Kubernetes Service (AKS) - Network Packet Re-order
+
+**Experiment Description** This experiment reorders network packets for all pods in the specified namespace, with a gap of 5 packets and a reorder percentage of 25% for 5 minutes.
++
+### [Azure CLI Experiment.JSON](#tab/azure-CLI)
+```AzCLI
+{
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ "tags": {},
+ "location": "westus",
+ "properties": {
+ "selectors": [
+ {
+ "type": "List",
+ "targets": [
+ {
+ "id": "/subscriptions/123hdq8-123d-89d7-5670-123123/resourceGroups/aks_reorder_experiment/providers/Microsoft.ContainerService/managedClusters/nikhilAKScluster/providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh",
+ "type": "ChaosTarget"
+ }
+ ],
+ "id": "Selector1"
+ }
+ ],
+ "steps": [
+ {
+ "name": "AKS network reorder",
+ "branches": [
+ {
+ "name": "AKS network reorder",
+ "actions": [
+ {
+ "type": "continuous",
+ "selectorId": "Selector1",
+ "duration": "PT5M",
+ "parameters": [
+ {
+ "key": "jsonSpec",
+ "value": "{\"action\":\"reorder\",\"mode\":\"all\",\"selector\":{\"namespaces\":[\"default\"]},\"reorder\":{\"gap\":\"5\",\"reorder\":\"25\",\"correlation\":\"50\"}}"
+ }
+ ],
+ "name": "urn:csci:microsoft:azureKubernetesServiceChaosMesh:networkChaos/2.1"
+ }
+ ]
+ }
+ ]
+ }
+ ]
+ }
+}
+
+```
+
+### [Azure portal parameters](#tab/azure-portal)
+
+```Azure portal
+{"action":"reorder","mode":"all","selector":{"namespaces":["default"]},"reorder":{"gap":"5","reorder":"25","correlation":"50"}}
+```
+
+Azure Kubernetes Service (AKS) - Network Packet Loss
+
+**Experiment Description** This experiment simulates a packet loss of 10% for all pods in the specified namespace for 5 minutes.
++
+### [Azure CLI Experiment.JSON](#tab/azure-CLI)
+```AzCLI
+{
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ "tags": {},
+ "location": "westus",
+ "properties": {
+ "selectors": [
+ {
+ "type": "List",
+ "targets": [
+ {
+ "id": "/subscriptions/123hdq8-123d-89d7-5670-123123/resourceGroups/aks_loss_experiment/providers/Microsoft.ContainerService/managedClusters/nikhilAKScluster/providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh",
+ "type": "ChaosTarget"
+ }
+ ],
+ "id": "Selector1"
+ }
+ ],
+ "steps": [
+ {
+ "name": "AKS network loss",
+ "branches": [
+ {
+ "name": "AKS network loss",
+ "actions": [
+ {
+ "type": "continuous",
+ "selectorId": "Selector1",
+ "duration": "PT5M",
+ "parameters": [
+ {
+ "key": "jsonSpec",
+ "value": "{\"action\":\"loss\",\"mode\":\"all\",\"selector\":{\"namespaces\":[\"default\"]},\"loss\":{\"loss\":\"10\",\"correlation\":\"25\"}}"
+ }
+ ],
+ "name": "urn:csci:microsoft:azureKubernetesServiceChaosMesh:networkChaos/2.1"
+ }
+ ]
+ }
+ ]
+ }
+ ]
+ }
+}
+
+```
+
+### [Azure portal parameters](#tab/azure-portal)
+
+```Azure portal
+{"action":"loss","mode":"all","selector":{"namespaces":["default"]},"loss":{"loss":"10","correlation":"25"}}
+```
+
+Azure Kubernetes Service (AKS) - Network Packet Duplication
+
+**Experiment Description** This experiment duplicates 50% of the network packets for all pods in the specified namespace for 5 minutes.
++
+### [Azure CLI Experiment.JSON](#tab/azure-CLI)
+```AzCLI
+{
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ "tags": {},
+ "location": "westus",
+ "properties": {
+ "selectors": [
+ {
+ "type": "List",
+ "targets": [
+ {
+ "id": "/subscriptions/123hdq8-123d-89d7-5670-123123/resourceGroups/aks_duplicate_experiment/providers/Microsoft.ContainerService/managedClusters/nikhilAKScluster/providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh",
+ "type": "ChaosTarget"
+ }
+ ],
+ "id": "Selector1"
+ }
+ ],
+ "steps": [
+ {
+ "name": "AKS network duplicate",
+ "branches": [
+ {
+ "name": "AKS network duplicate",
+ "actions": [
+ {
+ "type": "continuous",
+ "selectorId": "Selector1",
+ "duration": "PT5M",
+ "parameters": [
+ {
+ "key": "jsonSpec",
+ "value": "{\"action\":\"duplicate\",\"mode\":\"all\",\"selector\":{\"namespaces\":[\"default\"]},\"duplicate\":{\"duplicate\":\"50\",\"correlation\":\"50\"}}"
+ }
+ ],
+ "name": "urn:csci:microsoft:azureKubernetesServiceChaosMesh:networkChaos/2.1"
+ }
+ ]
+ }
+ ]
+ }
+ ]
+ }
+}
+
+```
+
+### [Azure portal parameters](#tab/azure-portal)
+
+```Azure portal
+{"action":"duplicate","mode":"all","selector":{"namespaces":["default"]},"duplicate":{"duplicate":"50","correlation":"50"}}
+```
+
+Azure Kubernetes Service (AKS) - Network Packet Corruption
+
+**Experiment Description** This experiment corrupts 50% of the network packets for all pods in the specified namespace for 5 minutes.
++
+### [Azure CLI Experiment.JSON](#tab/azure-CLI)
+```AzCLI
+{
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ "tags": {},
+ "location": "westus",
+ "properties": {
+ "selectors": [
+ {
+ "type": "List",
+ "targets": [
+ {
+ "id": "/subscriptions/123hdq8-123d-89d7-5670-123123/resourceGroups/aks_corrupt_experiment/providers/Microsoft.ContainerService/managedClusters/nikhilAKScluster/providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh",
+ "type": "ChaosTarget"
+ }
+ ],
+ "id": "Selector1"
+ }
+ ],
+ "steps": [
+ {
+ "name": "AKS network corrupt",
+ "branches": [
+ {
+ "name": "AKS network corrupt",
+ "actions": [
+ {
+ "type": "continuous",
+ "selectorId": "Selector1",
+ "duration": "PT5M",
+ "parameters": [
+ {
+ "key": "jsonSpec",
+ "value": "{\"action\":\"corrupt\",\"mode\":\"all\",\"selector\":{\"namespaces\":[\"default\"]},\"corrupt\":{\"corrupt\":\"50\",\"correlation\":\"50\"}}"
+ }
+ ],
+ "name": "urn:csci:microsoft:azureKubernetesServiceChaosMesh:networkChaos/2.1"
+ }
+ ]
+ }
+ ]
+ }
+ ]
+ }
+}
+
+```
+
+### [Azure portal parameters](#tab/azure-portal)
+
+```Azure portal
+{"action":"corrupt","mode":"all","selector":{"namespaces":["default"]},"corrupt":{"corrupt":"50","correlation":"50"}}
+```
+
+Azure Load Test - Start/Stop Load Test (With Delay)
+
+**Experiment Description** This experiment starts an existing Azure load test, then waits for 10 minutes using the "delay" action before stopping the load test.
++
+### [Azure CLI Experiment.JSON](#tab/azure-CLI)
+```AzCLI
+{
+
+"identity": {
+    "type": "SystemAssigned"
+ },
+ "tags": {},
+ "location": "eastus",
+ "properties": {
+ "selectors": [
+ {
+ "type": "List",
+ "targets": [
+ {
+ "id": "/subscriptions/123hdq8-123d-89d7-5670-123123/resourceGroups/nikhilLoadTest/providers/microsoft.loadtestservice/loadtests/Nikhil-Demo-Load-Test/providers/Microsoft.Chaos/targets/microsoft-azureloadtest",
+ "type": "ChaosTarget"
+ }
+ ],
+ "id": "66e5124c-12db-4f7e-8549-7299c5828bff"
+ },
+ {
+ "type": "List",
+ "targets": [
+ {
+ "id": "/subscriptions/123hdq8-123d-89d7-5670-123123/resourceGroups/builddemo/providers/microsoft.loadtestservice/loadtests/Nikhil-Demo-Load-Test/providers/Microsoft.Chaos/targets/microsoft-azureloadtest",
+ "type": "ChaosTarget"
+ }
+ ],
+ "id": "9dc23b43-81ca-42c3-beae-3fe8ac80c30b"
+ }
+ ],
+ "steps": [
+ {
+ "name": "Step 1 - Start Load Test",
+ "branches": [
+ {
+ "name": "Branch 1",
+ "actions": [
+ {
+ "selectorId": "66e5124c-12db-4f7e-8549-7299c5828bff",
+ "type": "discrete",
+ "parameters": [
+ {
+ "key": "testId",
+ "value": "ae24e6z9-d88d-4752-8552-c73e8a9adebc"
+ }
+ ],
+ "name": "urn:csci:microsoft:azureLoadTest:start/1.0"
+ },
+ {
+ "type": "delay",
+ "duration": "PT10M",
+ "name": "urn:csci:microsoft:chaosStudio:TimedDelay/1.0"
+ }
+ ]
+ }
+ ]
+ },
+ {
+ "name": "Step 2 - End Load test",
+ "branches": [
+ {
+ "name": "Branch 1",
+ "actions": [
+ {
+ "selectorId": "9dc23b43-81ca-42c3-beae-3fe8ac80c30b",
+ "type": "discrete",
+ "parameters": [
+ {
+ "key": "testId",
+ "value": "ae24e6z9-d88d-4752-8552-c73e8a9adebc"
+ }
+ ],
+ "name": "urn:csci:microsoft:azureLoadTest:stop/1.0"
+ }
+ ]
+ }
+ ]
+ }
+ ]
+ }
+}
+
+```
+
+### [Azure portal parameters](#tab/azure-portal)
+
+```Azure portal
+ae24e6z9-d88d-4752-8552-c73e8a9adebc
+```
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md
Title: List of updates applied to the Azure Guest OS | Microsoft Docs
description: This article lists the Microsoft Security Response Center updates applied to different Azure Guest OS. See if an update applies to your Guest OS. -+ ms.assetid: d0a272a9-ed01-4f4c-a0b3-bd5e841bdd77 Previously updated : 07/23/2024- Last updated : 07/31/2024+ # Azure Guest OS The following tables show the Microsoft Security Response Center (MSRC) updates applied to the Azure Guest OS. Search this article to determine if a particular update applies to your Guest OS. Updates always carry forward for the particular [family][family-explain] they were introduced in.
+## July 2024 Guest OS
+
+| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
+| | | | | |
+| Rel 24-07 | 5040430 | Latest Cumulative Update(LCU) | [6.73] | Jul 09, 2024 |
+| Rel 24-07 | 5040437 | Latest Cumulative Update(LCU) | [7.43] | Jul 09, 2024 |
+| Rel 24-07 | 5040434 | Latest Cumulative Update(LCU) | [5.97] | Jul 09, 2024 |
+| Rel 24-07 | 5039909 | .NET Framework 3.5 Security and Quality Rollup | [2.153] | Jul 09, 2024 |
+| Rel 24-07 | 5039882 | .NET Framework 4.7.2 Cumulative Update LKG | [2.153] | Jul 09, 2024 |
+| Rel 24-07 | 5039910 | .NET Framework 3.5 Security and Quality Rollup LKG |[4.133] | Jul 09, 2024 |
+| Rel 24-07 | 5039881 | .NET Framework 4.7.2 Cumulative Update LKG |[4.133] | Jul 09, 2024 |
+| Rel 24-07 | 5039908 | .NET Framework 3.5 Security and Quality Rollup LKG | [3.141] | Jul 09, 2024 |
+| Rel 24-07 | 5039880 | .NET Framework 4.7.2 Cumulative Update LKG | [3.141] | Jul 09, 2024 |
+| Rel 24-07 | 5039879 | .NET Framework DotNet | [6.73] | Jul 09, 2024 |
+| Rel 24-07 | 5039889 | .NET Framework 4.8 Security and Quality Rollup LKG | [7.43] | Jul 09, 2024 |
+| Rel 24-07 | 5040497 | Monthly Rollup | [2.153] | Jul 09, 2024 |
+| Rel 24-07 | 5040485 | Monthly Rollup | [3.141] | Jul 09, 2024 |
+| Rel 24-07 | 5040456 | Monthly Rollup | [4.133] | Jul 09, 2024 |
+| Rel 24-07 | 5040570 | Servicing Stack Update | [3.141] | Jul 09, 2024 |
+| Rel 24-07 | 5040569 | Servicing Stack Update | [4.133] | Jul 09, 2024 |
+| Rel 24-07 | 5040562 | Servicing Stack Update | [5.97] | Jul 09, 2024 |
+| Rel 24-07 | 5039339 | Servicing Stack Update LKG | [2.153] | Jul 09, 2024 |
+| Rel 24-07 | 5040571 | Servicing Stack Update | [7.43] | Jul 09, 2024 |
+| Rel 24-07 | 5040563 | Servicing Stack Update | [6.73] | Jul 09, 2024 |
+| Rel 24-07 | 4494175 | January '20 Microcode | [5.97] | Sep 1, 2020 |
+| Rel 24-07 | 4494175 | January '20 Microcode | [6.73] | Sep 1, 2020 |
+
+[5040430]: https://support.microsoft.com/kb/5040430
+[5040437]: https://support.microsoft.com/kb/5040437
+[5040434]: https://support.microsoft.com/kb/5040434
+[5039909]: https://support.microsoft.com/kb/5039909
+[5039882]: https://support.microsoft.com/kb/5039882
+[5039910]: https://support.microsoft.com/kb/5039910
+[5039881]: https://support.microsoft.com/kb/5039881
+[5039908]: https://support.microsoft.com/kb/5039908
+[5039880]: https://support.microsoft.com/kb/5039880
+[5039879]: https://support.microsoft.com/kb/5039879
+[5039889]: https://support.microsoft.com/kb/5039889
+[5040497]: https://support.microsoft.com/kb/5040497
+[5040485]: https://support.microsoft.com/kb/5040485
+[5040456]: https://support.microsoft.com/kb/5040456
+[5040570]: https://support.microsoft.com/kb/5040570
+[5040569]: https://support.microsoft.com/kb/5040569
+[5040562]: https://support.microsoft.com/kb/5040562
+[5039339]: https://support.microsoft.com/kb/5039339
+[5040571]: https://support.microsoft.com/kb/5040571
+[5040563]: https://support.microsoft.com/kb/5040563
+[4494175]: https://support.microsoft.com/kb/4494175
+[2.153]: ./cloud-services-guestos-update-matrix.md#family-2-releases
+[3.141]: ./cloud-services-guestos-update-matrix.md#family-3-releases
+[4.133]: ./cloud-services-guestos-update-matrix.md#family-4-releases
+[5.97]: ./cloud-services-guestos-update-matrix.md#family-5-releases
+[6.73]: ./cloud-services-guestos-update-matrix.md#family-6-releases
+[7.43]: ./cloud-services-guestos-update-matrix.md#family-7-releases
+ ## June 2024 Guest OS | Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
cloud-services Cloud Services Guestos Update Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-update-matrix.md
Title: Learn about the latest Azure Guest OS Releases | Microsoft Docs
description: The latest release news and SDK compatibility for Azure Cloud Services Guest OS. -+ ms.assetid: 6306cafe-1153-44c7-8554-623b03d59a34 Previously updated : 07/23/2024- Last updated : 07/31/2024+ # Azure Guest OS releases and SDK compatibility matrix
Unsure about how to update your Guest OS? Check [this][cloud updates] out.
## News updates
+###### **July 31, 2024**
+The July Guest OS released.
+ ###### **June 27, 2024** The June Guest OS released.
The September Guest OS released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-7.43_202407-01 | July 31, 2024 | Post 7.46 |
| WA-GUEST-OS-7.42_202406-01 | June 27, 2024 | Post 7.45 | | WA-GUEST-OS-7.41_202405-01 | June 1, 2024 | Post 7.44 |
-| WA-GUEST-OS-7.40_202404-01 | April 19, 2024 | Post 7.43 |
+|~~WA-GUEST-OS-7.40_202404-01~~| April 19, 2024 | July 31, 2024 |
|~~WA-GUEST-OS-7.39_202403-02~~| April 9, 2024 | June 27, 2024 | |~~WA-GUEST-OS-7.38_202402-01~~| February 24, 2024 | June 1, 2024 | |~~WA-GUEST-OS-7.37_202401-01~~| January 22, 2024 | April 19, 2024 |
The September Guest OS released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-6.73_202407-01 | July 31, 2024 | Post 6.76 |
| WA-GUEST-OS-6.72_202406-01 | June 27, 2024 | Post 6.75 | | WA-GUEST-OS-6.71_202405-01 | June 1, 2024 | Post 6.74 |
-| WA-GUEST-OS-6.70_202404-01 | April 19, 2024 | Post 6.73 |
+|~~WA-GUEST-OS-6.70_202404-01~~| April 19, 2024 | July 31, 2024 |
|~~WA-GUEST-OS-6.69_202403-02~~| April 9, 2024 | June 27, 2024 | |~~WA-GUEST-OS-6.68_202402-01~~| February 24, 2024 | June 1, 2024 | |~~WA-GUEST-OS-6.67_202401-01~~| January 22, 2024 | April 19, 2024 |
The September Guest OS released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-5.97_202407-01 | July 31, 2024 | Post 5.100 |
| WA-GUEST-OS-5.96_202406-01 | June 27, 2024 | Post 5.99 | | WA-GUEST-OS-5.95_202405-01 | June 1, 2024 | Post 5.98 |
-| WA-GUEST-OS-5.94_202404-01 | April 19, 2024 | Post 5.97 |
+|~~WA-GUEST-OS-5.94_202404-01~~| April 19, 2024 | July 31, 2024 |
|~~WA-GUEST-OS-5.93_202403-02~~| April 9, 2024 | June 27, 2024 | |~~WA-GUEST-OS-5.92_202402-01~~| February 24, 2024 | June 1, 2024 | |~~WA-GUEST-OS-5.91_202401-01~~| January 22, 2024 | April 19, 2024 |
The September Guest OS released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-4.133_202407-01 | July 31, 2024 | Post 4.136 |
| WA-GUEST-OS-4.132_202406-01 | June 27, 2024 | Post 4.135 | | WA-GUEST-OS-4.131_202405-01 | June 1, 2024 | Post 4.134 |
-| WA-GUEST-OS-4.130_202404-01 | April 19, 2024 | Post 4.133 |
+|~~WA-GUEST-OS-4.130_202404-01~~| April 19, 2024 | July 31, 2024 |
|~~WA-GUEST-OS-4.129_202403-02~~| April 9, 2024 | June 27, 2024 | |~~WA-GUEST-OS-4.128_202402-01~~| February 24, 2024 | June 1, 2024 | |~~WA-GUEST-OS-4.127_202401-01~~| January 22, 2024 | April 19, 2024 |
The September Guest OS released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-3.141_202407-01 | July 31, 2024 | Post 3.144 |
| WA-GUEST-OS-3.140_202406-01 | June 27, 2024 | Post 3.143 | | WA-GUEST-OS-3.139_202405-01 | June 1, 2024 | Post 3.142 |
-| WA-GUEST-OS-3.138_202404-01 | April 19, 2024 | Post 3.141 |
+|~~WA-GUEST-OS-3.138_202404-01~~| April 19, 2024 | July 31, 2024 |
|~~WA-GUEST-OS-3.137_202403-02~~| April 9, 2024 | June 27, 2024 | |~~WA-GUEST-OS-3.136_202402-01~~| February 24, 2024 | June 1, 2024 | |~~WA-GUEST-OS-3.135_202401-01~~| January 22, 2024 | April 19, 2024 |
The September Guest OS released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-2.153_202407-01 | July 31, 2024 | Post 2.156 |
| WA-GUEST-OS-2.152_202406-01 | June 27, 2024 | Post 2.155 | | WA-GUEST-OS-2.151_202405-01 | June 1, 2024 | Post 2.154 |
-| WA-GUEST-OS-2.150_202404-01 | April 19, 2024 | Post 2.153 |
+|~~WA-GUEST-OS-2.150_202404-01~~| April 19, 2024 | July 31, 2024 |
|~~WA-GUEST-OS-2.149_202403-02~~| April 9, 2024 | June 27, 2024 | |~~WA-GUEST-OS-2.148_202402-01~~| February 24, 2024 | June 1, 2024 | |~~WA-GUEST-OS-2.147_202401-01~~| January 22, 2024 | April 19, 2024 |
cloud-services Resource Health For Cloud Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/resource-health-for-cloud-services.md
Title: Resource Health for Cloud Services (Classic) description: This article talks about Resource Health Check (RHC) Support for Microsoft Azure Cloud Services (Classic) --++ Last updated 07/24/2024
cloud-shell Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/vnet/deployment.md
Fill out the form with the following information:
| **Nsg Name** | Enter the name of the NSG. The deployment creates this NSG and assigns an access rule to it. | | **Azure Container Instance OID** | Fill in the value from the prerequisite information that you gathered.<br>The example in this article uses `8fe7fd25-33fe-4f89-ade3-0e705fcf4370`. | | **Container Subnet Name** | Defaults to `cloudshellsubnet`. Enter the name of the subnet for your container. |
-| **Container Subnet Address Prefix** | The example in this article uses `10.1.0.0/16`, which provides 65,543 IP addresses for Cloud Shell instances. |
+| **Container Subnet Address Prefix** | The example in this article uses `10.0.1.0/24`, which provides 254 IP addresses for Cloud Shell instances. |
| **Relay Subnet Name** | Defaults to `relaysubnet`. Enter the name of the subnet that contains your relay. | | **Relay Subnet Address Prefix** | The example in this article uses `10.0.2.0/24`. | | **Storage Subnet Name** | Defaults to `storagesubnet`. Enter the name of the subnet that contains your storage. |
communication-services Ai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/ai.md
+
+ Title: AI in Azure Communication Services
+
+description: Learn about Communication Services AI concepts
++++ Last updated : 07/10/2024++++
+# Artificial intelligence (AI) overview
+
+Artificial intelligence (AI) technologies can be useful for a wide variety of communication experiences. This concept page summarizes availability of AI and AI-adjacent features in Azure Communication Services. AI features can be split into three categories:
+
+- **Accessors.** APIs that allow you to access Azure Communication data for the purposes of integrating your own separate transformations and bots.
+- **Transformers.** APIs that provide a built-in transformation of communication data using a machine learning or language model.
+- **Bots.** APIs that implement bots that directly communicate with end-users, typically blending structured programming with language models.
+
+Typical communication scenarios integrating these capabilities:
+
+- Transforming audio speech content into text transcriptions
+- Transforming a video feed to blur the user's background
+- Operating a chat or voice bot that responds to human conversation
+- Transforming a corpus of text chat and meeting transcriptions into summaries. This experience might involve a generative AI interface in which a user asks, "summarize all conversations between me and user Joe."
+
+## Messaging: SMS, Chat, Email, WhatsApp
+
+Azure Communication Services capabilities for asynchronous messaging share the common patterns for integrating AI listed in the following table.
+
+| Feature | Accessor | Transformer | Bot | Description |
+|--|--|--|--|--|
+| REST APIs and SDKs| ✅ | | | The messaging services center around REST APIs and server-oriented SDKs. You can use these SDKs to export content to an external datastore and attach a language model to summarize conversations. Or you can use the SDKs to integrate a bot that directly engages with human users. |
+| WhatsApp Message Analysis | | ✅ | | The Azure Communication Services messaging APIs for WhatsApp provide a built-in integration with Azure OpenAI that analyzes and annotates messages. This integration can detect the user’s language, recognize their intent, and extract key phrases. |
+| [Azure Bot – Chat Channel Integration](../quickstarts/chat/quickstart-botframework-integration.md) | | | ✅ | The Azure Communication Services chat system is directly integrated with Azure Bot services. This integration simplifies creating chat bots that engage with human users.|
+
+## Voice, Video, and Telephony
+
+The patterns for integrating AI into the voice and video system are summarized here.
+
+| Feature | Accessor | Transformer | Bot | Description |
+|--|--|--|--|--|
+| [Call Automation REST APIs and SDKs](../concepts/call-automation/call-automation.md) | ✅ | ✅ | | Call Automation APIs include both accessors and transformers, with REST APIs for playing audio files and recognizing a user’s response. The `recognize` APIs integrate Azure Bot Services to transform users’ audio content into text for easier processing by your service. The most common scenario for these APIs is implementing voice bots, sometimes called interactive voice response (IVR). |
+| [Microsoft Copilot Studio](https://learn.microsoft.com/microsoft-copilot-studio/voice-overview) | | ✅ | ✅ | Copilot studio is directly integrated with Azure Communication Services telephony. This integration is designed for voice bots and IVR. |
+| [Azure Portal Copilot](https://learn.microsoft.com/microsoft-copilot-studio/voice-overview) | | ✅ | ✅ | Copilot studio is directly integrated with Azure Communication Services telephony. This integration is designed for voice bots and IVR. |
+| [Client Raw Audio and Video](../concepts/voice-video-calling/media-access.md) | ✅ | | | The Calling client SDK provides APIs for accessing and modifying the raw audio and video feed. An example scenario is taking the video feed, detecting the human speaker and their background, and customizing that background. |
+| [Client Background effects](../quickstarts/voice-video-calling/get-started-video-effects.md?pivots=platform-web)| | ✅ | | The Calling client SDK provides APIs for blurring or replacing a user’s background. |
+| [Client Captions](../concepts/voice-video-calling/closed-captions.md) | | ✅ | | The Calling client SDK provides APIs for real-time closed captions. These internally integrate Azure Cognitive Services to transform audio content from the call into text in real-time. |
+| [Client Noise Enhancement and Effects](../tutorials/audio-quality-enhancements/add-noise-supression.md?pivots=platform-web) | | ✅ | | The Calling client SDK integrates a [DeepVQE](https://arxiv.org/abs/2306.03177) machine learning model to improve audio quality through echo cancellation, noise suppression, and dereverberation. This transformation is toggled on and off using the client SDK. |
confidential-computing Confidential Computing Enclaves https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-computing-enclaves.md
Title: Build with SGX enclaves - Azure Virtual Machines description: Learn about Intel SGX hardware to enable your confidential computing workloads. -+ Last updated 11/01/2021
confidential-computing Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-portal.md
Title: Quickstart - Create Intel SGX VM in the Azure Portal description: Get started with your deployments by learning how to quickly create an Intel SGX VM in the Azure Portal -+ Last updated 11/1/2021
cosmos-db Troubleshoot Common Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/troubleshoot-common-issues.md
Title: Troubleshoot common errors in the Azure Cosmos DB for Apache Cassandra description: This article discusses common issues in the Azure Cosmos DB for Apache Cassandra and how to troubleshoot them. -+ Last updated 03/02/2021
cosmos-db Quickstart Rag Chatbot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gen-ai/quickstart-rag-chatbot.md
Title: Quickstart - Build a RAG Chatbot description: Learn how to build a RAG chatbot in Python -+ Last updated 06/26/2024
cosmos-db Rag https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gen-ai/rag.md
Title: Retrieval Augmented Generation (RAG) in Azure Cosmos DB description: Learn about Retrieval Augmented Generation (RAG) in Azure Cosmos DB -+ Last updated 07/09/2024
cosmos-db Change Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/change-log.md
description: Notifies our customers of any minor/medium updates that were pushed
-+ Last updated 07/30/2024
cosmos-db Distribute Throughput Across Partitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/distribute-throughput-across-partitions.md
description: Learn how to redistribute throughput across partitions
-+
cosmos-db Estimate Ru Capacity Planner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/estimate-ru-capacity-planner.md
Title: Estimate costs using the Azure Cosmos DB capacity planner - API for MongoDB description: The Azure Cosmos DB capacity planner allows you to estimate the throughput (RU/s) required and cost for your workload. This article describes how to use the capacity planner to estimate the throughput and cost required when using Azure Cosmos DB for MongoDB. -+ Last updated 06/20/2024
cosmos-db Feature Support 50 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/feature-support-50.md
description: Learn about Azure Cosmos DB for MongoDB 5.0 server version supporte
-+ Last updated 04/24/2024
cosmos-db Feature Support 60 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/feature-support-60.md
description: Learn about Azure Cosmos DB for MongoDB 6.0 server version supporte
-+ Last updated 04/24/2024
cosmos-db Feature Support 70 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/feature-support-70.md
description: Learn about Azure Cosmos DB for MongoDB 7.0 server version supporte
-+ Last updated 07/30/2024
cosmos-db Prevent Rate Limiting Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/prevent-rate-limiting-errors.md
Title: Prevent rate-limiting errors for Azure Cosmos DB for MongoDB operations. description: Learn how to prevent your Azure Cosmos DB for MongoDB operations from hitting rate limiting errors with the SSR (server-side retry) feature.-+
cosmos-db Programmatic Database Migration Assistant Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/programmatic-database-migration-assistant-legacy.md
description: This doc provides an overview of the Database Migration Assistant legacy utility. -+ Last updated 04/20/2023
cosmos-db Troubleshoot Query Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/troubleshoot-query-performance.md
Title: Troubleshoot query issues when using the Azure Cosmos DB for MongoDB description: Learn how to identify, diagnose, and troubleshoot Azure Cosmos DB's API for MongoDB query issues.-+ Last updated 04/02/2024
cosmos-db Tutorial Mongotools Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/tutorial-mongotools-cosmos-db.md
description: Learn how MongoDB native tools can be used to migrate small dataset
-+ Last updated 08/26/2021
cosmos-db Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/compatibility.md
description: Review Azure Cosmos DB for MongoDB vCore supported features and syn
-+ Last updated 10/21/2023
Below are the list of operators currently supported on Azure Cosmos DB for Mongo
<tr><td><code>$text</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr> <tr><td><code>$where</code></td><td><img src="media/compatibility/no-icon.svg" alt="No"></td><td><img src="media/compatibility/no-icon.svg" alt="No"></td><td><img src="media/compatibility/no-icon.svg" alt="No"></td></tr>
-<tr><td rowspan="11">Geospatial Operators</td><td><code>$geoIntersects</code></td><td rowspan="11" colspan="3"><img src="media/compatibility/yes-icon.svg" alt="Yes">In Preview*</td></tr>
-<tr><td><code>$geoWithin</code></td></tr>
-<tr><td><code>$box</code></td></tr>
-<tr><td><code>$center</code></td></tr>
-<tr><td><code>$centerSphere</code></td></tr>
-<tr><td><code>$geometry</code></td></tr>
-<tr><td><code>$maxDistance</code></td></tr>
-<tr><td><code>$minDistance</code></td></tr>
-<tr><td><code>$polygon</code></td></tr>
-<tr><td><code>$near</code></td></tr>
-<tr><td><code>$nearSphere</code></td></tr>
+<tr><td rowspan="11">Geospatial Operators</td><td><code>$geoIntersects</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr>
+<tr><td><code>$geoWithin</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr>
+<tr><td><code>$box</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr>
+<tr><td><code>$center</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr>
+<tr><td><code>$centerSphere</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr>
+<tr><td><code>$geometry</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr>
+<tr><td><code>$maxDistance</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr>
+<tr><td><code>$minDistance</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr>
+<tr><td><code>$polygon</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr>
+<tr><td><code>$near</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr>
+<tr><td><code>$nearSphere</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr>
<tr><td rowspan="3">Array Query Operators</td><td><code>$all</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr> <tr><td><code>$elemMatch</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr>
Azure Cosmos DB for MongoDB vCore supports the following indexes and index prope
<tr><td>Multikey Index</td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr> <tr><td>Text Index</td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr> <tr><td>Wildcard Index</td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr>
-<tr><td>Geospatial Index</td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">In Preview*</td></tr>
+<tr><td>Geospatial Index</td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr>
<tr><td>Hashed Index</td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr> <tr><td>Vector Index (only available in Cosmos DB)</td><td><img src="medi>vector search</a></td></tr> </table>
cosmos-db Geospatial Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/geospatial-support.md
+
+ Title: Support for Geospatial Queries
+
+description: Introducing support for geospatial queries on vCore based Azure Cosmos DB for MongoDB.
++++++ Last updated : 07/31/2024++
+# Support for Geospatial Queries
++
+Geospatial data can now be stored and queried using vCore-based Azure Cosmos DB for MongoDB. This enhancement provides powerful tools to manage and analyze spatial data, enabling a wide range of applications such as real-time location tracking, route optimization, and spatial analytics.
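+As a minimal sketch of storing geospatial data (the `places` collection, `location` field, and coordinates below are hypothetical), you might create a geospatial index and insert a GeoJSON point before running the queries that follow:
+
+ ```json
+ db.places.createIndex({ "location": "2dsphere" })
+ db.places.insertOne({
+   "name": "Example cafe",
+   "location": { "type": "Point", "coordinates": [ -122.3493, 47.6205 ] }
+ })
+ ```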
+
+Here's a quick overview of the geospatial commands and operators now supported:
+
+## Geospatial Query Operators
+
+### **$geoIntersects**
+Selects documents where a specified geometry intersects with the documents' geometry. Useful for finding documents that share any portion of space with a given geometry.
+
+ ```json
+ db.collection.find({
+ location: {
+ $geoIntersects: {
+ $geometry: {
+ type: "<GeoJSON object type>",
+ coordinates: [[[...], [...], [...], [...]]]
+ }
+ }
+ }
+ })
+ ```
+
+### **$geoWithin**
+Selects documents with geospatial data that exists entirely within a specified shape. This operator is used to find documents within a defined area.
+
+ ```json
+ db.collection.find({
+ location: {
+ $geoWithin: {
+ $geometry: {
+ type: "Polygon",
+ coordinates: [[[...], [...], [...], [...]]]
+ }
+ }
+ }
+ })
+ ```
+
+### **$box**
+Defines a rectangular area using two coordinate pairs (bottom-left and top-right corners). Used with the `$geoWithin` operator to find documents within this rectangle. For example, finding all locations within a rectangular region on a map.
+
+ ```json
+ db.collection.find({
+ location: {
+ $geoWithin: {
+ $box: [[lowerLeftLong, lowerLeftLat], [upperRightLong, upperRightLat]]
+ }
+ }
+ })
+ ```
+
+### **$center**
+Defines a circular area using a center point and a radius, measured in the same units as the coordinate system. Used with the `$geoWithin` operator to find documents within this circle.
+
+ ```json
+ db.collection.find({
+ location: {
+ $geoWithin: {
+ $center: [[longitude, latitude], radius]
+ }
+ }
+ })
+ ```
+
+### **$centerSphere**
+Similar to `$center`, but defines a spherical area using a center point and a radius in radians. Useful for spherical geometry calculations.
+
+ ```json
+ db.collection.find({
+ location: {
+ $geoWithin: {
+ $centerSphere: [[longitude, latitude], radius]
+ }
+ }
+ })
+ ```
+
+### **$geometry**
+Specifies a GeoJSON object to define a geometry. Used with geospatial operators to perform queries based on complex shapes.
+
+ ```json
+ db.collection.find({
+ location: {
+ $geoIntersects: {
+ $geometry: {
+ type: "<GeoJSON object type>",
+ coordinates: [longitude, latitude]
+ }
+ }
+ }
+ })
+ ```
+
+### **$maxDistance**
+Specifies the maximum distance from a point for a geospatial query. Used with `$near` and `$nearSphere` operators. For example, finding all locations within 2 km of a given point.
+
+ ```json
+ db.collection.find({
+ location: {
+ $near: {
+ $geometry: {
+ type: "Point",
+ coordinates: [longitude, latitude]
+ },
+ $maxDistance: distance
+ }
+ }
+ })
+ ```
+
+### **$minDistance**
+Specifies the minimum distance from a point for a geospatial query. Used with `$near` and `$nearSphere` operators.
+
+ ```json
+ db.collection.find({
+ location: {
+ $near: {
+ $geometry: {
+ type: "Point",
+ coordinates: [longitude, latitude]
+ },
+ $minDistance: distance
+ }
+ }
+ })
+ ```
+
+### **$polygon**
+Defines a polygon using an array of coordinate pairs. Used with the `$geoWithin` operator to find documents within this polygon.
+
+ ```json
+ db.collection.find({
+ location: {
+ $geoWithin: {
+ $geometry: {
+ type: "Polygon",
+ coordinates: [[[...], [...], [...], [...]]]
+ }
+ }
+ }
+ })
+ ```
+
+### **$near**
+Finds documents that are near a specified point. Returns documents sorted by distance from the point. For example, finding the nearest restaurants to a user's location.
+
+ ```json
+ db.collection.find({
+ location: {
+ $near: {
+ $geometry: {
+ type: "Point",
+ coordinates: [longitude, latitude]
+ },
+ $maxDistance: distance
+ }
+ }
+ })
+ ```
++
+### **$nearSphere**
+Similar to `$near`, but performs calculations on a spherical surface. Useful for more accurate distance calculations on the Earth's surface.
+
+ ```json
+ db.collection.find({
+ location: {
+ $nearSphere: {
+ $geometry: {
+ type: "Point",
+ coordinates: [longitude, latitude]
+ },
+ $maxDistance: distance
+ }
+ }
+ })
+ ```
+
+## Geospatial Aggregation Stage
+
+### **$geoNear**
+Performs a geospatial query to return documents sorted by distance from a specified point. Can include additional query criteria and return distance information.
+
+ ```json
+ db.collection.aggregate([
+ {
+ $geoNear: {
+ near: {
+ type: "Point",
+ coordinates: [longitude, latitude]
+ },
+      key: "location",
+      distanceField: "distance",
+ spherical: true
+ }
+ }
+ ])
+ ```
++
+## Considerations and Unsupported Capabilities
++
+* Currently, querying with a single-ringed GeoJSON polygon whose area exceeds a single hemisphere isn't supported. In such cases, Azure Cosmos DB for MongoDB vCore returns the following error message:
+ ```json
+ Error: Custom CRS for big polygon is not supported yet.
+ ```
+* A compound index that combines a regular index with a geospatial index isn't allowed. For example:
+ ```json
+ db.collection.createIndex({a: "2d", b: 1});
+ Error: Compound 2d indexes are not supported yet
+ ```
+* Polygons with holes aren't currently supported with `$geoWithin` queries. Inserting a polygon with holes isn't restricted, but a `$geoWithin` query against it fails with the following error message:
+
+ ```json
+ Error: $geoWithin currently doesn't support polygons with holes
+ ```
+* The key field is always required in the $geoNear aggregation stage. If the key field is missing, the following error occurs:
+
+ ```json
+ Error: $geoNear requires a 'key' option as a String
+ ```
+* The `$geoNear`, `$near`, and `$nearSphere` stages don't have strict index requirements, so these queries don't fail if an index is missing.
+
+## Related content
+
+- Read more about [feature compatibility with MongoDB.](compatibility.md)
+- Review options for [migrating from MongoDB to Azure Cosmos DB for MongoDB vCore.](how-to-migrate-native-tools.md)
cosmos-db How To Assess Plan Migration Readiness https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/how-to-assess-plan-migration-readiness.md
description: Assess an existing MongoDB installation to determine if it's suitab
-+ - ignite-2023
cosmos-db Migration Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/migration-options.md
description: Review various options to migrate your data from other MongoDB sour
-+ Last updated 11/17/2023
cosmos-db Bulk Executor Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/bulk-executor-java.md
Title: Use bulk executor Java library in Azure Cosmos DB to perform bulk import
description: Bulk import and update Azure Cosmos DB documents using bulk executor Java library -+ ms.devlang: java
cosmos-db Client Metrics Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/client-metrics-java.md
description: Learn how to consume Micrometer metrics in the Java SDK for Azure C
-+ Last updated 12/14/2023
cosmos-db How To Delete By Partition Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-delete-by-partition-key.md
Title: Delete items by partition key value using the Azure Cosmos DB SDK (previ
description: Learn how to delete items by partition key value using the Azure Cosmos DB SDKs -+ Last updated 05/23/2023
cosmos-db How To Migrate From Bulk Executor Library Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-migrate-from-bulk-executor-library-java.md
Title: Migrate from the bulk executor library to the bulk support in Azure Cosmos DB Java V4 SDK description: Learn how to migrate your application from using the bulk executor library to the bulk support in Azure Cosmos DB Java V4 SDK -+ Last updated 05/13/2022
cosmos-db Manage With Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/manage-with-terraform.md
Title: Create and manage Azure Cosmos DB with terraform description: Use terraform to create and configure Azure Cosmos DB for NoSQL -+
cosmos-db Materialized Views https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/materialized-views.md
description: Learn how to efficiently query a base container by using predefined
-+
cosmos-db Migrate Relational Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/migrate-relational-data.md
description: Learn how to perform a complex data migration for one-to-few relationships from a relational database into Azure Cosmos DB for NoSQL. -+ ms.devlang: python
cosmos-db Multi Tenancy Vector Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/multi-tenancy-vector-search.md
Title: Multitenancy in Azure Cosmos DB description: Learn concepts for building multitenant gen-ai apps in Azure Cosmos DB -+ Last updated 06/26/2024
cosmos-db Query Metrics Performance Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query-metrics-performance-python.md
Title: Get NoSQL query performance and execution metrics in Azure Cosmos DB using Python SDK description: Learn how to retrieve NoSQL query execution metrics and profile NoSQL query performance of Azure Cosmos DB requests. -+ Last updated 05/15/2023
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-dotnet.md
Parse the paginated results of the query by looping through each page of results
## Related content -- [JavaScript/Node.js Quickstart](quickstart-nodejs.md)
+- [Node.js Quickstart](quickstart-nodejs.md)
- [Java Quickstart](quickstart-java.md) - [Python Quickstart](quickstart-python.md) - [Go Quickstart](quickstart-go.md)
cosmos-db Quickstart Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-go.md
Parse the paginated results of the query by looping through each page of results
## Related content - [.NET Quickstart](quickstart-dotnet.md)-- [JavaScript/Node.js Quickstart](quickstart-nodejs.md)
+- [Node.js Quickstart](quickstart-nodejs.md)
- [Java Quickstart](quickstart-java.md) - [Python Quickstart](quickstart-python.md)
cosmos-db Quickstart Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-java.md
Fetch all of the results of the query using `repository.getItemsByCategory`. Loo
## Related content - [.NET Quickstart](quickstart-dotnet.md)-- [JavaScript/Node.js Quickstart](quickstart-nodejs.md)
+- [Node.js Quickstart](quickstart-nodejs.md)
- [java Quickstart](quickstart-java.md) - [Go Quickstart](quickstart-go.md)
cosmos-db Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-python.md
Loop through the results of the query.
## Related content - [.NET Quickstart](quickstart-dotnet.md)-- [JavaScript/Node.js Quickstart](quickstart-nodejs.md)
+- [Node.js Quickstart](quickstart-nodejs.md)
- [Java Quickstart](quickstart-java.md) - [Go Quickstart](quickstart-go.md)
cosmos-db Quickstart Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-terraform.md
tags: azure-resource-manager, terraform -+ Last updated 09/22/2022
cosmos-db Samples Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/samples-terraform.md
Title: Terraform samples for Azure Cosmos DB for NoSQL description: Use Terraform to create and configure Azure Cosmos DB for NoSQL. -+
cosmos-db Throughput Control Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/throughput-control-spark.md
Title: 'Azure Cosmos DB Spark connector: Throughput control' description: Learn how you can control throughput for bulk data movements in the Azure Cosmos DB Spark connector. -+ Last updated 06/22/2022
cosmos-db Troubleshoot Java Sdk V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/troubleshoot-java-sdk-v4.md
Title: Diagnose and troubleshoot Azure Cosmos DB Java SDK v4 description: Use features like client-side logging and other third-party tools to identify, diagnose, and troubleshoot Azure Cosmos DB issues in Java SDK v4. -+ Last updated 04/01/2022 ms.devlang: java
cosmos-db Tutorial Deploy App Bicep Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/tutorial-deploy-app-bicep-aks.md
Title: 'Tutorial: Deploy an ASP.NET web application using Azure Cosmos DB for NoSQL, managed identity, and Azure Kubernetes Service via Bicep' description: Learn how to deploy an ASP.NET MVC web application with Azure Cosmos DB for NoSQL, managed identity, and Azure Kubernetes Service by using Bicep.-+
cosmos-db Vector Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/vector-search.md
The container vector policy can be described as JSON objects. Here are two examp
| **`quantizedFlat`** | Quantizes (compresses) vectors before storing on the index. This can improve latency and throughput at the cost of a small amount of accuracy. | 4096 |
| **`diskANN`** | Creates an index based on DiskANN for fast and efficient approximate search. | 4096 |
+> [!NOTE]
+> The `quantizedFlat` and `diskANN` indexes require at least 1,000 vectors to be inserted. This ensures the accuracy of the quantization process. If there are fewer than 1,000 vectors, a full scan is executed instead, which leads to higher RU charges for a vector search query.
+ A few points to note: - The `flat` and `quantizedFlat` index types use Azure Cosmos DB's index to store and read each vector when performing a vector search. Vector searches with a `flat` index are brute-force searches and produce 100% accuracy or recall. That is, it's guaranteed to find the most similar vectors in the dataset. However, there's a limitation of `505` dimensions for vectors on a flat index.
Here are examples of valid vector index policies:
"excludedPaths": [ { "path": "/_etag/?"
+ },
+ {
+ "path": "/vector1"
} ], "vectorIndexes": [
Here are examples of valid vector index policies:
"excludedPaths": [ { "path": "/_etag/?"
+ },
+ {
+      "path": "/vector1"
+ },
+ {
+      "path": "/vector2"
} ], "vectorIndexes": [
Here are examples of valid vector index policies:
] } ```
-> [!NOTE]
-> The Quantized Flat and DiskANN indexes requires that at least 1,000 vectors to be inserted. This is to ensure accuracy of the quantization process. If there are fewer than 1,000 vectors, a full scan is executed instead, and will lead to higher RU charges for a vector search query.
+
+> [!IMPORTANT]
+> The vector path should be added to the `"excludedPaths"` section of the indexing policy to ensure optimized performance during insertion. Not adding the vector path to `"excludedPaths"` results in higher RU charges and latency for vector insertions.
> [!IMPORTANT] > At this time in the vector search preview, don't use nested paths or wildcard characters in the path of the vector policy. Replace operations on the vector policy are currently not supported. + ## Perform vector search with queries using VectorDistance() Once you've created a container with the desired vector policy and inserted vector data into the container, you can conduct a vector search using the [Vector Distance](query/vectordistance.md) system function in a query. An example of a NoSQL query that projects the similarity score as the alias `SimilarityScore`, and sorts in order of most-similar to least-similar:
cosmos-db Partial Document Update Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/partial-document-update-getting-started.md
description: Learn how to use the partial document update feature with the .NET, Java, and Node SDKs for Azure Cosmos DB for NoSQL. -+
cosmos-db Concepts Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-customer-managed-keys.md
Title: Concepts of customer-managed keys in Azure Cosmos DB for PostgreSQL.
description: Concepts of customer-managed keys. -+ Last updated 04/06/2023
cosmos-db How To Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/how-to-customer-managed-keys.md
Title: How to enable encryption with customer-managed keys in Azure Cosmos DB fo
description: Steps to enable data encryption with customer-managed keys. -+ Last updated 01/03/2024
cosmos-db How To Enable Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/how-to-enable-audit.md
Title: Audit logging - Azure Cosmos DB for PostgreSQL
description: How to enable pgAudit logging in Azure Cosmos DB for PostgreSQL. -+ Last updated 10/01/2023
cosmos-db Howto Ingest Azure Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-ingest-azure-data-factory.md
Title: Using Azure Data Factory for data ingestion - Azure Cosmos DB for Postgre
description: See a step-by-step guide for using Azure Data Factory for ingestion on Azure Cosmos DB for PostgreSQL. -+ Last updated 12/13/2023
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/autoscale.md
description: Use Azure CLI to create a API for Table account and table with auto
-+ Last updated 06/22/2022
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/create.md
description: Create a API for Table table for Azure Cosmos DB
-+
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/lock.md
description: Use Azure CLI to create, list, show properties for, and delete reso
-+ Last updated 06/16/2022
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/serverless.md
description: Use Azure CLI to create a API for Table serverless account and tabl
-+ Last updated 06/16/2022
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/throughput.md
description: Azure CLI scripts for throughput (RU/s) operations for Azure Cosmos
-+
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/table/autoscale.md
Title: PowerShell script to create a table with autoscale in Azure Cosmos DB for Table description: PowerShell script to create a table with autoscale in Azure Cosmos DB for Table -+
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/table/create.md
Title: PowerShell script to create a table in Azure Cosmos DB for Table description: Learn how to use a PowerShell script to update the throughput for a database or a container in Azure Cosmos DB for Table -+
cosmos-db List Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/table/list-get.md
Title: PowerShell script to list and get Azure Cosmos DB for Table operations description: Azure PowerShell script - Azure Cosmos DB list and get operations for API for Table -+ Last updated 07/31/2020
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/table/lock.md
description: Create resource lock for Azure Cosmos DB Table API table
-+ Last updated 06/12/2020
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/table/throughput.md
Title: PowerShell scripts for throughput (RU/s) operations for Azure Cosmos DB for Table description: PowerShell scripts for throughput (RU/s) operations for Azure Cosmos DB for Table -+
cosmos-db Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/cli-samples.md
Title: Azure CLI Samples for Azure Cosmos DB for Table description: Azure CLI Samples for Azure Cosmos DB for Table -+ Last updated 08/19/2022
cosmos-db Find Request Unit Charge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/find-request-unit-charge.md
Title: Find request unit (RU) charge for a API for Table queries in Azure Cosmos DB description: Learn how to find the request unit (RU) charge for API for Table queries executed against an Azure Cosmos DB container. You can use the Azure portal, .NET, Java, Python, and Node.js languages to find the RU charge. -+ Last updated 10/14/2020
cosmos-db How To Create Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-create-account.md
Title: Create an Azure Cosmos DB for Table account
description: Learn how to create a new Azure Cosmos DB for Table account -+ ms.devlang: csharp
cosmos-db How To Create Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-create-container.md
Title: Create a container in Azure Cosmos DB for Table description: Learn how to create a container in Azure Cosmos DB for Table by using Azure portal, .NET, Java, Python, Node.js, and other SDKs. -+
cosmos-db How To Dotnet Create Item https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-dotnet-create-item.md
Title: Create an item in Azure Cosmos DB for Table using .NET
description: Learn how to create an item in your Azure Cosmos DB for Table account using the .NET SDK -+ ms.devlang: csharp
cosmos-db How To Dotnet Create Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-dotnet-create-table.md
Title: Create a table in Azure Cosmos DB for Table using .NET
description: Learn how to create a table in your Azure Cosmos DB for Table account using the .NET SDK -+ ms.devlang: csharp
cosmos-db How To Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-dotnet-get-started.md
Title: Get started with Azure Cosmos DB for Table using .NET
description: Get started developing a .NET application that works with Azure Cosmos DB for Table. This article helps you learn how to set up a project and configure access to an Azure Cosmos DB for Table endpoint. -+ ms.devlang: csharp
cosmos-db How To Dotnet Read Item https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-dotnet-read-item.md
Title: Read an item in Azure Cosmos DB for Table using .NET
description: Learn how to read an item in your Azure Cosmos DB for Table account using the .NET SDK -+ ms.devlang: csharp
cosmos-db How To Use C Plus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-use-c-plus.md
Title: Use Azure Table Storage and Azure Cosmos DB for Table with C++ description: Store structured data in the cloud using Azure Table storage or the Azure Cosmos DB for Table by using C++.-+ ms.devlang: cpp
cosmos-db How To Use Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-use-go.md
Title: Use the Azure Table client library for Go description: Store structured data in the cloud using the Azure Table client library for Go.-+ ms.devlang: golang
cosmos-db How To Use Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-use-java.md
Title: Use the Azure Tables client library for Java description: Store structured data in the cloud using the Azure Tables client library for Java.-+ ms.devlang: java
cosmos-db How To Use Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-use-nodejs.md
Title: Use Azure Table storage or Azure Cosmos DB for Table from Node.js description: Store structured data in the cloud using Azure Tables client library for Node.js.-+ ms.devlang: javascript
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/introduction.md
description: Use Azure Cosmos DB for Table to store, manage, and query massive volumes of key-value typed NoSQL data. -+ Last updated 02/28/2023
cosmos-db Manage With Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/manage-with-bicep.md
Title: Create and manage Azure Cosmos DB for Table with Bicep description: Use Bicep to create and configure Azure Cosmos DB for Table. -+
cosmos-db Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/powershell-samples.md
Title: Azure PowerShell samples for Azure Cosmos DB for Table description: Get the Azure PowerShell samples to perform common tasks in Azure Cosmos DB for Table -+
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/quickstart-dotnet.md
Title: Quickstart - Azure Cosmos DB for Table for .NET
description: Learn how to build a .NET app to manage Azure Cosmos DB for Table resources in this quickstart. -+ ms.devlang: csharp
cosmos-db Quickstart Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/quickstart-java.md
Title: Use the API for Table and Java to build an app - Azure Cosmos DB
description: This quickstart shows how to use the Azure Cosmos DB for Table to create an application with the Azure portal and Java -+ ms.devlang: java
cosmos-db Quickstart Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/quickstart-nodejs.md
Title: 'Quickstart: API for Table with Node.js - Azure Cosmos DB'
description: This quickstart shows how to use the Azure Cosmos DB for Table to create an application with the Azure portal and Node.js -+ ms.devlang: javascript
cosmos-db Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/quickstart-python.md
Title: 'Quickstart: API for Table with Python - Azure Cosmos DB' description: This quickstart shows how to access the Azure Cosmos DB for Table from a Python application using the Azure Data Tables SDK -+ ms.devlang: python
cosmos-db Resource Manager Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/resource-manager-templates.md
Title: Resource Manager templates for Azure Cosmos DB for Table description: Use Azure Resource Manager templates to create and configure Azure Cosmos DB for Table. -+
cosmos-db Samples Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/samples-dotnet.md
Title: Examples for Azure Cosmos DB for Table SDK for .NET
description: Find .NET SDK examples on GitHub for common tasks using the Azure Cosmos DB for Table. -+ ms.devlang: csharp
cosmos-db Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/support.md
Title: Azure Table Storage support in Azure Cosmos DB description: Learn how Azure Cosmos DB for Table and Azure Table Storage work together by sharing the same table data model and operations.-+ Last updated 03/07/2023
cosmos-db Tutorial Global Distribution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/tutorial-global-distribution.md
Title: Azure Cosmos DB global distribution tutorial for API for Table
description: Learn how global distribution works in Azure Cosmos DB for Table accounts and how to configure the preferred list of regions -+ Last updated 01/30/2020
cosmos-db Tutorial Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/tutorial-query.md
Title: 'Tutorial: Query Azure Cosmos DB by using the API for Table'
description: Learn how to query data stored in the Azure Cosmos DB for Table account by using OData filters and LINQ queries. -+ Last updated 03/14/2023
cost-management-billing Change Credit Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/change-credit-card.md
Previously updated : 09/13/2023 Last updated : 07/31/2024
If your payment method is being used by an MCA billing profile, the following me
To detach a payment method, a list of conditions must be met. If any conditions aren't met, instructions appear explaining how to meet the condition. A link also appears that takes you to the location where you can resolve the condition.
-When all the conditions are all satisfied, you can detach the payment method from the billing profile.
+When all the conditions are fully satisfied, you can detach the payment method from the billing profile.
> [!NOTE] > When the default payment method is detached, the billing profile is put into an _inactive_ state. Anything deleted in this process will not be able to be recovered. After a billing profile is set to inactive, you must sign up for a new Azure subscription to create new resources.
+#### Detach payment method errors
+
+There are several reasons why trying to detach a payment method might fail. If you're having problems detaching (removing) a payment method, it's most likely due to one of the following issues.
+
+##### Outstanding charges (past due charges)
+
+You can view your outstanding charges by navigating to **Cost Management + Billing** > select a billing account > under Billing, select **Invoices** > then, in the list of invoices, view the **Status**. Invoices with **Past Due** status must be paid.
+
+Here's an example of past due charges.
++
+After you pay outstanding charges, you can detach your payment method.
+
+##### Recurring charges set to auto renew
+You can view recurring charges on the Recurring charges page. Navigate to **Cost Management + Billing** > select your billing account > under Billing, select **Recurring charges**. To stop charges from automatically renewing, on the Recurring charges page, select a charge, then on the right side of the row, select the ellipsis symbol (**…**) and then select **Cancel**.
+
+Here's an example of the Recurring charges page with items that must be canceled.
++
+Examples of recurring charges include:
+
+- Azure support agreements
+- Active Azure subscriptions
+- Reservations set to auto renew
+- Savings plans set to auto renew
+
+After all recurring charges are removed, you can detach your payment method.
+
+##### Pending charges
+
+You can't detach your payment method if there are any pending charges. In the Azure portal, pending charges appear with **Due on *date*** status on the Cost Management + Billing > Billing > Invoices page. Let's look at a typical pending charges example.
+
+1. Assume that a billing cycle begins on June 1.
+2. You use Azure services from June 1 to June 10.
+3. You cancel your subscription on June 10.
+4. You pay your invoice for the month of May on June 12, and it's paid in full.
+5. However, you still have pending charges for June 1 to June 10.
+
+In this example, you aren't billed for your June usage until the invoice that covers June is issued. So, you can't detach your payment method until you pay the invoice for June, which isn't available until August.
+
+Here's an example of a pending charge.
++
+After you pay all pending charges, you can detach your payment method.
+ #### To detach a payment method 1. In the Delete a payment method area, select the **Detach the current payment method** link. 1. If all conditions are met, select **Detach**. Otherwise, continue to the next step. 1. If Detach is unavailable, a list of conditions is shown. Take the actions listed. Select the link shown in the Detach the default payment method area. Here's an example of a corrective action that explains the actions you need to take. :::image type="content" source="./media/change-credit-card/azure-subscriptions.png" alt-text="Example screenshot showing a corrective action needed to detach a payment method for MCA." :::
-1. When you select the corrective action link, you're redirected to the Azure page where you take action. Take whatever correction action is needed.
+1. When you select the corrective action link, you get redirected to the Azure page where you take action. Take whatever corrective action is needed.
1. If necessary, complete all other corrective actions. 1. Navigate back to **Cost Management + Billing** > **Billing profiles** > **Payment methods**. Select **Detach**. At the bottom of the Detach the default payment method page, select **Detach**.
If your payment method is in use by a subscription, do the following steps.
1. In the Delete a payment method area, select **Delete** if all conditions are met. If Delete is unavailable, continue to the next step. 1. A list of conditions is shown. Take the actions listed. Select the link shown in the Delete a payment method area. :::image type="content" source="./media/change-credit-card/payment-method-in-use-mosp.png" alt-text="Example screenshot showing that a payment method is in use by a pay-as-you-go subscription." :::
-1. When you select the corrective action link, you're redirected to the Azure page where you take action. Take whatever correction action is needed.
+1. When you select the corrective action link, you get redirected to the Azure page where you take action. Take whatever corrective action is needed.
1. If necessary, complete all other corrective actions. 1. Navigate back to **Cost Management + Billing** > **Billing profiles** > **Payment methods** and delete the payment method.
The following sections answer commonly asked questions about changing your credi
### Why do I keep getting a "session has expired" error message?
-If you get the `Your login session has expired. Please click here to log back in` error message even if you've already logged out and back in, try again with a private browsing session.
+If you already tried signing out and back in, yet you get the error message `Your login session has expired. Please click here to log back in`, try using a private browsing session.
### How do I use a different card for each subscription?
cost-management-billing Analyze Unexpected Charges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/analyze-unexpected-charges.md
Try the following steps:
- Verify that you have the Owner, Contributor, or Cost Management Contributor role on the subscription. - If you got an error message indicating that you reached the limit of five alerts per subscription, consider editing an existing anomaly alert rule. Add yourself as a recipient instead of creating a new rule in case you exhausted the limit.
+- Anomaly alerts are currently available only in the Azure public cloud. They aren't yet available in government or other sovereign clouds.
+
+### How can I automate the creation of an anomaly alert rule?
+
+You can automate the creation of anomaly alert rules using the [Scheduled Action API](/rest/api/cost-management/scheduled-actions/create-or-update-by-scope?view=rest-cost-management-2023-11-01&tabs=HTTP), specifying the scheduled action kind as **`InsightAlert`**.
+ ## Get help to identify charges If you used the preceding strategies and you still don't understand why you received a charge, or if you need other help with billing issues, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458).
data-factory Connector Dynamics Crm Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-dynamics-crm-office-365.md
Previously updated : 07/11/2024 Last updated : 07/31/2024 # Copy and transform data in Dynamics 365 (Microsoft Dataverse) or Dynamics CRM using Azure Data Factory or Azure Synapse Analytics
Additional properties that compare to Dynamics online are **hostName** and **por
| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. If no value is specified, the property uses the default Azure integration runtime. | No | >[!Note]
->Due to the sunset of Idf authentication type by **August 31, 2024**, please upgrade to Active Directory Authentication type before the date if you are currently using it.
+>Due to the sunset of the Ifd authentication type by **September 15, 2024**, please upgrade to the Active Directory authentication type before that date if you're currently using it.
#### Example: Dynamics on-premises with IFD using Active Directory authentication
data-factory Connector Quickbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-quickbooks.md
description: Learn how to copy data from QuickBooks Online to supported sink dat
-+
The following properties are supported for QuickBooks linked service:
When you use the QuickBooks Online connector in a linked service, it's important to manage OAuth 2.0 refresh tokens from QuickBooks correctly. The linked service uses a refresh token to obtain new access tokens. However, QuickBooks Online periodically updates the refresh token, invalidating the previous one. The linked service doesn't automatically update the refresh token in Azure Key Vault, so you need to update the refresh token yourself to ensure uninterrupted connectivity. Otherwise, you might encounter authentication failures once the refresh token expires.
-You can manually update the refresh token in Azure Key Vault based on QuickBooks Online's refresh token expiry policy. But another approach is to automate updates with a scheduled task or [Azure Function](/samples/azure/azure-quickstart-templates/functions-keyvault-secret-rotation) that checks for a new refresh token and updates it in Azure Key Vault.
+You can manually update the refresh token in Azure Key Vault based on QuickBooks Online's refresh token expiry policy. But another approach is to automate updates with a scheduled task or [Azure Function](https://github.com/Azure-Samples/serverless-keyvault-secret-rotation-handling) that checks for a new refresh token and updates it in Azure Key Vault.
## Dataset properties
data-factory Data Factory Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-factory-troubleshoot-guide.md
Title: General Troubleshooting
description: Learn how to troubleshoot external control activities in Azure Data Factory and Azure Synapse Analytics pipelines. --+ Last updated 05/15/2024
data-factory Data Movement Security Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-movement-security-considerations.md
Title: Security considerations
description: Describes basic security infrastructure that data movement services in Azure Data Factory use to help secure your data. -+ Last updated 01/05/2024
data-factory Enable Customer Managed Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/enable-customer-managed-key.md
Title: Encrypt Azure Data Factory with customer-managed key
description: Enhance Data Factory security with Bring Your Own Key (BYOK) -+ Last updated 10/20/2023
data-factory How To Manage Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-manage-settings.md
Title: Managing Azure Data Factory settings and preferences
description: Learn how to manage Azure Data Factory settings and preferences. --+ Last updated 01/05/2024
data-factory How To Manage Studio Preview Exp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-manage-studio-preview-exp.md
Title: Managing Azure Data Factory studio preview experience
description: Learn more about the Azure Data Factory studio preview experience. --+ Last updated 01/05/2024
data-factory Quota Increase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quota-increase.md
Title: Request quota increases from support description: How to create a support request in the Azure portal for Azure Data Factory to request quota increases or get problem resolution support.-+ - Last updated 05/15/2024
data-factory Solution Template Extract Data From Pdf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-extract-data-from-pdf.md
Title: Extract data from PDF
description: Learn how to use a solution template to extract data from a PDF source using Azure Data Factory. --+ Last updated 05/15/2024
data-factory Solution Template Pii Detection And Masking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-pii-detection-and-masking.md
Title: PII detection and masking
description: Learn how to use a solution template to detect and mask PII data using Azure Data Factory. --+ Last updated 01/05/2024
data-factory Tutorial Bulk Copy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-bulk-copy-portal.md
Title: Copy data in bulk using Azure portal
description: Use Azure Data Factory and Copy Activity to copy data from a source data store to a destination data store in bulk. --+ Last updated 05/15/2024
data-factory Tutorial Bulk Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-bulk-copy.md
Title: Copy data in bulk with PowerShell
description: Use Azure Data Factory with Copy Activity to copy data from a source data store to a destination data store in bulk. --+ Last updated 05/15/2024
data-factory Tutorial Copy Data Dot Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-copy-data-dot-net.md
Title: Copy data from Azure Blob Storage to Azure SQL Database description: 'This tutorial provides step-by-step instructions for copying data from Azure Blob Storage to Azure SQL Database.' --+ Last updated 05/15/2024
data-factory Tutorial Copy Data Portal Private https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-copy-data-portal-private.md
Title: Use private endpoints to create an Azure Data Factory pipeline description: This tutorial provides step-by-step instructions for using the Azure portal to create a data factory with a pipeline. The pipeline uses the copy activity to copy data from Azure Blob storage to an Azure SQL database. --+ Last updated 05/15/2024
data-factory Tutorial Copy Data Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-copy-data-portal.md
Title: Use the Azure portal to create a data factory pipeline description: This tutorial provides step-by-step instructions for using the Azure portal to create a data factory with a pipeline. The pipeline uses the copy activity to copy data from Azure Blob storage to Azure SQL Database. --+ Last updated 05/15/2024
data-factory Tutorial Copy Data Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-copy-data-tool.md
Title: Copy data from Azure Blob storage to SQL using Copy Data tool
description: Create an Azure Data Factory and then use the Copy Data tool to copy data from Azure Blob storage to a SQL Database. --+ Last updated 11/02/2023
data-factory Tutorial Push Lineage To Purview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-push-lineage-to-purview.md
Title: Push Data Factory lineage data to Microsoft Purview
description: Learn about how to push Data Factory lineage data to Microsoft Purview --+ Last updated 05/15/2024
databox Data Box Disk Deploy Upload Verify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-disk-deploy-upload-verify.md
To verify that the data has uploaded into Azure, take the following steps:
## Erasure of data from Data Box Disk
-Once the upload to Azure is complete, the Data Box Disk service erases the data on its disks as per the [NIST SP 800-88](https://csrc.nist.gov/News/2014/Released-SP-800-88-Revision-1,-Guidelines-for-Medi) standard.
+Once the upload to Azure is complete, the Data Box Disk service erases the data on its disks as per the [NIST SP 800-88](https://csrc.nist.gov/News/2014/Released-SP-800-88-Revision-1,-Guidelines-for-Medi) standard. After the erasure is complete, you can [Download the order history](data-box-portal-admin.md#download-order-history).
+ ::: zone target="docs"
defender-for-cloud Concept Data Security Posture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-data-security-posture.md
Title: Data security posture management
-description: Learn how Defender for Cloud helps improve data security posture in a multicloud environment.
-
+description: Explore how Microsoft Defender for Cloud enhances data security posture management across multicloud environments, ensuring comprehensive protection.
+ - Previously updated : 01/28/2024+ Last updated : 07/30/2024
+#customer intent: As a security professional, I want to understand how Defender for Cloud enhances data security in a multicloud environment so that I can effectively protect sensitive data.
+ # About data security posture management As digital transformation accelerates, organizations move data to the cloud at an exponential rate using multiple data stores such as object stores and managed/hosted databases. The dynamic and complex nature of the cloud increases data threat surfaces and risks. This causes challenges for security teams around data visibility and protecting the cloud data estate.
When you enable data security posture management capabilities with the sensitive
Changes in sensitivity settings take effect the next time that resources are discovered.
-## Next steps
+## Sensitive data discovery
+
+Sensitive data discovery identifies sensitive resources and their related risk and then helps to prioritize and remediate those risks.
+
+Defender for Cloud considers a resource sensitive if a Sensitive Information Type (SIT) is detected in it and the customer has configured the SIT to be considered sensitive. Defender for Cloud detects SITs that are considered sensitive by default.
+
+The sensitive data discovery process operates by taking samples of the resource's data. The sample data is then used to identify sensitive resources with high confidence without performing a full scan of all assets in the resource.
+
+The sensitive data discovery process is powered by the Microsoft Purview classification engine that uses a common set of SITs and labels for all datastores, regardless of their type or hosting cloud vendor.
+
+Sensitive data discovery detects the existence of sensitive data at the cloud workload level. Sensitive data discovery aims to identify various types of sensitive information, but it might not detect all types.
+
+To get complete data cataloging scanning results with all SITs available in the cloud resource, we recommend you use the scanning features from Microsoft Purview.
+
+### For cloud storage
+
+Defender for Cloud's scanning algorithm selects containers that might contain sensitive information and samples up to 20 MB for each file scanned within the container.
+
+### For cloud databases
+
+Defender for Cloud selects certain tables and samples between 300 and 1,024 rows using nonblocking queries.
+
+## Next step
-- [Prepare and review requirements](concept-data-security-posture-prepare.md) for data security posture management.-- [Understanding data security posture management - Defender for Cloud in the Field video](episode-thirty-one.md).
+> [!div class="nextstepaction"]
+> [Prepare and review requirements for data security posture management.](concept-data-security-posture-prepare.md)
defender-for-cloud Connect Azure Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/connect-azure-subscription.md
Microsoft Defender for Cloud is a cloud-native application protection platform (
- A cloud security posture management (CSPM) solution that surfaces actions that you can take to prevent breaches - A cloud workload protection platform (CWPP) with specific protections for servers, containers, storage, databases, and other workloads
-Defender for Cloud includes Foundational CSPM capabilities and access to [Microsoft Defender XDR](/microsoft-365/security/defender/microsoft-365-defender) for free. You can add additional paid plans to secure all aspects of your cloud resources. You can try Defender for Cloud for free for the first 30 days. After 30 days charges begin in accordance with the plans enabled in your environment. To learn more about these plans and their costs, see the Defender for Cloud [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
+Defender for Cloud includes Foundational CSPM capabilities and access to [Microsoft Defender XDR](/microsoft-365/security/defender/microsoft-365-defender) for free. You can add additional paid plans to secure all aspects of your cloud resources. You can try Defender for Cloud for free for the first 30 days. After 30 days, charges begin in accordance with the plans enabled in your environment. To learn more about these plans and their costs, see the Defender for Cloud [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
> [!IMPORTANT] > Malware scanning in Defender for Storage is not included for free in the first 30 day trial and will be charged from the first day in accordance with the pricing scheme available on the Defender for Cloud [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
defender-for-cloud Defender For Sql Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-sql-alerts.md
+
+ Title: Explore and investigate Defender for SQL security alerts
+description: Learn how to explore and investigate Defender for SQL security alerts in Microsoft Defender for Cloud.
+++ Last updated : 07/08/2024++
+# Explore and investigate Defender for SQL security alerts
+
+There are several ways to view Microsoft Defender for SQL alerts in Microsoft Defender for Cloud:
+
+- The **Alerts** page.
+
+- The machine's security page.
+
+- The [workload protections dashboard](workload-protections-dashboard.md).
+
+- Through the direct link provided in the alert's email.
+
+## How to view alerts
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Search for and select **Microsoft Defender for Cloud**.
+
+1. Select **Security alerts**.
+
+1. Select an alert.
+
+Alerts are designed to be self-contained, with detailed remediation steps and investigation information in each one. You can investigate further by using other Microsoft Defender for Cloud and Microsoft Sentinel capabilities for a broader view:
+
+- Enable SQL Server's auditing feature for further investigations. If you're a Microsoft Sentinel user, you can upload the SQL auditing logs from the Windows Security Log events to Sentinel and enjoy a rich investigation experience. [Learn more about SQL Server Auditing](/sql/relational-databases/security/auditing/create-a-server-audit-and-server-audit-specification?preserve-view=true&view=sql-server-ver15).
+
+- To improve your security posture, use Defender for Cloud's recommendations for the host machine indicated in each alert to reduce the risks of future attacks.
+
+[Learn more about managing and responding to alerts](managing-and-responding-alerts.yml).
+
+## Related content
+
+For related information, see these resources:
+
+- [Security alerts for SQL Database and Azure Synapse Analytics](alerts-sql-database-and-azure-synapse-analytics.md)
+- [Set up email notifications for security alerts](configure-email-notifications.md)
+- [Learn more about Microsoft Sentinel](../sentinel/index.yml)
defender-for-cloud Defender For Sql Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-sql-usage.md
Title: How to enable Microsoft Defender for SQL servers on machines
+ Title: Enable Microsoft Defender for SQL servers on machines
description: Learn how to protect your Microsoft SQL servers on Azure VMs, on-premises, and in hybrid and multicloud environments with Microsoft Defender for Cloud.
Defender for SQL servers on machines protects your SQL servers hosted in Azure,
|Protected SQL versions:|SQL Server version: 2012, 2014, 2016, 2017, 2019, 2022 <br>- [SQL on Azure virtual machines](/azure/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview)<br>- [SQL Server on Azure Arc-enabled servers](/sql/sql-server/azure-arc/overview)<br><br>| |Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Microsoft Azure operated by 21Vianet **(Advanced Threat Protection Only)**|
-## Set up Microsoft Defender for SQL servers on machines
+## Enable Defender for SQL on non-Azure machines using the AMA agent
-The Defender for SQL server on machines plan requires Microsoft Monitoring Agent (MMA) or Azure Monitoring Agent (AMA) to prevent attacks and detect misconfigurations. The planΓÇÖs autoprovisioning process is automatically enabled with the plan and is responsible for the configuration of all of the agent components required for the plan to function. This includes installation and configuration of MMA/AMA, workspace configuration, and the installation of the planΓÇÖs VM extension/solution.
+### Prerequisites for enabling Defender for SQL on non-Azure machines
-Microsoft Monitoring Agent (MMA) is set to be retired in August 2024. Defender for Cloud [updated its strategy](upcoming-changes.md#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation) and released a SQL Server-targeted Azure Monitoring Agent (AMA) autoprovisioning process to replace the Microsoft Monitoring Agent (MMA) process which is set to be deprecated. Learn more about the [AMA for SQL server on machines autoprovisioning process](defender-for-sql-autoprovisioning.md) and how to migrate to it.
+- An active Azure subscription.
+- **Subscription owner** permissions on the subscription in which you wish to assign the policy.
-> [!NOTE]
-> Customers who are currently using the **Log Analytics agent/Azure Monitor agent** processes will be asked to [migrate to the AMA for SQL server on machines autoprovisioning process](defender-for-sql-autoprovisioning.md).
+- SQL Server on machines prerequisites:
+ - **Permissions**: the Windows user operating the SQL server must have the **Sysadmin** role on the database.
+ - **Extensions**: The following extensions should be added to the allowlist:
+ - Defender for SQL (IaaS and Arc):
+ - Publisher: Microsoft.Azure.AzureDefenderForSQL
+ - Type: AdvancedThreatProtection.Windows
+ - SQL IaaS Extension (IaaS):
+ - Publisher: Microsoft.SqlServer.Management
+ - Type: SqlIaaSAgent
+ - SQL IaaS Extension (Arc):
+ - Publisher: Microsoft.AzureData
+ - Type: WindowsAgent.SqlServer
+ - AMA extension (IaaS and Arc):
+ - Publisher: Microsoft.Azure.Monitor
+ - Type: AzureMonitorWindowsAgent
-**To enable the plan on a subscription**:
+### Naming conventions in the Deny policy allowlist
+
+- Defender for SQL uses the following naming convention when creating our resources:
+
+ - DCR: `MicrosoftDefenderForSQL--dcr`
+ - DCRA: `/Microsoft.Insights/MicrosoftDefenderForSQL-RulesAssociation`
+ - Resource group: `DefaultResourceGroup-`
+ - Log analytics workspace: `D4SQL--`
+
+- Defender for SQL uses *MicrosoftDefenderForSQL* as a *createdBy* database tag.
+
+### Steps to enable Defender for SQL on non-Azure machines
+
+1. Connect SQL server to Azure Arc. For more information on the supported operating systems, connectivity configuration, and required permissions, see the following documentation:
+
+ - [Plan and deploy Azure Arc-enabled servers](/azure/azure-arc/servers/plan-at-scale-deployment)
+ - [Connected Machine agent prerequisites](/azure/azure-arc/servers/prerequisites)
+ - [Connected Machine agent network requirements](/azure/azure-arc/servers/network-requirements)
+ - [Roles specific to SQL Server enabled by Azure Arc](/sql/relational-databases/security/authentication-access/server-level-roles#roles-specific-to-sql-server-enabled-by-azure-arc)
+
+1. Once Azure Arc is installed, the Azure extension for SQL Server is installed automatically on the database server. For more information, see [Manage automatic connection for SQL Server enabled by Azure Arc](/sql/sql-server/azure-arc/manage-autodeploy).
+
+### Enable Defender for SQL
1. Sign in to the [Azure portal](https://portal.azure.com).
Microsoft Monitoring Agent (MMA) is set to be retired in August 2024. Defender f
1. Select **Save**.
-1. **(Optional)** Configure advanced autoprovisioning settings:
- 1. Navigate to the **Environment settings** page.
+1. Once enabled, we use one of the following policy initiatives:
+ - Configure SQL VMs and Arc-enabled SQL servers to install Microsoft Defender for SQL and AMA with a Log analytics workspace (LAW) for a default LAW. This creates resource groups with data collection rules and a default Log Analytics workspace. For more information about the Log Analytics workspace, see [Log Analytics workspace overview](/azure/azure-monitor/logs/log-analytics-workspace-overview).
+
+ :::image type="content" source="media/defender-for-sql-usage/default-log-analytics-workspace.png" alt-text="Screenshot of how to configure default log analytics workspace." lightbox="media/defender-for-sql-usage/default-log-analytics-workspace.png":::
+
+ - Configure SQL VMs and Arc-enabled SQL servers to install Microsoft Defender for SQL and AMA with a user-defined LAW. This creates a resource group with data collection rules and a custom Log Analytics workspace in the predefined region. During this process, we install the Azure Monitor Agent. For more information about the options to install the AMA agent, see [Azure Monitor Agent prerequisites](/azure/azure-monitor/agents/azure-monitor-agent-manage#prerequisites).
- 1. Select **Settings & monitoring**.
- - For customers using the new autoprovisioning process, select **Edit configuration** for the **Azure Monitoring Agent for SQL server on machines** component.
- - For customers using the previous autoprovisioning process, select **Edit configuration** for the **Log Analytics agent/Azure Monitor agent** component.
+ :::image type="content" source="media/defender-for-sql-usage/user-defined-log-analytics-workspace.png" alt-text="Screenshot of how to configure user-defined log analytics workspace." lightbox="media/defender-for-sql-usage/user-defined-log-analytics-workspace.png":::
-**To enable the plan on a SQL VM/Arc-enabled SQL Server**:
+1. To complete the installation process, a restart of the SQL server (instance) is necessary for versions 2017 and older.
+
+## Enable Defender for SQL on Azure virtual machines using the AMA agent
+
+### Prerequisites for enabling Defender for SQL on Azure virtual machines
+
+- An active Azure subscription.
+- **Subscription owner** permissions on the subscription in which you wish to assign the policy.
+- SQL Server on machines prerequisites:
+ - **Permissions**: the Windows user operating the SQL server must have the **Sysadmin** role on the database.
+ - **Extensions**: The following extensions should be added to the allowlist:
+ - Defender for SQL (IaaS and Arc):
+ - Publisher: Microsoft.Azure.AzureDefenderForSQL
+ - Type: AdvancedThreatProtection.Windows
+ - SQL IaaS Extension (IaaS):
+ - Publisher: Microsoft.SqlServer.Management
+ - Type: SqlIaaSAgent
+ - SQL IaaS Extension (Arc):
+ - Publisher: Microsoft.AzureData
+ - Type: WindowsAgent.SqlServer
+ - AMA extension (IaaS and Arc):
+ - Publisher: Microsoft.Azure.Monitor
+ - Type: AzureMonitorWindowsAgent
+- Since we create a resource group in *East US* as part of the autoprovisioning enablement process, this region needs to be allowed; otherwise, Defender for SQL can't complete the installation process successfully.
+
+### Steps to enable Defender for SQL on Azure virtual machines
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Navigate to your SQL VM/Arc-enabled SQL Server.
+1. Search for and select **Microsoft Defender for Cloud**.
-1. In the SQL VM/Arc-enabled SQL Server menu, under Security, selectΓÇ»**Microsoft Defender for Cloud**.
+1. In the Defender for Cloud menu, select **Environment settings**.
-1. In the Microsoft Defender for SQL server on machines section, select **Enable**.
+1. Select the relevant subscription.
-## Explore and investigate security alerts
+1. On the Defender plans page, locate the Databases plan and select **Select types**.
-There are several ways to view Microsoft Defender for SQL alerts in Microsoft Defender for Cloud:
+ :::image type="content" source="media/tutorial-enabledatabases-plan/select-types.png" alt-text="Screenshot that shows you where to select types on the Defender plans page." lightbox="media/tutorial-enabledatabases-plan/select-types.png":::
-- The Alerts page.
+1. In the Resource types selection window, toggle the **SQL servers on machines** plan to **On**.
-- The machine's security page.
+1. Select **Continue**.
-- The [workload protections dashboard](workload-protections-dashboard.md).
+1. Select **Save**.
-- Through the direct link provided in the alert's email.
+1. Once enabled, we use one of the following policy initiatives:
+ - Configure SQL VMs and Arc-enabled SQL servers to install Microsoft Defender for SQL and AMA with a Log analytics workspace (LAW) for a default LAW. This creates a resource group in *East US* and a managed identity. For more information about the use of the managed identity, see [Resource Manager template samples for agents in Azure Monitor](/azure/azure-monitor/agents/resource-manager-agent). It also creates a resource group that includes a data collection rule (DCR) and a default LAW. All resources are consolidated under this single resource group. The DCR and LAW are created to align with the region of the virtual machine (VM).
-**To view alerts**:
+ :::image type="content" source="media/defender-for-sql-usage/default-log-analytics-workspace.png" alt-text="Screenshot of how to configure default log analytics workspace." lightbox="media/defender-for-sql-usage/default-log-analytics-workspace.png":::
-1. Sign in to the [Azure portal](https://portal.azure.com).
+ - Configure SQL VMs and Arc-enabled SQL servers to install Microsoft Defender for SQL and AMA with a user-defined LAW. This creates a resource group in *East US* and a managed identity. For more information about the use of the managed identity, see [Resource Manager template samples for agents in Azure Monitor](/azure/azure-monitor/agents/resource-manager-agent). It also creates a resource group with a DCR and a custom LAW in the predefined region.
-1. Search for and select **Microsoft Defender for Cloud**.
+ :::image type="content" source="media/defender-for-sql-usage/user-defined-log-analytics-workspace.png" alt-text="Screenshot of how to configure user-defined log analytics workspace." lightbox="media/defender-for-sql-usage/user-defined-log-analytics-workspace.png":::
+
+1. For SQL Server versions 2017 and older, restart the SQL Server instance to complete the installation process.
+
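If you prefer to set the plan from PowerShell instead of the portal, the subscription-level toggle can be scripted with the Az.Security module. The following is a minimal sketch; it assumes the plan's pricing name is `SqlServerVirtualMachines` and it only sets the plan tier, so the portal flow above (or the at-scale script) is still what configures the autoprovisioning components:

```powershell
# Minimal sketch: enable the SQL servers on machines plan on the current
# subscription. Assumes the Az.Security module and that the plan's pricing
# name is "SqlServerVirtualMachines".
Set-AzSecurityPricing -Name "SqlServerVirtualMachines" -PricingTier "Standard"

# Confirm the plan tier.
Get-AzSecurityPricing -Name "SqlServerVirtualMachines" | Select-Object Name, PricingTier
```
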
+## Common questions
+
+### Once the deployment is done, how long do we need to wait to see a successful deployment?
+
+The SQL IaaS extension takes approximately 30 minutes to update the protection status, assuming all prerequisites are fulfilled.
+
+### How do I verify that my deployment ended successfully and that my database is now protected?
+
+1. In the Azure portal, search for and select the database in the top search bar.
+1. Under the **Security** tab, select **Defender for Cloud**.
+1. Check the **Protection status**. If the status is **Protected**, the deployment was successful.
++
+### What is the purpose of the managed identity created during the installation process on Azure SQL VMs?
+
+The managed identity is part of the Azure Policy, which pushes out the AMA. It's used by the AMA to access the database to collect the data and send it via the Log Analytics Workspace (LAW) to Defender for Cloud. For more information about the use of the managed identity, see [Resource Manager template samples for agents in Azure Monitor](/azure/azure-monitor/agents/resource-manager-agent).
+
+### Can I use my own DCR or managed-identity instead of Defender for Cloud creating a new one?
+
+Yes, you can bring your own managed identity or DCR, but only by using the dedicated enablement script. For more information, see [Enable Microsoft Defender for SQL servers on machines at scale](enable-defender-sql-at-scale.md).
+
+### How can I enable SQL servers on machines with AMA at scale?
+
+See [Enable Microsoft Defender for SQL servers on machines at scale](enable-defender-sql-at-scale.md) to learn how to enable Microsoft Defender for SQL's autoprovisioning across multiple subscriptions simultaneously. It applies to SQL servers hosted on Azure Virtual Machines, on-premises environments, and Azure Arc-enabled SQL servers.
+
+### Which tables are used in LAW with AMA?
-1. Select **Security alerts**.
+Defender for SQL on SQL VMs and Arc-enabled SQL servers uses the Log Analytics workspace (LAW) to transfer data from the database to the Defender for Cloud portal; no data is saved in the LAW itself. The *SQLAtpStatus* and *SqlVulnerabilityAssessmentScanStatus* tables in the LAW will be retired [when MMA is deprecated](/azure/azure-monitor/agents/azure-monitor-agent-migration). ATP and VA status can be viewed in the Defender for Cloud portal.
-1. Select an alert.
+### How does Defender for SQL collect logs from the SQL server?
-Alerts are designed to be self-contained, with detailed remediation steps and investigation information in each one. You can investigate further by using other Microsoft Defender for Cloud and Microsoft Sentinel capabilities for a broader view:
+Beginning with SQL Server 2017, Defender for SQL collects logs by using Extended Events (XEvents). On earlier versions of SQL Server, Defender for SQL collects the logs by using SQL Server audit logs.
-- Enable SQL Server's auditing feature for further investigations. If you're a Microsoft Sentinel user, you can upload the SQL auditing logs from the Windows Security Log events to Sentinel and enjoy a rich investigation experience. [Learn more about SQL Server Auditing](/sql/relational-databases/security/auditing/create-a-server-audit-and-server-audit-specification?preserve-view=true&view=sql-server-ver15).
+### I see a parameter named enableCollectionOfSqlQueriesForSecurityResearch in the policy initiative. Does this mean that my data is collected for analysis?
-- To improve your security posture, use Defender for Cloud's recommendations for the host machine indicated in each alert to reduce the risks of future attacks.
-
-[Learn more about managing and responding to alerts](managing-and-responding-alerts.yml).
+This parameter isn't in use today. Its default value is *false*, meaning that unless you proactively change the value, it remains false. There's no effect from this parameter.
-## Next steps
+## Related content
For related information, see these resources: - [How Microsoft Defender for Azure SQL can protect SQL servers anywhere](https://www.youtube.com/watch?v=V7RdB6RSVpc). - [Security alerts for SQL Database and Azure Synapse Analytics](alerts-sql-database-and-azure-synapse-analytics.md)-- [Set up email notifications for security alerts](configure-email-notifications.md)-- [Learn more about Microsoft Sentinel](../sentinel/index.yml) - Check out [common questions](faq-defender-for-databases.yml) about Defender for Databases.
defender-for-cloud Enable Defender Sql At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-defender-sql-at-scale.md
+
+ Title: How to enable Microsoft Defender for SQL servers on machines at scale
+description: Learn how to protect your Microsoft SQL servers on Azure VMs, on-premises, and in hybrid and multicloud environments with Microsoft Defender for Cloud at scale.
+++ Last updated : 07/31/2024
+#customer intent: As a user, I want to learn how to enable Defender for SQL servers at scale so that I can protect my SQL servers efficiently.
++
+# Enable Microsoft Defender for SQL servers on machines at scale
+
+The SQL servers on machines component of Microsoft Defender for Cloud's Defender for Databases plan protects SQL servers by using the SQL IaaS and Defender for SQL extensions. It identifies and mitigates potential database vulnerabilities and detects anomalous activity that could indicate a threat to your databases.
+
+When you enable the [SQL servers on machines](tutorial-enable-databases-plan.md#enable-specific-plans-database-protections) component of the Defender for Databases plan, the auto-provisioning process is initiated automatically. The auto-provisioning process installs and configures all the components necessary for the plan to function, such as the Azure Monitor Agent (AMA), the SQL IaaS extension, and the Defender for SQL extension. It also sets up the workspace configuration, data collection rules, and identity (if needed).
+
+This article explains how to enable the auto-provisioning process for Defender for SQL across multiple subscriptions simultaneously by using a PowerShell script. The process applies to SQL servers hosted on Azure VMs, on-premises environments, and Azure Arc-enabled SQL servers. This article also covers extra functionality that accommodates various configurations, such as:
+
+- Custom data collection rules
+
+- Custom identity management
+
+- Default workspace integration
+
+- Custom workspace configuration
+
+## Prerequisites
+
+- Gain knowledge on:
+ - [SQL server on VMs](https://azure.microsoft.com/products/virtual-machines/sql-server/)
+ - [SQL Server enabled by Azure Arc](/sql/sql-server/azure-arc/overview)
+ - [How to migrate to Azure Monitor Agent from Log Analytics agent](../azure-monitor/agents/azure-monitor-agent-migration.md)
+
+- [Connect AWS accounts to Microsoft Defender for Cloud](quickstart-onboard-aws.md)
+- [Connect your GCP project to Microsoft Defender for Cloud](quickstart-onboard-gcp.md)
+
+- Install PowerShell on [Windows](/powershell/scripting/install/installing-powershell-on-windows), [Linux](/powershell/scripting/install/installing-powershell-on-linux), [macOS](/powershell/scripting/install/installing-powershell-on-macos), or [Azure Resource Manager (ARM)](/powershell/scripting/install/powershell-on-arm).
+- [Install the following PowerShell modules](/powershell/module/powershellget/install-module):
+ - Az.Resources
+ - Az.OperationalInsights
+ - Az.Accounts
+ - Az
+ - Az.PolicyInsights
+ - Az.Security
+
+- Permissions: requires the VM Contributor, Contributor, or Owner role.
+
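If the modules aren't already present, they can be installed in one call. The following is a minimal sketch, assuming installation from the PowerShell Gallery for the current user:

```powershell
# Minimal sketch: install the required Az modules for the current user.
Install-Module -Name Az, Az.Resources, Az.OperationalInsights, Az.Accounts, Az.PolicyInsights, Az.Security -Scope CurrentUser -Repository PSGallery
```
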
+## PowerShell script parameters and samples
+
+The PowerShell script that enables Microsoft Defender for SQL on Machines on a given subscription has several parameters that you can customize to fit your needs. The following table lists the parameters and their descriptions:
+
+| Parameter name | Required | Description |
+|--|--|--|
+| SubscriptionId | Required | The Azure subscription ID that you want to enable Defender for SQL servers on machines for. |
+| RegisterSqlVmAgnet | Required | A flag indicating whether to register the SQL VM Agent in bulk. <br><br> Learn more about [registering multiple SQL VMs in Azure with the SQL IaaS Agent extension](/azure/azure-sql/virtual-machines/windows/sql-agent-extension-manually-register-vms-bulk?view=azuresql). |
+| WorkspaceResourceId | Optional | The resource ID of the Log Analytics workspace, if you want to use a custom workspace instead of the default one. |
+| DataCollectionRuleResourceId | Optional | The resource ID of the data collection rule, if you want to use a custom DCR instead of the default one. |
+| UserAssignedIdentityResourceId | Optional | The resource ID of the user assigned identity, if you want to use a custom user assigned identity instead of the default one. |
+
+The following sample script is applicable when you use a default Log Analytics workspace, data collection rule, and managed identity.
+
+```powershell
+Write-Host " Enable Defender for SQL on Machines example "
+$SubscriptionId = "<SubscriptionID>"
+# Define the required RegisterSqlVmAgnet flag; set it to "true" to bulk-register the SQL IaaS Agent extension.
+$RegisterSqlVmAgnet = "false"
+.\EnableDefenderForSqlOnMachines.ps1 -SubscriptionId $SubscriptionId -RegisterSqlVmAgnet $RegisterSqlVmAgnet
+```
+
+The following sample script is applicable when you use a custom Log Analytics workspace, data collection rule, and managed identity.
+
+```powershell
+Write-Host " Enable Defender for SQL on Machines example "
+$SubscriptionId = "<SubscriptionID>"
+$RegisterSqlVmAgnet = "false"
+$WorkspaceResourceId = "/subscriptions/<SubscriptionID>/resourceGroups/someResourceGroup/providers/Microsoft.OperationalInsights/workspaces/someWorkspace"
+$DataCollectionRuleResourceId = "/subscriptions/<SubscriptionID>/resourceGroups/someOtherResourceGroup/providers/Microsoft.Insights/dataCollectionRules/someDcr"
+$UserAssignedIdentityResourceId = "/subscriptions/<SubscriptionID>/resourceGroups/someElseResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/someManagedIdentity"
+.\EnableDefenderForSqlOnMachines.ps1 -SubscriptionId $SubscriptionId -RegisterSqlVmAgnet $RegisterSqlVmAgnet -WorkspaceResourceId $WorkspaceResourceId -DataCollectionRuleResourceId $DataCollectionRuleResourceId -UserAssignedIdentityResourceId $UserAssignedIdentityResourceId
+```
+
+## Enable Defender for SQL servers on machines at scale
+
+You can enable Defender for SQL servers on machines at scale by following these steps.
+
+1. Open a PowerShell window.
+
+1. Copy the [EnableDefenderForSqlOnMachines.ps1](https://github.com/Azure/Microsoft-Defender-for-Cloud/blob/fd04330a79a4bcd48424bf7a4058f44216bc40e4/Powershell%20scripts/Enable%20Defender%20for%20SQL%20servers%20on%20machines/EnableDefenderForSqlOnMachines.ps1) script.
+
+1. Paste the script into PowerShell.
+
+1. Enter parameter information as needed.
+
+1. Run the script.
+
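To run the enablement across several subscriptions in one pass, you can wrap the script in a loop. The following is a minimal sketch, assuming the script is saved locally as `EnableDefenderForSqlOnMachines.ps1`, the subscription IDs are placeholders, and your account has the required role on each subscription:

```powershell
# Minimal sketch: run the enablement script once per subscription.
Connect-AzAccount

$subscriptionIds = @(
    "00000000-0000-0000-0000-000000000000",
    "11111111-1111-1111-1111-111111111111"
)

foreach ($subscriptionId in $subscriptionIds) {
    Write-Host "Enabling Defender for SQL servers on machines for subscription $subscriptionId"
    .\EnableDefenderForSqlOnMachines.ps1 -SubscriptionId $subscriptionId -RegisterSqlVmAgnet "false"
}
```
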
+## Next step
+
+> [!div class="nextstepaction"]
+> [Scan your SQL servers for vulnerabilities](defender-for-sql-on-machines-vulnerability-assessment.md)
defender-for-cloud Recommendations Reference Ai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference-ai.md
This recommendation replaces the old recommendation *Cognitive Services accounts
**Severity**: Medium
+### [Azure AI Services resources should use Azure Private Link](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/54f53ddf-6ebd-461e-a247-394c542bc5d1)
+
+**Description**: Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform reduces data leakage risks by handling the connectivity between the consumer and services over the Azure backbone network.
+
+Learn more about private links at: [https://aka.ms/AzurePrivateLink/Overview](https://aka.ms/AzurePrivateLink/Overview)
+
+This recommendation replaces the old recommendation *Cognitive Services should use private link*. It was formerly in category Data recommendations, and was updated to comply with the Azure AI Services naming format and align with the relevant resources.
+
+**Severity**: Medium
++ ### [(Enable if required) Azure AI Services resources should encrypt data at rest with a customer-managed key (CMK)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/18bf29b3-a844-e170-2826-4e95d0ba4dc9/showSecurityCenterCommandBar~/false) **Description**: Using customer-managed keys to encrypt data at rest provides more control over the key lifecycle, including rotation and management. This is particularly relevant for organizations with related compliance requirements.
This recommendation replaces the old recommendation *Cognitive services accounts
**Severity**: Low
+### [Diagnostic logs in Azure AI services resources should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dea5192e-1bb3-101b-b70c-4646546f5e1e)
+
+**Description**: Enable logs for Azure AI services resources. This enables you to recreate activity trails for investigation purposes, when a security incident occurs or your network is compromised.
+
+This recommendation replaces the old recommendation *Diagnostic logs in Search services should be enabled*. It was formerly in the category Cognitive Services and Cognitive Search, and was updated to comply with the Azure AI Services naming format and align with the relevant resources.
+
+**Severity**: Low
+ ### Resource logs in Azure Machine Learning Workspaces should be enabled (Preview) **Description & related policy**: Resource logs enable recreating activity trails to use for investigation purposes when a security incident occurs or when your network is compromised.
This recommendation replaces the old recommendation *Cognitive services accounts
**Severity**: Medium
-### [Diagnostic logs in Azure AI services resources should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dea5192e-1bb3-101b-b70c-4646546f5e1e)
-
-**Description**: Enable logs for Azure AI services resources. This enables you to recreate activity trails for investigation purposes, when a security incident occurs or your network is compromised.
-
-This recommendation replaces the old recommendation *Diagnostic logs in Search services should be enabled*. It was formerly in the category Cognitive Services and Cognitive Search, and was updated to comply with the Azure AI Services naming format and align with the relevant resources.
-
-**Severity**: Low
- ### Resource logs in Azure Databricks Workspaces should be enabled (Preview) **Description & related policy**: Resource logs enable recreating activity trails to use for investigation purposes when a security incident occurs or when your network is compromised.
defender-for-cloud Recommendations Reference Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference-data.md
Secure your storage account with greater flexibility using customer-managed keys
**Severity**: Low
-### [Cognitive Services should use private link](recommendations-reference-data.md#cognitive-services-should-use-private-link)
-
-**Description**: Azure Private Link lets you connect your virtual networks to Azure services without a public IP address at the source or destination. The Azure Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Cognitive Services, you'll reduce the potential for data leakage. Learn more about [private links](../private-link/private-link-overview.md). (Related policy: Cognitive Services should use private link).
-
-**Severity**: Medium
-- ### [Diagnostic logs in Azure Data Lake Store should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ad5bbaeb-7632-5edf-f1c2-752075831ce8) **Description**: Enable logs and retain them for up to a year. This enables you to recreate activity trails for investigation purposes when a security incident occurs or your network is compromised.
defender-for-cloud Release Notes Recommendations Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-recommendations-alerts.md
This article summarizes what's new in security recommendations and alerts in Mic
- Review a complete list of multicloud security recommendations and alerts: - [AI recommendations](/azure/defender-for-cloud/recommendations-reference-ai)
-
- [Compute recommendations](recommendations-reference-compute.md)
-
+ - [Container recommendations](recommendations-reference-container.md) - [Data recommendations](recommendations-reference-data.md) - [DevOps recommendations](recommendations-reference-devops.md)
New and updated recommendations and alerts are added to the table in date order.
| **Date** | **Type** | **State** | **Name** | | -- | | | |
-| July 30 | Recommendation | Preview | [AWS Bedrock should use AWS PrivateLink](recommendations-reference-ai.md#aws-bedrock-should-use-aws-privatelink) |
+|July 31|Recommendation|Update|[Azure AI Services resources should use Azure Private Link](/azure/defender-for-cloud/recommendations-reference-ai)|
+|July 31|Recommendation|GA|[EDR solution should be installed on Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/06e3a6db-6c0c-4ad9-943f-31d9d73ecf6c)|
+|July 31|Recommendation|GA|[EDR solution should be installed on EC2s](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/77d09952-2bc2-4495-8795-cc8391452f85)|
+|July 31|Recommendation|GA|[EDR solution should be installed on GCP Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/68e595c1-a031-4354-b37c-4bdf679732f1)|
+|July 31|Recommendation|GA|[EDR configuration issues should be resolved on virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dc5357d0-3858-4d17-a1a3-072840bff5be)|
+|July 31|Recommendation|GA|[EDR configuration issues should be resolved on EC2s](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/695abd03-82bd-4d7f-a94c-140e8a17666c)|
+|July 31|Recommendation|GA|[EDR configuration issues should be resolved on GCP virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/f36a15fb-61a6-428c-b719-6319538ecfbc)|
+| July 31 | Recommendation | Upcoming deprecation | [Adaptive network hardening recommendations should be applied on internet facing virtual machines](recommendations-reference-networking.md#adaptive-network-hardening-recommendations-should-be-applied-on-internet-facing-virtual-machines) |
+| July 31 | Alert | Upcoming deprecation | [Traffic detected from IP addresses recommended for blocking](alerts-azure-network-layer.md#traffic-detected-from-ip-addresses-recommended-for-blocking) |
+| July 30 |Recommendation | Preview | [AWS Bedrock should use AWS PrivateLink](recommendations-reference-ai.md#aws-bedrock-should-use-aws-privatelink) |
|July 22|Recommendation|Update|[(Enable if required) Azure AI Services resources should encrypt data at rest with a customer-managed key (CMK)](/azure/defender-for-cloud/recommendations-reference-ai)| | June 28 | Recommendation | GA | [Azure DevOps repositories should require minimum two-reviewer approval for code pushes](recommendations-reference-devops.md#preview-azure-devops-repositories-should-require-minimum-two-reviewer-approval-for-code-pushes) | | June 28 | Recommendation | GA | [Azure DevOps repositories should not allow requestors to approve their own Pull Requests](recommendations-reference-devops.md#preview-azure-devops-repositories-should-not-allow-requestors-to-approve-their-own-pull-requests) |
New and updated recommendations and alerts are added to the table in date order.
## Related content For information about new features, see [What's new in Defender for Cloud features](release-notes.md).+
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
This article summarizes what's new in Microsoft Defender for Cloud. It includes
| Date | Category | Update | | - | | |
+| July 31 | GA | [General availability of enhanced discovery and configuration recommendations for endpoint protection](#general-availability-of-enhanced-discovery-and-configuration-recommendations-for-endpoint-protection) |
+| July 31 | Upcoming update | [Adaptive network hardening deprecation](#adaptive-network-hardening-deprecation) |
| July 22 | Preview | [Security assessments for GitHub no longer requires additional licensing](#preview-security-assessments-for-github-no-longer-requires-additional-licensing) | | July 18 | Upcoming update | [Updated timelines toward MMA deprecation in Defender for Servers Plan 2](#updated-timelines-toward-mma-deprecation-in-defender-for-servers-plan-2) | | July 18 | Upcoming update | [Deprecation of MMA-related features as part of agent retirement](#deprecation-of-mma-related-features-as-part-of-agent-retirement) |
This article summarizes what's new in Microsoft Defender for Cloud. It includes
| July 9 | Upcoming update | [Inventory experience improvement](#inventory-experience-improvement) | | July 8 | Upcoming update | [Container mapping tool to run by default in GitHub](#container-mapping-tool-to-run-by-default-in-github) |
+### General availability of enhanced discovery and configuration recommendations for endpoint protection
+
+July 31, 2024
+
+Improved discovery features for endpoint protection solutions and enhanced identification of configuration issues are now GA and available for multicloud servers. These updates are included in the Defender for Servers Plan 2 and Defender Cloud Security Posture Management (CSPM).
+
+The enhanced recommendations feature uses [agentless machine scanning](/azure/defender-for-cloud/concept-agentless-data-collection), enabling comprehensive discovery and assessment of the configuration of [supported endpoint detection and response solutions](/azure/defender-for-cloud/endpoint-detection-response). When configuration issues are identified, remediation steps are provided.
+
+With this general availability release, the list of [supported solutions](/azure/defender-for-cloud/endpoint-detection-response) is expanded to include two more endpoint detection and response tools:
+
+- Singularity Platform by SentinelOne
+- Cortex XDR
+
+### Adaptive network hardening deprecation
+
+July 31, 2024
+
+**Estimated date for change: August 31, 2024**
+
+Adaptive network hardening in Defender for Servers is being deprecated.
+
+The feature deprecation includes the following experiences:
+
+- **Recommendation**: [Adaptive network hardening recommendations should be applied on internet facing virtual machines](recommendations-reference-networking.md#adaptive-network-hardening-recommendations-should-be-applied-on-internet-facing-virtual-machines) [assessment Key: f9f0eed0-f143-47bf-b856-671ea2eeed62]
+- **Alert**: [Traffic detected from IP addresses recommended for blocking](alerts-azure-network-layer.md#traffic-detected-from-ip-addresses-recommended-for-blocking)
+ ### Preview: Security assessments for GitHub no longer requires additional licensing July 22, 2024
July 18, 2024
**Estimated date for change**: August 2024 - With the [upcoming deprecation of Log Analytics agent in August](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/microsoft-defender-for-cloud-strategy-and-plan-towards-log/ba-p/3883341), all security value for server protection in Defender for Cloud will rely on integration with Microsoft Defender for Endpoint (MDE) as a single agent and on agentless capabilities provided by the cloud platform and agentless machine scanning.
-The following capabilities have updated timelines and plans, thus the support for them over MMA will be extended for Defender for Cloud customers to the end of November 2024:
+The following capabilities have updated timelines and plans, thus the support for them over MMA will be extended for Defender for Cloud customers to the end of November 2024:
-- **File Integrity Monitoring (FIM):** Public preview release for FIM new version over MDE is planned for __August 2024__. The GA version of FIM powered by Log Analytics agent will continue to be supported for existing customers until the end of __November 2024__.
+- **File Integrity Monitoring (FIM):** Public preview release for FIM new version over MDE is planned for **August 2024**. The GA version of FIM powered by Log Analytics agent will continue to be supported for existing customers until the end of **November 2024**.
-- **Security Baseline:** as an alternative to the version based on MMA, the current preview version based on Guest Configuration will be released to general availability in __September 2024.__ OS Security Baselines powered by Log Analytics agent will continue to be supported for existing customers until the end of **November 2024.**
+- **Security Baseline:** as an alternative to the version based on MMA, the current preview version based on Guest Configuration will be released to general availability in **September 2024.** OS Security Baselines powered by Log Analytics agent will continue to be supported for existing customers until the end of **November 2024.**
For more information, see [Prepare for retirement of the Log Analytics agent](prepare-deprecation-log-analytics-mma-agent.md).
dev-box Concept Dev Box Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/concept-dev-box-role-based-access-control.md
+
+ Title: Azure role-based access control
+
+description: Learn how Microsoft Dev Box provides protection with Azure role-based access control (Azure RBAC) integration.
++++ Last updated : 07/31/2024+
+#Customer intent: As a platform engineer, I want to understand how to assign permissions in Dev Box so that I can give dev managers and developers only the permissions they need.
+
+# Azure role-based access control in Microsoft Dev Box
+
+This article describes the different built-in roles that Microsoft Dev
+Box supports, and how they map to organizational roles like platform
+engineer and dev manager.
+
+Azure role-based access control (RBAC) specifies built-in role
+definitions that outline the permissions to be applied. You assign a
+user or group this role definition via a role assignment for a
+particular scope. The scope can be an individual resource, a resource
+group, or across the subscription. In the next section, you learn which
+[built-in roles](#built-in-roles) Microsoft Dev Box supports.
+
+For more information, see [What is Azure role-based access control (Azure RBAC)?](/azure/role-based-access-control/overview)
+
+> [!Note]
+> When you make role assignment changes, it can take a few minutes for these updates to propagate.
+
+## Built-in roles
+
+In this article, the Azure built-in roles are logically grouped into
+three organizational role types, based on their scope of influence:
+
+- Platform engineer roles: influence permissions for dev centers,
+ catalogs, and projects
+
+- Dev manager roles: influence permissions for projects and dev boxes
+
+- Developer roles: influence permissions for users
+
+The following are the built-in roles supported by Microsoft Dev Box:
+
+| Organizational role type | Built-in role | Description |
+|--|--||
+| Platform engineer | Owner | Grant full control to create/manage dev centers, catalogs, and projects, and grant permissions to other users. Learn more about the [Owner role](#owner-role). |
+| Platform engineer | Contributor | Grant full control to create/manage dev centers, catalogs, and projects, except for assigning roles to other users. Learn more about the [Contributor role](#contributor-role). |
+| Dev Manager | DevCenter Project Admin | Grant permission to manage certain aspects of projects and dev boxes. Learn more about the [DevCenter Project Admin role](#devcenter-project-admin-role). |
+| Developer | Dev Box User | Grant permission to create dev boxes and have full control over the dev boxes that they create. Learn more about the [Dev Box User role](#dev-box-user). |
+
+## Role assignment scope
+
+In Azure RBAC, *scope* is the set of resources that access applies to.
+When you assign a role, it's important to understand scope so that you
+grant just the access that is needed.
+
+In Azure, you can specify a scope at four levels: management group,
+subscription, resource group, and resource. Scopes are structured in a
+parent-child relationship. Each level of hierarchy makes the scope more
+specific. You can assign roles at any of these levels of scope. The
+level you select determines how widely the role is applied. Lower levels
+inherit role permissions from higher levels. Learn more about [scope for Azure RBAC](/azure/role-based-access-control/scope-overview).
+
+For Microsoft Dev Box, consider the following scopes:
+
+ | Scope | Description |
+ |--||
+ | Subscription | Used to manage billing and security for all Azure resources and services. Typically, only Platform engineers have subscription-level access because this role assignment grants access to all resources in the subscription. |
+ | Resource group | A logical container for grouping together resources. Role assignment for the resource group grants permission to the resource group and all resources within it, such as dev centers, dev box definitions, dev box pools, projects, and dev boxes. |
+ | Dev center (resource) | A collection of projects that require similar settings. Role assignment for the dev center grants permission to the dev center itself. Permissions assigned for the dev centers aren't inherited by other dev box resources. |
+ | Project (resource) | An Azure resource used to apply common configuration settings when you create a dev box. Role assignment for the project grants permission only to that specific project. |
+ | Dev box pool (resource) | A collection of dev boxes that you manage together and to which you apply similar settings. Role assignment for the dev box pool grants permission only to that specific dev box pool. |
+ | Dev box definition (resource) | An Azure resource that specifies a source image and size, including compute size and storage size. Role assignment for the dev box definition grants permission only to that specific dev box definition. |
++
+## Roles for common Dev Box activities
+
+The following table shows common Dev Box activities and the role needed for a user to perform that activity.
+
+| Activity | Role type | Role | Scope |
+|--||-|-|
+| Grant permission to create a resource group. | Platform engineer| Owner or Contributor | Subscription |
+| Grant permission to submit a Microsoft support ticket, including to request capacity. | Platform engineer| Owner, Contributor, Support Request Contributor | Subscription |
+| Grant permission to create virtual networks and subnets. | Platform engineer| Network Contributor | Resource group |
+| Grant permission to create a network connection. | Platform engineer| Owner or Contributor | Resource group |
+| Grant permission to assign roles to other users. | Platform engineer| Owner | Resource group |
+| Grant permission to: </br> - Create / manage dev centers. </br> - Add / remove network connections. </br> - Add / remove Azure compute galleries. </br> - Create / manage dev box definitions. </br> - Create / manage projects. </br> - Attach / manage catalog to a dev center or project (project-level catalogs must be enabled on the dev center). </br> - Configure dev box limits. | Platform engineer| Contributor | Resource group |
+| Grant permission to add or remove a network connection for a dev center. | Platform engineer| Contributor | Dev center |
+| Grant permission to enable / disable project catalogs. | Dev Manager | Contributor | Dev center |
+| Grant permission to: </br> - Add, sync, remove catalog (project-level catalogs must be enabled on the dev center). </br> - Create dev box pools. </br> - Stop, start, delete dev boxes in pools. | Dev Manager | DevCenter Project Admin | Project |
+| Create and manage your own dev boxes in a project. | User | Dev Box User | Project |
+| Create and manage catalogs in a GitHub or Azure Repos repository. | Dev Manager | Not governed by RBAC. </br> - The user must be assigned permissions through Azure DevOps or GitHub. | Repository |
+
+> [!Important]
+> An organization's subscription is used to manage billing and security for all Azure resources and services. You
+> can assign the Owner or Contributor role on the subscription.
+> Typically, only Platform engineers have subscription-level access because this includes full access to all resources in the subscription.
+
+## Platform engineer roles
+
+To grant users permission to manage Microsoft Dev Box within your
+organization's subscription, you should assign them the
+[Owner](#owner-role) or [Contributor](#contributor-role) role.
+
+Assign these roles to the *resource group*. The dev centers, network
+connections, dev box definitions, dev box pools, and projects within the
+resource group inherit these role assignments.
++
+### Owner role
+
+Assign the Owner role to give a user full control to create or manage
+Dev Box resources and grant permissions to other users. When a user has
+the Owner role in the resource group, they can do the following
+activities across all resources within the resource group:
+
+- Assign roles to platform engineers, so they can manage Dev Box
+ resources.
+
+- Create dev centers, network connections, dev box definitions, dev
+ box pools, and projects.
+
+- View, delete, and change settings for all dev centers, network
+ connections, dev box definitions, dev box pools, and projects.
+
+- Attach and detach catalogs.
+
+> [!Caution]
+> When you assign the Owner or Contributor role on the resource group, then these permissions also apply to non-Dev Box related resources that exist in the resource group.
+
+### Contributor role
+
+Assign the Contributor role to give a user full control to create or
+manage dev centers and projects within a resource group. The Contributor
+role has the same permissions as the Owner role, *except* for:
+
+- Performing role assignments.
+
+## Dev Manager role
+
+There's one dev manager role: DevCenter Project Admin. This role has
+more restricted permissions at lower-level scopes than the platform
+engineer roles. You can assign this role to dev managers to enable them
+to perform administrative tasks for their team.
++
+### DevCenter Project Admin role
+
+Assign the DevCenter Project Admin role to enable dev managers to:
+
+- Add, sync, remove catalog (project-level catalogs must be enabled on
+ the dev center).
+
+- Create dev box pools.
+
+- Stop, start, delete dev boxes in pools.
+
+## Developer role
+
+There's one developer role: Dev Box User. This role enables developers
+to create and manage their own dev boxes.
++
+### Dev Box User
+
+Assign the Dev Box User role to give users permission to create dev
+boxes and have full control over the dev boxes that they create.
+Developers can perform the following actions on any dev box they create:
+
+- Create
+- Start / stop
+- Restart
+- Delay scheduled shutdown
+- Delete
+
+## Identity and access management (IAM)
+
+The **Access control (IAM)** page in the Azure portal is used to
+configure Azure role-based access control on Microsoft Dev Box
+resources. You can use built-in roles for individuals and groups in
+Active Directory. The following screenshot shows Active Directory
+integration (Azure RBAC) using access control (IAM) in the Azure portal:
++
+For detailed steps, see [Assign Azure roles using the Azure portal](/azure/role-based-access-control/role-assignments-portal).
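
If you prefer to script the assignment instead, the same role can be granted with the Az.Resources module. The following is a minimal sketch; the object ID and project resource ID are placeholders, and the built-in role name `DevCenter Dev Box User` is assumed here:

```powershell
# Minimal sketch: grant the Dev Box User role at project scope.
# The object ID and resource ID are placeholders; the built-in role name
# "DevCenter Dev Box User" is assumed.
$projectScope = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DevCenter/projects/<project-name>"

New-AzRoleAssignment `
    -ObjectId "<user-or-group-object-id>" `
    -RoleDefinitionName "DevCenter Dev Box User" `
    -Scope $projectScope
```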
+
+## Dev center, resource group, and project structure
+
+Your organization should invest time up front to plan the placement of
+your dev centers, and the structure of resource groups and projects.
+
+**Dev centers:** Organize dev centers by the set of projects you would
+like to manage together, applying similar settings, and providing
+similar templates.
+
+Organizations can use one or more dev centers. Typically, each sub-organization within the organization has its own dev center. You might consider creating multiple dev centers in the following cases:
+
+- If you want specific configurations to be available to a subset of
+ projects.
+
+- If different teams need to own and maintain the dev center resource
+ in Azure.
+
+**Projects:** Associated with each dev team or group of people working
+on one app or product.
+
+Planning is especially important when you assign roles to the resource
+group because it also applies permissions to all resources in the
+resource group, including dev centers, network connections, dev box
+definitions, dev box pools, and projects.
+
+To ensure that users are only granted permission to the appropriate
+resources:
+
+- Create resource groups that only contain Dev Box resources.
+
+- Organize projects according to the dev box definition and dev box
+  pools required and the developers who should have access. It's
+  important to note that dev box pools determine the location of dev
+  box creation. Developers should create dev boxes in a location close
+  to them for the least latency.
+
+For example, you might create separate projects for different developer
+teams to isolate each team's resources. Dev Managers in a project can
+then be assigned to the Project Admin role, which only grants them
+access to the resources of their team.
+
+> [!Important]
+> Plan the structure upfront because it's not possible to move Dev Box resources like projects to a different resource group after they're created.
+
+## Catalog structure
+
+Microsoft Dev Box uses catalogs to enable developers to deploy
+customizations for dev boxes by using a catalog of tasks and a
+configuration file to install software, add extensions, clone
+repositories, and more. 
+
+Microsoft Dev Box stores catalogs in either a [GitHub repository](https://docs.github.com/repositories/creating-and-managing-repositories/about-repositories) or an [Azure DevOps Services repository](/azure/devops/repos/get-started/what-is-repos). You can attach a catalog to a dev center or to a project.
+
+You can attach one or more catalogs to your dev center and manage all
+customizations at that level. To provide more granularity in how
+developers access customizations, you can attach catalogs at the project
+level. In planning where to attach catalogs, you should consider the
+needs of each development team.
+
+## Related content
+
+- [What is Azure role-based access control (Azure RBAC)](/azure/role-based-access-control/overview)
+- [Understand scope for Azure RBAC](/azure/role-based-access-control/scope-overview)
dev-box How To Configure Intune Conditional Access Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-intune-conditional-access-policies.md
After creating your device group and validating that your dev box devices are members, | | | | | Windows 365 | 0af06dc6-e4b5-4f28-818e-e78e62d137a5 | Used when retrieving the list of resources for the user and when users initiate actions on their dev box like Restart. | | Azure Virtual Desktop | 9cdead84-a844-4324-93f2-b2e6bb768d07 | Used to authenticate to the Gateway during the connection and when the client sends diagnostic information to the service. <br>Might also appear as Windows Virtual Desktop. |
| | | | | Windows 365 | 0af06dc6-e4b5-4f28-818e-e78e62d137a5 | Used when retrieving the list of resources for the user and when users initiate actions on their dev box like Restart. | | Azure Virtual Desktop | 9cdead84-a844-4324-93f2-b2e6bb768d07 | Used to authenticate to the Gateway during the connection and when the client sends diagnostic information to the service. <br>Might also appear as Windows Virtual Desktop. |
- | Microsoft Remote Desktop | a4a365df-50f1-4397-bc59-1a1564b8bb9c | Used to authenticate users to the dev box. <br>Only needed when you configure single sign-on in a provisioning policy. |
+ | Microsoft Remote Desktop | a4a365df-50f1-4397-bc59-1a1564b8bb9c | Used to authenticate users to the dev box. <br>Only needed when you configure single sign-on in a provisioning policy. </br> |
+ | Windows Cloud Login | 270efc09-cd0d-444b-a71f-39af4910ec45 | Used to authenticate users to the dev box. This app replaces the `Microsoft Remote Desktop` app. <br>Only needed when you configure single sign-on in a provisioning policy. </br> |
| Microsoft Developer Portal | 0140a36d-95e1-4df5-918c-ca7ccd1fafc9 | Used to manage the Dev box portal. | 1. You should match your conditional access policies between these apps, which ensures that the policy applies to the developer portal, the connection to the Gateway, and the dev box for a consistent experience. If you want to exclude apps, you must also choose all of these apps.
dev-box How To Use Dev Home Customize Dev Box https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-use-dev-home-customize-dev-box.md
- build-2024 Previously updated : 06/05/2024 Last updated : 07/30/2024 #customer intent: As a developer, I want to use the Dev Home app to create customizations for my dev boxes, so that I can manage my customizations.
To complete the steps in this article, you must:
## Install or update Dev Home
+You might see Dev Home in the Start menu. If you see it there, you can select it to open the app.
+ Dev Home is available in the Microsoft Store. To install or update Dev Home, go to the Dev Home (Preview) page in the [Microsoft Store](https://aka.ms/devhome) and select **Get** or **Update**.
-You might also see Dev Home in the Start menu. If you see it there, you can select it to open the app.
+## Sign in to Dev Home
+
+Dev Home allows you to work with many different services, like Microsoft Hyper-V, Windows Subsystem for Linux (WSL), and Microsoft Dev Box. To access your chosen service, you must sign in to your Microsoft account, or your Work or School account.
+
+To sign in:
+
+1. Open Dev Home.
+1. From the left menu, select **Settings**.
+
+ :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-settings.png" alt-text="Screenshot of Dev Home, showing the home page with Settings highlighted.":::
+
+1. Select **Accounts**.
+
+ :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-accounts.png" alt-text="Screenshot of Dev Home, showing the Settings page with Accounts highlighted.":::
+
+1. Select **Add account** and follow the prompts to sign in.
+
+ :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-sign-in.png" alt-text="Screenshot of Dev Home, showing the Accounts page with Add account highlighted.":::
## Add extensions
Dev Home uses extensions to provide more functionality. To support the Dev Box f
To add an extension:
-1. Open Dev Home.
-1. From the left menu, select **Extensions**, then in the list of extensions **Available in the Microsoft Store**, on the **Dev Home Azure Extension (Preview)**, select **Get**.
+1. In Dev Home, from the left menu, select **Extensions**.
+
+ :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-extensions.png" alt-text="Screenshot of Dev Home, showing the Extensions page.":::
+
+1. In the list of extensions **Available in the Microsoft Store**, on the **Dev Home Azure Extension (Preview)**, select **Get**.
+
+ :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-get-extension.png" alt-text="Screenshot of Dev Home, showing the Extensions page with the Dev Home Azure Extension highlighted.":::
+
+1. In the Microsoft Store dialog, select **Get** to install the extension.
+
+ :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-get-extension-store.png" alt-text="Screenshot of the Microsoft Store dialog with the Get button highlighted.":::
## Create a dev box
Dev Home provides a guided way for you to create a new dev box.
To create a new dev box:
-1. Open **Dev Home**.
-1. From the left menu, select **Environments**, and then select **Create Environment**.
+1. In **Dev Home**, from the left menu, select **Environments**.
+
+ :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-environments.png" alt-text="Screenshot of Dev Home, showing the Environments page.":::
+
+1. Select **Create Environment**.
- :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-create-environment.png" alt-text="Screenshot of Dev Home, showing the Environments page with Create Environment highlighted." lightbox="media/how-to-use-dev-home-customize-dev-box/dev-home-create-environment.png":::
+ :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-create-environment.png" alt-text="Screenshot of Dev Home, showing the Environments page with Create Environment highlighted.":::
1. On the **Select environment** page, select **Microsoft DevBox**, and then select **Next**.
- :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-create-dev-box.png" alt-text="Screenshot of Dev Home, showing the Select environment page with Microsoft Dev Box highlighted." lightbox="media/how-to-use-dev-home-customize-dev-box/dev-home-create-dev-box.png":::
+ :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-create-dev-box.png" alt-text="Screenshot of Dev Home, showing the Select environment page with Microsoft Dev Box highlighted.":::
1. On the **Configure your environment** page: - Enter a name for your dev box.
To create a new dev box:
- Select the **DevBox Pool** where you want to create the dev box. Select a pool located close to you to reduce latency. - Select **Next**.
- :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-configure-environment.png" alt-text="Screenshot showing the Configure your environment page." lightbox="media/how-to-use-dev-home-customize-dev-box/dev-home-configure-environment.png":::
+ :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-configure-environment.png" alt-text="Screenshot showing the Configure your environment page.":::
1. On the **Review your environment** page, review the details and select **Create environment**.
+
+ :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-review-environment.png" alt-text="Screenshot showing the Review your environment page.":::
+
1. Select **Go to Environments** to see the status of your dev box.
+ :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-go-to-environments.png" alt-text="Screenshot showing the Go to Environments button.":::
++ ## Connect to your dev box Dev Home provides a seamless way for you to use the Windows App to connect to your Dev Box from any device of your choice. You can customize the look and feel of the Windows App to suit the way you work, and switch between multiple services across accounts.
If the Windows App isn't installed, selecting Launch takes you to the web client
### Launch your dev box
-1. Open **Dev Home**.
-1. From the left menu, select **Environments**.
-1. Select the dev box you want to launch.
-1. Select **Launch**.
+1. In **Dev Home**, from the left menu, select **Environments**.
+1. For the dev box you want to launch, select **Launch**.
:::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-launch.png" alt-text="Screenshot showing a dev box with the Launch menu highlighted."::: 1. You can also start and stop the dev box from the **Launch** menu.
- :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-start-stop.png" alt-text="Screenshot of the Launch menu with Start and Stop options." lightbox="media/how-to-use-dev-home-customize-dev-box/dev-home-start-stop.png":::
+ :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-start-stop.png" alt-text="Screenshot of the Launch menu with Start and Stop options.":::
For more information on the Windows App, see [Windows App](https://aka.ms/windowsapp).
-### Access your dev box from the start menu or task bar
+### Manage your dev box
-Dev home enables you to pin your dev box to the start menu or task bar.
+Dev Home enables you to pin your dev box to the Start menu or taskbar, and to delete your dev box.
1. Open **Dev Home**. 1. From the left menu, select **Environments**.
-1. Select the dev box you want to pin or unpin.
-1. Select **Pin to start** or **Pin to taskbar**.
+1. Select the dev box you want to manage.
+1. Select **Pin to start**, **Pin to taskbar**, or **Delete**.
- :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-menu.png" alt-text="Screenshot showing a dev box with the Pin to start and Pin to taskbar options highlighted." lightbox="media/how-to-use-dev-home-customize-dev-box/dev-home-menu.png":::
+ :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-options-menu.png" alt-text="Screenshot showing a dev box with the Pin to start, Pin to taskbar, and Delete options highlighted.":::
## Customize an existing dev box Dev home gives you the opportunity to clone repositories and add software to your existing dev box. Dev home uses the Winget catalog to provide a list of software that you can install on your dev box. 1. Open **Dev Home**.
-1. From the left menu, select **Machine configuration**, and then select **Set up an environment**.
+1. From the left menu, select **Machine configuration**.
+
+ :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-machine-configuration.png" alt-text="Screenshot showing Dev Home with Machine configuration highlighted.":::
+
+1. Select **Set up an environment**.
+
+ :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-set-up-environment.png" alt-text="Screenshot showing the Machine configuration page with Set up environment highlighted.":::
+ 1. Select the environment you want to customize, and then select **Next**.
+
+ :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-select-environment.png" alt-text="Screenshot showing the Select environment page.":::
+
1. On the **Set up an environment** page, if you want to clone a repository to your dev box, select **Add repository**. +
+ :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-add-repository.png" alt-text="Screenshot showing the Add repository button.":::
+ 1. In the **Add repository** dialog, enter the source and destination paths for the repository you want to clone, and then select **Add**.
- :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-clone-repository.png" alt-text="Screenshot showing the Add repository dialog box." lightbox="media/how-to-use-dev-home-customize-dev-box/dev-home-clone-repository.png":::
+ :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-clone-repository.png" alt-text="Screenshot showing the Add repository dialog box.":::
1. When you finish adding repositories, select **Next**.
-1. From the list of application Winget provides, choose the software you want to install on your dev box, and then select **Next**. You can also search for software by name.
- :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-software-install.png" alt-text="Screenshot showing the Add software page.":::
+ :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-repository-next.png" alt-text="Screenshot showing the repositories to add, with the Next button highlighted dialog box.":::
+
+1. Next, you can choose software to install. From the list of applications Winget provides, choose the software you want to install on your dev box. You can also search for software by name.
+
+ :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-software-select.png" alt-text="Screenshot showing the Add software page with Visual Studio Community and PowerShell highlighted.":::
+
+1. When you finish selecting software, select **Next**.
+
+ :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-software-install.png" alt-text="Screenshot showing the Add software page with Next highlighted.":::
1. On the **Review and finish** page, under **See details**: 1. Select the **Environment** tab to see the virtual machine you're configuring. 1. Select the **Applications** tab to see a list of the software you're installing.
- 1. Select the **Repositories** tab to see the list of public GitHub repositories you're cloning
+ 1. Select the **Repositories** tab to see the list of public GitHub repositories you're cloning.
1. Select **I agree and want to continue**, and then select **Set up**.
+
+ :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-review-finish.png" alt-text="Screenshot showing the Review and finish page with the I agree and want to continue button highlighted.":::
+
+
+Notice that you can also generate a configuration file based on your selected repositories and software to use in the future to create dev boxes with the same customizations.
++
-You can also generate a configuration file based on your selected repositories and software to use in the future to create dev boxes with the same customizations.
## Related content
digital-twins Concepts Event Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-event-notifications.md
Last updated 11/10/2022 -+ # Optional fields. Don't forget to remove # if you need a field. #
An example message body, populated in AMQP's *data* section:
## Digital twin telemetry messages
-Digital twins can use the [SendTelemetry API](/rest/api/digital-twins/dataplane/twins/digitaltwins_sendtelemetry) to emit *telemetry messages* and send them to egress endpoints.
+Digital twins can use the [SendTelemetry API](/rest/api/digital-twins/dataplane/twins/digital-twins-send-telemetry) to emit *telemetry messages* and send them to egress endpoints.
### Properties
digital-twins Concepts Query Units https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-query-units.md
Last updated 03/01/2022 -+ # Optional fields. Don't forget to remove # if you need a field. #
To learn more about querying Azure Digital Twins, visit:
* [Query language](concepts-query-language.md) * [Query the twin graph](how-to-query-graph.md)
-* [Query API reference documentation](/rest/api/digital-twins/dataplane/query/querytwins)
+* [Query API reference documentation](/rest/api/digital-twins/dataplane/query/query-twins)
You can find Azure Digital Twins query-related limits in [Azure Digital Twins service limits](reference-service-limits.md).
digital-twins How To Create Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-create-endpoints.md
Last updated 02/08/2023 -+ # Optional fields. Don't forget to remove # if you need a field.
Next, create a SAS token for your storage account that the endpoint can use to a
# [Portal](#tab/portal)
-To create an endpoint with dead-lettering enabled, you must use the [CLI commands](/cli/azure/dt) or [control plane APIs](/rest/api/digital-twins/controlplane/endpoints/digitaltwinsendpoint_createorupdate) to create your endpoint, rather than the Azure portal.
+To create an endpoint with dead-lettering enabled, you must use the [CLI commands](/cli/azure/dt) or [control plane APIs](/rest/api/digital-twins/controlplane/endpoints/digital-twins-endpoint-create-or-update) to create your endpoint, rather than the Azure portal.
For instructions on how to create this type of endpoint with the Azure CLI, switch to the CLI tab for this section.
The value for the parameter is the dead letter SAS URI made up of the storage ac
>[!TIP] >To create a dead-letter endpoint with identity-based authentication, add both the dead-letter parameter from this section and the appropriate [managed identity parameter](#3-create-the-endpoint-with-identity-based-authentication) to the same command.
-You can also create dead letter endpoints using the [Azure Digital Twins control plane APIs](concepts-apis-sdks.md#control-plane-apis) instead of the CLI. To do so, view the [DigitalTwinsEndpoint documentation](/rest/api/digital-twins/controlplane/endpoints/digitaltwinsendpoint_createorupdate) to see how to structure the request and add the dead letter parameters.
+You can also create dead letter endpoints using the [Azure Digital Twins control plane APIs](concepts-apis-sdks.md#control-plane-apis) instead of the CLI. To do so, view the [DigitalTwinsEndpoint documentation](/rest/api/digital-twins/controlplane/endpoints/digital-twins-endpoint-create-or-update) to see how to structure the request and add the dead letter parameters.
digital-twins How To Create Routes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-create-routes.md
Last updated 1/3/2024 -+ # Optional fields. Don't forget to remove # if you need a field.
# Create event routes and filters in Azure Digital Twins
-This article walks you through the process of creating *event routes* using the [Azure portal](https://portal.azure.com), [Azure CLI az dt route commands](/cli/azure/dt/route), [Event Routes data plane APIs](/rest/api/digital-twins/dataplane/eventroutes), and the [.NET (C#) SDK](/dotnet/api/overview/azure/digitaltwins.core-readme).
+This article walks you through the process of creating *event routes* using the [Azure portal](https://portal.azure.com), [Azure CLI az dt route commands](/cli/azure/dt/route), [Event Routes data plane APIs](/rest/api/digital-twins/dataplane/event-routes), and the [.NET (C#) SDK](/dotnet/api/overview/azure/digitaltwins.core-readme).
Routing [event notifications](concepts-event-notifications.md) from Azure Digital Twins to downstream services or connected compute resources is a two-step process: create endpoints, then create event routes to send data to those endpoints. This article covers the second step, setting up routes to control which events are delivered to which Azure Digital Twin endpoints. To proceed with this article, you should have [endpoints](how-to-create-endpoints.md) already created.
If there's no route name, no messages are routed outside of Azure Digital Twins.
If there's a route name and the filter is `true`, all messages are routed to the endpoint. If there's a route name and a different filter is added, messages will be filtered based on the filter.
-Event routes can be created with the [Azure portal](https://portal.azure.com), [EventRoutes data plane APIs](/rest/api/digital-twins/dataplane/eventroutes), or [az dt route CLI commands](/cli/azure/dt/route). The rest of this section walks through the creation process.
+Event routes can be created with the [Azure portal](https://portal.azure.com), [EventRoutes data plane APIs](/rest/api/digital-twins/dataplane/event-routes), or [az dt route CLI commands](/cli/azure/dt/route). The rest of this section walks through the creation process.
# [Portal](#tab/portal2)
To create an event route with advanced filter options, toggle the switch for the
# [API](#tab/api)
-You can use the [Event Routes data plane APIs](/rest/api/digital-twins/dataplane/eventroutes) to write custom filters. To add a filter, you can use a PUT request to `https://<Your-Azure-Digital-Twins-host-name>/eventRoutes/<event-route-name>?api-version=2020-10-31` with the following body:
+You can use the [Event Routes data plane APIs](/rest/api/digital-twins/dataplane/event-routes) to write custom filters. To add a filter, you can use a PUT request to `https://<Your-Azure-Digital-Twins-host-name>/eventRoutes/<event-route-name>?api-version=2020-10-31` with the following body:
:::code language="json" source="~/digital-twins-docs-samples/api-requests/filter.json":::
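
As a scripted illustration of the same request, here's a minimal PowerShell sketch of that PUT call. The host name, route name, endpoint name, and filter value are placeholders.

```powershell
$hostName  = "<your-instance>.api.<region>.digitaltwins.azure.net"
$routeName = "my-event-route"
$token     = (Get-AzAccessToken -ResourceUrl "https://digitaltwins.azure.net").Token

# The body names an existing endpoint and a filter; this example only routes twin-create notifications.
$body = @{
    endpointName = "<your-endpoint-name>"
    filter       = "type = 'Microsoft.DigitalTwins.Twin.Create'"
} | ConvertTo-Json

Invoke-RestMethod -Method Put `
    -Uri "https://$hostName/eventRoutes/${routeName}?api-version=2020-10-31" `
    -Headers @{ Authorization = "Bearer $token" } `
    -ContentType "application/json" `
    -Body $body
```
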
digital-twins How To Manage Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-manage-model.md
Last updated 06/11/2024 -+ # Optional fields. Don't forget to remove # if you need a field. #
To decommission a model, you can use the [DecommissionModel](/dotnet/api/azure.d
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/model_operations.cs" id="DecommissionModel":::
-You can also decommission a model using the REST API call [DigitalTwinModels Update](/rest/api/digital-twins/dataplane/models/digitaltwinmodels_update). The `decommissioned` property is the only property that can be replaced with this API call. The JSON Patch document will look something like this:
+You can also decommission a model using the REST API call [DigitalTwinModels Update](/rest/api/digital-twins/dataplane/models/digital-twin-models-update). The `decommissioned` property is the only property that can be replaced with this API call. The JSON Patch document will look something like this:
:::code language="json" source="~/digital-twins-docs-samples/models/patch-decommission-model.json":::
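
For reference, a minimal PowerShell sketch of that PATCH call might look like the following; the host name and model ID are placeholders, and the JSON Patch body only replaces the `decommissioned` property.

```powershell
$hostName = "<your-instance>.api.<region>.digitaltwins.azure.net"
$modelId  = "dtmi:example:Floor;1"   # placeholder DTMI; URL-encode it because it contains ':' and ';'
$token    = (Get-AzAccessToken -ResourceUrl "https://digitaltwins.azure.net").Token

# JSON Patch document that sets the model's decommissioned flag.
$patch = '[ { "op": "replace", "path": "/decommissioned", "value": true } ]'

Invoke-RestMethod -Method Patch `
    -Uri ("https://$hostName/models/" + [uri]::EscapeDataString($modelId) + "?api-version=2020-10-31") `
    -Headers @{ Authorization = "Bearer $token" } `
    -ContentType "application/json-patch+json" `
    -Body $patch
```
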
To delete a model, you can use the [DeleteModel](/dotnet/api/azure.digitaltwins.
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/model_operations.cs" id="DeleteModel":::
-You can also delete a model with the [DigitalTwinModels Delete](/rest/api/digital-twins/dataplane/models/digitaltwinmodels_delete) REST API call.
+You can also delete a model with the [DigitalTwinModels Delete](/rest/api/digital-twins/dataplane/models/digital-twin-models-delete) REST API call.
#### After deletion: Twins without models
digital-twins How To Manage Twin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-manage-twin.md
Last updated 1/3/2024 -+ # Optional fields. Don't forget to remove # if you need a field. #
The result of calling `object result = await client.GetDigitalTwinAsync("my-moon
The defined properties of the digital twin are returned as top-level properties on the digital twin. Metadata or system information that isn't part of the DTDL definition is returned with a `$` prefix. Metadata properties include the following values: * `$dtId`: The ID of the digital twin in this Azure Digital Twins instance
-* `$etag`: A standard HTTP field assigned by the web server. This is updated to a new value every time the twin is updated, which can be useful to determine whether the twin's data has been updated on the server since a previous check. You can use `If-Match` to perform updates and deletes that only complete if the entity's etag matches the etag provided. For more information on these operations, see the documentation for [DigitalTwins Update](/rest/api/digital-twins/dataplane/twins/digitaltwins_update) and [DigitalTwins Delete](/rest/api/digital-twins/dataplane/twins/digitaltwins_delete).
+* `$etag`: A standard HTTP field assigned by the web server. This field is updated to a new value every time the twin is updated, which can be useful to determine whether the twin's data has been updated on the server since a previous check. You can use `If-Match` to perform updates and deletes that only complete if the entity's etag matches the etag provided (see the sketch after this list). For more information on these operations, see the documentation for [DigitalTwins Update](/rest/api/digital-twins/dataplane/twins/digital-twins-update) and [DigitalTwins Delete](/rest/api/digital-twins/dataplane/twins/digital-twins-delete).
* `$metadata`: A set of metadata properties, which might include the following:
  - `$model`, the DTMI of the model of the digital twin.
  - `lastUpdateTime` for twin properties. This is a timestamp indicating the date and time that Azure Digital Twins processed the property update message.
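
As a minimal sketch of the conditional behavior described for `$etag` above, the following PowerShell call deletes a twin only if its etag still matches; the host name, twin ID, and etag value are placeholders.

```powershell
$hostName = "<your-instance>.api.<region>.digitaltwins.azure.net"
$twinId   = "thermostat67"                                # placeholder twin ID
$etag     = 'W/"61e718ae-dd50-4e8b-b2ac-2e8e90cb6edc"'    # the $etag value previously read from the twin
$token    = (Get-AzAccessToken -ResourceUrl "https://digitaltwins.azure.net").Token

# If-Match makes the delete conditional: it fails with 412 if the twin changed after the etag was read.
Invoke-RestMethod -Method Delete `
    -Uri "https://$hostName/digitaltwins/${twinId}?api-version=2020-10-31" `
    -Headers @{ Authorization = "Bearer $token"; "If-Match" = $etag }
```
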
digital-twins How To Use Postman With Digital Twins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-postman-with-digital-twins.md
description: Learn how to authorize, configure, and use Postman to call the Azure Digital Twins APIs. This article shows you how to use both the control and data plane APIs. -+ Last updated 01/23/2023
You can now view your request under the collection, and select it to pull up its
To make a Postman request to one of the Azure Digital Twins APIs, you'll need the URL of the API and information about what details it requires. You can find this information in the [Azure Digital Twins REST API reference documentation](/rest/api/azure-digitaltwins/).
-To proceed with an example query, this article will use the [Azure Digital Twins Query API](/rest/api/digital-twins/dataplane/query/querytwins) to query for all the digital twins in an instance.
+To proceed with an example query, this article will use the [Azure Digital Twins Query API](/rest/api/digital-twins/dataplane/query/query-twins) to query for all the digital twins in an instance.
1. Get the request URL and type from the reference documentation. For the Query API, this is currently *POST* `https://digitaltwins-host-name/query?api-version=2020-10-31`.
1. In Postman, set the type for the request and enter the request URL, filling in placeholders in the URL as required. Use your instance's host name from the [Prerequisites section](#prerequisites).
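
If you later want to script the same call you build in Postman, a minimal PowerShell sketch of the query request looks like this; the host name is a placeholder.

```powershell
$hostName = "<your-instance>.api.<region>.digitaltwins.azure.net"
$token    = (Get-AzAccessToken -ResourceUrl "https://digitaltwins.azure.net").Token

# Same POST body the Postman request uses: query for all digital twins in the instance.
$response = Invoke-RestMethod -Method Post `
    -Uri "https://$hostName/query?api-version=2020-10-31" `
    -Headers @{ Authorization = "Bearer $token" } `
    -ContentType "application/json" `
    -Body '{ "query": "SELECT * FROM digitaltwins" }'

$response.value   # matching twins are returned in the 'value' array
```
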
dms Create Dms Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/create-dms-bicep.md
- Title: Create instance of DMS (Bicep)
-description: Learn how to create Database Migration Service by using Bicep.
-- Previously updated : 03/21/2022---
- - subject-armqs
- - mode-arm
- - devx-track-bicep
- - sql-migration-content
--
-# Quickstart: Create instance of Azure Database Migration Service using Bicep
-
-Use Bicep to deploy an instance of the Azure Database Migration Service.
--
-## Prerequisites
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-
-## Review the Bicep file
-
-The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/azure-database-migration-simple-deploy/).
--
-Three Azure resources are defined in the Bicep file:
--- [Microsoft.Network/virtualNetworks](/azure/templates/microsoft.network/virtualnetworks): Creates the virtual network.-- [Microsoft.Network/virtualNetworks/subnets](/azure/templates/microsoft.network/virtualnetworks/subnets): Creates the subnet.-- [Microsoft.DataMigration/services](/azure/templates/microsoft.datamigration/services): Deploys an instance of the Azure Database Migration Service.-
-## Deploy the Bicep file
-
-1. Save the Bicep file as **main.bicep** to your local computer.
-1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
-
- # [CLI](#tab/CLI)
-
- ```azurecli
- az group create --name exampleRG --location eastus
- az deployment group create --resource-group exampleRG --template-file main.bicep --parameters serviceName=<service-name> vnetName=<vnet-name> subnetName=<subnet-name>
- ```
-
- # [PowerShell](#tab/PowerShell)
-
- ```azurepowershell
- New-AzResourceGroup -Name exampleRG -Location eastus
- New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -serviceName "<service-name>" -vnetName "<vnet-name>" -subnetName "<subnet-name>"
- ```
-
-
-
- > [!NOTE]
- > Replace **\<service-name\>** with the name of the new migration service. Replace **\<vnet-name\>** with the name of the new virtual network. Replace **\<subnet-name\>** with the name of the new subnet associated with the virtual network.
-
- When the deployment finishes, you should see a message indicating the deployment succeeded.
-
-## Review deployed resources
-
-Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
-
-# [CLI](#tab/CLI)
-
-```azurecli-interactive
-az resource list --resource-group exampleRG
-```
-
-# [PowerShell](#tab/PowerShell)
-
-```azurepowershell-interactive
-Get-AzResource -ResourceGroupName exampleRG
-```
---
-## Clean up resources
-
-When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and its resources.
-
-# [CLI](#tab/CLI)
-
-```azurecli-interactive
-az group delete --name exampleRG
-```
-
-# [PowerShell](#tab/PowerShell)
-
-```azurepowershell-interactive
-Remove-AzResourceGroup -Name exampleRG
-```
---
-## Next steps
-
-For other ways to deploy Azure Database Migration Service, see [Azure portal](quickstart-create-data-migration-service-portal.md).
-
-To learn more, see [an overview of Azure Database Migration Service](dms-overview.md).
dms Create Dms Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/create-dms-resource-manager-template.md
- Title: Create instance of DMS (Azure Resource Manager template)
-description: Learn how to create Database Migration Service by using Azure Resource Manager template (ARM template).
-- Previously updated : 06/29/2020---
- - subject-armqs
- - mode-arm
- - devx-track-arm-template
- - sql-migration-content
--
-# Quickstart: Create instance of Azure Database Migration Service using ARM template
-
-Use this Azure Resource Manager template (ARM template) to deploy an instance of the Azure Database Migration Service.
--
-If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
--
-## Prerequisites
-
-The Azure Database Migration Service ARM template requires the following:
--- The latest version of the [Azure CLI](/cli/azure/install-azure-cli) and/or [PowerShell](/powershell/scripting/install/installing-powershell).-- An Azure subscription. If you don't have one, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.-
-## Review the template
-
-The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/azure-database-migration-simple-deploy/).
--
-Three Azure resources are defined in the template:
--- [Microsoft.Network/virtualNetworks](/azure/templates/microsoft.network/virtualnetworks): Creates the virtual network.-- [Microsoft.Network/virtualNetworks/subnets](/azure/templates/microsoft.network/virtualnetworks/subnets): Creates the subnet.-- [Microsoft.DataMigration/services](/azure/templates/microsoft.datamigration/services): Deploys an instance of the Azure Database Migration Service.-
-More Azure Database Migration Services templates can be found in the [quickstart template gallery](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Datamigration&pageNumber=1&sort=Popular).
--
-## Deploy the template
-
-1. Select the following image to sign in to Azure and open a template. The template creates an instance of the Azure Database Migration Service.
-
- :::image type="content" source="~/reusable-content/ce-skilling/azure/media/template-deployments/deploy-to-azure-button.svg" alt-text="Button to deploy the Resource Manager template to Azure." border="false" link="https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2fquickstarts%2fmicrosoft.datamigration%2fazure-database-migration-simple-deploy%2fazuredeploy.json":::
-
-2. Select or enter the following values.
-
- * **Subscription**: Select an Azure subscription.
- * **Resource group**: Select an existing resource group from the drop down, or select **Create new** to create a new resource group.
- * **Region**: Location where the resources will be deployed.
- * **Service Name**: Name of the new migration service.
- * **Location**: The location of the resource group, leave as the default of `[resourceGroup().location]`.
- * **Vnet Name**: Name of the new virtual network.
- * **Subnet Name**: Name of the new subnet associated with the virtual network.
---
-3. Select **Review + create**. After the instance of Azure Database Migration Service has been deployed successfully, you get a notification.
--
-The Azure portal is used to deploy the template. In addition to the Azure portal, you can also use the Azure PowerShell, Azure CLI, and REST API. To learn other deployment methods, see [Deploy templates](../azure-resource-manager/templates/deploy-powershell.md).
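
For example, a PowerShell deployment of the same quickstart template might look like the following sketch; it assumes the template's parameters are named `serviceName`, `vnetName`, and `subnetName`, matching the Bicep quickstart above.

```powershell
# Create a resource group, then deploy the quickstart template directly from its public URI.
New-AzResourceGroup -Name exampleRG -Location eastus

New-AzResourceGroupDeployment -ResourceGroupName exampleRG `
    -TemplateUri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.datamigration/azure-database-migration-simple-deploy/azuredeploy.json" `
    -serviceName "<service-name>" -vnetName "<vnet-name>" -subnetName "<subnet-name>"
```
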
-
-## Review deployed resources
-
-You can use the Azure CLI to check deployed resources.
--
-```azurecli-interactive
-echo "Enter the resource group where your SQL Server VM exists:" &&
-read resourcegroupName &&
-az resource list --resource-group $resourcegroupName
-```
--
-## Clean up resources
-
-When no longer needed, delete the resource group by using Azure CLI or Azure PowerShell:
-
-# [CLI](#tab/CLI)
-
-```azurecli-interactive
-echo "Enter the Resource Group name:" &&
-read resourceGroupName &&
-az group delete --name $resourceGroupName &&
-echo "Press [ENTER] to continue ..."
-```
-
-# [PowerShell](#tab/PowerShell)
-
-```azurepowershell-interactive
-$resourceGroupName = Read-Host -Prompt "Enter the Resource Group name"
-Remove-AzResourceGroup -Name $resourceGroupName
-Write-Host "Press [ENTER] to continue..."
-```
---
-## Next steps
-
-For a step-by-step tutorial that guides you through the process of creating a template, see:
-
-> [!div class="nextstepaction"]
-> [Tutorial: Create and deploy your first ARM template](../azure-resource-manager/templates/template-tutorial-create-first-template.md)
-
-For other ways to deploy Azure Database Migration Service, see:
-- [Azure portal](quickstart-create-data-migration-service-portal.md)-
-To learn more, see [an overview of Azure Database Migration Service](dms-overview.md)
dms How To Monitor Migration Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/how-to-monitor-migration-activity.md
- Title: Monitor migration activity - Azure Database Migration Service
-description: Learn to use the Azure Database Migration Service to monitor migration activity.
--- Previously updated : 02/20/2020---
- - sql-migration-content
--
-# Monitor migration activity using the Azure Database Migration Service
-In this article, you learn how to monitor the progress of a migration at both a database level and a table level.
-
-## Monitor at the database level
-To monitor activity at the database level, view the database-level blade:
-
-![Database-level blade](media/how-to-monitor-migration-activity/dms-database-level-blade.png)
-
-> [!NOTE]
-> Selecting the database hyperlink will show you the list of tables and their migration progress.
-
-The following table lists the fields on the database-level blade and describes the various status values associated with each.
-
-<table id='overview' class='overview'>
- <thead>
- <tr>
- <th class="x-hidden-focus"><strong>Field name</strong></th>
- <th><strong>Field substatus</strong></th>
- <th><strong>Description</strong></th>
- </tr>
- </thead>
- <tbody>
- <tr>
- <td rowspan="3" class="ActivityStatus"><strong>Activity status</strong></td>
- <td>Running</td>
- <td>Migration activity is running.</td>
- </tr>
- <tr>
- <td>Succeeded</td>
- <td>Migration activity succeeded without issues.</td>
- </tr>
- <tr>
- <td>Faulted</td>
- <td>Migration failed. Select the 'See error details' link under migration details for the complete error message.</td>
- </tr>
- <tr>
- <td rowspan="4" class="Status"><strong>Status</strong></td>
- <td>Initializing</td>
- <td>DMS is setting up the migration pipeline.</td>
- </tr>
- <tr>
- <td>Running</td>
- <td>DMS pipeline is running and performing migration.</td>
- </tr>
- <tr>
- <td>Complete</td>
- <td>Migration completed.</td>
- </tr>
- <tr>
- <td>Failed</td>
- <td>Migration failed. Click on migration details to see migration errors.</td>
- </tr>
- <tr>
- <td rowspan="5" class="migration-details"><strong>Migration details</strong></td>
- <td>Initiating the migration pipeline</td>
- <td>DMS is setting up the migration pipeline.</td>
- </tr>
- <tr>
- <td>Full data load in progress</td>
- <td>DMS is performing initial load.</td>
- </tr>
- <tr>
- <td>Ready for Cutover</td>
- <td>After the initial load is completed, DMS marks the database as ready for cutover. Check that the data has caught up through continuous sync before cutting over.</td>
- </tr>
- <tr>
- <td>All changes applied</td>
- <td>Initial load and continuous sync are complete. This status also occurs after the database is cutover successfully.</td>
- </tr>
- <tr>
- <td>See error details</td>
- <td>Click on the link to show error details.</td>
- </tr>
- <tr>
- <td rowspan="1" class="duration"><strong>Duration</strong></td>
- <td>N/A</td>
- <td>Total time from migration activity being initialized to migration completed or migration faulted.</td>
- </tr>
- </tbody>
-</table>
-
-## Monitor at table level – Quick Summary
-To monitor activity at the table level, view the table-level blade. The top portion of the blade shows the detailed number of rows migrated in full load and incremental updates.
-
-The bottom portion of the blade lists the tables and shows a quick summary of migration progress.
-
-![Table-level blade - quick summary](media/how-to-monitor-migration-activity/dms-table-level-blade-summary.png)
-
-The following table describes the fields shown in the table-level details.
-
-| Field name | Description |
-| - | - |
-| **Full load completed** | Number of tables completed full data load. |
-| **Full load queued** | Number of tables being queued for full load. |
-| **Full load loading** | Number of tables currently being loaded. |
-| **Incremental updates** | Number of change data capture (CDC) updates in rows applied to target. |
-| **Incremental inserts** | Number of CDC inserts in rows applied to target. |
-| **Incremental deletes** | Number of CDC deletes in rows applied to target. |
-| **Pending changes** | Number of CDC rows that are still waiting to be applied to the target. |
-| **Applied changes** | Total of CDC updates, inserts, and deletes in rows applied to target. |
-| **Tables in error state** | Number of tables that are in an 'error' state during migration. Tables can go into an error state when, for example, duplicates are identified in the target or the data isn't compatible with the target table. |
-
-## Monitor at table level – Detailed Summary
-There are two tabs that show migration progress in Full load and Incremental data sync.
-
-![Full load tab](media/how-to-monitor-migration-activity/dms-full-load-tab.png)
-
-![Incremental data sync tab](media/how-to-monitor-migration-activity/dms-incremental-data-sync-tab.png)
-
-The following table describes the fields shown in table level migration progress.
-
-| Field name | Description |
-| - | - |
-| **Status - Syncing** | Continuous sync is running. |
-| **Insert** | Number of CDC inserts in rows applied to target. |
-| **Update** | Number of CDC updates in rows applied to target. |
-| **Delete** | Number of CDC deletes in rows applied to target. |
-| **Total Applied** | Total of CDC updates, inserts, and deletes in rows applied to target. |
-| **Data Errors** | Number of data errors that occurred in this table. Examples of such errors are *511: Cannot create a row of size %d which is greater than the allowable maximum row size of %d* and *8114: Error converting data type %ls to %ls.* Query the dms_apply_exceptions table in the Azure target to see the error details (a query sketch follows the note below). |
-
-> [!NOTE]
-> The CDC values for Insert, Update, Delete, and Total Applied may decrease when the database is cut over or the migration is restarted.
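
As a hedged illustration of inspecting those data errors, the following PowerShell sketch uses `Invoke-Sqlcmd` (SqlServer module) against the Azure target; the server, database, credentials, and the exact schema of the dms_apply_exceptions table are placeholders and assumptions.

```powershell
# Assumes the SqlServer module (Install-Module SqlServer) and SQL authentication to the Azure target.
# The table name comes from the guidance above; adjust the schema if it differs in your environment.
Invoke-Sqlcmd -ServerInstance "<target-server>.database.windows.net" `
    -Database "<target-database>" `
    -Username "<user>" -Password "<password>" `
    -Query "SELECT TOP (50) * FROM dbo.dms_apply_exceptions;"
```
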
-
-## Next steps
-- Review the migration guidance in the Microsoft [Database Migration Guide](/data-migration/).
dms Howto Sql Server To Azure Sql Managed Instance Powershell Offline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/howto-sql-server-to-azure-sql-managed-instance-powershell-offline.md
- Title: "PowerShell: Migrate SQL Server to SQL Managed Instance offline"-
-description: Learn to offline migrate from SQL Server to Azure SQL Managed Instance by using Azure PowerShell and the Azure Database Migration Service.
--- Previously updated : 12/16/2020---
- - fasttrack-edit
- - devx-track-azurepowershell
- - sql-migration-content
--
-# Migrate SQL Server to SQL Managed Instance offline with PowerShell & Azure Database Migration Service
-
-In this article, you migrate the **AdventureWorks2016** database, restored to an on-premises instance of SQL Server 2005 or above, offline to an Azure SQL Managed Instance by using Microsoft Azure PowerShell. You can migrate databases from a SQL Server instance to a SQL Managed Instance by using the `Az.DataMigration` module in Microsoft Azure PowerShell.
-
-In this article, you learn how to:
-> [!div class="checklist"]
->
-> * Create a resource group.
-> * Create an instance of Azure Database Migration Service.
-> * Create a migration project in an instance of Azure Database Migration Service.
-> * Run the migration offline.
--
-This article provides steps for an offline migration, but it's also possible to migrate [online](howto-sql-server-to-azure-sql-managed-instance-powershell-online.md).
--
-## Prerequisites
-
-To complete these steps, you need:
-
-* [SQL Server 2016 or above](https://www.microsoft.com/sql-server/sql-server-downloads) (any edition).
-* A local copy of the **AdventureWorks2016** database, which is available for download [here](/sql/samples/adventureworks-install-configure).
-* To enable the TCP/IP protocol, which is disabled by default with SQL Server Express installation. Enable the TCP/IP protocol by following the article [Enable or Disable a Server Network Protocol](/sql/database-engine/configure-windows/enable-or-disable-a-server-network-protocol#SSMSProcedure).
-* To configure your [Windows Firewall for database engine access](/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access).
-* An Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/) before you begin.
-* A SQL Managed Instance. You can create a SQL Managed Instance by following the detail in the article [Create an Azure SQL Managed Instance](/azure/azure-sql/managed-instance/instance-create-quickstart).
-* To download and install [Data Migration Assistant](https://www.microsoft.com/download/details.aspx?id=53595) v3.3 or later.
-* A Microsoft Azure Virtual Network created using the Azure Resource Manager deployment model, which provides the Azure Database Migration Service with site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md).
-* A completed assessment of your on-premises database and schema migration using Data Migration Assistant, as described in the article [Performing a SQL Server migration assessment](/sql/dma/dma-assesssqlonprem).
-* To download and install the `Az.DataMigration` module (version 0.7.2 or later) from the PowerShell Gallery by using [Install-Module PowerShell cmdlet](/powershell/module/powershellget/Install-Module).
-* To ensure that the credentials used to connect to source SQL Server instance have the [CONTROL SERVER](/sql/t-sql/statements/grant-server-permissions-transact-sql) permission.
-* To ensure that the credentials used to connect to the target SQL Managed Instance have the CONTROL DATABASE permission on the target SQL Managed Instance databases.
--
-## Sign in to your Microsoft Azure subscription
-
-Sign in to your Azure subscription by using PowerShell. For more information, see the article [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).
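
For example, a minimal sign-in looks like this; the subscription name is a placeholder.

```powershell
# Sign in interactively, then select the subscription to use for the migration resources.
Connect-AzAccount
Set-AzContext -Subscription "<subscription-id-or-name>"
```
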
-
-## Create a resource group
-
-An Azure resource group is a logical container in which Azure resources are deployed and managed.
-
-Create a resource group by using the [`New-AzResourceGroup`](/powershell/module/az.resources/new-azresourcegroup) command.
-
-The following example creates a resource group named *myResourceGroup* in the *East US* region.
-
-```powershell
-New-AzResourceGroup -ResourceGroupName myResourceGroup -Location EastUS
-```
-
-## Create an instance of Azure Database Migration Service
-
-You can create new instance of Azure Database Migration Service by using the `New-AzDataMigrationService` cmdlet.
-This cmdlet expects the following required parameters:
-
-* *Azure Resource Group name*. You can use [`New-AzResourceGroup`](/powershell/module/az.resources/new-azresourcegroup) command to create an Azure Resource group as previously shown and provide its name as a parameter.
-* *Service name*. String that corresponds to the desired unique service name for Azure Database Migration Service.
-* *Location*. Specifies the location of the service. Specify an Azure data center location, such as West US or Southeast Asia.
-* *Sku*. This parameter corresponds to DMS Sku name. Currently supported Sku names are *Basic_1vCore*, *Basic_2vCores*, *GeneralPurpose_4vCores*.
-* *Virtual Subnet Identifier*. You can use the cmdlet [`New-AzVirtualNetworkSubnetConfig`](/powershell/module/az.network/new-azvirtualnetworksubnetconfig) to create a subnet.
-
-The following example creates a service named *MyDMS* in the resource group *MyDMSResourceGroup* located in the *East US* region using a virtual network named *MyVNET* and a subnet named *MySubnet*.
-
-```powershell
-$vNet = Get-AzVirtualNetwork -ResourceGroupName MyDMSResourceGroup -Name MyVNET
-
-$vSubNet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vNet -Name MySubnet
-
-$service = New-AzDms -ResourceGroupName myResourceGroup `
- -ServiceName MyDMS `
- -Location EastUS `
- -Sku Basic_2vCores `
- -VirtualSubnetId $vSubNet.Id
-```
-
-## Create a migration project
-
-After creating an Azure Database Migration Service instance, create a migration project. An Azure Database Migration Service project requires connection information for both the source and target instances, as well as a list of databases that you want to migrate as part of the project.
-
-### Create a Database Connection Info object for the source and target connections
-
-You can create a Database Connection Info object by using the `New-AzDmsConnInfo` cmdlet, which expects the following parameters:
-
-* *ServerType*. The type of database connection requested, for example, SQL, Oracle, or MySQL. Use SQL for SQL Server and Azure SQL.
-* *DataSource*. The name or IP of a SQL Server instance or Azure SQL Database instance.
-* *AuthType*. The authentication type for connection, which can be either SqlAuthentication or WindowsAuthentication.
-* *TrustServerCertificate*. This parameter sets a value that indicates whether the channel is encrypted while bypassing walking the certificate chain to validate trust. The value can be `$true` or `$false`.
-
-The following example creates a Connection Info object for a source SQL Server called *MySourceSQLServer* using sql authentication:
-
-```powershell
-$sourceConnInfo = New-AzDmsConnInfo -ServerType SQL `
- -DataSource MySourceSQLServer `
- -AuthType SqlAuthentication `
- -TrustServerCertificate:$true
-```
-
-The next example shows creation of Connection Info for an Azure SQL Managed Instance named 'targetmanagedinstance':
-
-```powershell
-$targetResourceId = (Get-AzSqlInstance -Name "targetmanagedinstance").Id
-$targetConnInfo = New-AzDmsConnInfo -ServerType SQLMI -MiResourceId $targetResourceId
-```
-
-### Provide databases for the migration project
-
-Create a list of `AzDataMigrationDatabaseInfo` objects that specifies databases as part of the Azure Database Migration Service project, which can be provided as parameter for creation of the project. You can use the cmdlet `New-AzDataMigrationDatabaseInfo` to create `AzDataMigrationDatabaseInfo`.
-
-The following example creates the `AzDataMigrationDatabaseInfo` project for the **AdventureWorks2016** database and adds it to the list to be provided as parameter for project creation.
-
-```powershell
-$dbInfo1 = New-AzDataMigrationDatabaseInfo -SourceDatabaseName AdventureWorks
-$dbList = @($dbInfo1)
-```
-
-### Create a project object
-
-Finally, you can create an Azure Database Migration Service project called *MyDMSProject* located in *East US* using `New-AzDataMigrationProject` and add the previously created source and target connections and the list of databases to migrate.
-
-```powershell
-$project = New-AzDataMigrationProject -ResourceGroupName myResourceGroup `
- -ServiceName $service.Name `
- -ProjectName MyDMSProject `
- -Location EastUS `
- -SourceType SQL `
- -TargetType SQLMI `
- -SourceConnection $sourceConnInfo `
- -TargetConnection $targetConnInfo `
- -DatabaseInfo $dbList
-```
-
-## Create and start a migration task
-
-Next, create and start an Azure Database Migration Service task. This task requires connection credential information for both the source and target, as well as the list of database tables to be migrated and the information already provided with the project created as a prerequisite.
-
-### Create credential parameters for source and target
-
-Create connection security credentials as a [PSCredential](/dotnet/api/system.management.automation.pscredential) object.
-
-The following example shows the creation of *PSCredential* objects for both the source and target connections, providing passwords as string variables *$sourcePassword* and *$targetPassword*.
-
-```powershell
-$secpasswd = ConvertTo-SecureString -String $sourcePassword -AsPlainText -Force
-$sourceCred = New-Object System.Management.Automation.PSCredential ($sourceUserName, $secpasswd)
-$secpasswd = ConvertTo-SecureString -String $targetPassword -AsPlainText -Force
-$targetCred = New-Object System.Management.Automation.PSCredential ($targetUserName, $secpasswd)
-```
-
-### Create a backup FileShare object
-
-Now create a FileShare object representing the local SMB network share to which Azure Database Migration Service can take the source database backups using the `New-AzDmsFileShare` cmdlet.
-
-```powershell
-$backupPassword = ConvertTo-SecureString -String $password -AsPlainText -Force
-$backupCred = New-Object System.Management.Automation.PSCredential ($backupUserName, $backupPassword)
-
-$backupFileSharePath="\\10.0.0.76\SharedBackup"
-$backupFileShare = New-AzDmsFileShare -Path $backupFileSharePath -Credential $backupCred
-```
-
-### Create selected database object
-
-The next step is to select the source and target databases by using the `New-AzDmsSelectedDB` cmdlet.
-
-The following example is for migrating a single database from SQL Server to an Azure SQL Managed Instance:
-
-```powershell
-$selectedDbs = @()
-$selectedDbs += New-AzDmsSelectedDB -MigrateSqlServerSqlDbMi `
- -Name AdventureWorks2016 `
- -TargetDatabaseName AdventureWorks2016 `
- -BackupFileShare $backupFileShare
-```
-
-If an entire SQL Server instance needs a lift-and-shift into an Azure SQL Managed Instance, then a loop to take all databases from the source is provided below. In the following example, for $Server, $SourceUserName, and $SourcePassword, provide your source SQL Server details.
-
-```powershell
-$Query = "(select name as Database_Name from master.sys.databases where Database_id>4)";
-$Databases = (Invoke-Sqlcmd -ServerInstance "$Server" -Username $SourceUserName `
- -Password $SourcePassword -Database master -Query $Query)
-$selectedDbs=@()
-foreach($DataBase in $Databases.Database_Name)
- {
- $SourceDB=$DataBase
- $TargetDB=$DataBase
-
-$selectedDbs += New-AzDmsSelectedDB -MigrateSqlServerSqlDbMi `
- -Name $SourceDB `
- -TargetDatabaseName $TargetDB `
- -BackupFileShare $backupFileShare
- }
-```
-
-### SAS URI for Azure Storage Container
-
-Create a variable containing the SAS URI that provides the Azure Database Migration Service with access to the storage account container to which the service uploads the backup files.
-
-```powershell
-$blobSasUri="https://mystorage.blob.core.windows.net/test?st=2018-07-13T18%3A10%3A33Z&se=2019-07-14T18%3A10%3A00Z&sp=rwdl&sv=2018-03-28&sr=c&sig=qKlSA512EVtest3xYjvUg139tYSDrasbftY%3D"
-```
-
-> [!NOTE]
-> Azure Database Migration Service does not support using an account level SAS token. You must use a SAS URI for the storage account container. [Learn how to get the SAS URI for blob container](../vs-azure-tools-storage-explorer-blobs.md#get-the-sas-for-a-blob-container).
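
As a sketch of one way to produce such a container-level SAS URI with the Az.Storage module (the storage account, resource group, and container names are placeholders):

```powershell
# Get the storage account context, then create a container-scoped SAS URI with the permissions DMS needs.
$storageAccount = Get-AzStorageAccount -ResourceGroupName "<resource-group>" -Name "<storage-account>"
$blobSasUri = New-AzStorageContainerSASToken -Name "<container-name>" `
    -Context $storageAccount.Context `
    -Permission rwdl `
    -ExpiryTime (Get-Date).AddDays(7) `
    -FullUri
```
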
-
-### Additional configuration requirements
-
-There are a few additional requirements you need to address:
--
-* **Select logins**. Create a list of logins to be migrated as shown in the following example:
-
- ```powershell
- $selectedLogins = @("user1", "user2")
- ```
-
- > [!IMPORTANT]
- > Currently, Azure Database Migration Service only supports migrating SQL logins.
-
-* **Select agent jobs**. Create list of agent jobs to be migrated as shown in the following example:
-
- ```powershell
- $selectedAgentJobs = @("agentJob1", "agentJob2")
- ```
-
- > [!IMPORTANT]
- > Currently, Azure Database Migration Service only supports jobs with T-SQL subsystem job steps.
---
-### Create and start the migration task
-
-Use the `New-AzDataMigrationTask` cmdlet to create and start a migration task.
-
-#### Specify parameters
-
-The `New-AzDataMigrationTask` cmdlet expects the following parameters:
-
-* *TaskType*. Type of migration task to create. For a SQL Server to Azure SQL Managed Instance migration, *MigrateSqlServerSqlDbMi* is expected.
-* *Resource Group Name*. Name of Azure resource group in which to create the task.
-* *ServiceName*. Azure Database Migration Service instance in which to create the task.
-* *ProjectName*. Name of Azure Database Migration Service project in which to create the task.
-* *TaskName*. Name of task to be created.
-* *SourceConnection*. AzDmsConnInfo object representing source SQL Server connection.
-* *TargetConnection*. AzDmsConnInfo object representing target Azure SQL Managed Instance connection.
-* *SourceCred*. [PSCredential](/dotnet/api/system.management.automation.pscredential) object for connecting to source server.
-* *TargetCred*. [PSCredential](/dotnet/api/system.management.automation.pscredential) object for connecting to target server.
-* *SelectedDatabase*. AzDataMigrationSelectedDB object representing the source and target database mapping.
-* *BackupFileShare*. FileShare object representing the local network share that the Azure Database Migration Service can take the source database backups to.
-* *BackupBlobSasUri*. The SAS URI that provides the Azure Database Migration Service with access to the storage account container to which the service uploads the backup files. Learn how to get the SAS URI for blob container.
-* *SelectedLogins*. List of selected logins to migrate.
-* *SelectedAgentJobs*. List of selected agent jobs to migrate.
---
-#### Create and start a migration task
-
-The following example creates and starts an offline migration task named **myDMSTask**:
-
-```powershell
-$migTask = New-AzDataMigrationTask -TaskType MigrateSqlServerSqlDbMi `
- -ResourceGroupName myResourceGroup `
- -ServiceName $service.Name `
- -ProjectName $project.Name `
- -TaskName myDMSTask `
- -SourceConnection $sourceConnInfo `
- -SourceCred $sourceCred `
- -TargetConnection $targetConnInfo `
- -TargetCred $targetCred `
- -SelectedDatabase $selectedDbs `
- -BackupFileShare $backupFileShare `
- -BackupBlobSasUri $blobSasUri `
- -SelectedLogins $selectedLogins `
- -SelectedAgentJobs $selectedAgentJobs
-```
--
-## Monitor the migration
-
-To monitor the migration, perform the following tasks.
-
-1. Consolidate all the migration details into a variable called $CheckTask.
-
- To combine migration details such as properties, state, and database information associated with the migration, use the following code snippet:
-
- ```powershell
- $CheckTask = Get-AzDataMigrationTask -ResourceGroupName myResourceGroup `
- -ServiceName $service.Name `
- -ProjectName $project.Name `
- -Name myDMSTask `
- -ResultType DatabaseLevelOutput `
- -Expand
- Write-Host "$($CheckTask.ProjectTask.Properties.Output)"
- ```
-
-2. Use the `$CheckTask` variable to get the current state of the migration task.
-
- To get the current state of the migration task, query the task's state property by using the `$CheckTask` variable, as shown in the following example:
-
- ```powershell
- if (($CheckTask.ProjectTask.Properties.State -eq "Running") -or ($CheckTask.ProjectTask.Properties.State -eq "Queued"))
- {
- Write-Host "migration task running"
- }
- elseif ($CheckTask.ProjectTask.Properties.State -eq "Succeeded")
- {
- Write-Host "Migration task is completed Successfully"
- }
- elseif ($CheckTask.ProjectTask.Properties.State -eq "Failed" -or $CheckTask.ProjectTask.Properties.State -eq "FailedInputValidation" -or $CheckTask.ProjectTask.Properties.State -eq "Faulted")
- {
- Write-Host "Migration Task Failed"
- }
- ```
--
-## Delete the instance of Azure Database Migration Service
-
-After the migration is complete, you can delete the Azure Database Migration Service instance:
-
-```powershell
-Remove-AzDms -ResourceGroupName myResourceGroup -ServiceName MyDMS
-```
--
-## Next steps
-
-Find out more about Azure Database Migration Service in the article [What is the Azure Database Migration Service?](./dms-overview.md).
-
-For information about additional migrating scenarios (source/target pairs), see the Microsoft [Database Migration Guide](/data-migration/).
dms Howto Sql Server To Azure Sql Managed Instance Powershell Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/howto-sql-server-to-azure-sql-managed-instance-powershell-online.md
- Title: "PowerShell: Migrate SQL Server to SQL Managed Instance online"-
-description: Learn to online migrate from SQL Server to Azure SQL Managed Instance by using Azure PowerShell and the Azure Database Migration Service.
--- Previously updated : 12/16/2020---
- - devx-track-azurepowershell
- - sql-migration-content
--
-# Migrate SQL Server to SQL Managed Instance online with PowerShell & Azure Database Migration Service
-
-In this article, you migrate the **AdventureWorks2016** database, restored to an on-premises instance of SQL Server 2005 or above, online to an Azure SQL Managed Instance by using Microsoft Azure PowerShell. You can migrate databases from a SQL Server instance to a SQL Managed Instance by using the `Az.DataMigration` module in Microsoft Azure PowerShell.
-
-In this article, you learn how to:
-> [!div class="checklist"]
->
-> * Create a resource group.
-> * Create an instance of Azure Database Migration Service.
-> * Create a migration project in an instance of Azure Database Migration Service.
-> * Run the migration online.
--
-This article provides steps for an online migration, but it's also possible to migrate [offline](howto-sql-server-to-azure-sql-managed-instance-powershell-offline.md).
--
-## Prerequisites
-
-To complete these steps, you need:
-
-* [SQL Server 2016 or above](https://www.microsoft.com/sql-server/sql-server-downloads) (any edition).
-* A local copy of the **AdventureWorks2016** database, which is available for download [here](/sql/samples/adventureworks-install-configure).
-* To enable the TCP/IP protocol, which is disabled by default with SQL Server Express installation. Enable the TCP/IP protocol by following the article [Enable or Disable a Server Network Protocol](/sql/database-engine/configure-windows/enable-or-disable-a-server-network-protocol#SSMSProcedure).
-* To configure your [Windows Firewall for database engine access](/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access).
-* An Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/) before you begin.
-* A SQL Managed Instance. You can create a SQL Managed Instance by following the details in the article [Create an Azure SQL Managed Instance](/azure/azure-sql/managed-instance/instance-create-quickstart).
-* To download and install [Data Migration Assistant](https://www.microsoft.com/download/details.aspx?id=53595) v3.3 or later.
-* A Microsoft Azure Virtual Network created using the Azure Resource Manager deployment model, which provides the Azure Database Migration Service with site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md).
-* A completed assessment of your on-premises database and schema migration using Data Migration Assistant, as described in the article [Performing a SQL Server migration assessment](/sql/dma/dma-assesssqlonprem).
-* To download and install the `Az.DataMigration` module (version 0.7.2 or later) from the PowerShell Gallery by using [Install-Module PowerShell cmdlet](/powershell/module/powershellget/Install-Module).
-* To ensure that the credentials used to connect to source SQL Server instance have the [CONTROL SERVER](/sql/t-sql/statements/grant-server-permissions-transact-sql) permission.
-* To ensure that the credentials used to connect to the target SQL Managed Instance have the CONTROL DATABASE permission on the target SQL Managed Instance databases.
-
- > [!IMPORTANT]
- > For online migrations, you must already have set up your Microsoft Entra credentials. For more information, see the article [Use the portal to create a Microsoft Entra application and service principal that can access resources](/entra/identity-platform/howto-create-service-principal-portal).
-
-## Create a resource group
-
-An Azure resource group is a logical container in which Azure resources are deployed and managed.
-
-Create a resource group by using the [`New-AzResourceGroup`](/powershell/module/az.resources/new-azresourcegroup) command.
-
-The following example creates a resource group named *myResourceGroup* in the *East US* region.
-
-```powershell
-New-AzResourceGroup -ResourceGroupName myResourceGroup -Location EastUS
-```
-
-## Create an instance of DMS
-
-You can create new instance of Azure Database Migration Service by using the `New-AzDataMigrationService` cmdlet.
-This cmdlet expects the following required parameters:
-
-* *Azure Resource Group name*. You can use [`New-AzResourceGroup`](/powershell/module/az.resources/new-azresourcegroup) command to create an Azure Resource group as previously shown and provide its name as a parameter.
-* *Service name*. String that corresponds to the desired unique service name for Azure Database Migration Service.
-* *Location*. Specifies the location of the service. Specify an Azure data center location, such as West US or Southeast Asia.
-* *Sku*. This parameter corresponds to DMS Sku name. Currently supported Sku names are *Basic_1vCore*, *Basic_2vCores*, *GeneralPurpose_4vCores*.
-* *Virtual Subnet Identifier*. You can use the cmdlet [`New-AzVirtualNetworkSubnetConfig`](/powershell/module/az.network/new-azvirtualnetworksubnetconfig) to create a subnet.
-
-The following example creates a service named *MyDMS* in the resource group *MyDMSResourceGroup* located in the *East US* region using a virtual network named *MyVNET* and a subnet named *MySubnet*.
--
-```powershell
-$vNet = Get-AzVirtualNetwork -ResourceGroupName MyDMSResourceGroup -Name MyVNET
-
-$vSubNet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vNet -Name MySubnet
-
-$service = New-AzDms -ResourceGroupName myResourceGroup `
- -ServiceName MyDMS `
- -Location EastUS `
- -Sku Basic_2vCores `
- -VirtualSubnetId $vSubNet.Id
-```
-
-## Create a migration project
-
-After creating an Azure Database Migration Service instance, create a migration project. An Azure Database Migration Service project requires connection information for both the source and target instances, as well as a list of databases that you want to migrate as part of the project.
-Define source and target connectivity connection strings.
-
-The following script defines source SQL Server connection details:
-
-```powershell
-# Source connection properties
-$sourceDataSource = "<mysqlserver.domain.com/privateIP of source SQL>"
-$sourceUserName = "domain\user"
-$sourcePassword = "mypassword"
-```
-
-The following script defines the target SQL Managed Instance connection details:
-
-```powershell
-# Target MI connection properties
-$targetMIResourceId = "/subscriptions/<subid>/resourceGroups/<rg>/providers/Microsoft.Sql/managedInstances/<myMI>"
-$targetUserName = "<user>"
-$targetPassword = "<password>"
-```
---
-### Define source and target database mapping
-
-Provide the databases to be migrated in this migration project.
-
-The following script maps each source database to a new database with the provided name on the target SQL Managed Instance.
-
-```powershell
-# Selected databases (Source database name to target database name mapping)
-$selectedDatabasesMap = New-Object System.Collections.Generic.Dictionary"[String,String]"
-$selectedDatabasesMap.Add("<source database name>", "<target database name> ")
-```
-
-For multiple databases, add the list of databases to the above script using the following format:
-
-```powershell
-$selectedDatabasesMap = New-Object System.Collections.Generic.Dictionary"[String,String]"
-$selectedDatabasesMap.Add("<source database name1>", "<target database name1> ")
-$selectedDatabasesMap.Add("<source database name2>", "<target database name2> ")
-```
-
-### Create DMS Project
-
-You can create an Azure Database Migration Service project within the DMS instance.
-
-```powershell
-# Create DMS project
-$project = New-AzDataMigrationProject `
- -ResourceGroupName $dmsResourceGroupName `
- -ServiceName $dmsServiceName `
- -ProjectName $dmsProjectName `
- -Location $dmsLocation `
- -SourceType SQL `
- -TargetType SQLMI `
-
-# Create selected databases object
-$selectedDatabases = @();
-foreach ($sourceDbName in $selectedDatabasesMap.Keys){
- $targetDbName = $($selectedDatabasesMap[$sourceDbName])
- $selectedDatabases += New-AzDmsSelectedDB -MigrateSqlServerSqlDbMi `
- -Name $sourceDbName `
- -TargetDatabaseName $targetDbName `
- -BackupFileShare $backupFileShare
-}
-```
---
-### Create a backup FileShare object
-
-Now create a FileShare object representing the local SMB network share to which Azure Database Migration Service can take the source database backups using the New-AzDmsFileShare cmdlet.
-
-```powershell
-# SMB Backup share properties
-$smbBackupSharePath = "\\shareserver.domain.com\mybackup"
-$smbBackupShareUserName = "domain\user"
-$smbBackupSharePassword = "<password>"
-
-# Create backup file share object
-$smbBackupSharePasswordSecure = ConvertTo-SecureString -String $smbBackupSharePassword -AsPlainText -Force
-$smbBackupShareCredentials = New-Object System.Management.Automation.PSCredential ($smbBackupShareUserName, $smbBackupSharePasswordSecure)
-$backupFileShare = New-AzDmsFileShare -Path $smbBackupSharePath -Credential $smbBackupShareCredentials
-```
-
-### Define the Azure Storage
-
-Select Azure Storage Container to be used for migration:
-
-```powershell
-# Storage resource id
-$storageAccountResourceId = "/subscriptions/<subscriptionname>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<mystorage>"
-```
--
-<a name='configure-azure-active-directory-app'></a>
-
-### Configure Microsoft Entra App
-
-Provide the required details for Microsoft Entra ID for an online SQL Managed Instance migration:
-
-```powershell
-# AAD properties
-$AADAppId = "<appid-guid>"
-$AADAppKey = "<app-key>"
-
-# Create AAD object
-$AADAppKeySecure = ConvertTo-SecureString $AADAppKey -AsPlainText -Force
-$AADApp = New-AzDmsAadApp -ApplicationId $AADAppId -AppKey $AADAppKeySecure
-```
--
-## Create and start a migration task
-
-Next, create and start an Azure Database Migration Service task. Call the source and target using variables, and list the database tables to be migrated:
--
-```powershell
-# Managed Instance online migration properties
-$dmsTaskName = "testmigration1"
-
-# Create source connection info
-$sourceConnInfo = New-AzDmsConnInfo -ServerType SQL `
- -DataSource $sourceDataSource `
- -AuthType WindowsAuthentication `
- -TrustServerCertificate:$true
-$sourcePasswordSecure = ConvertTo-SecureString -String $sourcePassword -AsPlainText -Force
-$sourceCredentials = New-Object System.Management.Automation.PSCredential ($sourceUserName, $sourcePasswordSecure)
-
-# Create target connection info
-$targetConnInfo = New-AzDmsConnInfo -ServerType SQLMI `
- -MiResourceId $targetMIResourceId
-$targetPasswordSecure = ConvertTo-SecureString -String $targetPassword -AsPlainText -Force
-$targetCredentials = New-Object System.Management.Automation.PSCredential ($targetUserName, $targetPasswordSecure)
-```
-
-The following example creates and starts an online migration task:
-
-```powershell
-# Create DMS migration task
-$migTask = New-AzDataMigrationTask -TaskType MigrateSqlServerSqlDbMiSync `
- -ResourceGroupName $dmsResourceGroupName `
- -ServiceName $dmsServiceName `
- -ProjectName $dmsProjectName `
- -TaskName $dmsTaskName `
- -SourceConnection $sourceConnInfo `
- -SourceCred $sourceCredentials `
- -TargetConnection $targetConnInfo `
- -TargetCred $targetCredentials `
- -SelectedDatabase $selectedDatabases `
- -BackupFileShare $backupFileShare `
- -AzureActiveDirectoryApp $AADApp `
- -StorageResourceId $storageAccountResourceId
-```
-
-For more information, see [New-AzDataMigrationTask](/powershell/module/az.datamigration/new-azdatamigrationtask).
-
-## Monitor the migration
-
-To monitor the migration, perform the following tasks.
-
-### Check the status of task
-
-```powershell
-# Get migration task status details
-$migTask = Get-AzDataMigrationTask `
- -ResourceGroupName $dmsResourceGroupName `
- -ServiceName $dmsServiceName `
- -ProjectName $dmsProjectName `
- -Name $dmsTaskName `
- -ResultType DatabaseLevelOutput `
- -Expand
-
-# Task state will be one of 'Queued', 'Running', 'Succeeded', 'Failed', 'FailedInputValidation' or 'Faulted'
-$taskState = $migTask.ProjectTask.Properties.State
-
-# Display task state
-$taskState | Format-List
-```
-
-Use the following to get a list of errors:
-
-```powershell
-# Get task errors
-$taskErrors = $migTask.ProjectTask.Properties.Errors
-
-# Display task errors
-foreach($taskError in $taskErrors){
- $taskError | Format-List
-}
--
-# Get database level details
-$databaseLevelOutputs = $migTask.ProjectTask.Properties.Output
-
-# Display database level details
-foreach($databaseLevelOutput in $databaseLevelOutputs){
-
- # This is the source database name.
- $databaseName = $databaseLevelOutput.SourceDatabaseName;
-
- Write-Host "=========="
- Write-Host "Start migration details for database " $databaseName
- # This is the status for that database - It will be either of:
- # INITIAL, FULL_BACKUP_UPLOADING, FULL_BACKUP_UPLOADED, LOG_FILES_UPLOADING,
- # CUTOVER_IN_PROGRESS, CUTOVER_INITIATED, CUTOVER_COMPLETED, COMPLETED, CANCELLED, FAILED
- $databaseMigrationState = $databaseLevelOutput.MigrationState;
-
- # Details about last restored backup. This contains file names, LSN, backup date, etc
- $databaseLastRestoredBackup = $databaseLevelOutput.LastRestoredBackupSetInfo
-
-
- # Details about the currently active/most recent backup sets. This contains file names, LSNs, backup dates, etc.
- $databaseActiveBackupSets = $databaseLevelOutput.ActiveBackupSets
-
- # Display info
- $databaseLevelOutput | Format-List
-
- Write-Host "Currently active/most recent backupset details:"
- $databaseActiveBackupSets | select BackupStartDate, BackupFinishedDate, FirstLsn, LastLsn -ExpandProperty ListOfBackupFiles | Format-List
-
- Write-Host "Last restored backupset details:"
- $databaseLastRestoredBackupFiles | Format-List
-
- Write-Host "End migration details for database " $databaseName
- Write-Host "=========="
-}
-```
-
-## Performing the cutover
-
-With an online migration, a full backup and restore of the databases is performed first, and then the service continuously restores the transaction logs stored in the backup file share.
-
-When the database in Azure SQL Managed Instance is updated with the latest data and is in sync with the source database, you can perform a cutover.
-
-The following example completes the cutover (migration). You invoke this command at your discretion, when you're ready.
-
-```powershell
-$command = Invoke-AzDmsCommand -CommandType CompleteSqlMiSync `
- -ResourceGroupName $dmsResourceGroupName `
- -ServiceName $dmsServiceName `
- -ProjectName $dmsProjectName `
- -TaskName $dmsTaskName `
- -DatabaseName "Source DB Name"
-```
-
-## Deleting the instance of Azure Database Migration Service
-
-After the migration is complete, you can delete the Azure Database Migration Service instance:
-
-```powershell
-Remove-AzDms -ResourceGroupName myResourceGroup -ServiceName MyDMS
-```
-
-## Additional resources
-
-For information about additional migration scenarios (source/target pairs), see the Microsoft [Database Migration Guide](/data-migration/).
-
-## Next steps
-
-Find out more about Azure Database Migration Service in the article [What is the Azure Database Migration Service?](./dms-overview.md).
dms Howto Sql Server To Azure Sql Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/howto-sql-server-to-azure-sql-powershell.md
- Title: "PowerShell: Migrate SQL Server to SQL Database"-
-description: Learn to migrate a database from SQL Server to Azure SQL Database by using Azure PowerShell with the Azure Database Migration Service.
--- Previously updated : 02/20/2020---
- - devx-track-azurepowershell
- - sql-migration-content
--
-# Migrate a SQL Server database to Azure SQL Database using Azure PowerShell
-
-In this article, you migrate the **AdventureWorks2016** database restored to an on-premises instance of SQL Server 2016 or above to Azure SQL Database by using Microsoft Azure PowerShell. You can migrate databases from a SQL Server instance to Azure SQL Database by using the `Az.DataMigration` module in Microsoft Azure PowerShell.
-
-In this article, you learn how to:
-> [!div class="checklist"]
->
-> * Create a resource group.
-> * Create an instance of the Azure Database Migration Service.
-> * Create a migration project in an Azure Database Migration Service instance.
-> * Run the migration.
-
-## Prerequisites
-
-To complete these steps, you need:
-
-* [SQL Server 2016 or above](https://www.microsoft.com/sql-server/sql-server-downloads) (any edition)
-* To enable the TCP/IP protocol, which is disabled by default in SQL Server Express installations. Enable it by following the instructions in the article [Enable or Disable a Server Network Protocol](/sql/database-engine/configure-windows/enable-or-disable-a-server-network-protocol#SSMSProcedure).
-* To configure your [Windows Firewall for database engine access](/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access).
-* An Azure SQL Database instance. You can create an Azure SQL Database instance by following the detail in the article [Create a database in Azure SQL Database in the Azure portal](/azure/azure-sql/database/single-database-create-quickstart).
-* [Data Migration Assistant](https://www.microsoft.com/download/details.aspx?id=53595) v3.3 or later.
-* To have created a Microsoft Azure Virtual Network by using the Azure Resource Manager deployment model, which provides the Azure Database Migration Service with site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md).
-* To have completed assessment of your on-premises database and schema migration using Data Migration Assistant as described in the article [Performing a SQL Server migration assessment](/sql/dma/dma-assesssqlonprem).
-* To download and install the Az.DataMigration module from the PowerShell Gallery by using the [Install-Module PowerShell cmdlet](/powershell/module/powershellget/Install-Module), as shown in the example after this list; be sure to open the PowerShell command window by using Run as Administrator.
-* To ensure that the credentials used to connect to the source SQL Server instance have the [CONTROL SERVER](/sql/t-sql/statements/grant-server-permissions-transact-sql) permission.
-* To ensure that the credentials used to connect to the target Azure SQL Database instance have the CONTROL DATABASE permission on the target Azure SQL Database databases.
-* An Azure subscription. If you don't have one, create a [free](https://azure.microsoft.com/free/) account before you begin.
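-
-For example, you can install the module from the PowerShell Gallery as follows. This is a minimal sketch; run it from an elevated PowerShell session:
-
-```powershell
-# Install the Az.DataMigration module (installs for all users by default, which requires elevation)
-Install-Module -Name Az.DataMigration
-
-# Verify that the module is available
-Get-Module -ListAvailable -Name Az.DataMigration
-```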
-
-## Log in to your Microsoft Azure subscription
-
-Use the directions in the article [Log in with Azure PowerShell](/powershell/azure/authenticate-azureps) to sign in to your Azure subscription by using PowerShell.
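-
-For example, the following commands sign you in interactively and select the subscription to use. This is a minimal sketch; the subscription value is a placeholder:
-
-```powershell
-# Sign in to Azure interactively
-Connect-AzAccount
-
-# Select the subscription that hosts your migration resources
-Set-AzContext -Subscription "<subscription name or ID>"
-```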
-
-## Create a resource group
-
-An Azure resource group is a logical container into which Azure resources are deployed and managed. Create a resource group before you can create a virtual machine.
-
-Create a resource group by using the [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) command.
-
-The following example creates a resource group named *myResourceGroup* in the *EastUS* region.
-
-```powershell
-New-AzResourceGroup -ResourceGroupName myResourceGroup -Location EastUS
-```
-
-## Create an instance of Azure Database Migration Service
-
-You can create a new instance of Azure Database Migration Service by using the `New-AzDataMigrationService` cmdlet.
-This cmdlet expects the following required parameters:
-
-* *Azure Resource Group name*. You can use [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) command to create Azure Resource group as previously shown and provide its name as a parameter.
-* *Service name*. A string that corresponds to the desired unique service name for Azure Database Migration Service.
-* *Location*. Specifies the location of the service. Specify an Azure data center location, such as West US or Southeast Asia.
-* *Sku*. This parameter corresponds to DMS Sku name. The currently supported Sku name is *GeneralPurpose_4vCores*.
-* *Virtual Subnet Identifier*. You can use cmdlet [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/new-azvirtualnetworksubnetconfig) to create a subnet.
-
-The following example creates a service named *MyDMS* in the resource group *MyDMSResourceGroup* located in the *East US* region using a virtual network named *MyVNET* and subnet called *MySubnet*.
-
-```powershell
-$vNet = Get-AzVirtualNetwork -ResourceGroupName MyDMSResourceGroup -Name MyVNET
-
-$vSubNet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vNet -Name MySubnet
-
-$service = New-AzDms -ResourceGroupName MyDMSResourceGroup `
- -ServiceName MyDMS `
- -Location EastUS `
- -Sku GeneralPurpose_4vCores `
- -VirtualSubnetId $vSubNet.Id
-```
-
-## Create a migration project
-
-After creating an Azure Database Migration Service instance, create a migration project. An Azure Database Migration Service project requires connection information for both the source and target instances, as well as a list of databases that you want to migrate as part of the project.
-
-### Create a Database Connection Info object for the source and target connections
-
-You can create a Database Connection Info object by using the `New-AzDmsConnInfo` cmdlet. This cmdlet expects the following parameters:
-
-* *ServerType*. The type of database connection requested, for example, SQL, Oracle, or MySQL. Use SQL for SQL Server and Azure SQL.
-* *DataSource*. The name or IP of a SQL Server instance or Azure SQL Database.
-* *AuthType*. The authentication type for connection, which can be either SqlAuthentication or WindowsAuthentication.
-* *TrustServerCertificate*. Sets a value that indicates whether the channel is encrypted while bypassing the certificate chain walk used to validate trust. The value can be true or false.
-
-The following example creates a connection info object for a source SQL Server instance named MySourceSQLServer by using SQL authentication:
-
-```powershell
-$sourceConnInfo = New-AzDmsConnInfo -ServerType SQL `
- -DataSource MySourceSQLServer `
- -AuthType SqlAuthentication `
- -TrustServerCertificate:$true
-```
-
-> [!NOTE]
-> If the migration ends with an error when providing source DataSource as public IP address or the DNS of SQL Server, then use the name of the Azure VM running the SQL Server.
-
-The next example creates a connection info object for a target server named SQLAzureTarget by using SQL authentication:
-
-```powershell
-$targetConnInfo = New-AzDmsConnInfo -ServerType SQL `
- -DataSource "sqlazuretarget.database.windows.net" `
- -AuthType SqlAuthentication `
- -TrustServerCertificate:$false
-```
-
-### Provide databases for the migration project
-
-Create a list of `AzDataMigrationDatabaseInfo` objects that specifies the databases to migrate as part of the Azure Database Migration project; the list is provided as a parameter when you create the project. Use the `New-AzDataMigrationDatabaseInfo` cmdlet to create each `AzDataMigrationDatabaseInfo` object.
-
-The following example creates an `AzDataMigrationDatabaseInfo` object for the **AdventureWorks2016** database and adds it to the list that's provided as a parameter for project creation.
-
-```powershell
-$dbInfo1 = New-AzDataMigrationDatabaseInfo -SourceDatabaseName AdventureWorks2016
-$dbList = @($dbInfo1)
-```
-
-### Create a project object
-
-Finally, create an Azure Database Migration Service project named *MyDMSProject* in the *East US* region by using `New-AzDataMigrationProject`, passing the previously created source and target connections and the list of databases to migrate.
-
-```powershell
-$project = New-AzDataMigrationProject -ResourceGroupName myResourceGroup `
- -ServiceName $service.Name `
- -ProjectName MyDMSProject `
- -Location EastUS `
- -SourceType SQL `
- -TargetType SQLDB `
- -SourceConnection $sourceConnInfo `
- -TargetConnection $targetConnInfo `
- -DatabaseInfo $dbList
-```
-
-## Create and start a migration task
-
-Finally, create and start the Azure Database Migration Service task. In addition to the information already provided with the project created as a prerequisite, the task requires connection credentials for both the source and target, as well as the list of database tables to be migrated.
-
-### Create credential parameters for source and target
-
-Connection security credentials can be created as a [PSCredential](/dotnet/api/system.management.automation.pscredential) object.
-
-The following example shows the creation of *PSCredential* objects for both the source and target connections, providing the passwords as the string variables *$sourcePassword* and *$targetPassword*.
-
-```powershell
-$secpasswd = ConvertTo-SecureString -String $sourcePassword -AsPlainText -Force
-$sourceCred = New-Object System.Management.Automation.PSCredential ($sourceUserName, $secpasswd)
-$secpasswd = ConvertTo-SecureString -String $targetPassword -AsPlainText -Force
-$targetCred = New-Object System.Management.Automation.PSCredential ($targetUserName, $secpasswd)
-```
-
-### Create a table map and select source and target parameters for migration
-
-Another parameter needed for migration is the mapping of source tables to the target tables to be migrated. Create a dictionary that provides this mapping. The following example illustrates the mapping for tables in the HumanResources schema of the AdventureWorks2016 database.
-
-```powershell
-$tableMap = New-Object 'system.collections.generic.dictionary[string,string]'
-$tableMap.Add("HumanResources.Department", "HumanResources.Department")
-$tableMap.Add("HumanResources.Employee","HumanResources.Employee")
-$tableMap.Add("HumanResources.EmployeeDepartmentHistory","HumanResources.EmployeeDepartmentHistory")
-$tableMap.Add("HumanResources.EmployeePayHistory","HumanResources.EmployeePayHistory")
-$tableMap.Add("HumanResources.JobCandidate","HumanResources.JobCandidate")
-$tableMap.Add("HumanResources.Shift","HumanResources.Shift")
-```
-
-The next step is to select the source and target databases and provide the table mapping as a parameter by using the `New-AzDmsSelectedDB` cmdlet, as shown in the following example:
-
-```powershell
-$selectedDbs = New-AzDmsSelectedDB -MigrateSqlServerSqlDb -Name AdventureWorks2016 `
- -TargetDatabaseName AdventureWorks2016 `
- -TableMap $tableMap
-```
-
-### Create the migration task and start it
-
-Use the `New-AzDataMigrationTask` cmdlet to create and start a migration task. This cmdlet expects the following parameters:
-
-* *TaskType*. The type of migration task to create. For SQL Server to Azure SQL Database migrations, the type *MigrateSqlServerSqlDb* is expected.
-* *Resource Group Name*. Name of Azure resource group in which to create the task.
-* *ServiceName*. Azure Database Migration Service instance in which to create the task.
-* *ProjectName*. Name of Azure Database Migration Service project in which to create the task.
-* *TaskName*. Name of task to be created.
-* *SourceConnection*. AzDmsConnInfo object representing source SQL Server connection.
-* *TargetConnection*. AzDmsConnInfo object representing target Azure SQL Database connection.
-* *SourceCred*. [PSCredential](/dotnet/api/system.management.automation.pscredential) object for connecting to source server.
-* *TargetCred*. [PSCredential](/dotnet/api/system.management.automation.pscredential) object for connecting to target server.
-* *SelectedDatabase*. AzDataMigrationSelectedDB object representing the source and target database mapping.
-* *SchemaValidation*. (optional, switch parameter) Following the migration, performs a comparison of the schema information between source and target.
-* *DataIntegrityValidation*. (optional, switch parameter) Following the migration, performs a checksum-based data integrity validation between source and target.
-* *QueryAnalysisValidation*. (optional, switch parameter) Following the migration, performs a quick and intelligent query analysis by retrieving queries from the source database and executing them in the target.
-
-The following example creates and starts a migration task named myDMSTask:
-
-```powershell
-$migTask = New-AzDataMigrationTask -TaskType MigrateSqlServerSqlDb `
- -ResourceGroupName myResourceGroup `
- -ServiceName $service.Name `
- -ProjectName $project.Name `
- -TaskName myDMSTask `
- -SourceConnection $sourceConnInfo `
- -SourceCred $sourceCred `
- -TargetConnection $targetConnInfo `
- -TargetCred $targetCred `
- -SelectedDatabase $selectedDbs
-```
-
-The following example creates and starts the same migration task as above but also performs all three validations:
-
-```powershell
-$migTask = New-AzDataMigrationTask -TaskType MigrateSqlServerSqlDb `
- -ResourceGroupName myResourceGroup `
- -ServiceName $service.Name `
- -ProjectName $project.Name `
- -TaskName myDMSTask `
- -SourceConnection $sourceConnInfo `
- -SourceCred $sourceCred `
- -TargetConnection $targetConnInfo `
- -TargetCred $targetCred `
- -SelectedDatabase $selectedDbs `
- -SchemaValidation `
- -DataIntegrityValidation `
- -QueryAnalysisValidation
-```
-
-## Monitor the migration
-
-You can monitor the running migration task by querying its state property, as shown in the following example:
-
-```powershell
-if (($migTask.ProjectTask.Properties.State -eq "Running") -or ($migTask.ProjectTask.Properties.State -eq "Queued"))
-{
-  Write-Host "migration task running"
-}
-```
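-
-The state shown above is the value captured when the task object was created. To refresh it while the migration runs, you can re-query the task with `Get-AzDataMigrationTask`. The following is a minimal polling sketch, assuming the resource group, service, project, and task names used earlier in this article:
-
-```powershell
-# Poll the task state until it leaves the Queued/Running states
-do {
-    Start-Sleep -Seconds 30
-    $migTask = Get-AzDataMigrationTask -ResourceGroupName myResourceGroup `
-        -ServiceName $service.Name `
-        -ProjectName $project.Name `
-        -Name myDMSTask `
-        -Expand
-    $state = $migTask.ProjectTask.Properties.State
-    Write-Host "Current task state: $state"
-} while ($state -eq "Queued" -or $state -eq "Running")
-```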
-
-## Deleting the DMS instance
-
-After the migration is complete, you can delete the Azure DMS instance:
-
-```powershell
-Remove-AzDms -ResourceGroupName myResourceGroup -ServiceName MyDMS
-```
-
-## Next step
-
-* Review the migration guidance in the Microsoft [Database Migration Guide](/data-migration/).
dms Pre Reqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/pre-reqs.md
- Title: Prerequisites for Azure Database Migration Service
-description: Learn about an overview of the prerequisites for using the Azure Database Migration Service to perform database migrations.
--- Previously updated : 02/25/2020---
- - sql-migration-content
--
-# Overview of prerequisites for using the Azure Database Migration Service
-
-There are several prerequisites required to ensure Azure Database Migration Service runs smoothly when performing database migrations. Some of the prerequisites apply across all scenarios (source-target pairs) supported by the service, while other prerequisites are unique to a specific scenario.
-
-Prerequisites associated with using the Azure Database Migration Service are listed in the following sections.
-
-## Prerequisites common across migration scenarios
-
-Azure Database Migration Service prerequisites that are common across all supported migration scenarios include the need to:
-
-* Create a Microsoft Azure Virtual Network for Azure Database Migration Service by using the Azure Resource Manager deployment model, which provides site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md).
-* Ensure that your virtual network Network Security Group (NSG) rules don't block outbound traffic on port 443 to the ServiceBus, Storage, and AzureMonitor service tags. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
-* When using a firewall appliance in front of your source database(s), you may need to add firewall rules to allow Azure Database Migration Service to access the source database(s) for migration.
-* Configure your [Windows Firewall for database engine access](/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access).
-* Enable the TCP/IP protocol, which is disabled by default during SQL Server Express installation, by following the instructions in the article [Enable or Disable a Server Network Protocol](/sql/database-engine/configure-windows/enable-or-disable-a-server-network-protocol#SSMSProcedure).
-
- > [!IMPORTANT]
- > Creating an instance of Azure Database Migration Service requires access to virtual network settings that are normally not within the same resource group. As a result, the user creating an instance of DMS requires permission at subscription level. To create the required roles, which you can assign as needed, run the following script:
- >
- > ```powershell
- >
- > $readerActions = `
- > "Microsoft.Network/networkInterfaces/ipConfigurations/read", `
- > "Microsoft.DataMigration/*/read", `
- > "Microsoft.Resources/subscriptions/resourceGroups/read"
- >
- > $writerActions = `
- > "Microsoft.DataMigration/*/write", `
- > "Microsoft.DataMigration/*/delete", `
- > "Microsoft.DataMigration/*/action", `
- > "Microsoft.Network/virtualNetworks/subnets/join/action", `
- > "Microsoft.Network/virtualNetworks/write", `
- > "Microsoft.Network/virtualNetworks/read", `
- > "Microsoft.Resources/deployments/validate/action", `
- > "Microsoft.Resources/deployments/*/read", `
- > "Microsoft.Resources/deployments/*/write"
- >
- > $writerActions += $readerActions
- >
- > # TODO: replace with actual subscription IDs
- > $subScopes = ,"/subscriptions/00000000-0000-0000-0000-000000000000/","/subscriptions/11111111-1111-1111-1111-111111111111/"
- >
- > function New-DmsReaderRole() {
- > $aRole = [Microsoft.Azure.Commands.Resources.Models.Authorization.PSRoleDefinition]::new()
- > $aRole.Name = "Azure Database Migration Reader"
- > $aRole.Description = "Lets you perform read only actions on DMS service/project/tasks."
- > $aRole.IsCustom = $true
- > $aRole.Actions = $readerActions
- > $aRole.NotActions = @()
- >
- > $aRole.AssignableScopes = $subScopes
- > #Create the role
- > New-AzRoleDefinition -Role $aRole
- > }
- >
- > function New-DmsContributorRole() {
- > $aRole = [Microsoft.Azure.Commands.Resources.Models.Authorization.PSRoleDefinition]::new()
- > $aRole.Name = "Azure Database Migration Contributor"
- > $aRole.Description = "Lets you perform CRUD actions on DMS service/project/tasks."
- > $aRole.IsCustom = $true
- > $aRole.Actions = $writerActions
- > $aRole.NotActions = @()
- >
- > $aRole.AssignableScopes = $subScopes
- > #Create the role
- > New-AzRoleDefinition -Role $aRole
- > }
- >
- > function Update-DmsReaderRole() {
- > $aRole = Get-AzRoleDefinition "Azure Database Migration Reader"
- > $aRole.Actions = $readerActions
- > $aRole.NotActions = @()
- > Set-AzRoleDefinition -Role $aRole
- > }
- >
- > function Update-DmsContributorRole() {
- > $aRole = Get-AzRoleDefinition "Azure Database Migration Contributor"
- > $aRole.Actions = $writerActions
- > $aRole.NotActions = @()
- > Set-AzRoleDefinition -Role $aRole
- > }
- >
- > # Invoke above functions
- > New-DmsReaderRole
- > New-DmsContributorRole
- > Update-DmsReaderRole
- > Update-DmsContributorRole
- > ```
-
-## Prerequisites for migrating SQL Server to Azure SQL Database
-
-In addition to Azure Database Migration Service prerequisites that are common to all migration scenarios, there are also prerequisites that apply specifically to one scenario or another.
-
-When using the Azure Database Migration Service to perform SQL Server to Azure SQL Database migrations, in addition to the prerequisites that are common to all migration scenarios, be sure to address the following additional prerequisites:
-
-* Create an instance of Azure SQL Database, which you do by following the details in the article [Create a database in Azure SQL Database in the Azure portal](/azure/azure-sql/database/single-database-create-quickstart).
-* Download and install the [Data Migration Assistant](https://www.microsoft.com/download/details.aspx?id=53595) v3.3 or later.
-* Open your Windows Firewall to allow the Azure Database Migration Service to access the source SQL Server, which by default is TCP port 1433.
-* If you are running multiple named SQL Server instances using dynamic ports, you may wish to enable the SQL Browser Service and allow access to UDP port 1434 through your firewalls so that the Azure Database Migration Service can connect to a named instance on your source server.
-* Create a server-level [firewall rule](/azure/azure-sql/database/firewall-configure) for SQL Database to allow the Azure Database Migration Service access to the target databases. Provide the subnet range of the virtual network used for the Azure Database Migration Service (see the example after the note below).
-* Ensure that the credentials used to connect to source SQL Server instance have [CONTROL SERVER](/sql/t-sql/statements/grant-server-permissions-transact-sql) permissions.
-* Ensure that the credentials used to connect to target database have CONTROL DATABASE permission on the target database.
-
- > [!NOTE]
- > For a complete listing of the prerequisites required to use the Azure Database Migration Service to perform migrations from SQL Server to Azure SQL Database, see the tutorial [Migrate SQL Server to Azure SQL Database](./tutorial-sql-server-to-azure-sql.md).
- >
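-
-For example, the server-level firewall rule can also be created with Azure PowerShell. The following is a minimal sketch; the resource group, server, rule name, and IP range are placeholders that you replace with the subnet range of the virtual network used by the Azure Database Migration Service:
-
-```powershell
-# Allow the DMS virtual network subnet range to reach the logical SQL server
-New-AzSqlServerFirewallRule -ResourceGroupName "myResourceGroup" `
-    -ServerName "mysqlserver" `
-    -FirewallRuleName "AllowDmsSubnet" `
-    -StartIpAddress "10.0.0.0" `
-    -EndIpAddress "10.0.0.255"
-```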
-
-## Prerequisites for migrating SQL Server to Azure SQL Managed Instance
-
-* Create a SQL Managed Instance by following the details in the article [Create an Azure SQL Managed Instance in the Azure portal](/azure/azure-sql/managed-instance/instance-create-quickstart).
-* Open your firewalls to allow SMB traffic on port 445 for the Azure Database Migration Service IP address or subnet range.
-* Open your Windows Firewall to allow the Azure Database Migration Service to access the source SQL Server, which by default is TCP port 1433.
-* If you are running multiple named SQL Server instances using dynamic ports, you may wish to enable the SQL Browser Service and allow access to UDP port 1434 through your firewalls so that the Azure Database Migration Service can connect to a named instance on your source server.
-* Ensure that the logins used to connect the source SQL Server and target Managed Instance are members of the sysadmin server role.
-* Create a network share that the Azure Database Migration Service can use to back up the source database.
-* Ensure that the service account running the source SQL Server instance has write privileges on the network share that you created and that the computer account for the source server has read/write access to the same share.
-* Make a note of a Windows user (and password) that has full control privilege on the network share that you previously created. The Azure Database Migration Service impersonates the user credential to upload the backup files to Azure Storage container for restore operation.
-* Create a blob container and retrieve its SAS URI by using the steps in the article [Manage Azure Blob Storage resources with Storage Explorer](../vs-azure-tools-storage-explorer-blobs.md#get-the-sas-for-a-blob-container), or see the PowerShell sketch after the note below. Be sure to select all permissions (Read, Write, Delete, List) on the policy window while creating the SAS URI.
-* Ensure both Azure Database Migration Service IP address and Azure SQL Managed Instance subnet can communicate with blob container.
-
- > [!NOTE]
- > For a complete listing of the prerequisites required to use the Azure Database Migration Service to perform migrations from SQL Server to SQL Managed Instance, see the tutorial [Migrate SQL Server to SQL Managed Instance](./tutorial-sql-server-to-managed-instance.md).
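-
-If you prefer to create the container and its SAS URI with Azure PowerShell instead of Storage Explorer, the following is a minimal sketch; the storage account name, account key, and container name are placeholders:
-
-```powershell
-# Build a storage context for the account that will hold the backup files
-$ctx = New-AzStorageContext -StorageAccountName "mystorageaccount" -StorageAccountKey "<account key>"
-
-# Create the blob container
-New-AzStorageContainer -Name "migration-backups" -Context $ctx
-
-# Generate a SAS URI with Read, Write, Delete, and List permissions
-New-AzStorageContainerSASToken -Name "migration-backups" `
-    -Context $ctx `
-    -Permission rwdl `
-    -ExpiryTime (Get-Date).AddDays(7) `
-    -FullUri
-```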
-
-## Next steps
-
-For an overview of the Azure Database Migration Service and regional availability, see the article [What is the Azure Database Migration Service](dms-overview.md).
dms Quickstart Create Data Migration Service Hybrid Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/quickstart-create-data-migration-service-hybrid-portal.md
- Title: "Quickstart: Create a hybrid mode instance with Azure portal"-
-description: Use the Azure portal to create an instance of Azure Database Migration Service in hybrid mode.
--- Previously updated : 03/13/2020---
- - mode-ui
- - subject-rbac-steps
- - sql-migration-content
--
-# Quickstart: Create a hybrid mode instance with Azure portal & Azure Database Migration Service
-
-Azure Database Migration Service hybrid mode manages database migrations by using a migration worker that's hosted on-premises together with an instance of Azure Database Migration Service running in the cloud. Hybrid mode is especially useful for scenarios in which there's a lack of site-to-site connectivity between the on-premises network and Azure or if there's limited site-to-site connectivity bandwidth.
-
->[!NOTE]
->Currently, Azure Database Migration Service running in hybrid mode supports SQL Server migrations to:
->
->- Azure SQL Managed Instance with near zero downtime (online).
->- Azure SQL Database single database with some downtime (offline).
->- MongoDB to Azure Cosmos DB with near zero downtime (online).
->- MongoDB to Azure Cosmos DB with some downtime (offline).
-
-In this Quickstart, you use the Azure portal to create an instance of Azure Database Migration Service in hybrid mode. Afterwards, you download, install, and set up the hybrid worker in your on-premises network. During preview, you can use Azure Database Migration Service hybrid mode to migrate data from an on-premises instance of SQL Server to Azure SQL Database.
-
-> [!NOTE]
-> The Azure Database Migration Service hybrid installer runs on Microsoft Windows Server 2012 R2, Windows Server 2016, Windows Server 2019, and Windows 10.
-
-> [!IMPORTANT]
-> The Azure Database Migration Service hybrid installer requires .NET 4.7.2 or later. To find the latest versions of .NET, see the [Download .NET Framework](https://dotnet.microsoft.com/download/dotnet-framework) page.
-
-If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
-
-## Sign in to the Azure portal
-
-From a browser, sign in to the [Azure portal](https://portal.azure.com).
-
-The default view is your service dashboard.
-
-## Register the resource provider
-
-Register the Microsoft.DataMigration resource provider before you create your first instance of Azure Database Migration Service.
-
-1. In the Azure portal, select **Subscriptions**, select the subscription in which you want to create the instance of Azure Database Migration Service, and then select **Resource providers**.
-
- ![Search resource provider](media/quickstart-create-data-migration-service-hybrid-portal/dms-portal-search-resource-provider.png)
-
-2. Search for migration, and then to the right of **Microsoft.DataMigration**, select **Register**.
-
- ![Register resource provider](media/quickstart-create-data-migration-service-hybrid-portal/dms-portal-register-resource-provider.png)
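-
-Alternatively, you can register the resource provider with Azure PowerShell. A minimal sketch:
-
-```powershell
-# Register the Microsoft.DataMigration resource provider for the selected subscription
-Register-AzResourceProvider -ProviderNamespace "Microsoft.DataMigration"
-```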
-
-## Create an instance of the service
-
-1. Select +**Create a resource** to create an instance of Azure Database Migration Service.
-
-2. Search the Marketplace for "migration", select **Azure Database Migration Service**, and then on the **Azure Database Migration Service** screen, select **Create**.
-
-3. On the **Create Migration Service** screen:
-
- - Choose a **Service Name** that is memorable and unique to identify your instance of Azure Database Migration Service.
- - Select the Azure **Subscription** in which you want to create the instance.
- - Select an existing **Resource Group** or create a new one.
- - Choose the **Location** that is closest to your source or target server.
- - For **Service mode**, select **Hybrid (Preview)**.
-
- ![Create migration service - basics](media/quickstart-create-data-migration-service-hybrid-portal/dms-create-service-basics.png)
-
-4. Select **Review + create**.
-
-5. On the **Review + create** tab, review the Terms, verify the other information provided, and then select **Create**.
-
- ![Create migration service - Review + create](media/quickstart-create-data-migration-service-hybrid-portal/dms-create-service-review-and-create.png)
-
- After a few moments, your instance of Azure Database Migration Service in hybrid mode is created and ready to set up. The Azure Database Migration Service instance displays as shown in the following image:
-
- ![Azure Database Migration Service hybrid mode instance](media/quickstart-create-data-migration-service-hybrid-portal/dms-instance-hybrid-mode.png)
-
-6. After the service is created, select **Properties**, and then copy the value displayed in the **Resource Id** box, which you'll use to install the Azure Database Migration Service hybrid worker.
-
- ![Azure Database Migration Service hybrid mode properties](media/quickstart-create-data-migration-service-hybrid-portal/dms-copy-resource-id.png)
-
-## Create Azure App registration ID
-
-You need to create an Azure App registration ID that the on-premises hybrid worker can use to communicate with Azure Database Migration Service in the cloud.
-
-1. In the Azure portal, select **Microsoft Entra ID**, select **App registrations**, and then select **New registration**.
-2. Specify a name for the application, and then, under **Supported account types**, select the type of accounts to support to specify who can use the application.
-
- ![Azure Database Migration Service hybrid mode register application](media/quickstart-create-data-migration-service-hybrid-portal/dms-register-application.png)
-
-3. Use the default values for the **Redirect URI (optional)** fields, and then select **Register**.
-
-4. After App ID registration is completed, make a note of the **Application (client) ID**, which you'll use while installing the hybrid worker.
-
-5. In the Azure portal, navigate to Azure Database Migration Service.
-
-6. In the navigation menu, select **Access control (IAM)**.
-
-7. Select **Add** > **Add role assignment**.
-
- :::image type="content" source="~/reusable-content/ce-skilling/azure/media/role-based-access-control/add-role-assignment-menu-generic.png" alt-text="Screenshot showing Access control (IAM) page with Add role assignment menu open.":::
-
-8. On the **Role** tab, select the **Contributor** role.
-
- :::image type="content" source="~/reusable-content/ce-skilling/azure/media/role-based-access-control/add-role-assignment-role-generic.png" alt-text="Screenshot showing Add role assignment page with Role tab selected.":::
-
-9. On the **Members** tab, select **User, group, or service principal**, and then select the App ID name.
-
-10. On the **Review + assign** tab, select **Review + assign** to assign the role.
-
- For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.yml).
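-
-If you prefer to assign the role with Azure PowerShell instead of the portal, the following is a minimal sketch, assuming the application (client) ID you noted earlier and the Resource Id you copied from the service's **Properties** page:
-
-```powershell
-# Look up the service principal that backs the app registration
-$sp = Get-AzADServicePrincipal -ApplicationId "<application (client) ID>"
-
-# Grant Contributor on the Azure Database Migration Service instance
-New-AzRoleAssignment -ObjectId $sp.Id `
-    -RoleDefinitionName "Contributor" `
-    -Scope "<DMS resource ID>"
-```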
-
-## Download and install the hybrid worker
-
-1. In the Azure portal, navigate to your instance of Azure Database Migration Service.
-
-2. Under **Settings**, select **Hybrid**, and then select **Installer download** to download the hybrid worker.
-
- ![Azure Database Migration Service hybrid worker download](media/quickstart-create-data-migration-service-hybrid-portal/dms-installer-download.png)
-
-3. Extract the ZIP file on the server that will be hosting the Azure Database Migration Service hybrid worker.
-
- > [!IMPORTANT]
- > The Azure Database Migration Service hybrid installer requires .NET 4.7.2 or later. To find the latest versions of .NET, see the [Download .NET Framework](https://dotnet.microsoft.com/download/dotnet-framework) page.
-
-4. In the install folder, locate and open the **dmsSettings.json** file, specify the **ApplicationId** and **resourceId**, and then save the file.
-
- ![Azure Database Migration Service hybrid worker settings](media/quickstart-create-data-migration-service-hybrid-portal/dms-settings.png)
-
-5. Generate a certificate that Azure Database Migration Service can use to authenticate the communication from the hybrid worker by using the following command.
-
- ```
- <drive>:\<folder>\Install>DMSWorkerBootstrap.exe -a GenerateCert
- ```
-
- A certificate is generated in the Install folder.
-
- ![Azure Database Migration Service hybrid worker certificate](media/quickstart-create-data-migration-service-hybrid-portal/dms-certificate.png)
-
-6. In the Azure portal, navigate to the App ID, under **Manage**, select **Certificates & secrets**, and then select **Upload certificate** to select the public certificate you generated.
-
- ![Azure Database Migration Service hybrid worker certificate upload](media/quickstart-create-data-migration-service-hybrid-portal/dms-app-upload-certificate.png)
-
-7. Install the Azure Database Migration Service hybrid worker on your on-premises server by running the following command:
-
- ```
- <drive>:\<folder>\Install>DMSWorkerBootstrap.exe -a Install -IAcceptDMSLicenseTerms -d
- ```
-
- > [!NOTE]
- > When running the install command, you can also use the following parameters:
- >
- > - **-TelemetryOptOut** - Stops the worker from sending telemetry, although it still writes minimal local logs. The installer still sends telemetry.
- > - **-p {InstallLocation}**. Enables changing the installation path, which by default is "C:\Program Files\DatabaseMigrationServiceHybrid".
-
-8. If the installer runs without error, then the service will show an online status in Azure Database Migration Service and you're ready to migrate your databases.
-
- ![Azure Database Migration Service online](media/quickstart-create-data-migration-service-hybrid-portal/dms-instance-hybrid-mode-online.png)
-
-## Uninstall Azure Database Migration Service hybrid mode
-
-Currently, uninstalling Azure Database Migration Service hybrid mode is supported only via the Azure Database Migration Service hybrid worker installer on your on-premises server, by using the following command:
-
-```
-<drive>:\<folder>\Install>DMSWorkerBootstrap.exe -a uninstall
-```
-
-> [!NOTE]
-> When running the uninstall command, you can also use the "-ReuseCert" parameter, which keeps the AdApp cert generated by the generateCert workflow. This enables using the same cert that was previously generated and uploaded.
-
-## Set up the Azure Database Migration Service hybrid worker using PowerShell
-
-In addition to installing the Azure Database Migration Service hybrid worker via the Azure portal, we provide a [PowerShell script](https://techcommunity.microsoft.com/gxcuf89792/attachments/gxcuf89792/MicrosoftDataMigration/119/1/DMS_Hybrid_Script.zip) that you can use to automate the worker installation steps after you create a new instance of Azure Database Migration Service in hybrid mode. The script:
-
-1. Creates a new AdApp.
-2. Downloads the installer.
-3. Runs the generateCert workflow.
-4. Uploads the certificate.
-5. Adds the AdApp as contributor to your Azure Database Migration Service instance.
-6. Runs the install workflow.
-
-This script is intended for quick prototyping when the user already has all the necessary permissions in the environment. Note that in your production environment, the AdApp and Cert may have different requirements, so the script could fail.
-
-> [!IMPORTANT]
-> This script assumes that there is an existing instance of Azure Database Migration Service in hybrid mode and that the Azure account used has permissions to create AdApps in the tenant and to modify Azure RBAC on the subscription.
-
-Fill in the parameters at the top of the script, and then run the script from an Administrator PowerShell instance.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Migrate SQL Server to an Azure SQL Managed Instance online](tutorial-sql-server-managed-instance-online.md)
-> [Migrate SQL Server to Azure SQL Database offline](tutorial-sql-server-to-azure-sql.md)
dms Quickstart Create Data Migration Service Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/quickstart-create-data-migration-service-portal.md
- Title: "Quickstart: Create an instance using the Azure portal"-
-description: Use the Azure portal to create an instance of Azure Database Migration Service.
--- Previously updated : 01/29/2021---
- - mode-ui
- - sql-migration-content
--
-# Quickstart: Create an instance of the Azure Database Migration Service by using the Azure portal
-
-In this quickstart, you use the Azure portal to create an instance of Azure Database Migration Service. After you create the instance, you can use it to migrate data from multiple database sources to Azure data platforms, such as from SQL Server to Azure SQL Database or from SQL Server to an Azure SQL Managed Instance.
-
-If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
-
-## Sign in to the Azure portal
-
-From a web browser, sign in to the [Azure portal](https://portal.azure.com). The default view is your service dashboard.
-
-> [!NOTE]
-> You can create up to 10 instances of DMS per subscription per region. If you require a greater number of instances, please create a support ticket.
-
-<!-- Register the resource provider -->
-
-<!-- Create an instance of the service -->
-
-## Clean up resources
-
-You can clean up the resources created in this quickstart by deleting the [Azure resource group](../azure-resource-manager/management/overview.md). To delete the resource group, navigate to the instance of the Azure Database Migration Service that you created. Select the **Resource group** name, and then select **Delete resource group**. This action deletes all assets in the resource group as well as the group itself.
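-
-If you prefer to clean up with Azure PowerShell, a minimal sketch (the resource group name is a placeholder):
-
-```powershell
-# Deletes the resource group and every resource it contains
-Remove-AzResourceGroup -Name "myResourceGroup"
-```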
-
-## Next steps
-
-* [Migrate SQL Server to Azure SQL Database](tutorial-sql-server-to-azure-sql.md)
-* [Migrate SQL Server to an Azure SQL Managed Instance offline](tutorial-sql-server-to-managed-instance.md)
-* [Migrate SQL Server to an Azure SQL Managed Instance online](tutorial-sql-server-managed-instance-online.md)
dms Resource Custom Roles Sql Db Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-custom-roles-sql-db-managed-instance.md
- Title: "Custom roles: Online SQL Server to SQL Managed Instance migrations"-
-description: Learn to use the custom roles for SQL Server to Azure SQL Managed Instance online migrations.
--- Previously updated : 02/08/2021---
- - sql-migration-content
--
-# Custom roles for SQL Server to Azure SQL Managed Instance online migrations
-
-Azure Database Migration Service uses an APP ID to interact with Azure Services. The APP ID requires either the Contributor role at the Subscription level (which many Corporate security departments won't allow) or creation of custom roles that grant the specific permissions that Azure Database Migration Service requires. Since there's a limit of 2,000 custom roles in Microsoft Entra ID, you may want to combine all permissions required specifically by the APP ID into one or two custom roles, and then grant the APP ID the custom role on specific objects or resource groups (vs. at the subscription level). If the number of custom roles isn't a concern, you can split the custom roles by resource type, to create three custom roles in total as described below.
-
-The AssignableScopes section of the role definition json string allows you to control where the permissions appear in the **Add Role Assignment** UI in the portal. You'll likely want to define the role at the resource group or even resource level to avoid cluttering the UI with extra roles. Note that this doesn't perform the actual role assignment.
-
-## Minimum number of roles
-
-We currently recommend creating a minimum of two custom roles for the APP ID, one at the resource level and the other at the subscription level.
-
-> [!NOTE]
-> The last custom role requirement may eventually be removed, as new SQL Managed Instance code is deployed to Azure.
-
-**Custom Role for the APP ID**. This role is required for Azure Database Migration Service migration at the *resource* or *resource group* level that hosts the Azure Database Migration Service (for more information about the APP ID, see the article [Use the portal to create a Microsoft Entra application and service principal that can access resources](/entra/identity-platform/howto-create-service-principal-portal)).
-
-```json
-{
- "Name": "DMS Role - App ID",
- "IsCustom": true,
- "Description": "DMS App ID access to complete MI migrations",
- "Actions": [
- "Microsoft.Storage/storageAccounts/read",
- "Microsoft.Storage/storageAccounts/listKeys/action",
- "Microsoft.Storage/storageaccounts/blobservices/read",
- "Microsoft.Storage/storageaccounts/blobservices/write",
- "Microsoft.Sql/managedInstances/read",
- "Microsoft.Sql/managedInstances/write",
- "Microsoft.Sql/managedInstances/databases/read",
- "Microsoft.Sql/managedInstances/databases/write",
- "Microsoft.Sql/managedInstances/databases/delete",
- "Microsoft.Sql/managedInstances/metrics/read",
- "Microsoft.DataMigration/locations/*",
- "Microsoft.DataMigration/services/*"
- ],
- "NotActions": [
- ],
- "AssignableScopes": [
- "/subscriptions/<subscription_id>/ResourceGroups/<StorageAccount_rg_name>",
- "/subscriptions/<subscription_id>/ResourceGroups/<ManagedInstance_rg_name>",
- "/subscriptions/<subscription_id>/ResourceGroups/<DMS_rg_name>",
- ]
-}
-```
-
-**Custom role for the APP ID - subscription**. This role is required for Azure Database Migration Service migration at *subscription* level that hosts the SQL Managed Instance.
-
-```json
-{
- "Name": "DMS Role - App ID - Sub",
- "IsCustom": true,
- "Description": "DMS App ID access at subscription level to complete MI migrations",
- "Actions": [
- "Microsoft.Sql/locations/managedDatabaseRestoreAzureAsyncOperation/*"
- ],
- "NotActions": [
- ],
- "AssignableScopes": [
- "/subscriptions/<subscription_id>"
- ]
-}
-```
-
-Store the JSON above in two text files. You can then use either the AzureRM or Az PowerShell cmdlets, or the Azure CLI, to create the roles by using **New-AzureRmRoleDefinition (AzureRM)** or **New-AzRoleDefinition (Az)**.
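-
-For example, with the Az module, the roles can be created from the saved definitions as follows. This is a minimal sketch; the file names are placeholders for wherever you saved the JSON:
-
-```powershell
-# Create the custom roles from the JSON definitions saved to disk
-New-AzRoleDefinition -InputFile ".\dms-role-app-id.json"
-New-AzRoleDefinition -InputFile ".\dms-role-app-id-sub.json"
-```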
-
-For more information, see the article [Azure custom roles](../role-based-access-control/custom-roles.md).
-
-After you create these custom roles, you must add role assignments to users and APP ID(s) to the appropriate resources or resource groups:
-
-* The "DMS Role - App ID" role must be granted to the APP ID that will be used for the migrations, and also at the Storage Account, Azure Database Migration Service instance, and SQL Managed Instance resource levels. It is granted at the resource or resource group level that hosts the Azure Database Migration Service.
-* The "DMS Role - App ID - Sub" role must be granted to the APP ID at the subscription level that hosts the SQL Managed Instance (granting at the resource or resource group will fail). This requirement is temporary until a code update is deployed.
-
-## Expanded number of roles
-
-If the number of custom roles in your Microsoft Entra ID isn't a concern, we recommend you create a total of three roles. You'll still need the "DMS Role - App ID - Sub" role, but the "DMS Role - App ID" role above is split by resource type into two different roles.
-
-**Custom role for the APP ID for SQL Managed Instance**
-
-```json
-{
- "Name": "DMS Role - App ID - SQL MI",
- "IsCustom": true,
- "Description": "DMS App ID access to complete MI migrations",
- "Actions": [
- "Microsoft.Sql/managedInstances/read",
- "Microsoft.Sql/managedInstances/write",
- "Microsoft.Sql/managedInstances/databases/read",
- "Microsoft.Sql/managedInstances/databases/write",
- "Microsoft.Sql/managedInstances/databases/delete",
- "Microsoft.Sql/managedInstances/metrics/read"
- ],
- "NotActions": [
- ],
- "AssignableScopes": [
- "/subscriptions/<subscription_id>/resourceGroups/<ManagedInstance_rg_name>"
- ]
-}
-```
-
-**Custom role for the APP ID for Storage**
-
-```json
-{
- "Name": "DMS Role - App ID - Storage",
- "IsCustom": true,
- "Description": "DMS App ID storage access to complete MI migrations",
- "Actions": [
-"Microsoft.Storage/storageAccounts/read",
- "Microsoft.Storage/storageAccounts/listKeys/action",
- "Microsoft.Storage/storageaccounts/blobservices/read",
- "Microsoft.Storage/storageaccounts/blobservices/write"
- ],
- "NotActions": [
- ],
- "AssignableScopes": [
- "/subscriptions/<subscription_id>/resourceGroups/<StorageAccount_rg_name>"
- ]
-}
-```
-
-## Role assignment
-
-To assign a role to users or the APP ID, open the Azure portal and perform the following steps:
-
-1. Navigate to the resource group or resource (except for the role that needs to be granted on the subscription), go to **Access Control**, and then scroll to find the custom roles you just created.
-
-2. Select the appropriate role, select the APP ID, and then save the changes.
-
- Your APP ID(s) now appears listed on the **Role assignments** tab.
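-
-Role assignment can also be scripted with Azure PowerShell. The following is a minimal sketch, assuming the custom role names defined above; the APP ID, subscription ID, and resource group names are placeholders:
-
-```powershell
-# Resolve the service principal for the APP ID used by Azure Database Migration Service
-$sp = Get-AzADServicePrincipal -ApplicationId "<APP ID>"
-
-# Assign the resource-level custom role at the resource group that hosts the DMS instance
-New-AzRoleAssignment -ObjectId $sp.Id `
-    -RoleDefinitionName "DMS Role - App ID" `
-    -Scope "/subscriptions/<subscription_id>/resourceGroups/<DMS_rg_name>"
-
-# Assign the subscription-level custom role
-New-AzRoleAssignment -ObjectId $sp.Id `
-    -RoleDefinitionName "DMS Role - App ID - Sub" `
-    -Scope "/subscriptions/<subscription_id>"
-```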
-
-## Next steps
-
-* Review the migration guidance for your scenario in the Microsoft [Database Migration Guide](/data-migration/).
dms Resource Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-network-topologies.md
- Title: Network topologies for SQL Managed Instance migrations-
-description: Learn the source and target configurations for Azure SQL Managed Instance migrations using the Azure Database Migration Service.
--- Previously updated : 01/08/2020---
- - sql-migration-content
--
-# Network topologies for Azure SQL Managed Instance migrations using Azure Database Migration Service
-
-This article discusses various network topologies that Azure Database Migration Service can work with to provide a comprehensive migration experience from SQL Servers to Azure SQL Managed Instance.
-
-## Azure SQL Managed Instance configured for Hybrid workloads
-
-Use this topology if your Azure SQL Managed Instance is connected to your on-premises network. This approach provides the most simplified network routing and yields maximum data throughput during the migration.
-
-![Network Topology for Hybrid Workloads](media/resource-network-topologies/hybrid-workloads.png)
-
-**Requirements**
-
-- In this scenario, the SQL Managed Instance and the Azure Database Migration Service instance are created in the same Microsoft Azure Virtual Network, but they use different subnets.
-- The virtual network used in this scenario is also connected to the on-premises network by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md).
-
-## SQL Managed Instance isolated from the on-premises network
-
-Use this network topology if your environment requires one or more of the following scenarios:
-
-- The SQL Managed Instance is isolated from on-premises connectivity, but your Azure Database Migration Service instance is connected to the on-premises network.
-- If Azure role-based access control (Azure RBAC) policies are in place and you need to limit the users to accessing the same subscription that is hosting the SQL Managed Instance.
-- The virtual networks used for the SQL Managed Instance and Azure Database Migration Service are in different subscriptions.
-
-![Network Topology for Managed Instance isolated from the on-premises network](media/resource-network-topologies/mi-isolated-workload.png)
-
-**Requirements**
-
-- The virtual network that Azure Database Migration Service uses for this scenario must also be connected to the on-premises network by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md).
-- Set up [VNet network peering](../virtual-network/virtual-network-peering-overview.md) between the virtual network used for SQL Managed Instance and Azure Database Migration Service.
-
-## Cloud-to-cloud migrations: Shared virtual network
-
-Use this topology if the source SQL Server is hosted in an Azure VM and shares the same virtual network with SQL Managed Instance and Azure Database Migration Service.
-
-![Network Topology for Cloud-to-Cloud migrations with a shared VNet](media/resource-network-topologies/cloud-to-cloud.png)
-
-**Requirements**
-
-- No additional requirements.
-
-## Cloud to cloud migrations: Isolated virtual network
-
-Use this network topology if your environment requires one or more of the following scenarios:
-
-- The SQL Managed Instance is provisioned in an isolated virtual network.
-- If Azure role-based access control (Azure RBAC) policies are in place and you need to limit the users to accessing the same subscription that is hosting SQL Managed Instance.
-- The virtual networks used for SQL Managed Instance and Azure Database Migration Service are in different subscriptions.
-
-![Network Topology for Cloud-to-Cloud migrations with an isolated VNet](media/resource-network-topologies/cloud-to-cloud-isolated.png)
-
-**Requirements**
-
-- Set up [VNet network peering](../virtual-network/virtual-network-peering-overview.md) between the virtual network used for SQL Managed Instance and Azure Database Migration Service.
-
-## Inbound security rules
-
-| **NAME** | **PORT** | **PROTOCOL** | **SOURCE** | **DESTINATION** | **ACTION** |
-||-|--||--||
-| DMS_subnet | Any | Any | DMS SUBNET | Any | Allow |
-
-## Outbound security rules
-
-| **NAME** | **PORT** | **PROTOCOL** | **SOURCE** | **DESTINATION** | **ACTION** | **Reason for rule** |
-||-|--||||--|
-| ServiceBus | 443, ServiceTag: ServiceBus | TCP | Any | Any | Allow | Management plane communication through Service Bus. <br/>(If Microsoft peering is enabled, you may not need this rule.) |
-| Storage | 443, ServiceTag: Storage | TCP | Any | Any | Allow | Management plane using Azure blob storage. <br/>(If Microsoft peering is enabled, you may not need this rule.) |
-| Diagnostics | 443, ServiceTag: AzureMonitor | TCP | Any | Any | Allow | DMS uses this rule to collect diagnostic information for troubleshooting purposes. <br/>(If Microsoft peering is enabled, you may not need this rule.) |
-| SQL Source server | 1433 (or TCP IP port that SQL Server is listening to) | TCP | Any | On-premises address space | Allow | SQL Server source connectivity from DMS <br/>(If you have site-to-site connectivity, you may not need this rule.) |
-| SQL Server named instance | 1434 | UDP | Any | On-premises address space | Allow | SQL Server named instance source connectivity from DMS <br/>(If you have site-to-site connectivity, you may not need this rule.) |
-| SMB share | 445 (if the scenario needs it) | TCP | Any | On-premises address space | Allow | SMB network share for DMS to store database backup files for migrations to Azure SQL Database MI and SQL Servers on Azure VM <br/>(If you have site-to-site connectivity, you may not need this rule). |
-| DMS_subnet | Any | Any | Any | DMS_Subnet | Allow | |
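-
-As an illustration, an outbound rule such as the ServiceBus entry above could be expressed with Azure PowerShell as follows. This is a minimal sketch; the rule name and priority are placeholders, and the resulting rule config still needs to be added to your NSG:
-
-```powershell
-# Outbound rule allowing port 443 to the ServiceBus service tag
-$rule = New-AzNetworkSecurityRuleConfig -Name "Allow-ServiceBus-443" `
-    -Direction Outbound `
-    -Access Allow `
-    -Protocol Tcp `
-    -Priority 200 `
-    -SourceAddressPrefix "*" `
-    -SourcePortRange "*" `
-    -DestinationAddressPrefix "ServiceBus" `
-    -DestinationPortRange 443
-```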
-
-## See also
-
-- [Migrate SQL Server to SQL Managed Instance](./tutorial-sql-server-to-managed-instance.md)
-- [Overview of prerequisites for using Azure Database Migration Service](./pre-reqs.md)
-- [Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md)
-
-## Next steps
-
-- For an overview of Azure Database Migration Service, see the article [What is Azure Database Migration Service?](dms-overview.md).
-- For current information about regional availability of Azure Database Migration Service, see the [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=database-migration) page.
dms Tutorial Sql Server Managed Instance Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-online.md
- Title: "Tutorial: Migrate SQL Server online to SQL Managed Instance"-
-description: Learn to perform an online migration from SQL Server to an Azure SQL Managed Instance by using Azure Database Migration Service (classic)
--- Previously updated : 06/07/2023---
- - sql-migration-content
--
-# Tutorial: Migrate SQL Server to an Azure SQL Managed Instance online using DMS (classic)
--
-> [!NOTE]
-> This tutorial uses an older version of the Azure Database Migration Service. For improved functionality and supportability, consider migrating to Azure SQL Managed Instance by using the [Azure SQL migration extension for Azure Data Studio](/data-migration/sql-server/managed-instance/database-migration-service).
->
-> To compare features between versions, review [compare versions](dms-overview.md#compare-versions).
-
-You can use Azure Database Migration Service to migrate the databases from a SQL Server instance to an [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview) with minimal downtime. For extra methods that may require some manual effort, see the article [SQL Server instance migration to Azure SQL Managed Instance](/azure/azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-guide).
-
-In this tutorial, you migrate the [AdventureWorks2016](/sql/samples/adventureworks-install-configure#download-backup-files) database from an on-premises instance of SQL Server to a SQL Managed Instance with minimal downtime by using Azure Database Migration Service.
-
-You learn how to:
-> [!div class="checklist"]
->
-> * Register the Azure DataMigration resource provider.
-> * Create an instance of Azure Database Migration Service.
-> * Create a migration project and start online migration by using Azure Database Migration Service.
-> * Monitor the migration.
-> * Perform the migration cutover when you are ready.
-
-> [!IMPORTANT]
-> For online migrations from SQL Server to SQL Managed Instance using Azure Database Migration Service, you must provide the full database backup and subsequent log backups in the SMB network share that the service can use to migrate your databases. Azure Database Migration Service does not initiate any backups, and instead uses existing backups, which you may already have as part of your disaster recovery plan, for the migration.
-> Each backup can be written to either a separate backup file or multiple backup files. However, appending multiple backups (that is, full and t-log) into a single backup media isn't supported.
-> Use compressed backups to reduce the likelihood of experiencing potential issues associated with migrating large backups.
-
-> [!NOTE]
-> Using Azure Database Migration Service to perform an online migration requires creating an instance based on the Premium pricing tier.
-
-> [!IMPORTANT]
-> For an optimal migration experience, Microsoft recommends creating an instance of Azure Database Migration Service in the same Azure region as the target database. Moving data across regions or geographies can slow down the migration process and introduce errors.
-
-> [!IMPORTANT]
-> Reduce the duration of the online migration process as much as possible to minimize the risk of interruption caused by instance reconfiguration or planned maintenance. If such an event occurs, the migration process starts from the beginning. In the case of planned maintenance, there's a grace period of 36 hours before the migration process is restarted.
--
-This article describes an online migration from SQL Server to a SQL Managed Instance. For an offline migration, see [Migrate SQL Server to a SQL Managed Instance offline using DMS](tutorial-sql-server-to-managed-instance.md).
-
-## Prerequisites
-
-To complete this tutorial, you need to:
-
-* Download and install [SQL Server 2016 or later](https://www.microsoft.com/sql-server/sql-server-downloads).
-* Enable the TCP/IP protocol, which is disabled by default during SQL Server Express installation, by following the instructions in the article [Enable or Disable a Server Network Protocol](/sql/database-engine/configure-windows/enable-or-disable-a-server-network-protocol#SSMSProcedure).
-* [Restore the AdventureWorks2016 database to the SQL Server instance.](/sql/samples/adventureworks-install-configure#restore-to-sql-server)
-* Create a Microsoft Azure Virtual Network for Azure Database Migration Service by using the Azure Resource Manager deployment model, which provides site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md). [Learn network topologies for SQL Managed Instance migrations using Azure Database Migration Service](./resource-network-topologies.md). For more information about creating a virtual network, see the [Virtual Network Documentation](../virtual-network/index.yml), and especially the quickstart articles with step-by-step details.
-
- > [!NOTE]
- > During virtual network setup, if you use ExpressRoute with network peering to Microsoft, add the following service [endpoints](../virtual-network/virtual-network-service-endpoints-overview.md) to the subnet in which the service will be provisioned:
- >
- > * Target database endpoint (for example, SQL endpoint, Azure Cosmos DB endpoint, and so on)
- > * Storage endpoint
- > * Service bus endpoint
- >
- > This configuration is necessary because Azure Database Migration Service lacks internet connectivity.
- >
 - > If you don't have site-to-site connectivity between the on-premises network and Azure, or if the site-to-site connectivity bandwidth is limited, consider using Azure Database Migration Service in hybrid mode (Preview). Hybrid mode uses an on-premises migration worker together with an instance of Azure Database Migration Service running in the cloud. To create an instance of Azure Database Migration Service in hybrid mode, see the article [Create an instance of Azure Database Migration Service in hybrid mode using the Azure portal](./quickstart-create-data-migration-service-portal.md).
-
- > [!IMPORTANT]
 - > For the storage account used as part of the migration, you must either:
 - > * Allow all networks to access the storage account, or
 - > * Turn on [subnet delegation](../virtual-network/manage-subnet-delegation.md) on the SQL Managed Instance subnet and update the storage account firewall rules to allow that subnet.
 - >
 - > You can't use an Azure Storage account that has a private endpoint with Azure Database Migration Service.
-
-* Ensure that your virtual network Network Security Group rules don't block the outbound port 443 of ServiceTag for ServiceBus, Storage and AzureMonitor. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
-* Configure your [Windows Firewall for source database engine access](/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access).
-* Open your Windows Firewall to allow Azure Database Migration Service to access the source SQL Server, which by default is TCP port 1433. If your default instance is listening on some other port, add that to the firewall.
-* If you're running multiple named SQL Server instances using dynamic ports, you may wish to enable the SQL Browser Service and allow access to UDP port 1434 through your firewalls so that Azure Database Migration Service can connect to a named instance on your source server.
-* If you're using a firewall appliance in front of your source databases, you may need to add firewall rules to allow Azure Database Migration Service to access the source database(s) for migration, and files via SMB port 445.
-* Create a SQL Managed Instance by following the detail in the article [Create a SQL Managed Instance in the Azure portal](/azure/azure-sql/managed-instance/instance-create-quickstart).
-* Ensure that the logins used to connect the source SQL Server and the target SQL Managed Instance are members of the sysadmin server role.
-* Provide an SMB network share that contains the full database backup files and subsequent transaction log backup files for your databases, which Azure Database Migration Service can use for database migration.
-* Ensure that the service account running the source SQL Server instance has write privileges on the network share that you created and that the computer account for the source server has read/write access to the same share.
-* Make a note of a Windows user (and password) that has full control privilege on the network share that you previously created. Azure Database Migration Service impersonates the user credential to upload the backup files to Azure Storage container for restore operation.
-* Create a Microsoft Entra application ID and application key that Azure Database Migration Service can use to connect to the target Azure SQL Managed Instance and the Azure Storage container. For more information, see the article [Use portal to create a Microsoft Entra application and service principal that can access resources](/entra/identity-platform/howto-create-service-principal-portal). A scripted sketch of this step appears at the end of these prerequisites.
-
- > [!NOTE]
- > The Application ID used by the Azure Database Migration Service supports secret (password-based) authentication for service principals. It does not support certificate-based authentication.
-
- > [!NOTE]
- > Azure Database Migration Service requires the Contributor permission on the subscription for the specified Application ID. Alternatively, you can create custom roles that grant the specific permissions that Azure Database Migration Service requires. For step-by-step guidance about using custom roles, see the article [Custom roles for SQL Server to SQL Managed Instance online migrations](./resource-custom-roles-sql-db-managed-instance.md).
-
-* Create or make a note of a **Standard performance tier** Azure Storage account that the DMS service can upload the database backup files to and use for migrating databases. Make sure to create the Azure Storage account in the same region as the Azure Database Migration Service instance.
-
- > [!NOTE]
- > When you migrate a database that's protected by [Transparent Data Encryption](/azure/azure-sql/database/transparent-data-encryption-tde-overview) to a managed instance by using online migration, the corresponding certificate from the on-premises or Azure VM SQL Server instance must be migrated before the database restore. For detailed steps, see [Migrate a TDE cert to a managed instance](/azure/azure-sql/database/transparent-data-encryption-tde-overview).
---
-> [!NOTE]
-> For additional detail, see the article [Network topologies for Azure SQL Managed Instance migrations using Azure Database Migration Service](./resource-network-topologies.md).
-
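The application registration step above can also be scripted. The following is a minimal Az PowerShell sketch rather than part of this tutorial: the subscription ID and display name are placeholders, the property that exposes the generated secret can vary between Az module versions, and you can substitute the custom DMS roles for the Contributor assignment.

```powershell
# Sign in and select the subscription that contains the target SQL Managed Instance.
Connect-AzAccount
Set-AzContext -SubscriptionId "00000000-0000-0000-0000-000000000000"   # placeholder

# Create an application registration and service principal for Azure Database Migration Service.
$sp = New-AzADServicePrincipal -DisplayName "dms-online-migration-sp"   # hypothetical name

# Values to enter later on the DMS "Select target" page.
$sp.AppId                            # Application ID
$sp.PasswordCredentials.SecretText   # Application key (property name can differ by Az version)

# Grant Contributor at subscription scope, which DMS requires unless you use the custom roles.
New-AzRoleAssignment -ApplicationId $sp.AppId `
    -RoleDefinitionName "Contributor" `
    -Scope "/subscriptions/00000000-0000-0000-0000-000000000000"
```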
-## Create a migration project
-
-After an instance of the service is created, locate it within the Azure portal, open it, and then create a new migration project.
-
-1. In the Azure portal menu, select **All services**. Search for and select **Azure Database Migration Services**.
-
- ![Locate all instances of Azure Database Migration Service](media/tutorial-sql-server-to-managed-instance-online/dms-search.png)
-
-2. On the **Azure Database Migration Services** screen, select the Azure Database Migration Service instance that you created.
-
-3. Select **New Migration Project**.
-
- ![Locate your instance of Azure Database Migration Service](media/tutorial-sql-server-to-managed-instance-online/dms-create-project-1.png)
-
-4. On the **New migration project** screen, specify a name for the project, in the **Source server type** text box, select **SQL Server**, in the **Target server type** text box, select **Azure SQL Database Managed Instance**, and then for **Choose type of activity**, select **Online data migration**.
-
- ![Create Database Migration Service Project](media/tutorial-sql-server-to-managed-instance-online/dms-create-project-2.png)
-
-5. Select **Create and run activity** to create the project and run the migration activity.
-
-## Specify source details
-
-1. On the **Select source** screen, specify the connection details for the source SQL Server instance.
-
- Make sure to use a Fully Qualified Domain Name (FQDN) for the source SQL Server instance name. You can also use the IP Address for situations in which DNS name resolution isn't possible.
-
-2. If you haven't installed a trusted certificate on your server, select the **Trust server certificate** check box.
-
- When a trusted certificate isn't installed, SQL Server generates a self-signed certificate when the instance is started. This certificate is used to encrypt the credentials for client connections.
-
- > [!CAUTION]
 - > TLS connections that are encrypted using a self-signed certificate do not provide strong security. They are susceptible to man-in-the-middle attacks. You should not rely on TLS using self-signed certificates in a production environment or on servers that are connected to the internet.
-
- ![Source Details](media/tutorial-sql-server-to-managed-instance-online/dms-source-details.png)
-
-3. Select **Next: Select target**
-
-## Specify target details
-
-1. On the **Select target** screen, specify the **Application ID** and **Key** that the DMS instance can use to connect to the target instance of SQL Managed Instance and the Azure Storage Account.
-
- For more information, see the article [Use portal to create a Microsoft Entra application and service principal that can access resources](/entra/identity-platform/howto-create-service-principal-portal).
-
-2. Select the **Subscription** containing the target instance of SQL Managed Instance, and then choose the target SQL Managed instance.
-
- If you haven't already provisioned the SQL Managed Instance, select the [link](/azure/azure-sql/managed-instance/instance-create-quickstart) to help you provision the instance. When the SQL Managed Instance is ready, return to this specific project to execute the migration.
-
-3. Provide **SQL User** and **Password** to connect to the SQL Managed Instance.
-
- ![Select Target](media/tutorial-sql-server-to-managed-instance-online/dms-target-details.png)
-
-4. Select **Next: Select databases**.
-
-## Specify source databases
-
-1. On the **Select databases** screen, select the source databases that you want to migrate.
-
- ![Select Source Databases](media/tutorial-sql-server-to-managed-instance-online/dms-source-database.png)
-
- > [!IMPORTANT]
- > If you use SQL Server Integration Services (SSIS), DMS does not currently support migrating the catalog database for your SSIS projects/packages (SSISDB) from SQL Server to SQL Managed Instance. However, you can provision SSIS in Azure Data Factory (ADF) and redeploy your SSIS projects/packages to the destination SSISDB hosted by SQL Managed Instance. For more information about migrating SSIS packages, see the article [Migrate SQL Server Integration Services packages to Azure](./how-to-migrate-ssis-packages.md).
-
-2. Select **Next: Configure migration settings**.
-
-## Configure migration settings
-
-1. On the **Configure migration settings** screen, provide the following details:
-
- | Parameter | Description |
- |--||
 - |**SMB Network location share** | The local SMB network share or Azure file share that contains the full database backup files and transaction log backup files that Azure Database Migration Service can use for migration. The service account running the source SQL Server instance must have read\write privileges on this network share. Provide the FQDN or IP address of the server in the network share, for example, '\\\servername.domainname.com\backupfolder' or '\\\IP address\backupfolder'. For improved performance, it's recommended to use a separate folder for each database to be migrated. You can provide the database-level file share path by using the **Advanced Settings** option. If you're running into issues connecting to the SMB share, see [SMB share](known-issues-azure-sql-db-managed-instance-online.md#smb-file-share-connectivity). |
- |**User name** | Make sure that the Windows user has full control privilege on the network share that you provided above. Azure Database Migration Service impersonates the user credential to upload the backup files to Azure Storage container for restore operation. If using Azure File share, use the storage account name prepended with AZURE\ as the username. |
- |**Password** | Password for the user. If using Azure file share, use a storage account key as the password. |
- |**Subscription of the Azure Storage Account** | Select the subscription that contains the Azure Storage Account. |
- |**Azure Storage Account** | Select the Azure Storage Account that DMS can upload the backup files from the SMB network share to and use for database migration. We recommend selecting the Storage Account in the same region as the DMS service for optimal file upload performance. |
-
- ![Configure Migration Settings](media/tutorial-sql-server-to-managed-instance-online/dms-configure-migration-settings.png)
-
- > [!NOTE]
 - > If Azure Database Migration Service shows the error 'System Error 53' or 'System Error 57', the cause might be that Azure Database Migration Service can't access the Azure file share. If you encounter one of these errors, grant access to the storage account from the virtual network by using the instructions [here](../storage/common/storage-network-security.md?toc=%2fazure%2fvirtual-network%2ftoc.json#grant-access-from-a-virtual-network).
-
- > [!IMPORTANT]
 - > If loopback check functionality is enabled and the source SQL Server and file share are on the same computer, then the source won't be able to access the file share by using its FQDN. To fix this issue, disable loopback check functionality by using the instructions [here](https://support.microsoft.com/help/926642/error-message-when-you-try-to-access-a-server-locally-by-using-its-fqd).
-
-2. Select **Next: Summary**.
-
-## Review the migration summary
-
-1. On the **Summary** screen, in the **Activity name** text box, specify a name for the migration activity.
-
-2. Review and verify the details associated with the migration project.
-
- ![Migration project summary](media/tutorial-sql-server-to-managed-instance-online/dms-project-summary.png)
-
-## Run and monitor the migration
-
-1. Select **Start migration**.
-
-2. The migration activity window appears and displays the current migration status of the databases. Select **Refresh** to update the display.
-
- ![Migration activity in progress](media/tutorial-sql-server-to-managed-instance-online/dms-monitor-migration.png)
-
- You can further expand the databases and logins categories to monitor the migration status of the respective server objects.
-
- ![Migration activity status](media/tutorial-sql-server-to-managed-instance-online/dms-monitor-migration-extend.png)
-
-## Perform the migration cutover
-
-After the full database backup is restored on the target instance of SQL Managed Instance, the database is available for performing a migration cutover.
-
-1. When you're ready to complete the online database migration, select **Start Cutover**.
-
-2. Stop all the incoming traffic to source databases.
-
-3. Take the [tail-log backup], make the backup file available on the SMB network share, and then wait until this final transaction log backup is restored. A sketch of the tail-log backup command appears after these steps.
-
- At that point, you see **Pending changes** set to 0.
-
-4. Select **Confirm**, and then select **Apply**.
-
- ![Preparing to complete cutover](media/tutorial-sql-server-to-managed-instance-online/dms-complete-cutover.png)
-
- > [!IMPORTANT]
 - > After the cutover, a SQL Managed Instance in the Business Critical service tier can take significantly longer to become available than one in the General Purpose tier, because three secondary replicas have to be seeded for the Always On high availability group. The duration of this operation depends on the size of the data. For more information, see [Management operations duration](/azure/azure-sql/managed-instance/management-operations-overview#duration).
-
-5. When the database migration status shows **Completed**, connect your applications to the new target instance of SQL Managed Instance.
-
- ![Cutover complete](media/tutorial-sql-server-to-managed-instance-online/dms-cutover-complete.png)
-
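The tail-log backup mentioned in step 3 is a standard `BACKUP LOG ... WITH NORECOVERY` operation. The following is a minimal sketch using `Invoke-Sqlcmd` from the SqlServer PowerShell module; the instance name, database name, and share path are placeholders, not values from this article.

```powershell
# Hypothetical source instance, database, and SMB share used for the migration.
$source   = "sqlserver01.contoso.com"
$database = "AdventureWorks2016"
$share    = "\\sqlserver01.contoso.com\backupfolder"

# Take the tail-log backup and leave the database in the RESTORING state
# so that no further changes can occur before cutover.
Invoke-Sqlcmd -ServerInstance $source -Query @"
BACKUP LOG [$database]
TO DISK = N'$share\$database-taillog.trn'
WITH NORECOVERY, INIT;
"@
```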
-## Additional resources
-
-* For a tutorial showing you how to migrate a database to SQL Managed Instance using the T-SQL RESTORE command, see [Restore a backup to SQL Managed Instance using the restore command](/azure/azure-sql/managed-instance/restore-sample-database-quickstart).
-* For information about SQL Managed Instance, see [What is SQL Managed Instance](/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview).
-* For information about connecting apps to SQL Managed Instance, see [Connect applications](/azure/azure-sql/managed-instance/connect-application-instance).
dms Tutorial Sql Server To Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-azure-sql.md
- Title: "Tutorial: Migrate SQL Server offline to Azure SQL Database"-
-description: Learn to migrate from SQL Server to Azure SQL Database offline by using Azure Database Migration Service (classic).
--- Previously updated : 10/10/2023---
- - sql-migration-content
--
-# Tutorial: Migrate SQL Server to Azure SQL Database using DMS (classic)
--
-> [!NOTE]
-> This tutorial uses an older version of the Azure Database Migration Service. For improved functionality and supportability, consider migrating to Azure SQL Database by using the [Azure SQL migration extension for Azure Data Studio](/data-migration/sql-server/database/database-migration-service).
->
-> To compare features between versions, review [compare versions](dms-overview.md#compare-versions).
-
-You can use Azure Database Migration Service to migrate the databases from a SQL Server instance to [Azure SQL Database](/azure/sql-database/). In this tutorial, you migrate the [AdventureWorks2016](/sql/samples/adventureworks-install-configure#download-backup-files) database restored to an on-premises instance of SQL Server 2016 (or later) to a single database or pooled database in Azure SQL Database by using Azure Database Migration Service.
-
-You will learn how to:
-> [!div class="checklist"]
->
-> - Assess and evaluate your on-premises database for any blocking issues by using the Data Migration Assistant.
-> - Use the Data Migration Assistant to migrate the database sample schema.
-> - Register the Azure DataMigration resource provider.
-> - Create an instance of Azure Database Migration Service.
-> - Create a migration project by using Azure Database Migration Service.
-> - Run the migration.
-> - Monitor the migration.
--
-## Prerequisites
-
-To complete this tutorial, you need to:
--- Download and install [SQL Server 2016 or later](https://www.microsoft.com/sql-server/sql-server-downloads).-- Enable the TCP/IP protocol, which is disabled by default during SQL Server Express installation, by following the instructions in the article [Enable or Disable a Server Network Protocol](/sql/database-engine/configure-windows/enable-or-disable-a-server-network-protocol#SSMSProcedure).-- [Restore the AdventureWorks2016 database to the SQL Server instance.](/sql/samples/adventureworks-install-configure#restore-to-sql-server)-- Create a database in Azure SQL Database, which you do by following the details in the article [Create a database in Azure SQL Database using the Azure portal](/azure/azure-sql/database/single-database-create-quickstart). For purposes of this tutorial, the name of the Azure SQL Database is assumed to be **AdventureWorksAzure**, but you can provide whatever name you wish.-
- > [!NOTE]
- > If you use SQL Server Integration Services (SSIS) and want to migrate the catalog database for your SSIS projects/packages (SSISDB) from SQL Server to Azure SQL Database, the destination SSISDB will be created and managed automatically on your behalf when you provision SSIS in Azure Data Factory (ADF). For more information about migrating SSIS packages, see the article [Migrate SQL Server Integration Services packages to Azure](./how-to-migrate-ssis-packages.md).
-
-- Download and install the latest version of the [Data Migration Assistant](https://www.microsoft.com/download/details.aspx?id=53595).-- Create a Microsoft Azure Virtual Network for Azure Database Migration Service by using the Azure Resource Manager deployment model, which provides site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md). For more information about creating a virtual network, see the [Virtual Network Documentation](../virtual-network/index.yml), and especially the quickstart articles with step-by-step details.-
- > [!NOTE]
- > During virtual network setup, if you use ExpressRoute with network peering to Microsoft, add the following service [endpoints](../virtual-network/virtual-network-service-endpoints-overview.md) to the subnet in which the service will be provisioned:
- >
- > - Target database endpoint (for example, SQL endpoint, Azure Cosmos DB endpoint, and so on)
- > - Storage endpoint
- > - Service bus endpoint
- >
- > This configuration is necessary because Azure Database Migration Service lacks internet connectivity.
- >
- >If you don't have site-to-site connectivity between the on-premises network and Azure or if there is limited site-to-site connectivity bandwidth, consider using Azure Database Migration Service in hybrid mode (Preview). Hybrid mode leverages an on-premises migration worker together with an instance of Azure Database Migration Service running in the cloud. To create an instance of Azure Database Migration Service in hybrid mode, see the article [Create an instance of Azure Database Migration Service in hybrid mode using the Azure portal](./quickstart-create-data-migration-service-hybrid-portal.md).
--- Ensure that your virtual network Network Security Group outbound security rules don't block the outbound port 443 of ServiceTag for ServiceBus, Storage, and AzureMonitor. For more detail on Azure virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).-- Configure your [Windows Firewall for database engine access](/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access).-- Open your firewall on Windows to allow Azure Database Migration Service to access the source SQL Server, which by default is TCP port 1433. If your default instance is listening on some other port, add that to the firewall.-- If you're running multiple named SQL Server instances using dynamic ports, you might wish to enable the SQL Browser Service and allow access to UDP port 1434 through your firewalls so that Azure Database Migration Service can connect to a named instance on your source server.-- When using a firewall appliance in front of your source database(s), you might need to add firewall rules to allow Azure Database Migration Service to access the source database(s) for migration.-- Create a server-level IP [firewall rule](/azure/azure-sql/database/firewall-configure) for Azure SQL Database to allow Azure Database Migration Service access to the target databases. Provide the subnet range of the virtual network used for Azure Database Migration Service.-- Ensure that the credentials used to connect to source SQL Server instance have [CONTROL SERVER](/sql/t-sql/statements/grant-server-permissions-transact-sql) permissions.-- Ensure that the credentials used to connect to target Azure SQL Database instance have [CONTROL DATABASE](/sql/t-sql/statements/grant-database-permissions-transact-sql) permission on the target databases.-
- > [!IMPORTANT]
- > Creating an instance of Azure Database Migration Service requires access to virtual network settings that are normally not within the same resource group. As a result, the user creating an instance of DMS requires permission at subscription level. To create the required roles, which you can assign as needed, run the following script:
- >
 - > ```powershell
- >
- > $readerActions = `
- > "Microsoft.Network/networkInterfaces/ipConfigurations/read", `
- > "Microsoft.DataMigration/*/read", `
- > "Microsoft.Resources/subscriptions/resourceGroups/read"
- >
- > $writerActions = `
- > "Microsoft.DataMigration/services/*/write", `
- > "Microsoft.DataMigration/services/*/delete", `
- > "Microsoft.DataMigration/services/*/action", `
- > "Microsoft.Network/virtualNetworks/subnets/join/action", `
- > "Microsoft.Network/virtualNetworks/write", `
- > "Microsoft.Network/virtualNetworks/read", `
- > "Microsoft.Resources/deployments/validate/action", `
- > "Microsoft.Resources/deployments/*/read", `
- > "Microsoft.Resources/deployments/*/write"
- >
- > $writerActions += $readerActions
- >
- > # TODO: replace with actual subscription IDs
- > $subScopes = ,"/subscriptions/00000000-0000-0000-0000-000000000000/","/subscriptions/11111111-1111-1111-1111-111111111111/"
- >
- > function New-DmsReaderRole() {
- > $aRole = [Microsoft.Azure.Commands.Resources.Models.Authorization.PSRoleDefinition]::new()
- > $aRole.Name = "Azure Database Migration Reader"
- > $aRole.Description = "Lets you perform read only actions on DMS service/project/tasks."
- > $aRole.IsCustom = $true
- > $aRole.Actions = $readerActions
- > $aRole.NotActions = @()
- >
- > $aRole.AssignableScopes = $subScopes
- > #Create the role
- > New-AzRoleDefinition -Role $aRole
- > }
- >
- > function New-DmsContributorRole() {
- > $aRole = [Microsoft.Azure.Commands.Resources.Models.Authorization.PSRoleDefinition]::new()
- > $aRole.Name = "Azure Database Migration Contributor"
- > $aRole.Description = "Lets you perform CRUD actions on DMS service/project/tasks."
- > $aRole.IsCustom = $true
- > $aRole.Actions = $writerActions
- > $aRole.NotActions = @()
- >
- > $aRole.AssignableScopes = $subScopes
- > #Create the role
- > New-AzRoleDefinition -Role $aRole
- > }
- >
- > function Update-DmsReaderRole() {
- > $aRole = Get-AzRoleDefinition "Azure Database Migration Reader"
- > $aRole.Actions = $readerActions
- > $aRole.NotActions = @()
- > Set-AzRoleDefinition -Role $aRole
- > }
- >
 - > function Update-DmsContributorRole() {
- > $aRole = Get-AzRoleDefinition "Azure Database Migration Contributor"
- > $aRole.Actions = $writerActions
- > $aRole.NotActions = @()
- > Set-AzRoleDefinition -Role $aRole
- > }
- >
- > # Invoke above functions
- > New-DmsReaderRole
- > New-DmsContributorRole
- > Update-DmsReaderRole
 - > Update-DmsContributorRole
- > ```
-
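After the custom roles exist, you can assign them to the account that will create and run the DMS instance instead of granting full Contributor rights. The following is a hedged sketch; the sign-in name and subscription ID are placeholders.

```powershell
# Placeholder user and subscription; replace with your own values.
$userSignInName    = "migration.admin@contoso.com"
$subscriptionScope = "/subscriptions/00000000-0000-0000-0000-000000000000"

# Assign the custom roles created by the script above.
New-AzRoleAssignment -SignInName $userSignInName `
    -RoleDefinitionName "Azure Database Migration Contributor" `
    -Scope $subscriptionScope

New-AzRoleAssignment -SignInName $userSignInName `
    -RoleDefinitionName "Azure Database Migration Reader" `
    -Scope $subscriptionScope
```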
-## Assess your on-premises database
-
-Before you can migrate data from a SQL Server instance to a single database or pooled database in Azure SQL Database, you need to assess the SQL Server database for any blocking issues that might prevent migration. Using the Data Migration Assistant, follow the steps described in the article [Performing a SQL Server migration assessment](/sql/dma/dma-assesssqlonprem) to complete the on-premises database assessment. A summary of the required steps follows:
-
-1. In the Data Migration Assistant, select the New (+) icon, and then select the **Assessment** project type.
-2. Specify a project name. From the **Assessment type** drop-down list, select **Database Engine**, in the **Source server type** text box, select **SQL Server**, in the **Target server type** text box, select **Azure SQL Database**, and then select **Create** to create the project.
-
- When you're assessing the source SQL Server database migrating to a single database or pooled database in Azure SQL Database, you can choose one or both of the following assessment report types:
-
- - Check database compatibility
- - Check feature parity
-
- Both report types are selected by default.
-
-3. In the Data Migration Assistant, on the **Options** screen, select **Next**.
-4. On the **Select sources** screen, in the **Connect to a server** dialog box, provide the connection details to your SQL Server, and then select **Connect**.
-5. In the **Add sources** dialog box, select **AdventureWorks2016**, select **Add**, and then select **Start Assessment**.
-
- > [!NOTE]
- > If you use SSIS, DMA does not currently support the assessment of the source SSISDB. However, SSIS projects/packages will be assessed/validated as they are redeployed to the destination SSISDB hosted by Azure SQL Database. For more information about migrating SSIS packages, see the article [Migrate SQL Server Integration Services packages to Azure](./how-to-migrate-ssis-packages.md).
-
- When the assessment is complete, the results display as shown in the following graphic:
-
- ![Assess data migration](media/tutorial-sql-server-to-azure-sql/dma-assessments.png)
-
- For databases in Azure SQL Database, the assessments identify feature parity issues and migration blocking issues for deploying to a single database or pooled database.
-
- - The **SQL Server feature parity** category provides a comprehensive set of recommendations, alternative approaches available in Azure, and mitigating steps to help you plan the effort into your migration projects.
- - The **Compatibility issues** category identifies partially supported or unsupported features that reflect compatibility issues that might block migrating SQL Server database(s) to Azure SQL Database. Recommendations are also provided to help you address those issues.
-
-6. Review the assessment results for migration blocking issues and feature parity issues by selecting the specific options.
-
-## Migrate the sample schema
-
-After you're comfortable with the assessment and satisfied that the selected database is a viable candidate for migration to a single database or pooled database in Azure SQL Database, use DMA to migrate the schema to Azure SQL Database.
-
-> [!NOTE]
-> Before you create a migration project in Data Migration Assistant, be sure that you have already provisioned a database in Azure as mentioned in the prerequisites.
-
-> [!IMPORTANT]
-> If you use SSIS, DMA does not currently support the migration of source SSISDB, but you can redeploy your SSIS projects/packages to the destination SSISDB hosted by Azure SQL Database. For more information about migrating SSIS packages, see the article [Migrate SQL Server Integration Services packages to Azure](./how-to-migrate-ssis-packages.md).
-
-To migrate the **AdventureWorks2016** schema to a single database or pooled database Azure SQL Database, perform the following steps:
-
-1. In the Data Migration Assistant, select the New (+) icon, and then under **Project type**, select **Migration**.
-2. Specify a project name, in the **Source server type** text box, select **SQL Server**, and then in the **Target server type** text box, select **Azure SQL Database**.
-3. Under **Migration Scope**, select **Schema only**.
-
- After performing the previous steps, the Data Migration Assistant interface should appear as shown in the following graphic:
-
- ![Create Data Migration Assistant Project](media/tutorial-sql-server-to-azure-sql/dma-create-project.png)
-
-4. Select **Create** to create the project.
-5. In the Data Migration Assistant, specify the source connection details for your SQL Server, select **Connect**, and then select the **AdventureWorks2016** database.
-
- ![Data Migration Assistant Source Connection Details](media/tutorial-sql-server-to-azure-sql/dma-source-connect.png)
-
-6. Select **Next**, under **Connect to target server**, specify the target connection details for the Azure SQL Database, select **Connect**, and then select the **AdventureWorksAzure** database you had pre-provisioned in Azure SQL Database.
-
- ![Data Migration Assistant Target Connection Details](media/tutorial-sql-server-to-azure-sql/dma-target-connect.png)
-
-7. Select **Next** to advance to the **Select objects** screen, on which you can specify the schema objects in the **AdventureWorks2016** database that need to be deployed to Azure SQL Database.
-
- By default, all objects are selected.
-
- ![Generate SQL Scripts](media/tutorial-sql-server-to-azure-sql/dma-assessment-source.png)
-
-8. Select **Generate SQL script** to create the SQL scripts, and then review the scripts for any errors.
-
- ![Schema Script](media/tutorial-sql-server-to-azure-sql/dma-schema-script.png)
-
-9. Select **Deploy schema** to deploy the schema to Azure SQL Database, and then after the schema is deployed, check the target server for any anomalies.
-
- ![Deploy Schema](media/tutorial-sql-server-to-azure-sql/dma-schema-deploy.png)
---
-## Create a migration project
-
-After the service is created, locate it within the Azure portal, open it, and then create a new migration project.
-
-1. In the Azure portal menu, select **All services**. Search for and select **Azure Database Migration Services**.
-
- ![Locate all instances of Azure Database Migration Service](media/tutorial-sql-server-to-azure-sql/dms-search.png)
-
-2. On the **Azure Database Migration Services** screen, select the Azure Database Migration Service instance that you created.
-
-3. Select **New Migration Project**.
-
- ![Locate your instance of Azure Database Migration Service](media/tutorial-sql-server-to-azure-sql/dms-instance-search.png)
-
-4. On the **New migration project** screen, specify a name for the project, in the **Source server type** text box, select **SQL Server**, in the **Target server type** text box, select **Azure SQL Database**, and then for **Choose Migration activity type**, select **Data migration**.
-
- ![Create Database Migration Service Project](media/tutorial-sql-server-to-azure-sql/dms-create-project-2.png)
-
-5. Select **Create and run activity** to create the project and run the migration activity.
-
-## Specify source details
-
-1. On the **Select source** screen, specify the connection details for the source SQL Server instance.
-
- Make sure to use a Fully Qualified Domain Name (FQDN) for the source SQL Server instance name. You can also use the IP Address for situations in which DNS name resolution isn't possible.
-
-2. If you have not installed a trusted certificate on your source server, select the **Trust server certificate** check box.
-
- When a trusted certificate is not installed, SQL Server generates a self-signed certificate when the instance is started. This certificate is used to encrypt the credentials for client connections.
-
- > [!CAUTION]
- > TLS connections that are encrypted using a self-signed certificate do not provide strong security. They are susceptible to man-in-the-middle attacks. You should not rely on TLS using self-signed certificates in a production environment or on servers that are connected to the internet.
-
- > [!IMPORTANT]
- > If you use SSIS, DMS does not currently support the migration of source SSISDB, but you can redeploy your SSIS projects/packages to the destination SSISDB hosted by Azure SQL Database. For more information about migrating SSIS packages, see the article [Migrate SQL Server Integration Services packages to Azure](./how-to-migrate-ssis-packages.md).
-
- ![Source Details](media/tutorial-sql-server-to-azure-sql/dms-source-details-2.png)
-
-3. Select **Next: Select databases**.
-
-## Select databases for migration
-
-Select either all databases or specific databases that you want to migrate to Azure SQL Database. DMS shows you the expected migration time for the selected databases. If the migration downtime is acceptable, continue with the migration. If it isn't acceptable, consider migrating to [SQL Managed Instance with near-zero downtime](tutorial-sql-server-managed-instance-online.md), or submit ideas, suggestions, and other feedback in the [Azure Community forum for Azure Database Migration Service](https://feedback.azure.com/d365community/forum/2dd7eb75-ef24-ec11-b6e6-000d3a4f0da0).
-
-1. Choose the database(s) you want to migrate from the list of available databases.
-1. Review the expected downtime. If it's acceptable, select **Next: Select target >>**
-
- ![Source databases](media/tutorial-sql-server-to-azure-sql/select-database.png)
---
-## Specify target details
-
-1. On the **Select target** screen, provide authentication settings to your Azure SQL Database.
-
- ![Select target](media/tutorial-sql-server-to-azure-sql/select-target.png)
-
- > [!NOTE]
- > Currently, SQL authentication is the only supported authentication type.
-
-1. Select **Next: Map to target databases**. On the **Map to target databases** screen, map the source and target databases for migration.
-
- If the target database contains the same database name as the source database, Azure Database Migration Service selects the target database by default.
-
- ![Map to target databases](media/tutorial-sql-server-to-azure-sql/dms-map-targets-activity-2.png)
-
-1. Select **Next: Configuration migration settings**, expand the table listing, and then review the list of affected fields.
-
 - Azure Database Migration Service automatically selects all the empty source tables that exist on the target Azure SQL Database instance. If you want to remigrate tables that already include data, you need to explicitly select those tables on this blade.
-
- ![Select tables](media/tutorial-sql-server-to-azure-sql/dms-configure-setting-activity-2.png)
-
-1. Select **Next: Summary**, review the migration configuration and in the **Activity name** text box, specify a name for the migration activity.
-
- ![Choose validation option](media/tutorial-sql-server-to-azure-sql/dms-configuration-2.png)
-
-## Run the migration
--- Select **Start migration**.-
- The migration activity window appears, and the **Status** of the activity is **Pending**.
-
- ![Activity Status](media/tutorial-sql-server-to-azure-sql/dms-activity-status-1.png)
-
-## Monitor the migration
-
-1. On the migration activity screen, select **Refresh** to update the display until the **Status** of the migration shows as **Completed**.
-
- ![Activity Status Completed](media/tutorial-sql-server-to-azure-sql/dms-completed-activity-1.png)
-
-2. Verify the target database(s) on the target **Azure SQL Database**. A row-count spot check, sketched after these steps, is one way to do this.
-
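One way to spot-check the migrated data is to compare row counts between the source and target databases. The following is a minimal sketch, assuming a recent SqlServer PowerShell module that accepts a `-Credential` parameter; the server name, database name, and credentials are placeholders.

```powershell
# Hypothetical target logical server and database; SQL authentication is assumed.
$target   = "yourserver.database.windows.net"
$database = "AdventureWorksAzure"
$cred     = Get-Credential    # the SQL login used for the migration

# Compare these counts against the same query run on the source database.
Invoke-Sqlcmd -ServerInstance $target -Database $database -Credential $cred -Query @"
SELECT s.name AS schema_name, t.name AS table_name, SUM(p.rows) AS row_count
FROM sys.tables AS t
JOIN sys.schemas AS s ON t.schema_id = s.schema_id
JOIN sys.partitions AS p ON t.object_id = p.object_id AND p.index_id IN (0, 1)
GROUP BY s.name, t.name
ORDER BY schema_name, table_name;
"@
```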
-## Additional resources
--- For information about Azure Database Migration Service, see the article [What is Azure Database Migration Service?](./dms-overview.md).-- For information about Azure SQL Database, see the article [What is the Azure SQL Database service?](/azure/azure-sql/database/sql-database-paas-overview).
dms Tutorial Sql Server To Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-managed-instance.md
- Title: "Tutorial: Migrate SQL Server to SQL Managed Instance"-
-description: Learn to migrate from SQL Server to an Azure SQL Managed Instance by using Azure Database Migration Service (classic).
--- Previously updated : 02/08/2023---
- - fasttrack-edit
- - sql-migration-content
--
-# Tutorial: Migrate SQL Server to an Azure SQL Managed Instance offline using DMS (classic)
--
-> [!NOTE]
-> This tutorial uses an older version of the Azure Database Migration Service. For improved functionality and supportability, consider migrating to Azure SQL Managed Instance by using the [Azure SQL migration extension for Azure Data Studio](/data-migration/sql-server/managed-instance/database-migration-service).
->
-> To compare features between versions, review [compare versions](dms-overview.md#compare-versions).
-
-You can use Azure Database Migration Service to migrate the databases from a SQL Server instance to an [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview). For additional methods that may require some manual effort, see the article [SQL Server to Azure SQL Managed Instance](/azure/azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-guide).
-
-In this tutorial, you migrate the [AdventureWorks2016](/sql/samples/adventureworks-install-configure#download-backup-files) database from an on-premises instance of SQL Server to a SQL Managed Instance by using Azure Database Migration Service.
-
-You will learn how to:
-> [!div class="checklist"]
->
-> - Register the Azure DataMigration resource provider.
-> - Create an instance of Azure Database Migration Service.
-> - Create a migration project by using Azure Database Migration Service.
-> - Run the migration.
-> - Monitor the migration.
-
-> [!IMPORTANT]
-> For offline migrations from SQL Server to SQL Managed Instance, Azure Database Migration Service can create the backup files for you. Alternatively, you can provide the latest full database backup in the SMB network share that the service will use to migrate your databases. Each backup can be written to either a separate backup file or multiple backup files. However, appending multiple backups onto a single backup medium isn't supported. You can also use compressed backups to reduce the likelihood of experiencing potential issues with migrating large backups.
--
-This article describes an offline migration from SQL Server to a SQL Managed Instance. For an online migration, see [Migrate SQL Server to an SQL Managed Instance online using DMS](tutorial-sql-server-managed-instance-online.md).
-
-## Prerequisites
-
-To complete this tutorial, you need to:
--- Download and install [SQL Server 2016 or later](https://www.microsoft.com/sql-server/sql-server-downloads).-- Enable the TCP/IP protocol, which is disabled by default during SQL Server Express installation, by following the instructions in the article [Enable or Disable a Server Network Protocol](/sql/database-engine/configure-windows/enable-or-disable-a-server-network-protocol#SSMSProcedure).-- [Restore the AdventureWorks2016 database to the SQL Server instance.](/sql/samples/adventureworks-install-configure#restore-to-sql-server)-- Create a Microsoft Azure Virtual Network for Azure Database Migration Service by using the Azure Resource Manager deployment model, which provides site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md). [Learn network topologies for SQL Managed Instance migrations using Azure Database Migration Service](./resource-network-topologies.md). For more information about creating a virtual network, see the [Virtual Network Documentation](../virtual-network/index.yml), and especially the quickstart articles with step-by-step details.-
- > [!NOTE]
- > During virtual network setup, if you use ExpressRoute with network peering to Microsoft, add the following service [endpoints](../virtual-network/virtual-network-service-endpoints-overview.md) to the subnet in which the service will be provisioned:
- > - Target database endpoint (for example, SQL endpoint, Azure Cosmos DB endpoint, and so on)
- > - Storage endpoint
- > - Service bus endpoint
- >
- > This configuration is necessary because Azure Database Migration Service lacks internet connectivity.
--- Ensure that your virtual network Network Security Group rules don't block the outbound port 443 of ServiceTag for ServiceBus, Storage, and AzureMonitor. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).-- Configure your [Windows Firewall for source database engine access](/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access).-- Open your Windows Firewall to allow Azure Database Migration Service to access the source SQL Server, which by default is TCP port 1433. If your default instance is listening on some other port, add that to the firewall.-- If you're running multiple named SQL Server instances using dynamic ports, you may wish to enable the SQL Browser Service and allow access to UDP port 1434 through your firewalls so that Azure Database Migration Service can connect to a named instance on your source server.-- If you're using a firewall appliance in front of your source databases, you may need to add firewall rules to allow Azure Database Migration Service to access the source database(s) for migration, as well as files via SMB port 445.-- Create a SQL Managed Instance by following the detail in the article [Create a SQL Managed Instance in the Azure portal](/azure/azure-sql/managed-instance/instance-create-quickstart).-- Ensure that the logins used to connect the source SQL Server and target SQL Managed Instance are members of the sysadmin server role.-
- >[!NOTE]
- >By default, Azure Database Migration Service only supports migrating SQL logins. However, you can enable the ability to migrate Windows logins by:
- >
- >- Ensuring that the target SQL Managed Instance has AAD read access, which can be configured via the Azure portal by a user with the **Global Administrator** role.
- >- Configuring your Azure Database Migration Service instance to enable Windows user/group login migrations, which is set up via the Azure portal, on the Configuration page. After enabling this setting, restart the service for the changes to take effect.
- >
- > After restarting the service, Windows user/group logins appear in the list of logins available for migration. For any Windows user/group logins you migrate, you are prompted to provide the associated domain name. Service user accounts (account with domain name NT AUTHORITY) and virtual user accounts (account name with domain name NT SERVICE) are not supported.
--- Create a network share that Azure Database Migration Service can use to back up the source database.-- Ensure that the service account running the source SQL Server instance has write privileges on the network share that you created and that the computer account for the source server has read/write access to the same share.-- Make a note of a Windows user (and password) that has full control privilege on the network share that you previously created. Azure Database Migration Service impersonates the user credential to upload the backup files to Azure Storage container for restore operation.-- Create a blob container and retrieve its SAS URI by using the steps in the article [Manage Azure Blob Storage resources with Storage Explorer](../vs-azure-tools-storage-explorer-blobs.md#get-the-sas-for-a-blob-container), be sure to select all permissions (Read, Write, Delete, List) on the policy window while creating the SAS URI. This detail provides Azure Database Migration Service with access to your storage account container for uploading the backup files used for migrating databases to SQL Managed Instance.-
- > [!NOTE]
 - > - Azure Database Migration Service does not support using an account-level SAS token when configuring the Storage Account settings during the [Configure Migration Settings](#configure-migration-settings) step. The SAS URI must be scoped to the blob container; a scripted sketch for generating one appears at the end of these prerequisites.
-
-- Ensure both the Azure Database Migration Service IP address and the Azure SQL Managed Instance subnet can communicate with the blob container.--
-
-> [!NOTE]
-> For additional detail, see the article [Network topologies for Azure SQL Managed Instance migrations using Azure Database Migration Service](./resource-network-topologies.md).
-
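The container-level SAS URI described in the prerequisites can also be generated with Az PowerShell. The following is a hedged sketch, assuming the storage account and blob container already exist; the resource group, account and container names, and the expiry window are placeholders.

```powershell
# Placeholder resource names; replace with your own values.
$resourceGroup  = "dms-migration-rg"
$storageAccount = "dmsmigrationstorage"
$container      = "backupcontainer"

# Build a storage context from the account key.
$key = (Get-AzStorageAccountKey -ResourceGroupName $resourceGroup -Name $storageAccount)[0].Value
$ctx = New-AzStorageContext -StorageAccountName $storageAccount -StorageAccountKey $key

# Generate a container-level SAS URI with Read, Write, Delete, and List permissions.
New-AzStorageContainerSASToken -Name $container `
    -Permission rwdl `
    -ExpiryTime (Get-Date).AddDays(7) `
    -Context $ctx `
    -FullUri
```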
-## Create a migration project
-
-After an instance of the service is created, locate it within the Azure portal, open it, and then create a new migration project.
-
-1. In the Azure portal menu, select **All services**. Search for and select **Azure Database Migration Services**.
-
- ![Locate all instances of Azure Database Migration Service](media/tutorial-sql-server-to-managed-instance/dms-search.png)
-
-2. On the **Azure Database Migration Services** screen, select the Azure Database Migration Service instance that you created.
-
-3. Select **New Migration Project**.
-
- ![Locate your instance of Azure Database Migration Service](media/tutorial-sql-server-to-managed-instance/dms-create-project-1.png)
-
-4. On the **New migration project** screen, specify a name for the project, in the **Source server type** text box, select **SQL Server**, in the **Target server type** text box, select **Azure SQL Database Managed Instance**, and then for **Choose type of activity**, select **Offline data migration**.
-
- ![Create Database Migration Service Project](media/tutorial-sql-server-to-managed-instance/dms-create-project-2.png)
-
-5. Select **Create and run activity** to create the project and run the migration activity.
-
-## Specify source details
-
-1. On the **Select source** screen, specify the connection details for the source SQL Server instance.
-
- Make sure to use a Fully Qualified Domain Name (FQDN) for the source SQL Server instance name. You can also use the IP Address for situations in which DNS name resolution isn't possible.
-
-2. If you haven't installed a trusted certificate on your server, select the **Trust server certificate** check box.
-
- When a trusted certificate isn't installed, SQL Server generates a self-signed certificate when the instance is started. This certificate is used to encrypt the credentials for client connections.
-
- > [!CAUTION]
 - > TLS connections that are encrypted using a self-signed certificate do not provide strong security. They are susceptible to man-in-the-middle attacks. You should not rely on TLS using self-signed certificates in a production environment or on servers that are connected to the internet.
-
- ![Source Details](media/tutorial-sql-server-to-managed-instance/dms-source-details.png)
-
-3. Select **Next: Select target**
-
-## Specify target details
-
-1. On the **Select target** screen, specify the connection details for the target, which is the pre-provisioned SQL Managed Instance to which you're migrating the **AdventureWorks2016** database.
-
- If you haven't already provisioned the SQL Managed Instance, select the [link](/azure/azure-sql/managed-instance/instance-create-quickstart) to help you provision the instance. You can still continue with project creation and then, when the SQL Managed Instance is ready, return to this specific project to execute the migration.
-
- ![Select Target](media/tutorial-sql-server-to-managed-instance/dms-target-details.png)
-
-2. Select **Next: Select databases**. On the **Select databases** screen, select the **AdventureWorks2016** database for migration.
-
- ![Select Source Databases](media/tutorial-sql-server-to-managed-instance/dms-source-database.png)
-
- > [!IMPORTANT]
- > If you use SQL Server Integration Services (SSIS), DMS does not currently support migrating the catalog database for your SSIS projects/packages (SSISDB) from SQL Server to SQL Managed Instance. However, you can provision SSIS in Azure Data Factory (ADF) and redeploy your SSIS projects/packages to the destination SSISDB hosted by SQL Managed Instance. For more information about migrating SSIS packages, see the article [Migrate SQL Server Integration Services packages to Azure](./how-to-migrate-ssis-packages.md).
-
-3. Select **Next: Select logins**
-
-## Select logins
-
-1. On the **Select logins** screen, select the logins that you want to migrate.
-
- >[!NOTE]
- >By default, Azure Database Migration Service only supports migrating SQL logins. To enable support for migrating Windows logins, see the **Prerequisites** section of this tutorial.
-
- ![Select logins](media/tutorial-sql-server-to-managed-instance/dms-select-logins.png)
-
-2. Select **Next: Configure migration settings**.
-
-## Configure migration settings
-
-1. On the **Configure migration settings** screen, provide the following details:
-
- | Parameter | Description |
- |--||
- |**Choose source backup option** | Choose the option **I will provide latest backup files** when you already have full backup files available for DMS to use for database migration. Choose the option **I will let Azure Database Migration Service create backup files** when you want DMS to take the source database full backup at first and use it for migration. |
- |**Network location share** | The local SMB network share that Azure Database Migration Service can take the source database backups to. The service account running source SQL Server instance must have write privileges on this network share. Provide an FQDN or IP addresses of the server in the network share, for example, '\\\servername.domainname.com\backupfolder' or '\\\IP address\backupfolder'.|
 - |**User name** | Make sure that the Windows user has full control privilege on the network share that you provided above. Azure Database Migration Service impersonates the user credential to upload the backup files to the Azure Storage container for the restore operation. If TDE-enabled databases are selected for migration, the Windows user must be the built-in administrator account, and [User Account Control](/windows/security/identity-protection/user-account-control/user-account-control-overview) must be disabled for Azure Database Migration Service to upload and delete the certificate files. |
- |**Password** | Password for the user. |
- |**Storage account settings** | The SAS URI that provides Azure Database Migration Service with access to your storage account container to which the service uploads the backup files and that is used for migrating databases to SQL Managed Instance. [Learn how to get the SAS URI for blob container](../vs-azure-tools-storage-explorer-blobs.md#get-the-sas-for-a-blob-container). This SAS URI must be for the blob container, not for the storage account.|
 - |**TDE Settings** | If you're migrating the source databases with Transparent Data Encryption (TDE) enabled, you need to have write privileges on the target SQL Managed Instance. Select the subscription in which the SQL Managed Instance is provisioned from the drop-down menu. Then select the target **Azure SQL Database Managed Instance** from the drop-down menu. |
-
- ![Configure Migration Settings](media/tutorial-sql-server-to-managed-instance/dms-configure-migration-settings.png)
-
-2. Select **Next: Summary**.
-
-## Review the migration summary
-
-1. On the **Summary** screen, in the **Activity name** text box, specify a name for the migration activity.
-
-2. Review and verify the details associated with the migration project.
-
- ![Migration project summary](media/tutorial-sql-server-to-managed-instance/dms-project-summary.png)
-
-## Run the migration
--- Select **Start migration**.-
 - The migration activity window appears and displays the current migration status of the databases and logins.
-
-## Monitor the migration
-
-1. In the migration activity screen, select **Refresh** to update the display.
-
- ![Screenshot that shows the migration activity screen and the Refresh button.](media/tutorial-sql-server-to-managed-instance/dms-monitor-migration.png)
-
-2. You can further expand the databases and logins categories to monitor the migration status of the respective server objects.
-
- ![Migration activity in progress](media/tutorial-sql-server-to-managed-instance/dms-monitor-migration-extend.png)
-
-3. After the migration completes, verify the target database on the SQL Managed Instance environment.
-
-## Additional resources
--- For a tutorial showing you how to migrate a database to SQL Managed Instance using the T-SQL RESTORE command, see [Restore a backup to SQL Managed Instance using the restore command](/azure/azure-sql/managed-instance/restore-sample-database-quickstart).-- For information about SQL Managed Instance, see [What is SQL Managed Instance](/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview).-- For information about connecting apps to SQL Managed Instance, see [Connect applications](/azure/azure-sql/managed-instance/connect-application-instance).
event-grid Event Handlers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-handlers.md
Title: Azure Event Grid event handlers description: Describes supported event handlers for Azure Event Grid. Azure Automation, Functions, Event Hubs, Hybrid Connections, Logic Apps, Service Bus, Queue Storage, Webhooks. Previously updated : 06/16/2023 Last updated : 07/31/2024 # Event handlers in Azure Event Grid
event-hubs Event Hubs Kafka Connect Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-kafka-connect-tutorial.md
Title: Integrate with Apache Kafka Connect- Azure Event Hubs | Microsoft Docs
-description: This article provides information on how to use Kafka Connect with Azure Event Hubs for Kafka.
+ Title: Integrate with Apache Kafka Connect
+description: This article provides a walkthrough that shows you how to use Kafka Connect with Azure Event Hubs for Kafka.
Previously updated : 05/18/2023 Last updated : 07/31/2024
+# customer intent: As a developer, I want to know how to use Apache Kafka Connect with Azure Event Hubs for Kafka.
# Integrate Apache Kafka Connect support on Azure Event Hubs
-[Apache Kafka Connect](https://kafka.apache.org/documentation/#connect) is a framework to connect and import/export data from/to any external system such as MySQL, HDFS, and file system through a Kafka cluster. This tutorial walks you through using Kafka Connect framework with Event Hubs.
+[Apache Kafka Connect](https://kafka.apache.org/documentation/#connect) is a framework for connecting and importing/exporting data from/to any external system, such as MySQL, HDFS, or a file system, through a Kafka cluster. This article walks you through using the Kafka Connect framework with Event Hubs.
-
-This tutorial walks you through integrating Kafka Connect with an event hub and deploying basic FileStreamSource and FileStreamSink connectors. While these connectors aren't meant for production use, they demonstrate an end-to-end Kafka Connect scenario where Azure Event Hubs acts as a Kafka broker.
+This article walks you through integrating Kafka Connect with an event hub and deploying basic `FileStreamSource` and `FileStreamSink` connectors. While these connectors aren't meant for production use, they demonstrate an end-to-end Kafka Connect scenario where Azure Event Hubs acts as a Kafka broker.
> [!NOTE] > This sample is available on [GitHub](https://github.com/Azure/azure-event-hubs-for-kafka/tree/master/tutorials/connect).
-In this tutorial, you take the following steps:
-
-> [!div class="checklist"]
-> * Create an Event Hubs namespace
-> * Clone the example project
-> * Configure Kafka Connect for Event Hubs
-> * Run Kafka Connect
-> * Create connectors
- ## Prerequisites To complete this walkthrough, make sure you have the following prerequisites:
An Event Hubs namespace is required to send and receive from any Event Hubs serv
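If you still need to create a namespace, the following Azure CLI sketch is one way to do it. The resource group, namespace name, and region are placeholders, and the Kafka endpoint assumes the Standard tier or higher:

```bash
# Placeholder names and region; replace with your own values.
az group create --name kafka-connect-rg --location eastus

# The Kafka endpoint is available on the Standard tier and above.
az eventhubs namespace create --resource-group kafka-connect-rg \
    --name my-kafka-connect-ns --location eastus --sku Standard

# Retrieve the connection string referenced by connect-distributed.properties.
az eventhubs namespace authorization-rule keys list --resource-group kafka-connect-rg \
    --namespace-name my-kafka-connect-ns --name RootManageSharedAccessKey \
    --query primaryConnectionString --output tsv
```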
## Clone the example project Clone the Azure Event Hubs repository and navigate to the tutorials/connect subfolder:
-```
+```bash
git clone https://github.com/Azure/azure-event-hubs-for-kafka.git cd azure-event-hubs-for-kafka/tutorials/connect ``` ## Configure Kafka Connect for Event Hubs
-Minimal reconfiguration is necessary when redirecting Kafka Connect throughput from Kafka to Event Hubs. The following `connect-distributed.properties` sample illustrates how to configure Connect to authenticate and communicate with the Kafka endpoint on Event Hubs:
+Minimal reconfiguration is necessary when redirecting Kafka Connect throughput from Kafka to Event Hubs. The following `connect-distributed.properties` sample illustrates how to configure Connect to authenticate and communicate with the Kafka endpoint on Event Hubs:
```properties # e.g. namespace.servicebus.windows.net:9093
plugin.path={KAFKA.DIRECTORY}/libs # path to the libs directory within the Kafka
In this step, a Kafka Connect worker is started locally in distributed mode, using Event Hubs to maintain cluster state.
-1. Save the above `connect-distributed.properties` file locally. Be sure to replace all values in braces.
+1. Save the `connect-distributed.properties` file locally. Be sure to replace all values in braces.
2. Navigate to the location of the Kafka release on your machine.
-4. Run `./bin/connect-distributed.sh /PATH/TO/connect-distributed.properties`. The Connect worker REST API is ready for interaction when you see `'INFO Finished starting connectors and tasks'`.
+4. Run `./bin/connect-distributed.sh /PATH/TO/connect-distributed.properties`. The Connect worker REST API is ready for interaction when you see `'INFO Finished starting connectors and tasks'`.
> [!NOTE] > Kafka Connect uses the Kafka AdminClient API to automatically create topics with recommended configurations, including compaction. A quick check of the namespace in the Azure portal reveals that the Connect worker's internal topics have been created automatically.
In this step, a Kafka Connect worker is started locally in distributed mode, usi
>Kafka Connect internal topics **must use compaction**. The Event Hubs team is not responsible for fixing improper configurations if internal Connect topics are incorrectly configured. ### Create connectors
-This section walks you through spinning up FileStreamSource and FileStreamSink connectors.
+This section walks you through spinning up `FileStreamSource` and `FileStreamSink` connectors.
1. Create a directory for input and output data files. ```bash mkdir ~/connect-quickstart ```
-2. Create two files: one file with seed data from which the FileStreamSource connector reads, and another to which our FileStreamSink connector writes.
+2. Create two files: one file with seed data from which the `FileStreamSource` connector reads, and another to which our `FileStreamSink` connector writes.
```bash seq 1000 > ~/connect-quickstart/input.txt touch ~/connect-quickstart/output.txt ```
-3. Create a FileStreamSource connector. Be sure to replace the curly braces with your home directory path.
+3. Create a `FileStreamSource` connector. Be sure to replace the curly braces with your home directory path.
```bash curl -s -X POST -H "Content-Type: application/json" --data '{"name": "file-source","config": {"connector.class":"org.apache.kafka.connect.file.FileStreamSourceConnector","tasks.max":"1","topic":"connect-quickstart","file": "{YOUR/HOME/PATH}/connect-quickstart/input.txt"}}' http://localhost:8083/connectors ```
- You should see the event hub `connect-quickstart` on your Event Hubs instance after running the above command.
+ You should see the event hub `connect-quickstart` on your Event Hubs instance after running the command.
4. Check status of source connector. ```bash curl -s http://localhost:8083/connectors/file-source/status ```
- Optionally, you can use [Service Bus Explorer](https://github.com/paolosalvatori/ServiceBusExplorer/releases) to verify that events have arrived in the `connect-quickstart` topic.
+ Optionally, you can use [Service Bus Explorer](https://github.com/paolosalvatori/ServiceBusExplorer/releases) to verify that events arrived in the `connect-quickstart` topic.
-5. Create a FileStreamSink Connector. Again, make sure you replace the curly braces with your home directory path.
+5. Create a `FileStreamSink` connector. Again, make sure you replace the curly braces with your home directory path.
```bash curl -X POST -H "Content-Type: application/json" --data '{"name": "file-sink", "config": {"connector.class":"org.apache.kafka.connect.file.FileStreamSinkConnector", "tasks.max":"1", "topics":"connect-quickstart", "file": "{YOUR/HOME/PATH}/connect-quickstart/output.txt"}}' http://localhost:8083/connectors ```
This section walks you through spinning up FileStreamSource and FileStreamSink c
``` ### Cleanup
-Kafka Connect creates Event Hubs topics to store configurations, offsets, and status that persist even after the Connect cluster has been taken down. Unless this persistence is desired, it's recommended that these topics are deleted. You may also want to delete the `connect-quickstart` Event Hubs that were created during this walkthrough.
+Kafka Connect creates Event Hubs topics to store configurations, offsets, and status that persist even after the Connect cluster has been taken down. Unless this persistence is desired, we recommend that you delete these topics. You might also want to delete the `connect-quickstart` event hub that was created during this walkthrough.
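Before deleting those topics, you can also unregister the two connectors from the worker. A minimal sketch, assuming the Connect worker from this walkthrough is still listening on `localhost:8083` and you kept the connector names used above:

```bash
# Remove the connectors created in this walkthrough.
curl -s -X DELETE http://localhost:8083/connectors/file-source
curl -s -X DELETE http://localhost:8083/connectors/file-sink

# Verify that no connectors remain registered on the worker.
curl -s http://localhost:8083/connectors
```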
-## Next steps
+## Related content
To learn more about Event Hubs for Kafka, see the following articles:

-- [Mirror a Kafka broker in an event hub](event-hubs-kafka-mirror-maker-tutorial.md)
-- [Connect Apache Spark to an event hub](event-hubs-kafka-spark-tutorial.md)
-- [Connect Apache Flink to an event hub](event-hubs-kafka-flink-tutorial.md)
-- [Explore samples on our GitHub](https://github.com/Azure/azure-event-hubs-for-kafka)
-- [Connect Akka Streams to an event hub](event-hubs-kafka-akka-streams-tutorial.md)
- [Apache Kafka developer guide for Azure Event Hubs](apache-kafka-developer-guide.md)
+- [Explore samples on our GitHub](https://github.com/Azure/azure-event-hubs-for-kafka)
++
event-hubs Event Processor Balance Partition Load https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-processor-balance-partition-load.md
Title: Balance partition load across multiple instances - Azure Event Hubs | Microsoft Docs
+ Title: Balance partition load across multiple instances
description: Describes how to balance partition load across multiple instances of your application using an event processor and the Azure Event Hubs SDK. - Previously updated : 11/14/2022+ Last updated : 07/31/2024
+#customer intent: As a developer, I want to know how to run multiple instances of my processing client to read data from an event hub.
# Balance partition load across multiple instances of your application
When the checkpoint is performed to mark an event as processed, an entry in chec
By default, the function that processes events is called sequentially for a given partition. Subsequent events and calls to this function from the same partition queue up behind the scenes as the event pump continues to run in the background on other threads. Events from different partitions can be processed concurrently, and any shared state that is accessed across partitions has to be synchronized.
-## Next steps
+## Related content
See the following quick starts: - [.NET Core](event-hubs-dotnet-standard-getstarted-send.md)
event-hubs Private Link Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/private-link-service.md
Title: Integrate Azure Event Hubs with Azure Private Link Service
-description: Learn how to integrate Azure Event Hubs with Azure Private Link Service
Previously updated : 02/15/2023--
+description: This article describes how to allow access to your Event Hubs namespace only via private endpoints by using the Azure Private Link Service.
Last updated : 07/31/2024+
+# customer intent: As an IT admin, I want to restrict access to an Event Hubs namespace to a private endpoint in a virtual network.
# Allow access to Azure Event Hubs namespaces via private endpoints Azure Private Link Service enables you to access Azure Services (for example, Azure Event Hubs, Azure Storage, and Azure Cosmos DB) and Azure hosted customer/partner services over a **private endpoint** in your virtual network.
-A private endpoint is a network interface that connects you privately and securely to a service powered by Azure Private Link. The private endpoint uses a private IP address from your virtual network, effectively bringing the service into your virtual network. All traffic to the service can be routed through the private endpoint, so no gateways, NAT devices, ExpressRoute or VPN connections, or public IP addresses are needed. Traffic between your virtual network and the service traverses over the Microsoft backbone network, eliminating exposure from the public Internet. You can connect to an instance of an Azure resource, giving you the highest level of granularity in access control.
+A private endpoint is a network interface that connects you privately and securely to a service powered by Azure Private Link. The private endpoint uses a private IP address from your virtual network, effectively bringing the service into your virtual network. All traffic to the service is routed through the private endpoint, so no gateways, NAT devices, ExpressRoute or VPN connections, or public IP addresses are needed. Traffic between your virtual network and the service traverses over the Microsoft backbone network, eliminating exposure from the public Internet. You can connect to an instance of an Azure resource, giving you the highest level of granularity in access control.
For more information, see [What is Azure Private Link?](../private-link/private-link-overview.md) ## Important points - This feature isn't supported in the **basic** tier.-- Enabling private endpoints can prevent other Azure services from interacting with Event Hubs. Requests that are blocked include those from other Azure services, from the Azure portal, from logging and metrics services, and so on. As an exception, you can allow access to Event Hubs resources from certain **trusted services** even when private endpoints are enabled. For a list of trusted services, see [Trusted services](#trusted-microsoft-services).
+- Enabling private endpoints can prevent other Azure services from interacting with Event Hubs. Requests that are blocked include those from other Azure services, from the Azure portal, from logging and metrics services, and so on. As an exception, you can allow access to Event Hubs resources from certain **trusted services** even when private endpoints are enabled. For a list of trusted services, see [Trusted services](#trusted-microsoft-services).
- Specify **at least one IP rule or virtual network rule** for the namespace to allow traffic only from the specified IP addresses or subnet of a virtual network. If there are no IP and virtual network rules, the namespace can be accessed over the public internet (using the access key). ## Add a private endpoint using Azure portal
To integrate an Event Hubs namespace with Azure Private Link, you need the follo
- A subnet in the virtual network. You can use the **default** subnet. - Owner or contributor permissions for both the namespace and the virtual network.
-Your private endpoint and virtual network must be in the same region. When you select a region for the private endpoint using the portal, it will automatically filter only virtual networks that are in that region. Your namespace can be in a different region.
+Your private endpoint and virtual network must be in the same region. When you select a region for the private endpoint using the portal, it automatically filters the list to show only virtual networks that are in that region. Your namespace can be in a different region.
Your private endpoint uses a private IP address in your virtual network.
If you already have an Event Hubs namespace, you can create a private link conne
1. On the **Networking** page, for **Public network access**, select **Disabled** if you want the namespace to be accessed only via private endpoints. 1. For **Allow trusted Microsoft services to bypass this firewall**, select **Yes** if you want to allow [trusted Microsoft services](#trusted-microsoft-services) to bypass this firewall.
- :::image type="content" source="./media/private-link-service/public-access-disabled.png" alt-text="Screenshot of the Networking page with public network access as Disabled.":::
+ :::image type="content" source="./media/private-link-service/public-access-disabled.png" alt-text="Screenshot of the Networking page with public network access as Disabled." lightbox="./media/private-link-service/public-access-disabled.png":::
1. Switch to the **Private endpoint connections** tab. 1. Select the **+ Private Endpoint** button at the top of the page.
If you already have an Event Hubs namespace, you can create a private link conne
1. On the **Tags** page, create any tags (names and values) that you want to associate with the private endpoint resource. Then, select **Review + create** button at the bottom of the page. 1. On the **Review + create**, review all the settings, and select **Create** to create the private endpoint.
- ![Create Private Endpoint - Review and Create page](./media/private-link-service/create-private-endpoint-review-create-page.png)
-12. Confirm that you see the private endpoint connection you created shows up in the list of endpoints. In this example, the private endpoint is auto-approved because you connected to an Azure resource in your directory and you have sufficient permissions.
+ ![Screenshot that shows the Review + create page.](./media/private-link-service/create-private-endpoint-review-create-page.png)
+12. Confirm that the private endpoint connection you created shows up in the list of endpoints. To see it, refresh the page and switch to the **Private endpoint connections** tab. In this example, the private endpoint is auto-approved because you connected to an Azure resource in your directory and you have sufficient permissions.
- ![Private endpoint created](./media/private-link-service/private-endpoint-created.png)
+ ![Screenshot that shows the Private endpoint connections page with the newly created private endpoint.](./media/private-link-service/private-endpoint-created.png)
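If you prefer scripting over the portal, the following Azure CLI sketch creates a comparable private endpoint. Every resource name is a placeholder, and it assumes the virtual network and subnet already exist in the same region as the endpoint:

```bash
# Placeholder resource names; replace with your own values.
NAMESPACE_ID=$(az eventhubs namespace show --resource-group my-rg \
    --name my-eventhubs-ns --query id --output tsv)

# 'namespace' is the target subresource (group ID) for Event Hubs private links.
az network private-endpoint create --resource-group my-rg \
    --name my-eventhubs-pe \
    --vnet-name my-vnet --subnet default \
    --private-connection-resource-id "$NAMESPACE_ID" \
    --group-id namespace \
    --connection-name my-eventhubs-pe-connection
```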
[!INCLUDE [event-hubs-trusted-services](./includes/event-hubs-trusted-services.md)]
There are four provisioning states:
| None | Pending | Connection is created manually and is pending approval from the Private Link resource owner. | | Approve | Approved | Connection was automatically or manually approved and is ready to be used. | | Reject | Rejected | Connection was rejected by the private link resource owner. |
-| Remove | Disconnected | Connection was removed by the private link resource owner, the private endpoint becomes informative and should be deleted for cleanup. |
+| Remove | Disconnected | Connection was removed by the private link resource owner. The private endpoint becomes informative and should be deleted for cleanup. |
### Approve, reject, or remove a private endpoint connection
There are four provisioning states:
2. Select the **private endpoint** you wish to approve 3. Select the **Approve** button.
- ![Approve private endpoint](./media/private-link-service/approve-private-endpoint.png)
+ :::image type="content" source="./media/private-link-service/approve-private-endpoint.png" alt-text="Screenshot that shows the Private endpoint connections tab with the Approve button highlighted.":::
4. On the **Approve connection** page, add a comment (optional), and select **Yes**. If you select **No**, nothing happens. 5. You should see the status of the private endpoint connection in the list changed to **Approved**.
There are four provisioning states:
1. If there are any private endpoint connections you want to reject, whether it's a pending request or existing connection, select the connection and select the **Reject** button.
- ![Reject private endpoint](./media/private-link-service/private-endpoint-reject-button.png)
+ :::image type="content" source="./media/private-link-service/private-endpoint-reject-button.png" alt-text="Screenshot that shows the Private endpoint connections tab with the Reject button highlighted.":::
2. On the **Reject connection** page, enter a comment (optional), and select **Yes**. If you select **No**, nothing happens. 3. You should see the status of the private endpoint connection in the list changed to **Rejected**.
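You can also approve or reject a pending connection without the portal. A sketch using the generic private endpoint connection commands, with placeholder names:

```bash
# Approve a pending private endpoint connection on the namespace.
az network private-endpoint-connection approve --resource-group my-rg \
    --resource-name my-eventhubs-ns --type Microsoft.EventHub/namespaces \
    --name my-eventhubs-pe-connection --description "Approved from the CLI"

# Rejecting works the same way; replace 'approve' with 'reject'.
```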
Aliases: <event-hubs-namespace-name>.servicebus.windows.net
For more information, see [Azure Private Link service: Limitations](../private-link/private-link-service-overview.md#limitations).
-## Next steps
+## Related content
- Learn more about [Azure Private Link](../private-link/private-link-service-overview.md) - Learn more about [Azure Event Hubs](event-hubs-about.md)
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md
Title: Connectivity providers and locations for Azure ExpressRoute
description: This article provides a detailed overview of peering locations served by each ExpressRoute connectivity provider to connect to Azure. -+ Last updated 04/21/2024
The following table shows locations by service provider. If you want to view ava
| **[Retelit](https://www.retelit.it/EN/Home.aspx)** | Supported | Supported | Milan | | **RISQ** |Supported | Supported | Quebec City<br/>Montreal | | **SCSK** |Supported | Supported | Tokyo3 |
-| **[Sejong Telecom](https://www.sejongtelecom.net/en/pages/service/cloud_ms)** | Supported | Supported | Seoul |
+| **[Sejong Telecom](https://www.sejongtelecom.net/)** | Supported | Supported | Seoul |
| **[SES](https://www.ses.com/networks/signature-solutions/signature-cloud/ses-and-azure-expressroute)** | Supported | Supported | London2<br/>Washington DC | | **[SIFY](https://sifytechnologies.com/)** | Supported | Supported | Chennai<br/>Mumbai2 | | **[SingTel](https://www.singtel.com/about-us/news-releases/singtel-provide-secure-private-access-microsoft-azure-public-cloud)** |Supported |Supported | Hong Kong2<br/>Singapore<br/>Singapore2 |
Enabling private connectivity to fit your needs can be challenging, based on the
| **Orange Networks** | Europe | | **[Perficient](https://www.perficient.com/Partners/Microsoft/Cloud/Azure-ExpressRoute)** | North America | | **[Presidio](https://www.presidio.com/subpage/1107/microsoft-azure)** | North America |
-| **[sol-tec](https://www.sol-tec.com/what-we-do/)** | Europe |
+| **[sol-tec](https://www.advania.co.uk/our-services/azure-and-cloud/)** | Europe |
| **[Venha Pra Nuvem](https://venhapranuvem.com.br/)** | South America | | **[Vigilant.IT](https://vigilant.it/networking-services/microsoft-azure-networking/)** | Australia |
expressroute Expressroute Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-routing.md
ExpressRoute can't be configured as transit routers. You have to rely on your co
Default routes are permitted only on Azure private peering sessions. In such a case, ExpressRoute routes all traffic from the associated virtual networks to your network. Advertising default routes into private peering results in the internet path from Azure being blocked. You must rely on your corporate edge to route traffic from and to the internet for services hosted in Azure.
-To enable connectivity to other Azure services and infrastructure services, you must make sure one of the following items is in place:
-
-* You use user-defined routing to allow internet connectivity for every subnet requiring Internet connectivity.
+Some services can't be accessed from your corporate edge. To enable connectivity to other Azure services and infrastructure services, you must use user-defined routes to allow internet connectivity for every subnet that requires internet connectivity for these services.
> [!NOTE] > Advertising default routes will break Windows and other VM license activation. For information about a workaround, see [use user defined routes to enable KMS activation](/archive/blogs/mast/use-azure-custom-routes-to-enable-kms-activation-with-forced-tunneling).
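As a rough sketch of that user-defined routing approach (all names are placeholders), you could attach a route table that sends traffic for specific service prefixes directly to the internet instead of across ExpressRoute:

```bash
# Placeholder names; replace with your own values.
az network route-table create --resource-group my-rg --name internet-breakout-rt

# Route a specific service prefix straight to the internet instead of over the
# advertised default route. Substitute the prefix of each Azure service that the
# subnet must reach over the internet (the KMS activation endpoint is one example).
az network route-table route create --resource-group my-rg --route-table-name internet-breakout-rt \
    --name azure-service-breakout --address-prefix <service-prefix>/32 --next-hop-type Internet

# Associate the route table with every subnet that needs that connectivity.
az network vnet subnet update --resource-group my-rg --vnet-name my-vnet \
    --name my-subnet --route-table internet-breakout-rt
```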
hdinsight Hdinsight Apps Install Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-apps-install-applications.md
Title: Install third-party applications on Azure HDInsight description: Learn how to install third-party Apache Hadoop applications on Azure HDInsight.-+ Last updated 12/08/2023
The following list shows the published applications:
|[AtScale Intelligence Platform](https://aws.amazon.com/marketplace/pp/AtScale-AtScale-Intelligence-Platform/B07BWWHH18) |Hadoop |AtScale turns your HDInsight cluster into a scale-out OLAP server, allowing you to query billions of rows of data interactively using the BI tools you already know, own, and love – from Microsoft Excel, Power BI, Tableau Software to QlikView. | |[Datameer](https://azuremarketplace.microsoft.com/marketplace/apps/datameer.datameer) |Hadoop |Datameer's self-service scalable platform for preparing, exploring, and governing your data for analytics accelerates turning complex multisource data into valuable business-ready information, delivering faster, smarter insights at an enterprise-scale. | |[Dataiku DSS on HDInsight](https://azuremarketplace.microsoft.com/marketplace/apps/dataiku.dataiku-data-science-studio) |Hadoop, Spark |Dataiku DSS is an enterprise data science platform that lets data scientists and data analysts collaborate to design and run new data products and services more efficiently, turning raw data into impactful predictions. |
-|[WANdisco Fusion HDI App](https://community.wandisco.com/s/article/Use-WANdisco-Fusion-for-parallel-operation-of-ADLS-Gen1-and-Gen2) |Hadoop, Spark, HBase, Kafka |Keeping data consistent in a distributed environment is a massive data operations challenge. WANdisco Fusion, an enterprise-class software platform, solves this problem by enabling unstructured data consistency across any environment. |
+|[WANdisco Fusion HDI App](https://docs.wandisco.com/bigdata/wdfusion/adls/) |Hadoop, Spark, HBase, Kafka |Keeping data consistent in a distributed environment is a massive data operations challenge. WANdisco Fusion, an enterprise-class software platform, solves this problem by enabling unstructured data consistency across any environment. |
|H2O SparklingWater for HDInsight |Spark |H2O Sparkling Water supports the following distributed algorithms: GLM, Naïve Bayes, Distributed Random Forest, Gradient Boosting Machine, Deep Neural Networks, Deep learning, K-means, PCA, Generalized Low Rank Models, Anomaly Detection, Autoencoders. | |[Striim for Real-Time Data Integration to HDInsight](https://azuremarketplace.microsoft.com/marketplace/apps/striim.striimbyol) |Hadoop, HBase, Spark, Kafka |Striim (pronounced "stream") is an end-to-end streaming data integration + intelligence platform, enabling continuous ingestion, processing, and analytics of disparate data streams. | |[Jumbune Enterprise-Accelerating BigData Analytics](https://azuremarketplace.microsoft.com/marketplace/apps/impetus-infotech-india-pvt-ltd.impetus_jumbune) |Hadoop, Spark |At a high level, Jumbune assists enterprises by, 1. Accelerating Tez, MapReduce & Spark engine based Hive, Java, Scala workload performance. 2. Proactive Hadoop Cluster Monitoring, 3. Establishing Data Quality management on distributed file system. |
healthcare-apis Api Versioning Dicom Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/api-versioning-dicom-service.md
Title: API versioning for DICOM service - Azure Health Data Services
description: This guide gives an overview of the API version policies for the DICOM service. --++ Last updated 10/13/2023
healthcare-apis Change Feed Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/change-feed-overview.md
Title: Change feed overview for the DICOM service in Azure Health Data Services description: Learn how to use the change feed in the DICOM service to access the logs of all the changes that occur in your organization's medical imaging data. The change feed allows you to query, process, and act upon the change events in a scalable and efficient way. --++ Last updated 1/18/2024
healthcare-apis Configure Cross Origin Resource Sharing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/configure-cross-origin-resource-sharing.md
Last updated 10/09/2023 --++ # Configure cross-origin resource sharing
healthcare-apis Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/configure-customer-managed-keys.md
Title: Configure customer-managed keys (CMK) for the DICOM service in Azure Health Data Services description: Use customer-managed keys (CMK) to encrypt data in the DICOM service. Create and manage CMK in Azure Key Vault and update the encryption key with a managed identity. -+ Last updated 11/20/2023
healthcare-apis Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/customer-managed-keys.md
Title: Best practices for customer-managed keys for the DICOM service in Azure Health Data Services description: Encrypt your data with customer-managed keys (CMK) in the DICOM service in Azure Health Data Services. Get tips on requirements, best practices, limitations, and troubleshooting. -+ Last updated 11/20/2023
healthcare-apis Data Partitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/data-partitions.md
Title: Enable data partitioning for the DICOM service in Azure Health Data Services description: Learn how to enable data partitioning for efficient storage and management of medical images for the DICOM service in Azure Health Data Services. -+ Last updated 03/26/2024
healthcare-apis Deploy Dicom Services In Azure Data Lake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/deploy-dicom-services-in-azure-data-lake.md
Title: Deploy the DICOM service with Azure Data Lake Storage description: Learn how to deploy the DICOM service and store all your DICOM data in its native format with a data lake in Azure Health Data Services. --++ Last updated 11/21/2023
healthcare-apis Dicom Data Lake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-data-lake.md
Title: Manage medical imaging data with the DICOM service and Azure Data Lake Storage description: Learn how to use the DICOM service in Azure Health Data Services to store, access, and analyze medical imaging data in the cloud. Explore the benefits, architecture, and data contracts of the integration of the DICOM service with Azure Data Lake Storage. --++ Last updated 03/11/2024
healthcare-apis Dicom Extended Query Tags Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-extended-query-tags-overview.md
Title: DICOM extended query tags overview - Azure Health Data Services description: In this article, you'll learn the concepts of Extended Query Tags. --++ Last updated 10/9/2023
healthcare-apis Dicom Register Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-register-application.md
Title: Register a client application for the DICOM service in Microsoft Entra ID description: Learn how to register a client application for the DICOM service in Microsoft Entra ID. --++ Last updated 09/02/2022
healthcare-apis Dicom Service V2 Api Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-service-v2-api-changes.md
Title: DICOM Service API v2 Changes - Azure Health Data Services
description: This guide gives an overview of the changes in the v2 API for the DICOM service. --++ Last updated 10/13/2023
healthcare-apis Dicom Services Conformance Statement V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-services-conformance-statement-v2.md
Title: DICOM Conformance Statement version 2 for Azure Health Data Services
description: Read about the features and specifications of the DICOM service v2 API, which supports a subset of the DICOMweb Standard for medical imaging data. A DICOM Conformance Statement is a technical document that describes how a device or software implements the DICOM standard. --++ Last updated 1/18/2024
healthcare-apis Dicom Services Conformance Statement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-services-conformance-statement.md
Title: DICOM Conformance Statement version 1 for Azure Health Data Services
description: Read about the features and specifications of the DICOM service v1 API, which supports a subset of the DICOMweb Standard for medical imaging data. A DICOM Conformance Statement is a technical document that describes how a device or software implements the DICOM standard. --++ Last updated 10/13/2023
healthcare-apis Dicomweb Standard Apis C Sharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicomweb-standard-apis-c-sharp.md
Title: Use C# and DICOMweb Standard APIs in Azure Health Data Services description: Learn how to use C# and DICOMweb Standard APIs to store, retrieve, search, and delete DICOM files in the DICOM service. --++ Last updated 10/18/2023
healthcare-apis Dicomweb Standard Apis Curl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicomweb-standard-apis-curl.md
Title: Use cURL and DICOMweb Standard APIs in Azure Health Data Services description: Use cURL and DICOMweb Standard APIs to store, retrieve, search, and delete DICOM files in the DICOM service. --++ Last updated 10/18/2023
healthcare-apis Dicomweb Standard Apis Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicomweb-standard-apis-python.md
Title: Use Python and DICOMweb Standard APIs in Azure Health Data Services description: Use Python and DICOMweb Standard APIs to store, retrieve, search, and delete DICOM files in the DICOM service. --++ Last updated 02/15/2022
healthcare-apis Dicomweb Standard Apis With Dicom Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicomweb-standard-apis-with-dicom-services.md
Title: Access DICOMweb APIs to manage DICOM data in Azure Health Data Services description: Learn how to use DICOMweb APIs to store, review, search, and delete DICOM objects. Learn how to use custom APIs to track changes and assign unique tags to DICOM data. --++ Last updated 05/29/2024
healthcare-apis Enable Diagnostic Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/enable-diagnostic-logging.md
Title: Enable diagnostic logging in the DICOM service - Azure Health Data Services description: This article explains how to enable diagnostic logging in the DICOM service. --++ Last updated 10/13/2023
healthcare-apis Export Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/export-files.md
Title: Export DICOM files by using the export API of the DICOM service description: This how-to guide explains how to export DICOM files to an Azure Blob Storage account. --++ Last updated 10/30/2023
healthcare-apis Get Access Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/get-access-token.md
Title: Get an access token for the DICOM service in Azure Health Data Services description: Find out how to secure your access to the DICOM service with a token. Use the Azure command-line tool and unique identifiers to manage your medical images. --++ Last updated 10/13/2023
healthcare-apis Get Started With Analytics Dicom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/get-started-with-analytics-dicom.md
Title: Get started using DICOM data in analytics workloads - Azure Health Data S
description: Learn how to use Azure Data Factory and Microsoft Fabric to perform analytics on DICOM data. --++ Last updated 10/13/2023
healthcare-apis Import Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/import-files.md
Title: Import DICOM files into the DICOM service description: Learn how to import DICOM files by using bulk import in Azure Health Data Services. --++ Last updated 10/05/2023
healthcare-apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/overview.md
Title: Overview of the DICOM service in Azure Health Data Services description: The DICOM service is a cloud-based solution for storing, managing, and exchanging medical imaging data securely and efficiently with any DICOMweb™-enabled systems or applications. Learn more about its benefits and use cases. --++ Last updated 10/13/2023
healthcare-apis Pull Dicom Changes From Change Feed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/pull-dicom-changes-from-change-feed.md
Title: Access DICOM Change Feed logs by using C# and the DICOM client package in Azure Health Data Services description: Learn how to use C# code to consume Change Feed, a feature of the DICOM service that provides logs of all the changes in your organization's medical imaging data. The code example uses the DICOM client package to access and process the Change Feed. --++ Last updated 1/18/2024
healthcare-apis References For Dicom Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/references-for-dicom-service.md
Title: References for DICOM service - Azure Health Data Services description: This reference provides related resources for the DICOM service. --++ Last updated 06/03/2022
healthcare-apis Update Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/update-files.md
Title: Update files in the DICOM service in Azure Health Data Services description: Learn how to use the bulk update API in Azure Health Data Services to modify DICOM attributes for multiple files in the DICOM service. This article explains the benefits, requirements, and steps of the bulk update operation. --++ Last updated 1/18/2024
healthcare-apis Healthcare Apis Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/healthcare-apis-quickstart.md
Title: Azure Health Data Services quickstart description: Learn how to create a workspace for Azure Health Data Services by using the Azure portal. The workspace is a centralized logical container for instances of the FHIR service, DICOM service, and MedTech service. -+ Last updated 06/07/2024
healthcare-apis Release Notes 2021 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes-2021.md
Title: Release notes for 2021 Azure Health Data Services monthly releases
description: 2021 - Explore the new capabilities and benefits of Azure Health Data Services in 2021. Learn about the features and enhancements introduced in the FHIR, DICOM, and MedTech services that help you manage and analyze health data. -+ Last updated 03/13/2024
healthcare-apis Release Notes 2022 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes-2022.md
Title: Release notes for 2022 Azure Health Data Services monthly releases
description: 2022 - Explore the Azure Health Data Services release notes for 2022. Learn about the features and enhancements introduced in the FHIR, DICOM, and MedTech services that help you manage and analyze health data. -+ Last updated 03/13/2024
healthcare-apis Release Notes 2023 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes-2023.md
Title: Release notes for 2023 Azure Health Data Services monthly releases
description: 2023 - Find out about features and improvements introduced in 2023 for the FHIR, DICOM, and MedTech services in Azure Health Data Services. Review the monthly release notes and learn how to get the most out of healthcare data. -+ Last updated 03/13/2024
healthcare-apis Release Notes 2024 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes-2024.md
Title: Release notes for 2024 Azure Health Data Services monthly releases
description: 2024 - Stay updated with the latest features and improvements for the FHIR, DICOM, and MedTech services in Azure Health Data Services in 2024. Read the monthly release notes and learn how to get the most out of healthcare data. -+ Last updated 07/29/2024
key-vault Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/quick-create-bicep.md
description: Quickstart showing how to create Azure key vaults, and add key to t
-+
key-vault Hsm Protected Keys Byok https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/hsm-protected-keys-byok.md
description: Use this article to help you plan for, generate, and transfer your
-+ Last updated 01/30/2024
For more information on login options via the CLI, take a look at [sign in with
||||| |Cryptomathic|ISV (Enterprise Key Management System)|Multiple HSM brands and models including<ul><li>nCipher</li><li>Thales</li><li>Utimaco</li></ul>See [Cryptomathic site for details](https://www.cryptomathic.com/)|| |Entrust|Manufacturer,<br/>HSM as a service|<ul><li>nShield family of HSMs</li><li>nShield as a service</ul>|[nCipher new BYOK tool and documentation](https://www.ncipher.com/products/key-management/cloud-microsoft-azure)|
-|Fortanix|Manufacturer,<br/>HSM as a service|<ul><li>Self-Defending Key Management Service (SDKMS)</li><li>Equinix SmartKey</li></ul>|[Exporting SDKMS keys to Cloud Providers for BYOK - Azure Key Vault](https://support.fortanix.com/hc/en-us/articles/360040071192-Exporting-SDKMS-keys-to-Cloud-Providers-for-BYOK-Azure-Key-Vault)|
+|Fortanix|Manufacturer,<br/>HSM as a service|<ul><li>Self-Defending Key Management Service (SDKMS)</li><li>Equinix SmartKey</li></ul>|[Exporting SDKMS keys to Cloud Providers for BYOK - Azure Key Vault](https://support.fortanix.com/hc/articles/11620525047828-Fortanix-DSM-Azure-Key-Vault-BYOK-Bring-Your-Own-Key)|
|IBM|Manufacturer|IBM 476x, CryptoExpress|[IBM Enterprise Key Management Foundation](https://www.ibm.com/security/key-management/ekmf-bring-your-own-key-azure)| |Marvell|Manufacturer|All LiquidSecurity HSMs with<ul><li>Firmware version 2.0.4 or later</li><li>Firmware version 3.2 or newer</li></ul>|[Marvell BYOK tool and documentation](https://www.marvell.com/products/security-solutions/nitrox-hs-adapters/exporting-marvell-hsm-keys-to-cloud-azure-key-vault.html)| |Securosys SA|Manufacturer, HSM as a service|Primus HSM family, Securosys Clouds HSM|[Primus BYOK tool and documentation](https://www.securosys.com/primus-azure-byok)|
key-vault About Managed Storage Account Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/about-managed-storage-account-keys.md
description: Overview of Azure Key Vault managed storage account keys.
-+ Last updated 01/30/2024
key-vault About Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/about-secrets.md
description: Overview of Azure Key Vault secrets.
-+ Last updated 01/30/2024
key-vault Javascript Developer Guide Backup Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/javascript-developer-guide-backup-secrets.md
Title: Back up Azure Key Vault secret with JavaScript
description: Back up and restore Key Vault secret using JavaScript. -+
key-vault Javascript Developer Guide Delete Secret https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/javascript-developer-guide-delete-secret.md
Title: Delete Azure Key Vault secret with JavaScript
description: Delete, restore, or purge a Key Vault secret using JavaScript. -+
key-vault Javascript Developer Guide Enable Disable Secret https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/javascript-developer-guide-enable-disable-secret.md
Title: Enable a Azure Key Vault secret with JavaScript
description: Enable or disable a Key Vault secret using JavaScript. -+
key-vault Javascript Developer Guide Find Secret https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/javascript-developer-guide-find-secret.md
Title: Find or list Azure Key Vault secrets with JavaScript
description: Find a set of secrets or list secrets or secret version in a Key Vault JavaScript. -+
key-vault Javascript Developer Guide Get Secret https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/javascript-developer-guide-get-secret.md
Title: Get Azure Key Vault secret with JavaScript
description: Get the current secret or a specific version of a secret in Azure Key Vault with JavaScript. -+
key-vault Javascript Developer Guide Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/javascript-developer-guide-get-started.md
Title: Getting started with Azure Key Vault secret in JavaScript
description: Set up your environment, install npm packages, and authenticate to Azure to get started using Key Vault secrets in JavaScript -+
key-vault Javascript Developer Guide Set Update Rotate Secret https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/javascript-developer-guide-set-update-rotate-secret.md
Title: Create, update, or rotate Azure Key Vault secrets with JavaScript
description: Create or update with the set method, or rotate secrets with JavaScript. -+
key-vault Multiline Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/multiline-secrets.md
Title: Store a multiline secret in Azure Key Vault
description: Tutorial showing how to set multiline secrets from Azure Key Vault using Azure CLI and PowerShell -+
key-vault Overview Storage Keys Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/overview-storage-keys-powershell.md
Title: Azure Key Vault managed storage account - PowerShell version description: The managed storage account feature provides a seamless integration, between Azure Key Vault and an Azure storage account. -+
key-vault Overview Storage Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/overview-storage-keys.md
Title: Manage storage account keys with Azure Key Vault and the Azure CLI
description: Storage account keys provide seamless integration between Azure Key Vault and key-based access to an Azure storage account. -+
key-vault Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-bicep.md
Title: Azure Quickstart - Create an Azure key vault and a secret using Bicep | M
description: Quickstart showing how to create Azure key vaults, and add secrets to the vaults using Bicep. -+
key-vault Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-cli.md
Title: Quickstart - Set and retrieve a secret from Azure Key Vault description: Quickstart showing how to set and retrieve a secret from Azure Key Vault using Azure CLI -+
key-vault Quick Create Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-go.md
description: Learn how to create, retrieve, and delete secrets from an Azure key
Last updated 01/10/2024-+ ms.devlang: golang
key-vault Quick Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-java.md
Last updated 01/11/2023-+ ms.devlang: java
key-vault Quick Create Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-net.md
description: Learn how to create, retrieve, and delete secrets from an Azure key
Last updated 01/20/2023-+ ms.devlang: csharp
key-vault Quick Create Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-node.md
description: Learn how to create, retrieve, and delete secrets from an Azure key
Last updated 02/02/2023-+ ms.devlang: javascript
key-vault Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-portal.md
Title: Azure Quickstart - Set and retrieve a secret from Key Vault using Azure p
description: Quickstart showing how to set and retrieve a secret from Azure Key Vault using the Azure portal -+ Last updated 04/04/2024
key-vault Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-powershell.md
Title: Quickstart - Set & retrieve a secret from Key Vault using PowerShell
description: In this quickstart, learn how to create, retrieve, and delete secrets from an Azure Key Vault using Azure PowerShell. -+
key-vault Quick Create Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-python.md
description: Learn how to create, retrieve, and delete secrets from an Azure key
Last updated 02/03/2023-+ ms.devlang: python
key-vault Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-template.md
tags: azure-resource-manager-+
key-vault Secrets Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/secrets-best-practices.md
Title: Best practices for secrets management - Azure Key Vault | Microsoft Docs
description: Learn about best practices for Azure Key Vault secrets management. tags: azure-key-vault-+ Last updated 09/21/2021
key-vault Storage Keys Sas Tokens Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/storage-keys-sas-tokens-code.md
Title: Fetch shared access signature tokens in code | Azure Key Vault description: The managed storage account feature provides a seamless integration between Azure Key Vault and an Azure storage account. This sample uses the Azure SDK for .NET to manage SAS tokens. -+
key-vault Tutorial Rotation Dual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/tutorial-rotation-dual.md
description: Use this tutorial to learn how to automate the rotation of a secret
tags: 'rotation'-+ Last updated 01/30/2024
key-vault Tutorial Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/tutorial-rotation.md
tags: 'rotation' -+ Last updated 01/20/2023
load-balancer Load Balancer Custom Probe Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-custom-probe-overview.md
For HTTP/S probes, if the configured interval is longer than the above timeout p
## Probe source IP address
-For Load Balancer's health probe to mark up your instance, you **must** allow 168.63.129.16 IP address in any Azure [network security groups](../virtual-network/network-security-groups-overview.md) and local firewall policies. The **AzureLoadBalancer** service tag identifies this source IP address in your [network security groups](../virtual-network/network-security-groups-overview.md) and permits health probe traffic by default. You can learn more about this IP [here](../virtual-network/what-is-ip-address-168-63-129-16.md).
+For Azure Load Balancer's health probe to mark your instance as up, you **must** allow the 168.63.129.16 IP address in any Azure [network security groups](../virtual-network/network-security-groups-overview.md) and local firewall policies. The **AzureLoadBalancer** service tag identifies this source IP address in your [network security groups](../virtual-network/network-security-groups-overview.md) and permits health probe traffic by default. You can learn more about this IP [here](../virtual-network/what-is-ip-address-168-63-129-16.md).
-If you don't allow the [source IP](#probe-source-ip-address) of the probe in your firewall policies, the health probe fails as it is unable to reach your instance. In turn, Azure Load Balancer marks your instance as *down* due to the health probe failure. This misconfiguration can cause your load balanced application scenario to fail. All IPv4 Load Balancer health probes originate from the IP address 168.63.129.16 as their source. IPv6 probes use a link-local address as their source.
+If you don't allow the [source IP](#probe-source-ip-address) of the probe in your firewall policies, the health probe fails because it's unable to reach your instance. In turn, Azure Load Balancer marks your instance as *down* due to the health probe failure. This misconfiguration can cause your load-balanced application scenario to fail. All IPv4 Load Balancer health probes originate from the IP address 168.63.129.16 as their source. IPv6 probes use a link-local address (fe80::1234:5678:9abc) as their source. For a dual-stack Azure Load Balancer, you must [configure a Network Security Group](./virtual-network-ipv4-ipv6-dual-stack-standard-load-balancer-cli.md#create-a-network-security-group-rule-for-inbound-and-outbound-connections) for the IPv6 health probe to function.
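Network security groups include a default rule that already allows the **AzureLoadBalancer** tag. If you've added broad deny rules, a sketch like the following (placeholder names, with a priority chosen below your deny rule) re-allows probe traffic explicitly:

```bash
# Placeholder names; pick a priority lower (higher precedence) than any deny rule.
az network nsg rule create --resource-group my-rg --nsg-name my-nsg \
    --name AllowAzureLBProbes --priority 100 --direction Inbound --access Allow \
    --protocol '*' --source-address-prefixes AzureLoadBalancer \
    --source-port-ranges '*' --destination-address-prefixes '*' --destination-port-ranges '*'
```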
## Limitations
load-balancer Load Balancer Tcp Reset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-tcp-reset.md
Title: Load Balancer TCP Reset and idle timeout in Azure
-description: With this article, learn about Azure Load Balancer with bidirectional TCP RST packets on idle timeout.
+description: With this article, learn about Azure Load Balancer with bidirectional TCP Reset packets on idle timeout.
Previously updated : 01/19/2024 Last updated : 07/31/2024 # Load Balancer TCP Reset and Idle Timeout
-You can use [Standard Load Balancer](./load-balancer-overview.md) to create a more predictable application behavior for your scenarios by enabling TCP Reset on Idle for a given rule. Load Balancer's default behavior is to silently drop flows when the idle timeout of a flow is reached. Enabling TCP reset causes Load Balancer to send bidirectional TCP Resets (TCP RST packets) on idle timeout to inform your application endpoints that the connection timed out and is no longer usable. Endpoints can immediately establish a new connection if needed.
+You can use [Standard Load Balancer](./load-balancer-overview.md) to create a more predictable application behavior for your scenarios by enabling TCP Reset on Idle for a given rule. Load Balancer's default behavior is to silently drop flows when the idle timeout of a flow is reached. Enabling TCP reset causes Load Balancer to send bidirectional TCP Resets (TCP reset packets) on idle timeout to inform your application endpoints that the connection timed out and is no longer usable. Endpoints can immediately establish a new connection if needed.
:::image type="content" source="media/load-balancer-tcp-reset/load-balancer-tcp-reset.png" alt-text="Diagram shows default TCP reset behavior of network nodes."::: ## TCP reset
-You change this default behavior and enable sending TCP Resets on idle timeout on inbound NAT rules, load balancing rules, and [outbound rules](./load-balancer-outbound-connections.md#outboundrules). When enabled per rule, Load Balancer sends bidirectional TCP Resets (TCP RST packets) to both client and server endpoints at the time of idle timeout for all matching flows.
+You can change this default behavior and enable sending TCP Resets on idle timeout on inbound NAT rules, load balancing rules, and [outbound rules](./load-balancer-outbound-connections.md#outboundrules). When enabled per rule, Load Balancer sends bidirectional TCP Resets (TCP RST packets) to both client and server endpoints at the time of idle timeout for all matching flows.
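As one hedged Azure CLI sketch (load balancer and rule names are placeholders), you could turn on TCP reset and raise the idle timeout on an existing load-balancing rule like this:

```bash
# Placeholder resource names; adjust the idle timeout (in minutes) to suit your workload.
az network lb rule update --resource-group my-rg --lb-name my-load-balancer \
    --name my-lb-rule --enable-tcp-reset true --idle-timeout 15
```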
-Endpoints receiving TCP RST packets close the corresponding socket immediately. This provides an immediate notification to the endpoint's connection release and any future communication on the same TCP connection will fail. Applications can purge connections when the socket closes and reestablish connections as needed without waiting for the TCP connection to eventually time-out.
+Endpoints receiving TCP reset packets close the corresponding socket immediately. This immediately notifies the endpoint that the connection was released and that any future communication on the same TCP connection will fail. Applications can purge connections when the socket closes and reestablish connections as needed without waiting for the TCP connection to eventually time out.
For many scenarios, TCP reset can reduce the need to send TCP (or application layer) keepalives to refresh the idle timeout of a flow.
-If your idle durations exceed configuration limits or your application shows an undesirable behavior with TCP Resets enabled, you can still need to use TCP keepalives, or application layer keepalives, to monitor the liveness of the TCP connections. Further, keepalives can also remain useful for when the connection is proxied somewhere in the path, particularly application layer keepalives.
+If your idle durations exceed configuration limits or your application shows an undesirable behavior with TCP Resets enabled, you might still need to use TCP keepalives, or application layer keepalives, to monitor the liveness of the TCP connections. Keepalives can also remain useful when the connection is proxied somewhere in the path, particularly application layer keepalives.
By carefully examining the entire end-to-end scenario, you can determine the benefits of enabling TCP Resets and adjusting the idle timeout. Then you can decide whether more steps are required to ensure the desired application behavior.
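For reference, both settings are configured on the load balancing rule itself. The following is a minimal, illustrative sketch (not a complete rule definition) of how `idleTimeoutInMinutes` and `enableTcpReset` might appear in an ARM template's `loadBalancingRules` entry; the rule name, ports, and timeout value are placeholders, and the required frontend and backend references are omitted:

```json
{
  "name": "myTcpRule",
  "properties": {
    "protocol": "Tcp",
    "frontendPort": 443,
    "backendPort": 443,
    "idleTimeoutInMinutes": 15,
    "enableTcpReset": true
  }
}
```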
Azure Load Balancer has a 4-minute to 100-minute timeout range for Load Balanc
When the connection is closed, your client application can receive the following error message: "The underlying connection was closed: A connection that was expected to be kept alive was closed by the server."
-If TCP RSTs are enabled, and it's missed for any reason, RSTs will be sent for any subsequent packets. If the TCP RST option isn't enabled, then packets will be silently dropped.
+If TCP resets are enabled and a reset is missed for any reason, resets are sent for any subsequent packets. If the TCP reset option isn't enabled, then packets are silently dropped.
A common practice is to use a TCP keep-alive. This practice keeps the connection active for a longer period. For more information, see these [.NET examples](/dotnet/api/system.net.servicepoint.settcpkeepalive). With keep-alive enabled, packets are sent during periods of inactivity on the connection. Keep-alive packets ensure the idle timeout value isn't reached and the connection is maintained for a long period.
It's important to take into account how the idle timeout values set for differen
### Outbound - If there's an outbound rule with an idle timeout value different than 4 minutes (which is what public IP outbound idle timeout is locked at), the outbound rule idle timeout takes precedence.-- Because a NAT gateway will always take precedence over load balancer outbound rules (and over public IP addresses assigned directly to VMs), the idle timeout value assigned to the NAT gateway will be used. (Along the same lines, the locked public IP outbound idle timeouts of 4 minutes of any IPs assigned to the NAT GW aren't considered.)
+- Because a NAT gateway always takes precedence over load balancer outbound rules (and over public IP addresses assigned directly to VMs), the idle timeout value assigned to the NAT gateway is used. (Along the same lines, the locked 4-minute public IP outbound idle timeout of any IPs assigned to the NAT gateway isn't considered.)
## Limitations - TCP reset is only sent during a TCP connection in the ESTABLISHED state. - TCP idle timeout doesn't affect load balancing rules on the UDP protocol.-- TCP reset isn't supported for ILB HA ports when a network virtual appliance is in the path. A workaround could be to use outbound rule with TCP reset from NVA.
+- TCP reset isn't supported for Internal Load Balancer HA ports when a network virtual appliance is in the path. A workaround could be to use an outbound rule with TCP reset from the network virtual appliance.
## Next steps - Learn about [Standard Load Balancer](./load-balancer-overview.md). - Learn about [outbound rules](./load-balancer-outbound-connections.md#outboundrules).-- [Configure TCP RST on Idle Timeout](load-balancer-tcp-idle-timeout.md)
+- [Configure TCP RST on Idle Timeout](load-balancer-tcp-idle-timeout.md)
load-balancer Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/whats-new.md
The product group is actively working on resolutions for the following known iss
|Issue |Description |Mitigation | | - |||
-| IP based LB outbound IP | IP based LB uses Azure's Default Outbound Access IP for outbound | In order to prevent outbound access from this IP, use NAT Gateway for a predictable IP address and to prevent SNAT port exhaustion |
-| numberOfProbes, "Unhealthy threshold" | Health probe configuration property numberOfProbes, otherwise known as "Unhealthy threshold" in Portal, isn't respected. Load Balancer health probes will probe up/down immediately after one probe regardless of the property's configured value | To control the number of successful or failed consecutive probes necessary to mark backend instances as healthy or unhealthy, please leverage the property ["probeThreshold"](/azure/templates/microsoft.network/loadbalancers?pivots=deployment-language-arm-template#probepropertiesformat-1) instead |
+| IP-based Load Balancer outbound IP | IP-based Load Balancers aren't currently secure by default and use the backend instances' default outbound access IPs for outbound connections. If the Load Balancer is a public Load Balancer, either the default outbound access IPs or the Load Balancer's frontend IP might be used. | To prevent backend instances behind an IP-based Load Balancer from using default outbound access, use NAT Gateway for a predictable IP address and to prevent SNAT port exhaustion, or use the private subnet feature to secure your Load Balancer. |
+| numberOfProbes, "Unhealthy threshold" | The health probe configuration property numberOfProbes, otherwise known as "Unhealthy threshold" in the portal, isn't respected. Load Balancer health probes probe up or down immediately after one probe, regardless of the property's configured value. | To control the number of successful or failed consecutive probes necessary to mark backend instances as healthy or unhealthy, use the property ["probeThreshold"](/azure/templates/microsoft.network/loadbalancers?pivots=deployment-language-arm-template#probepropertiesformat-1) instead. |
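For illustration only, here's a rough sketch of a health probe fragment in an ARM template that relies on `probeThreshold` rather than `numberOfProbes`; the probe name, port, path, and threshold value are placeholders:

```json
{
  "name": "myHealthProbe",
  "properties": {
    "protocol": "Http",
    "port": 80,
    "requestPath": "/health",
    "intervalInSeconds": 5,
    "probeThreshold": 2
  }
}
```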
logic-apps Logic Apps Data Operations Code Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-data-operations-code-samples.md
ms.suite: integration Previously updated : 12/13/2023 Last updated : 07/31/2024 # Data operation code samples for Azure Logic Apps [!INCLUDE [logic-apps-sku-consumption-standard](../../includes/logic-apps-sku-consumption-standard.md)]
-Here are the code samples for the data operation action definitions in the article, [Perform data operations](../logic-apps/logic-apps-perform-data-operations.md). You can use these samples for when you want to try the examples with your own logic app's underlying workflow definition, Azure subscription, and API connections. Just copy and paste these action definitions into the code view editor for your logic app's workflow definition, and then modify the definitions for your specific workflow.
+Here are the code samples for the data operation action definitions in the article, [Perform data operations](logic-apps-perform-data-operations.md). You can use these samples when you want to try the examples with your own logic app's underlying workflow definition, Azure subscription, and API connections. Just copy and paste these action definitions into the code view editor for your logic app's workflow definition, and then modify the definitions for your specific workflow.
Based on JavaScript Object Notation (JSON) standards, these action definitions appear in alphabetical order. However, in the Logic App Designer, each definition appears in the correct sequence within your workflow because each action definition's `runAfter` property specifies the run order.
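For example, a minimal, illustrative sketch of how `runAfter` controls ordering (the action names here are placeholders, not from the samples): the second action runs only after the first action succeeds.

```json
"Second_action": {
  "type": "Compose",
  "inputs": "@outputs('First_action')",
  "runAfter": {
    "First_action": [ "Succeeded" ]
  }
}
```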
Based on JavaScript Object Notation (JSON) standards, these action definitions a
## Compose
-To try the [**Compose** action example](../logic-apps/logic-apps-perform-data-operations.md#compose-action),
+To try the [**Compose** action example](logic-apps-perform-data-operations.md#compose-action),
here are the action definitions you can use: ```json
here are the action definitions you can use:
{ "name": "firstNameVar", "type": "String",
- "value": "Sophie "
+ "value": "Sophia "
} ] },
here are the action definitions you can use:
{ "name": "lastNameVar", "type": "String",
- "value": "Owen"
+ "value": "Owens"
} ] },
here are the action definitions you can use:
## Create CSV table
-To try the [**Create CSV table** action example](../logic-apps/logic-apps-perform-data-operations.md#create-csv-table-action), here are the action definitions you can use:
+To try the [**Create CSV table** action example](logic-apps-perform-data-operations.md#create-csv-table-action), here are the action definitions you can use:
```json "actions": {
To try the [**Create CSV table** action example](../logic-apps/logic-apps-perfor
## Create HTML table
-To try the [**Create HTML table** action example](../logic-apps/logic-apps-perform-data-operations.md#create-html-table-action),
+To try the [**Create HTML table** action example](logic-apps-perform-data-operations.md#create-html-table-action),
here are the action definitions you can use: ```json
here are the action definitions you can use:
## Filter array
-To try the [**Filter array** action example](../logic-apps/logic-apps-perform-data-operations.md#filter-array-action), here are the action definitions you can use:
+To try the [**Filter array** action example](logic-apps-perform-data-operations.md#filter-array-action), here are the action definitions you can use:
```json "actions": {
To try the [**Filter array** action example](../logic-apps/logic-apps-perform-da
## Join
-To try the [**Join** action example](../logic-apps/logic-apps-perform-data-operations.md#join-action), here are the action definitions you can use:
+To try the [**Join** action example](logic-apps-perform-data-operations.md#join-action), here are the action definitions you can use:
```json "actions": {
To try the [**Join** action example](../logic-apps/logic-apps-perform-data-opera
## Parse JSON
-To try the [**Parse JSON** action example](../logic-apps/logic-apps-perform-data-operations.md#parse-json-action), here are the action definitions you can use:
+To try the [**Parse JSON** action example](logic-apps-perform-data-operations.md#parse-json-action), here are the action definitions you can use:
```json "actions": {
To try the [**Parse JSON** action example](../logic-apps/logic-apps-perform-data
"type": "Object", "value": { "Member": {
- "Email": "Sophie.Owen@contoso.com",
- "FirstName": "Sophie",
- "LastName": "Owen"
+ "Email": "Sophia.Owens@fabrikam.com",
+ "FirstName": "Sophia",
+ "LastName": "Owens"
} } }
To try the [**Parse JSON** action example](../logic-apps/logic-apps-perform-data
## Select
-To try the [**Select** action example](../logic-apps/logic-apps-perform-data-operations.md#select-action), the following action definitions create a JSON object array from an integer array:
+To try the [**Select** action example](logic-apps-perform-data-operations.md#select-action), the following action definitions create a JSON object array from an integer array:
```json "actions": {
To try the [**Select** action example](../logic-apps/logic-apps-perform-data-ope
}, ```
-The following example shows action definitions that create a string array from a JSON object array, but for this task, next to the **Map** box, switch to text mode (![Icon for text mode.](media/logic-apps-perform-data-operations/text-mode.png)) in the designer, or use the code view editor instead:
+The following example shows action definitions that create a string array from a JSON object array, but for this task, next to the **Map** box, switch to text mode (**T** icon) in the designer, or use the code view editor instead:
```json "actions": {
The following example shows action definitions that create a string array from a
## Next steps
-* [Perform data operations](../logic-apps/logic-apps-perform-data-operations.md)
+* [Perform data operations](logic-apps-perform-data-operations.md)
logic-apps Logic Apps Perform Data Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-perform-data-operations.md
ms.suite: integration Previously updated : 12/13/2023 Last updated : 07/31/2024 # Customer intent: As a developer using Azure Logic Apps, I want to perform various data operations on various data types for my workflow in Azure Logic Apps.
This how-to guide shows how you can work with data in your logic app workflow in
* Create an array based on the specified properties for all the items in another array. * Create a string from all the items in an array and separate those items using a specified character.
-For other ways to work with data, review the [data manipulation functions](workflow-definition-language-functions-reference.md) that Azure Logic Apps provides.
- ## Prerequisites * An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* The logic app workflow where you want to perform the data operation. This workflow must already have a [trigger](logic-apps-overview.md#logic-app-concepts) as the first step in your workflow. Both Consumption and Standard logic app workflows support the data operations described in this guide.
+* The logic app workflow where you want to perform the data operation. Both Consumption and Standard logic app workflows support the data operations described in this guide.
- All data operations are available only as actions. So, before you can use these actions, your workflow must already start with a trigger and include any other actions required to create the outputs that you want to use in the data operation.
+ All data operations are available only as actions. So, before you can use these actions, your workflow must already start with a [trigger](logic-apps-overview.md#logic-app-concepts) as the first step and include any other actions required to create the outputs that you want to use in the data operation.
## Data operation actions
The following actions help you work with data in JavaScript Object Notation (JSO
| [**Compose**](#compose-action) | Create a message, or string, from multiple inputs that can have various data types. You can then use this string as a single input, rather than repeatedly entering the same inputs. For example, you can create a single JSON message from various inputs. | | [**Parse JSON**](#parse-json-action) | Create user-friendly data tokens for properties in JSON content so that you can more easily use the properties in your logic apps. |
-To create more complex JSON transformations, see [Perform advanced JSON transformations with Liquid templates](../logic-apps/logic-apps-enterprise-integration-liquid-transform.md).
+To create more complex JSON transformations, see [Perform advanced JSON transformations with Liquid templates](logic-apps-enterprise-integration-liquid-transform.md).
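As a rough orientation only (not one of this article's samples), a **Parse JSON** action definition in the underlying workflow JSON pairs the content to parse with a schema; the trigger reference and schema below are illustrative:

```json
"Parse_JSON": {
  "type": "ParseJson",
  "inputs": {
    "content": "@triggerBody()",
    "schema": {
      "type": "object",
      "properties": {
        "FirstName": { "type": "string" },
        "LastName": { "type": "string" }
      }
    }
  },
  "runAfter": {}
}
```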
### Array actions
For example, you can construct a JSON message from multiple variables, such as s
`{ "age": <ageVar>, "fullName": "<lastNameVar>, <firstNameVar>" }`
-and creates the following output:
+This example creates the following output:
`{"age":35,"fullName":"Owens,Sophia"}`
-To try the **Compose** action, follow these steps by using the workflow designer. Or, if you prefer working in the code view editor, you can copy the example **Compose** and **Initialize variable** action definitions from this guide into your own logic app's underlying workflow definition: [Data operation code examples - Compose](../logic-apps/logic-apps-data-operations-code-samples.md#compose-action-example). For more information about the **Compose** action in the underlying JSON workflow definition, see the [Compose action](logic-apps-workflow-actions-triggers.md#compose-action).
+To try the **Compose** action, follow these steps by using the workflow designer. Or, if you prefer working in the code view editor, you can copy the example **Compose** and **Initialize variable** action definitions from this guide into your own logic app's underlying workflow definition: [Data operation code examples - Compose](logic-apps-data-operations-code-samples.md#compose-action-example). For more information about the **Compose** action in the underlying JSON workflow definition, see the [Compose action](logic-apps-workflow-actions-triggers.md#compose-action).
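If you prefer the code view, a minimal sketch of what this example's **Compose** action definition could look like follows. The exact `runAfter` entry depends on how your **Initialize variable** actions are named, so the name shown here is illustrative:

```json
"Compose": {
  "type": "Compose",
  "inputs": {
    "age": "@variables('ageVar')",
    "fullName": "@{variables('lastNameVar')}, @{variables('firstNameVar')}"
  },
  "runAfter": {
    "Initialize_variable_ageVar": [ "Succeeded" ]
  }
}
```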
### [Consumption](#tab/consumption) 1. In the [Azure portal](https://portal.azure.com), Visual Studio, or Visual Studio Code, open your logic app workflow in the designer.
- This example uses the Azure portal and a sample workflow with the **Recurrence** trigger followed by several **Initialize variable** actions. These actions are set up to create two string variables and an integer variable.
+ This example uses the Azure portal and a sample workflow with the **Recurrence** trigger followed by several **Variables** actions named **Initialize variable**. These actions are set up to create two string variables and an integer variable.
+
+ | Operation | Properties and values |
+ |--|--|
+ | **Initialize variable** | - **Name**: firstNameVar <br>- **Type**: String <br>- **Value**: Sophia |
+ | **Initialize variable** | - **Name**: lastNameVar <br>- **Type**: String <br>- **Value**: Owens |
+ | **Initialize variable** | - **Name**: ageVar <br>- **Type**: Integer <br>- **Value**: 35 |
- ![Screenshot showing the Azure portal and the designer with a sample Consumption workflow for the Compose action.](./media/logic-apps-perform-data-operations/sample-start-compose-action-consumption.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/sample-start-compose-action-consumption.png" alt-text="Screenshot shows Azure portal, Consumption workflow designer, and example workflow for Compose action." lightbox="media/logic-apps-perform-data-operations/sample-start-compose-action-consumption.png":::
-1. In your workflow where you want to create the output, follow one of these steps:
+1. [Follow these general steps to add the **Data Operations** action named **Compose**](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
- * To add an action under the last step, select **New step**.
+1. On the designer, select the **Compose** action, if not already selected. In the **Inputs** box, enter the inputs to use for creating the output.
- * To add an action between steps, move your mouse over the connecting arrow so the plus sign (**+**) appears. Select the plus sign, and then select **Add an action**.
+ For this example, follow these steps:
-1. Under the **Choose an operation** search box, select **Built-in**. In the search box, enter **compose**.
+ 1. In the **Inputs** box, enter the following sample JSON object, including the spacing as shown:
-1. From the actions list, select the action named **Compose**.
+ ```json
+ {
+ "age": ,
+ "fullName": " , "
+ }
+ ```
- ![Screenshot showing the designer for a Consumption workflow, the "Choose an operation" search box with "compose" entered, and the "Compose" action selected.](./media/logic-apps-perform-data-operations/select-compose-action-consumption.png)
+ 1. In the JSON object, put your cursor in the corresponding locations, select the dynamic content list (lightning icon), and then select the corresponding variable from the list:
-1. In the **Inputs** box, enter the inputs to use for creating the output.
+ | JSON property | Variable |
+ ||-|
+ | **`age`** | **ageVar** |
+ | **`fullName`** | "**lastNameVar**, **firstNameVar**" |
- For this example, select inside the **Inputs** box, which opens the dynamic content list. From that list, select the previously created variables:
+ The following example shows both added and not yet added variables:
- ![Screenshot showing the designer for a Consumption workflow, the "Compose" action, and the selected inputs to use.](./media/logic-apps-perform-data-operations/configure-compose-action-consumption.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/configure-compose-action.png" alt-text="Screenshot shows Consumption workflow, Compose action, dynamic content list, and selected inputs to use." lightbox="media/logic-apps-perform-data-operations/configure-compose-action.png":::
- The following screenshot shows the finished example **Compose** action:
+ The following example shows the finished sample **Compose** action:
- ![Screenshot showing the designer for a Consumption workflow and the finished example for the "Compose" action.](./media/logic-apps-perform-data-operations/finished-compose-action-consumption.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/finished-compose-action.png" alt-text="Screenshot shows Consumption workflow and finished example Compose action." lightbox="media/logic-apps-perform-data-operations/finished-compose-action.png":::
1. Save your workflow. On the designer toolbar, select **Save**.
To try the **Compose** action, follow these steps by using the workflow designer
1. In the [Azure portal](https://portal.azure.com) or Visual Studio Code, open your logic app workflow in the designer.
- This example uses the Azure portal and a sample workflow with the **Recurrence** trigger followed by several **Initialize variable** actions. These actions are set up to create two string variables and an integer variable.
+ This example uses the Azure portal and a sample workflow with the **Recurrence** trigger followed by several **Variables** actions named **Initialize variable**. These actions are set up to create two string variables and an integer variable.
- ![Screenshot showing the Azure portal and the designer for a sample Standard workflow for the Compose action.](./media/logic-apps-perform-data-operations/sample-start-compose-action-standard.png)
+ | Operation | Properties and values |
+ |--|--|
+ | **Initialize variable** | - **Name**: firstNameVar <br>- **Type**: String <br>- **Value**: Sophia |
+ | **Initialize variable** | - **Name**: lastNameVar <br>- **Type**: String <br>- **Value**: Owens |
+ | **Initialize variable** | - **Name**: ageVar <br>- **Type**: Integer <br>- **Value**: 35 |
-1. In your workflow where you want to create the output, follow one of these steps:
+ :::image type="content" source="media/logic-apps-perform-data-operations/sample-start-compose-action-standard.png" alt-text="Screenshot shows Azure portal, Standard workflow designer, and example workflow for Compose action." lightbox="media/logic-apps-perform-data-operations/sample-start-compose-action-standard.png":::
- * To add an action under the last step, select the plus sign (**+**), and then select **Add an action**.
+1. [Follow these general steps to add the **Data Operations** action named **Compose**](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
- * To add an action between steps, select the plus sign (**+**) between those steps, and then select **Add an action**.
+1. On the designer, select the **Compose** action, if not already selected. In the **Inputs** box, enter the inputs to use for creating the output.
-1. After the connector gallery opens, [follow these general steps to find the **Data Operations** action named **Compose**](create-workflow-with-trigger-or-action.md?tabs=standard#add-an-action-to-run-a-task).
+ For this example, follow these steps:
- > [!NOTE]
- >
- > If the connector results box shows the message that **We couldn't find any results for compose**,
- > you get this result because the connector name is actually **Data Operations**, not **Compose**,
- > which is the action name.
+ 1. In the **Inputs** box, enter the following sample JSON object, including the spacing as shown:
-1. After the action information box opens, in the **Inputs** box, enter the inputs to use for creating the output.
+ ```json
+ {
+ "age": ,
+ "fullName": " , "
+ }
+ ```
- For this example, select inside the **Inputs** box, and then select the lightning icon, which opens the dynamic content list. From that list, select the previously created variables:
+ 1. In the JSON object, put your cursor in the corresponding locations, select the dynamic content list (lightning icon), and then select the corresponding variable from the list:
- ![Screenshot showing the designer for a Standard workflow, the "Compose" action, and the selected inputs to use.](./media/logic-apps-perform-data-operations/configure-compose-action-standard.png)
+ | JSON property | Variable |
+ ||-|
+ | **`age`** | **ageVar** |
+ | **`fullName`** | "**lastNameVar**, **firstNameVar**" |
- The following screenshot shows the finished example **Compose** action:
+ The following example shows both added and not yet added variables:
- ![Screenshot showing the designer for a Standard workflow and the finished example for the "Compose" action.](./media/logic-apps-perform-data-operations/finished-compose-action-standard.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/configure-compose-action.png" alt-text="Screenshot shows Standard workflow, Compose action, dynamic content list, and selected inputs to use." lightbox="media/logic-apps-perform-data-operations/configure-compose-action.png":::
+
+ The following example shows the finished sample **Compose** action:
+
+ :::image type="content" source="media/logic-apps-perform-data-operations/finished-compose-action.png" alt-text="Screenshot shows Standard workflow and finished example Compose action." lightbox="media/logic-apps-perform-data-operations/finished-compose-action.png":::
1. Save your workflow. On the designer toolbar, select **Save**.
To try the **Compose** action, follow these steps by using the workflow designer
To confirm whether the **Compose** action creates the expected results, send yourself a notification that includes output from the **Compose** action.
-#### [Consumption](#tab/consumption)
- 1. In your workflow, add an action that can send you the results from the **Compose** action. This example continues by using the Office 365 Outlook action named **Send an email**.
-1. In this action, for each box where you want the results to appear, select inside each box, which opens the dynamic content list. From that list, under the **Compose** action, select **Outputs**.
+1. In this action, for each box where you want the results to appear, select inside each box, and then select the dynamic content list. From that list, under the **Compose** action, select **Outputs**.
For this example, the result appears in the email's body, so add the **Outputs** field to the **Body** box.
- ![Screenshot showing the Azure portal, designer for an example Consumption workflow, and the "Send an email" action with the output from the preceding "Compose" action.](./media/logic-apps-perform-data-operations/send-email-compose-action-consumption.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/send-email-compose-action.png" alt-text="Screenshot shows workflow designer, the action named Send an email, and output from the preceding Compose action." lightbox="media/logic-apps-perform-data-operations/send-email-compose-action.png":::
-1. Save your workflow, and then manually run your workflow. On the designer toolbar, select **Run Trigger** > **Run**.
+1. Save your workflow, and then manually run your workflow.
-#### [Standard](#tab/standard)
+ - Consumption workflow: On the designer toolbar, select **Run** > **Run**.
+ - Standard workflow: On the workflow navigation menu, select **Overview**. On the **Overview** page toolbar, select **Run** > **Run**.
-1. In your workflow, add an action that can send you the results from the **Compose** action.
+If you used the Office 365 Outlook action, the following example shows the result:
- This example continues by using the Office 365 Outlook action named **Send an email**.
-
-1. In this action, for each box where you want the results to appear, select inside each box, and then select the lightning icon, which opens the dynamic content list. From that list, under the **Compose** action, select **Outputs**.
-
- > [!NOTE]
- >
- > If the dynamic content list shows the message that **We can't find any outputs to match this input format**,
- > select **See more** next to the **Compose** label in the list.
- >
- > ![Screenshot showing a Standard workflow and the dynamic content list with "See more" selected for the "Compose" action.](./media/logic-apps-perform-data-operations/send-email-compose-action-see-more.png)
-
- For this example, the result appears in the email's body, so add the **Outputs** field to the **Body** box.
-
- ![Screenshot showing the Azure portal, designer for an example Standard workflow, and the "Send an email" action with the output from the preceding "Compose" action.](./media/logic-apps-perform-data-operations/send-email-compose-action-standard.png)
-
-1. Save your workflow, and then manually run your workflow. On the workflow navigation menu, select **Overview** > **Run Trigger** > **Run**.
---
-If you used the Office 365 Outlook action, you get a result similar to the following screenshot:
-
-![Screenshot showing an email with the "Compose" action results.](./media/logic-apps-perform-data-operations/compose-email-results.png)
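Under the covers, the email action references the Compose result through the `outputs()` function. For example, the email's **Body** field might contain a fragment similar to the following sketch, where the surrounding HTML is illustrative:

```json
"Body": "<p>@{outputs('Compose')}</p>"
```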
<a name="create-csv-table-action"></a>
To try the **Create CSV table** action, follow these steps by using the workflo
This example uses the Azure portal and a sample workflow with the **Recurrence** trigger followed by an **Initialize variable** action. The action is set up to create a variable where the initial value is an array that has some properties and values in JSON format.
- ![Screenshot showing the Azure portal and the designer with a sample Consumption workflow for the "Create CSV table" action.](./media/logic-apps-perform-data-operations/sample-start-create-table-action-consumption.png)
-
-1. In your workflow where you want to create the CSV table, follow one of these steps:
+ | Operation | Properties and values |
+ |--|--|
+ | **Initialize variable** | - **Name**: myJSONArray <br>- **Type**: Array <br>- **Value**: `[ { "Description": "Apples", "Product_ID": 1 }, { "Description": "Oranges", "Product_ID": 2 }]` |
- * To add an action under the last step, select **New step**.
+ :::image type="content" source="media/logic-apps-perform-data-operations/sample-start-create-table-action-consumption.png" alt-text="Screenshot shows Consumption workflow designer, and example workflow for action named Create CSV table." lightbox="media/logic-apps-perform-data-operations/sample-start-create-table-action-consumption.png":::
- * To add an action between steps, move your mouse over the connecting arrow so the plus sign (**+**) appears. Select the plus sign, and then select **Add an action**.
+1. [Follow these general steps to add the **Data Operations** action named **Create CSV table**](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
-1. Under the **Choose an operation** search box, select **Built-in**. In the search box, enter **create csv table**.
+1. On the designer, select the **Create CSV table** action, if not already selected. In the **From** box, enter the array or expression to use for creating the table.
-1. From the actions list, select the action named **Create CSV table**.
+ For this example, select inside the **From** box, and select the dynamic content list (lightning icon). From that list, select the **myJSONArray** variable:
- ![Screenshot showing the designer for a Consumption workflow, the "Choose an operation" search box with "create csv table" entered, and the "Create CSV table" action selected.](./media/logic-apps-perform-data-operations/select-create-csv-table-action-consumption.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/configure-create-csv-table-action.png" alt-text="Screenshot shows Consumption workflow, action named Create CSV table, and the selected input to use." lightbox="media/logic-apps-perform-data-operations/configure-create-csv-table-action.png":::
-1. In the **From** box, enter the array or expression to use for creating the table.
-
- For this example, select inside the **From** box, which opens the dynamic content list. From that list, select the previously created variable:
-
- ![Screenshot showing the designer for a Consumption workflow, the "Create CSV table" action, and the selected input to use.](./media/logic-apps-perform-data-operations/configure-create-csv-table-action-consumption.png)
-
- > [!NOTE]
+ > [!TIP]
> > To create user-friendly tokens for the properties in JSON objects so that you can select
- > those properties as inputs, use the action named [Parse JSON](#parse-json-action)
+ > those properties as inputs, use the action named [**Parse JSON**](#parse-json-action)
> before you use the **Create CSV table** action. The following screenshot shows the finished example **Create CSV table** action:
- ![Screenshot showing the designer for a Consumption workflow and the finished example for the "Create CSV table" action.](./media/logic-apps-perform-data-operations/finished-create-csv-table-action-consumption.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/finished-create-csv-table-action.png" alt-text="Screenshot shows Consumption workflow and finished example action named Create CSV table." lightbox="media/logic-apps-perform-data-operations/finished-create-csv-table-action.png":::
1. Save your workflow. On the designer toolbar, select **Save**.
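In the code view, a minimal sketch of the resulting **Create CSV table** action with automatic columns might look like the following fragment; the `runAfter` name is illustrative:

```json
"Create_CSV_table": {
  "type": "Table",
  "inputs": {
    "format": "CSV",
    "from": "@variables('myJSONArray')"
  },
  "runAfter": {
    "Initialize_variable": [ "Succeeded" ]
  }
}
```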
To try the **Create CSV table** action, follow these steps by using the workflo
This example uses the Azure portal and a sample workflow with the **Recurrence** trigger followed by an **Initialize variable** action. The action is set up to create a variable where the initial value is an array that has some properties and values in JSON format.
- ![Screenshot showing the Azure portal and the designer with a sample Standard workflow for the "Create CSV table" action.](./media/logic-apps-perform-data-operations/sample-start-create-table-action-standard.png)
+ | Operation | Properties and values |
+ |--|--|
+ | **Initialize variable** | - **Name**: myJSONArray <br>- **Type**: Array <br>- **Value**: `[ { "Description": "Apples", "Product_ID": 1 }, { "Description": "Oranges", "Product_ID": 2 }]` |
-1. In your workflow where you want to create the output, follow one of these steps:
+ :::image type="content" source="media/logic-apps-perform-data-operations/sample-start-create-table-action-standard.png" alt-text="Screenshot shows Azure portal, Standard workflow designer, and example workflow for action named Create CSV table." lightbox="media/logic-apps-perform-data-operations/sample-start-create-table-action-standard.png":::
- * To add an action under the last step, select the plus sign (**+**), and then select **Add an action**.
+1. [Follow these general steps to add the **Data Operations** action named **Create CSV table**](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
- * To add an action between steps, select the plus sign (**+**) between those steps, and then select **Add an action**.
+1. On the designer, select the **Create CSV table** action, if not already selected. In the **From** box, enter the array or expression to use for creating the table.
-1. After the connector gallery opens, [follow these general steps to find the **Data Operations** action named **Create CSV table**](create-workflow-with-trigger-or-action.md?tabs=standard#add-an-action-to-run-a-task).
+ For this example, select inside the **From** box, and select the dynamic content list (lightning icon). From that list, select the **myJSONArray** variable:
-1. After the action information box appears, in the **From** box, enter the array or expression to use for creating the table.
+ :::image type="content" source="media/logic-apps-perform-data-operations/configure-create-csv-table-action.png" alt-text="Screenshot shows Standard workflow, action named Create CSV table, and the selected input to use." lightbox="media/logic-apps-perform-data-operations/configure-create-csv-table-action.png":::
- For this example, select inside the **From** box, and then select the lightning icon, which opens the dynamic content list. From that list, select the previously created variable:
-
- ![Screenshot showing the designer for a Standard workflow, the "Create CSV table" action, and the selected input to use.](./media/logic-apps-perform-data-operations/configure-create-csv-table-action-standard.png)
-
- > [!NOTE]
+ > [!TIP]
> > To create user-friendly tokens for the properties in JSON objects so that you can select
- > those properties as inputs, use the action named [Parse JSON](#parse-json-action)
+ > those properties as inputs, use the action named [**Parse JSON**](#parse-json-action)
> before you use the **Create CSV table** action. The following screenshot shows the finished example **Create CSV table** action:
- ![Screenshot showing the designer for a Standard workflow and the finished example for the "Create CSV table" action.](./media/logic-apps-perform-data-operations/finished-create-csv-table-action-standard.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/finished-create-csv-table-action.png" alt-text="Screenshot shows Standard workflow and finished example action named Create CSV table." lightbox="media/logic-apps-perform-data-operations/finished-create-csv-table-action.png":::
1. Save your workflow. On the designer toolbar, select **Save**.
To try the **Create CSV table** action, follow these steps by using the workflo
By default, the **Columns** property is set to automatically create the table columns based on the array items. To specify custom headers and values, follow these steps:
-1. If the **Columns** property doesn't appear in the action information box, from the **Add new parameters** list, select **Columns**.
+1. If the **Columns** property doesn't appear in the action information box, from the **Advanced parameters** list, select **Columns**.
1. Open the **Columns** list, and select **Custom**.
By default, the **Columns** property is set to automatically create the table co
1. In the **Value** property, specify the custom value to use instead.
-To return values from the array, you can use the [`item()` function](workflow-definition-language-functions-reference.md#item) with the **Create CSV table** action. In a `For_each` loop, you can use the [`items()` function](workflow-definition-language-functions-reference.md#items).
+To return values from the array, you can use the [**`item()`** function](workflow-definition-language-functions-reference.md#item) with the **Create CSV table** action. In a **`For_each`** loop, you can use the [**`items()`** function](workflow-definition-language-functions-reference.md#items).
For example, suppose you want table columns that have only the property values and not the property names from an array. To return only these values, follow these steps for working in designer view or in code view.
Oranges,2
In the **Create CSV table** action, keep the **Header** column empty. On each row in the **Value** column, dereference each array property that you want. Each row under **Value** returns all the values for the specified array property and becomes a column in your table.
-##### [Consumption](#tab/consumption)
-
-1. For each array property that you want, in the **Value** column, select inside the edit box, which opens the dynamic content list.
-
-1. From that list, select **Expression** to open the expression editor instead.
-
-1. In the expression editor, enter the following expression but replace `<array-property-name>` with the array property name for the value that you want.
-
- Syntax: `item()?['<array-property-name>']`
-
- Examples:
-
- * `item()?['Description']`
- * `item()?['Product_ID']`
-
- ![Screenshot showing the "Create CSV table" action in a Consumption workflow and how to dereference the "Description" array property.](./media/logic-apps-perform-data-operations/csv-table-expression-consumption.png)
-
-1. Repeat the preceding steps for each array property. When you're done, your action looks similar to the following example:
-
- ![Screenshot showing the "Create CSV table" action in a Consumption workflow and the "item()" function.](./media/logic-apps-perform-data-operations/finished-csv-expression-consumption.png)
-
-1. To resolve expressions into more descriptive versions, switch to code view and back to designer view, and then reopen the collapsed action:
-
- The **Create CSV table** action now appears similar to the following example:
-
- ![Screenshot showing the "Create CSV table" action in a Consumption workflow and resolved expressions without headers.](./media/logic-apps-perform-data-operations/resolved-csv-expression-consumption.png)
-
-##### [Standard](#tab/standard)
- 1. For each array property that you want, in the **Value** column, select inside the edit box, and then select the function icon, which opens the expression editor. Make sure that the **Function** list appears selected.
-1. In the expression editor, enter the following expression but replace `<array-property-name>` with the array property name for the value that you want. When you're done with each expression, select **Add**.
+1. In the expression editor, enter the following expression, but replace `<array-property-name>` with the array property name for the value that you want. When you're done with each expression, select **Add**.
Syntax: `item()?['<array-property-name>']`
In the **Create CSV table** action, keep the **Header** column empty. On each ro
* `item()?['Description']` * `item()?['Product_ID']`
- ![Screenshot showing the "Create CSV table" action in a Standard workflow and how to dereference the "Description" array property.](./media/logic-apps-perform-data-operations/csv-table-expression-standard.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/csv-table-expression.png" alt-text="Screenshot shows workflow designer, action named Create CSV table, and how to dereference array property named Description." lightbox="media/logic-apps-perform-data-operations/csv-table-expression.png":::
-1. Repeat the preceding steps for each array property. When you're done, your action looks similar to the following example:
+ For more information, see [**item()** function](workflow-definition-language-functions-reference.md#item).
- ![Screenshot showing the "Create CSV table" action in a Standard workflow and the "item()" function.](./media/logic-apps-perform-data-operations/finished-csv-expression-standard.png)
+1. Repeat the preceding steps for each array property. When you're done, your action looks similar to the following example:
-
+ :::image type="content" source="media/logic-apps-perform-data-operations/finished-csv-expression.png" alt-text="Screenshot shows action named Create CSV table and function named item()." lightbox="media/logic-apps-perform-data-operations/finished-csv-expression.png":::
#### Work in code view
In the action's JSON definition, within the `columns` array, set the `header` pr
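A rough sketch of the `columns` array with empty headers and `item()` expressions, using the sample array from earlier (the `runAfter` name is illustrative):

```json
"Create_CSV_table": {
  "type": "Table",
  "inputs": {
    "format": "CSV",
    "from": "@variables('myJSONArray')",
    "columns": [
      { "header": "", "value": "@item()?['Description']" },
      { "header": "", "value": "@item()?['Product_ID']" }
    ]
  },
  "runAfter": {
    "Initialize_variable": [ "Succeeded" ]
  }
}
```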
To confirm whether the **Create CSV table** action creates the expected results, send yourself a notification that includes output from the **Create CSV table** action.
-#### [Consumption](#tab/consumption)
-
-1. In your workflow, add an action that can send you the results from the **Create CSV table** action.
-
- This example continues by using the Office 365 Outlook action named **Send an email**.
-
-1. In this action, for each box where you want the results to appear, select inside the box, which opens the dynamic content list. Under the **Create CSV table** action, select **Output**.
-
- ![Screenshot showing a Consumption workflow with the "Send an email" action and the "Output" field from the preceding "Create CSV table" action entered in the email body.](./media/logic-apps-perform-data-operations/send-email-create-csv-table-action-consumption.png)
-
- > [!NOTE]
- >
- > If the dynamic content list shows the message that **We can't find any outputs to match this input format**,
- > select **See more** next to the **Create CSV table** label in the list.
- >
- > ![Screenshot showing a Consumption workflow and the dynamic content list with "See more" selected for the "Create CSV table" action.](./media/logic-apps-perform-data-operations/send-email-create-csv-table-action-see-more.png)
-
-1. Save your workflow, and then manually run your workflow. On the designer toolbar, select **Run Trigger** > **Run**.
-
-#### [Standard](#tab/standard)
- 1. In your workflow, add an action that can send you the results from the **Create CSV table** action. This example continues by using the Office 365 Outlook action named **Send an email**. 1. In this action, for each box where you want the results to appear, select inside each box, which opens the dynamic content list. From that list, under the **Create CSV table** action, select **Output**.
- ![Screenshot showing a Standard workflow with the "Send an email" action and the "Output" field from the preceding "Create CSV table" action entered in the email body.](./media/logic-apps-perform-data-operations/send-email-create-csv-table-action-standard.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/send-email-create-csv-table-action.png" alt-text="Screenshot shows workflow with action named Send an email. The Body property contains the field named Output from preceding action named Create CSV table." lightbox="media/logic-apps-perform-data-operations/send-email-create-csv-table-action.png":::
- > [!NOTE]
- >
- > If the dynamic content list shows the message that **We can't find any outputs to match this input format**,
- > select **See more** next to the **Create CSV table** label in the list.
- >
- > ![Screenshot showing a Standard workflow and the dynamic content list with "See more" selected for the "Create CSV table" action.](./media/logic-apps-perform-data-operations/send-email-create-csv-table-action-see-more.png)
+1. Save your workflow, and then manually run your workflow.
-1. Save your workflow, and then manually run your workflow. On the workflow navigation menu, select **Overview** > **Run Trigger** > **Run**.
+ - Consumption workflow: On the designer toolbar, select **Run** > **Run**.
+ - Standard workflow: On the workflow navigation menu, select **Overview**. On the **Overview** page toolbar, select **Run** > **Run**.
-
+If you used the Office 365 Outlook action, the following example shows the result:
-If you used the Office 365 Outlook action, you get a result similar to the following screenshot:
-
-![Screenshot showing an email with the "Create CSV table" action results.](./media/logic-apps-perform-data-operations/create-csv-table-email-results.png)
> [!NOTE] >
To try the **Create HTML table** action, follow these steps by using the workflo
This example uses the Azure portal and a sample workflow with the **Recurrence** trigger followed by an **Initialize variable** action. The action is set up to create a variable where the initial value is an array that has some properties and values in JSON format.
- ![Screenshot showing the Azure portal and the designer with a sample Consumption workflow for the "Create HTML table" action.](./media/logic-apps-perform-data-operations/sample-start-create-table-action-consumption.png)
-
-1. In your workflow where you want to create an HTML table, follow one of these steps:
-
- * To add an action under the last step, select **New step**.
-
- * To add an action between steps, move your mouse over the connecting arrow so the plus sign (**+**) appears. Select the plus sign, and then select **Add an action**.
+ | Operation | Properties and values |
+ |--|--|
+ | **Initialize variable** | - **Name**: myJSONArray <br>- **Type**: Array <br>- **Value**: `[ { "Description": "Apples", "Product_ID": 1 }, { "Description": "Oranges", "Product_ID": 2 }]` |
-1. Under the **Choose an operation** search box, select **Built-in**. In the search box, enter **create html table**.
+ :::image type="content" source="media/logic-apps-perform-data-operations/sample-start-create-table-action-consumption.png" alt-text="Screenshot shows Azure portal, Consumption workflow designer, and sample workflow for action named Create HTML table." lightbox="media/logic-apps-perform-data-operations/sample-start-create-table-action-consumption.png":::
-1. From the actions list, select the action named **Create HTML table**.
+1. [Follow these general steps to add the **Data Operations** action named **Create HTML table**](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
- ![Screenshot showing the designer for a Consumption workflow, the "Choose an operation" search box with "create html table" entered, and the "Create HTML table" action selected.](./media/logic-apps-perform-data-operations/select-create-html-table-action-consumption.png)
+1. On the designer, select the **Create HTML table** action, if not already selected. In the **From** box, enter the array or expression to use for creating the table.
-1. In the **From** box, enter the array or expression to use for creating the table.
+ For this example, select inside the **From** box, and select the dynamic content list (lightning icon). From that list, select the **myJSONArray** variable:
- For this example, select inside the **From** box, which opens the dynamic content list. From that list, select the previously created variable:
+ :::image type="content" source="media/logic-apps-perform-data-operations/configure-create-html-table-action.png" alt-text="Screenshot shows Consumption workflow, action named Create HTML table, and the selected input to use." lightbox="media/logic-apps-perform-data-operations/configure-create-html-table-action.png":::
- ![Screenshot showing the designer for a Consumption workflow, the "Create HTML table" action, and the selected input to use.](./media/logic-apps-perform-data-operations/configure-create-html-table-action-consumption.png)
-
- > [!NOTE]
+ > [!TIP]
> > To create user-friendly tokens for the properties in JSON objects so that you can select
- > those properties as inputs, use the action named [Parse JSON](#parse-json-action)
+ > those properties as inputs, use the action named [**Parse JSON**](#parse-json-action)
> before you use the **Create HTML table** action. The following screenshot shows the finished example **Create HTML table** action:
- ![Screenshot showing the designer for a Consumption workflow and the finished example for the "Create HTML table" action.](./media/logic-apps-perform-data-operations/finished-create-html-table-action-consumption.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/finished-create-html-table-action.png" alt-text="Screenshot shows Consumption workflow and finished example action named Create HTML table." lightbox="media/logic-apps-perform-data-operations/finished-create-html-table-action.png":::
1. Save your workflow. On the designer toolbar, select **Save**.
To try the **Create HTML table** action, follow these steps by using the workflo
This example uses the Azure portal and a sample workflow with the **Recurrence** trigger followed by an **Initialize variable** action. The action is set up to create a variable where the initial value is an array that has some properties and values in JSON format.
- ![Screenshot showing the Azure portal and the designer with a sample Standard workflow for the "Create HTML table" action.](./media/logic-apps-perform-data-operations/sample-start-create-table-action-standard.png)
+ | Operation | Properties and values |
+ |--|--|
+ | **Initialize variable** | - **Name**: myJSONArray <br>- **Type**: Array <br>- **Value**: `[ { "Description": "Apples", "Product_ID": 1 }, { "Description": "Oranges", "Product_ID": 2 }]` |
-1. In your workflow where you want to create the output, follow one of these steps:
+ :::image type="content" source="media/logic-apps-perform-data-operations/sample-start-create-table-action-standard.png" alt-text="Screenshot shows Azure portal, Standard workflow designer, and sample workflow for action named Create HTML table." lightbox="media/logic-apps-perform-data-operations/sample-start-create-table-action-standard.png":::
- * To add an action under the last step, select the plus sign (**+**), and then select **Add an action**.
+1. [Follow these general steps to add the **Data Operations** action named **Create HTML table**](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
- * To add an action between steps, select the plus sign (**+**) between those steps, and then select **Add an action**.
+1. On the designer, select the **Create HTML table** action, if not already selected. In the **From** box, enter the array or expression to use for creating the table.
-1. After the connector gallery opens, [follow these general steps to find the **Data Operations** action named **Create HTML table**](create-workflow-with-trigger-or-action.md?tabs=standard#add-an-action-to-run-a-task).
+ For this example, select inside the **From** box, and select the dynamic content list (lightning icon). From that list, select the **myJSONArray** variable:
-1. After the action information box appears, in the **From** box, enter the array or expression to use for creating the table.
-
- For this example, select inside the **From** box, and then select the lightning icon, which opens the dynamic content list. From that list, select the previously created variable:
+ :::image type="content" source="media/logic-apps-perform-data-operations/configure-create-html-table-action.png" alt-text="Screenshot shows Standard workflow, action named Create HTML table, and the selected input to use." lightbox="media/logic-apps-perform-data-operations/configure-create-html-table-action.png":::
- ![Screenshot showing the designer for a Standard workflow, the "Create HTML table" action, and the selected input to use.](./media/logic-apps-perform-data-operations/configure-create-html-table-action-standard.png)
-
- > [!NOTE]
+ > [!TIP]
> > To create user-friendly tokens for the properties in JSON objects so that you can select
- > those properties as inputs, use the action named [Parse JSON](#parse-json-action)
- > before you use the **Create CSV table** action.
+ > those properties as inputs, use the action named [**Parse JSON**](#parse-json-action)
+ > before you use the **Create HTML table** action.
The following screenshot shows the finished example **Create HTML table** action:
- ![Screenshot showing the designer for a Standard workflow and the finished example for the "Create HTML table" action.](./media/logic-apps-perform-data-operations/finished-create-html-table-action-standard.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/finished-create-html-table-action.png" alt-text="Screenshot shows Standard workflow and finished example action named Create HTML table." lightbox="media/logic-apps-perform-data-operations/finished-create-html-table-action.png":::
1. Save your workflow. On the designer toolbar, select **Save**.
To try the **Create HTML table** action, follow these steps by using the workflo
By default, the **Columns** property is set to automatically create the table columns based on the array items. To specify custom headers and values, follow these steps:
+1. If the **Columns** property doesn't appear in the action information box, from the **Advanced parameters** list, select **Columns**.
+ 1. Open the **Columns** list, and select **Custom**. 1. In the **Header** property, specify the custom header text to use instead.
Oranges,2
In the **Create HTML table** action, keep the **Header** column empty. On each row in the **Value** column, dereference each array property that you want. Each row under **Value** returns all the values for the specified array property and becomes a column in your table.
-##### [Consumption](#tab/consumption)
-
-1. For each array property that you want, in the **Value** column, select inside the edit box, which opens the dynamic content list.
-
-1. From that list, select **Expression** to open the expression editor instead.
+1. For each array property that you want, in the **Value** column, select inside the edit box, and then select the function icon, which opens the expression editor. Make sure that the **Function** list appears selected.
-1. In the expression editor, enter the following expression, but replace `<array-property-name>` with the array property name for the value that you want, and then select **OK**. For more information, see [**item()** function](workflow-definition-language-functions-reference.md#item).
+1. In the expression editor, enter the following expression, but replace `<array-property-name>` with the array property name for the value that you want. When you're done with each expression, select **Add**.
Syntax: `item()?['<array-property-name>']`
In the **Create HTML table** action, keep the **Header** column empty. On each r
* `item()?['Description']` * `item()?['Product_ID']`
- ![Screenshot showing the "Create HTML table" action in a Consumption workflow and how to dereference the "Description" array property.](./media/logic-apps-perform-data-operations/html-table-expression-consumption.png)
-
-1. Repeat the preceding steps for each array property. When you're done, your action looks similar to the following example:
-
- ![Screenshot showing the "Create HTML table" action in a Consumption workflow and the "item()" function.](./media/logic-apps-perform-data-operations/finished-html-expression-consumption.png)
-
-1. To resolve expressions into more descriptive versions, switch to code view and back to designer view, and then reopen the collapsed action:
-
- The **Create HTML table** action now appears similar to the following example:
-
- ![Screenshot showing the "Create HTML table" action in a Consumption workflow and resolved expressions without headers.](./media/logic-apps-perform-data-operations/resolved-html-expression-consumption.png)
-
-##### [Standard](#tab/standard)
-
-1. For each array property that you want, in the **Value** column, select inside the edit box, and then select the function icon, which opens the expression editor.
-
-1. In the expression editor, enter the following expression, but replace `<array-property-name>` with the array property name for the value that you want, and then select **Add**. For more information, see [**item()** function](workflow-definition-language-functions-reference.md#item).
-
- Syntax: `item()?['<array-property-name>']`
-
- Examples:
-
- * `item()?['Description']`
- * `item()?['Product_ID']`
+ :::image type="content" source="media/logic-apps-perform-data-operations/html-table-expression.png" alt-text="Screenshot shows workflow designer, action named Create HTML table, and how to dereference array property named Description." lightbox="media/logic-apps-perform-data-operations/html-table-expression.png":::
- ![Screenshot showing the "Create HTML table" action in a Standard workflow and how to dereference the "Description" array property.](./media/logic-apps-perform-data-operations/html-table-expression-standard.png)
+ For more information, see [**item()** function](workflow-definition-language-functions-reference.md#item).
1. Repeat the preceding steps for each array property. When you're done, your action looks similar to the following example:
- ![Screenshot showing the "Create HTML table" action in a Standard workflow and the "item()" function.](./media/logic-apps-perform-data-operations/finished-html-expression-standard.png)
--
+ :::image type="content" source="media/logic-apps-perform-data-operations/finished-html-expression.png" alt-text="Screenshot shows action named Create HTML table and function named item()." lightbox="media/logic-apps-perform-data-operations/finished-html-expression.png":::
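Put together, the finished action might look like the following minimal code-view sketch. The `Parse_JSON` source, the `runAfter` dependency, and the empty `header` values reflect the example scenario and are assumptions; your workflow can differ.

```json
"Create_HTML_table": {
   "type": "Table",
   "inputs": {
      "format": "HTML",
      "from": "@body('Parse_JSON')",
      "columns": [
         {
            "header": "",
            "value": "@item()?['Description']"
         },
         {
            "header": "",
            "value": "@item()?['Product_ID']"
         }
      ]
   },
   "runAfter": {
      "Parse_JSON": ["Succeeded"]
   }
}
```

If you specify custom header text instead, set each `header` value to that text.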
#### Work in code view
In the action's JSON definition, within the `columns` array, set the `header` pr
To confirm whether the **Create HTML table** action creates the expected results, send yourself a notification that includes output from the **Create HTML table** action.
-#### [Consumption](#tab/consumption)
-
-1. In your workflow, add an action that can send you the results from the **Create HTML table** action.
-
- This example continues by using the Office 365 Outlook action named **Send an email**.
-
-1. In this action, for each box where you want the results to appear, select inside each box, which opens the dynamic content list. From that list, under the **Create HTML table** action, select **Output**.
-
- ![Screenshot showing a Consumption workflow with the "Send an email" action and the "Output" field from the preceding "Create HTML table" action entered in the email body.](./media/logic-apps-perform-data-operations/send-email-create-html-table-action-consumption.png)
-
- > [!NOTE]
- >
- > * If the dynamic content list shows the message that **We can't find any outputs to match this input format**,
- > select **See more** next to the **Create HTML table** label in the list.
- >
- > ![Screenshot showing a Consumption workflow and the dynamic content list with "See more" selected for the "Create HTML table" action.](./media/logic-apps-perform-data-operations/send-email-create-html-table-action-see-more.png)
- >
- > * When you include the HTML table output in an email action, make sure that you set the **Is HTML** property
- > to **Yes** in the email action's advanced options. That way, the email action correctly formats the HTML table.
- > However, if your table is returned with incorrect formatting, see [how to check your table data formatting](#format-table-data).
-
-1. Save your workflow, and then manually run your workflow. On the designer toolbar, select **Run Trigger** > **Run**.
-
-#### [Standard](#tab/standard)
1. In your workflow, add an action that can send you the results from the **Create HTML table** action.

   This example continues by using the Office 365 Outlook action named **Send an email**.

1. In this action, for each box where you want the results to appear, select inside each box, and then select the lightning icon, which opens the dynamic content list. From that list, under the **Create HTML table** action, select **Output**.
- ![Screenshot showing a Standard workflow with the "Send an email" action and the "Output" field from the preceding "Create HTML table" action entered in the email body.](./media/logic-apps-perform-data-operations/send-email-create-html-table-action-standard.png)
-
- > [!NOTE]
- >
- > If the dynamic content list shows the message that **We can't find any outputs to match this input format**,
- > select **See more** next to the **Create HTML table** label in the list.
- >
- > ![Screenshot showing a Standard workflow and the dynamic content list with "See more" selected for the "Create HTML table" action.](./media/logic-apps-perform-data-operations/send-email-create-html-table-action-see-more.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/send-email-create-html-table-action.png" alt-text="Screenshot shows workflow with action named Send an email. The Body property contains the Output field from preceding action named Create HTML table." lightbox="media/logic-apps-perform-data-operations/send-email-create-html-table-action.png":::
-1. Save your workflow, and then manually run your workflow. On the workflow navigation menu, select **Overview** > **Run Trigger** > **Run**.
+1. Save your workflow, and then manually run your workflow.
-
+ - Consumption workflow: On the designer toolbar, select **Run** > **Run**.
+ - Standard workflow: On the workflow navigation menu, select **Overview**. On the **Overview** page toolbar, select **Run** > **Run**.
-If you used the Office 365 Outlook action, you get a result similar to the following screenshot:
+If you used the Office 365 Outlook action, the following example shows the result:
-![Screenshot showing an email with the "Create HTML table" results.](./media/logic-apps-perform-data-operations/create-html-table-email-results.png)
<a name="filter-array-action"></a>
To try the **Filter array** action, follow these steps by using the workflow des
This example uses the Azure portal and a sample workflow with the **Recurrence** trigger followed by an **Initialize variable** action. The action is set up to create a variable where the initial value is an array that has some sample integer values.
+ | Operation | Properties and values |
+ |--|--|
+ | **Initialize variable** | - **Name**: myIntegerArray <br>- **Type**: Array <br>- **Value**: `[1,2,3,4]` |
+ > [!NOTE]
  >
  > Although this example uses a simple integer array, this action is especially useful for JSON
  > object arrays where you can filter based on the objects' properties and values.
- ![Screenshot showing the Azure portal and the designer with a sample Consumption workflow for the "Filter array" action.](./media/logic-apps-perform-data-operations/sample-start-filter-array-action-consumption.png)
-
-1. In your workflow where you want to create the filtered array, follow one of these steps:
+ :::image type="content" source="media/logic-apps-perform-data-operations/sample-start-filter-array-action-consumption.png" alt-text="Screenshot shows Azure portal, Consumption workflow designer, and example workflow for action named Filter array." lightbox="media/logic-apps-perform-data-operations/sample-start-filter-array-action-consumption.png":::
- * To add an action under the last step, select **New step**.
+1. [Follow these general steps to find the **Data Operations** action named **Filter array**](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
- * To add an action between steps, move your mouse over the connecting arrow so the plus sign (**+**) appears. Select the plus sign, and then select **Add an action**.
+1. On the designer, select the **Filter array** action, if not already selected. In the **From** box, enter the array or expression to use as the filter.
-1. Under the **Choose an operation** search box, select **Built-in**. In the search box, enter **filter array**.
-
-1. From the actions list, select the action named **Filter array**.
-
- ![Screenshot showing the designer for a Consumption workflow, the "Choose an operation" search box with "filter array" entered, and the "Filter array" action selected.](./media/logic-apps-perform-data-operations/select-filter-array-action-consumption.png)
-
-1. In the **From** box, enter the array or expression to use as the filter.
-
- For this example, select the **From** box, which opens the dynamic content list. From that list, select the previously created variable:
+ For this example, select inside the **From** box, and then select the lightning icon, which opens the dynamic content list. From that list, select the previously created variable:
- ![Screenshot showing the designer for a Consumption workflow, the "Filter array" action, and the selected input to use.](./media/logic-apps-perform-data-operations/configure-filter-array-action-consumption.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/configure-filter-array-action.png" alt-text="Screenshot shows Consumption workflow, action named Filter array, and selected input to use." lightbox="media/logic-apps-perform-data-operations/configure-filter-array-action.png":::
1. For the condition, specify the array items to compare, select the comparison operator, and specify the comparison value.

   This example uses the [**item()** function](workflow-definition-language-functions-reference.md#item) to access each item in the array, while the **Filter array** action searches for array items where the value is greater than one.

   The following screenshot shows the finished example **Filter array** action:
- ![Screenshot showing the designer for a Consumption workflow and the finished example for the "Filter array" action.](./media/logic-apps-perform-data-operations/finished-filter-array-action-consumption.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/finished-filter-array-action.png" alt-text="Screenshot shows Consumption workflow and finished example action named Filter array." lightbox="media/logic-apps-perform-data-operations/finished-filter-array-action.png":::
1. Save your workflow. On the designer toolbar, select **Save**.
To try the **Filter array** action, follow these steps by using the workflow des
This example uses the Azure portal and a sample workflow with the **Recurrence** trigger followed by an **Initialize variable** action. The action is set up to create a variable where the initial value is an array that has some sample integer values.
+ | Operation | Properties and values |
+ |--|--|
+ | **Initialize variable** | - **Name**: myIntegerArray <br>- **Type**: Array <br>- **Value**: `[1,2,3,4]` |
+ > [!NOTE]
  >
  > Although this example uses a simple integer array, this action is especially useful for JSON
  > object arrays where you can filter based on the objects' properties and values.
- ![Screenshot showing the Azure portal and the designer with a sample Standard workflow for the "Filter array" action.](./media/logic-apps-perform-data-operations/sample-start-filter-array-action-standard.png)
-
-1. In your workflow where you want to create the filtered array, follow one of these steps:
-
- * To add an action under the last step, select the plus sign (**+**), and then select **Add an action**.
+ :::image type="content" source="media/logic-apps-perform-data-operations/sample-start-filter-array-action-standard.png" alt-text="Screenshot shows Azure portal, Standard workflow designer, and example workflow for action named Filter array." lightbox="media/logic-apps-perform-data-operations/sample-start-filter-array-action-standard.png":::
- * To add an action between steps, select the plus sign (**+**) between those steps, and then select **Add an action**.
+1. [Follow these general steps to find the **Data Operations** action named **Filter array**](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
-1. After the connector gallery opens, [follow these general steps to find the **Data Operations** action named **Filter array**](create-workflow-with-trigger-or-action.md?tabs=standard#add-an-action-to-run-a-task).
-
-1. After the action information box appears, in the **From** box, enter the array or expression to use as the filter.
+1. On the designer, select the **Filter array** action, if not already selected. In the **From** box, enter the array or expression to use as the filter.
For this example, select inside the **From** box, and then select the lightning icon, which opens the dynamic content list. From that list, select the previously created variable:
- ![Screenshot showing the designer for a Standard workflow, the "Filter array" action, and the selected input to use.](./media/logic-apps-perform-data-operations/configure-filter-array-action-standard.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/configure-filter-array-action.png" alt-text="Screenshot shows Standard workflow, action named Filter array, and selected input to use." lightbox="media/logic-apps-perform-data-operations/configure-filter-array-action.png":::
1. For the condition, specify the array items to compare, select the comparison operator, and specify the comparison value.

   This example uses the [**item()** function](workflow-definition-language-functions-reference.md#item) to access each item in the array, while the **Filter array** action searches for array items where the value is greater than one. A code-view sketch of this configuration follows the procedure.

   The following screenshot shows the finished example **Filter array** action:
- ![Screenshot showing the designer for a Standard workflow and the finished example for the "Filter array" action.](./media/logic-apps-perform-data-operations/finished-filter-array-action-standard.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/finished-filter-array-action.png" alt-text="Screenshot shows Standard workflow and finished example action named Filter array." lightbox="media/logic-apps-perform-data-operations/finished-filter-array-action.png":::
1. Save your workflow. On the designer toolbar, select **Save**.
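For reference, here's a minimal code-view sketch of the example **Initialize variable** and **Filter array** actions. The **Filter array** action uses the `Query` action type; the exact JSON that the designer generates can differ slightly:

```json
"Initialize_variable": {
   "type": "InitializeVariable",
   "inputs": {
      "variables": [
         {
            "name": "myIntegerArray",
            "type": "array",
            "value": [1, 2, 3, 4]
         }
      ]
   },
   "runAfter": {}
},
"Filter_array": {
   "type": "Query",
   "inputs": {
      "from": "@variables('myIntegerArray')",
      "where": "@greater(item(), 1)"
   },
   "runAfter": {
      "Initialize_variable": ["Succeeded"]
   }
}
```

With the sample array `[1,2,3,4]`, the filter output is `[2,3,4]`.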
To try the **Filter array** action, follow these steps by using the workflow des
To confirm whether the **Filter array** action creates the expected results, send yourself a notification that includes output from the **Filter array** action.
-#### [Consumption](#tab/consumption)
1. In your workflow, add an action that can send you the results from the **Filter array** action.

   This example continues by using the Office 365 Outlook action named **Send an email**.

1. In this action, complete the following steps:
- 1. For each box where you want the results to appear, select inside each box, which opens the dynamic content list.
-
- 1. From that list, select **Expression** to open the expression editor instead.
-
- 1. To get the array output from the **Filter array** action, enter the following expression, which uses the [**actionBody()** function](workflow-definition-language-functions-reference.md#actionBody) with the **Filter array** action name, and then select **OK**.
-
- `actionBody('Filter_array')`
-
- ![Screenshot showing a Consumption workflow with the "Send an email" action and the action outputs from the "Filter array" action.](./media/logic-apps-perform-data-operations/send-email-filter-array-action-consumption.png)
-
- The resolved expression specifies to show the outputs from the **Filter_array** action in the email body when sent:
-
- ![Screenshot showing a Consumption workflow with the finished "Send an email" action for the "Filter array" action.](./media/logic-apps-perform-data-operations/send-email-filter-array-action-complete-consumption.png)
-
-1. Save your workflow, and then manually run your workflow. On the designer toolbar, select **Run Trigger** > **Run**.
-
-#### [Standard](#tab/standard)
-
-1. In your workflow, add an action that can send you the results from the **Filter array** action.
-
-1. In this action, complete the following steps:
-
- 1. For each box where you want the results to appear, select inside box, and then select the function icon, which opens the expression editor. Make sure that the **Function** list appears selected.
+ 1. For each box where you want the results to appear, select inside each box, and then select the function icon, which opens the expression editor. Make sure that the **Function** list appears selected.
- 1. To get the array output from the **Filter array** action, enter the following expression, which uses the [**actionBody()** function](workflow-definition-language-functions-reference.md#actionBody) with the **Filter array** action name, and then select **OK**.
+ 1. To get the array output from the **Filter array** action, enter the following expression, which uses the [**body()** function](workflow-definition-language-functions-reference.md#body) with the **Filter array** action name, and then select **Add**.
- `actionBody('Filter_array')`
+ `body('Filter_array')`
- ![Screenshot showing a Standard workflow with the "Send an email" action and the action outputs from the "Filter array" action.](./media/logic-apps-perform-data-operations/send-email-filter-array-action-standard.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/send-email-filter-array-action.png" alt-text="Screenshot shows workflow with action named Send an email. The Body property contains the body() function, which gets the body content from the preceding action named Filter array." lightbox="media/logic-apps-perform-data-operations/send-email-filter-array-action.png":::
The resolved expression specifies to show the outputs from the **Filter_array** action in the email body when sent:
- ![Screenshot showing a Standard workflow with the finished "Send an email" action for the "Filter array" action.](./media/logic-apps-perform-data-operations/send-email-filter-array-action-complete-standard.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/send-email-filter-array-action-complete.png" alt-text="Screenshot shows Standard workflow and finished example action for Send an email." lightbox="media/logic-apps-perform-data-operations/send-email-filter-array-action-complete.png":::
-1. Save your workflow, and then manually run your workflow. On the workflow navigation menu, select **Overview** > **Run Trigger** > **Run**.
+1. Save your workflow, and then manually run your workflow.
-
+ - Consumption workflow: On the designer toolbar, select **Run** > **Run**.
+ - Standard workflow: On the workflow navigation menu, select **Overview**. On the **Overview** page toolbar, select **Run** > **Run**.
-If you used the Office 365 Outlook action, you get a result similar to the following screenshot:
+If you used the Office 365 Outlook action, the following example shows the result:
-![Screenshot showing an email with the "Filter array" action results.](./media/logic-apps-perform-data-operations/filter-array-email-results.png)
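In code view, the email action's **Body** input then contains the resolved expression. A minimal fragment might look like the following sketch, where the recipient, subject, and `<p>` wrapper are placeholders:

```json
"body": {
   "To": "sophia@contoso.com",
   "Subject": "Filter array results",
   "Body": "<p>@{body('Filter_array')}</p>"
}
```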
<a name="join-action"></a>
To try the **Join** action, follow these steps by using the workflow designer. O
This example uses the Azure portal and a sample workflow with the **Recurrence** trigger followed by an **Initialize variable** action. This action is set up to create a variable where the initial value is an array that has some sample integer values.
- ![Screenshot showing the Azure portal and the designer with a sample Consumption workflow for the "Join" action.](./media/logic-apps-perform-data-operations/sample-start-join-action-consumption.png)
-
-1. In your workflow where you want to create the string from an array, follow one of these steps:
+ | Operation | Properties and values |
+ |--|--|
+ | **Initialize variable** | - **Name**: myIntegerArray <br>- **Type**: Array <br>- **Value**: `[1,2,3,4]` |
- * To add an action under the last step, select **New step**.
+ :::image type="content" source="media/logic-apps-perform-data-operations/sample-start-join-action-consumption.png" alt-text="Screenshot shows Azure portal, Consumption workflow designer, and example workflow for the action named Join." lightbox="media/logic-apps-perform-data-operations/sample-start-join-action-consumption.png":::
- * To add an action between steps, move your mouse over the connecting arrow so the plus sign (**+**) appears. Select the plus sign, and then select **Add an action**.
+1. [Follow these general steps to find the **Data Operations** action named **Join**](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
-1. Under the **Choose an operation** search box, select **Built-in**. In the search box, enter **join**.
-
-1. From the actions list, select the action named **Join**.
-
- ![Screenshot showing the designer for a Consumption workflow, the "Choose an operation" search box, and the "Join" action selected.](./media/logic-apps-perform-data-operations/select-join-action-consumption.png)
+1. On the designer, select the **Join** action, if not already selected. In the **From** box, enter the array that has the items that you want to join as a string.
- For this example, select inside the **From** box, which opens the dynamic content list. From that list, select the previously created variable:
+ For this example, select inside the **From** box, and then select the lightning icon, which opens the dynamic content list. From that list, select the previously created variable:
- ![Screenshot showing the designer for a Consumption workflow, the "Join" action, and the selected array output to use join as a string.](./media/logic-apps-perform-data-operations/configure-join-action-consumption.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/configure-join-action.png" alt-text="Screenshot shows Consumption workflow, action named Join, and selected array output to join as a string." lightbox="media/logic-apps-perform-data-operations/configure-join-action.png":::
-1. In the **Join with** box, enter the character to use for separating each array item.
+1. In the **Join With** box, enter the character to use for separating each array item.
- This example uses a colon (**:**) as the separator.
+ This example uses a colon (**:**) as the separator for the **Join With** property.
- ![Screenshot showing where to provide the separator character.](./media/logic-apps-perform-data-operations/finished-join-action-consumption.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/finished-join-action.png" alt-text="Screenshot shows Consumption workflow and the finished example for the action named Join." lightbox="media/logic-apps-perform-data-operations/finished-join-action.png":::
1. Save your workflow. On the designer toolbar, select **Save**.
To try the **Join** action, follow these steps by using the workflow designer. O
This example uses the Azure portal and a sample workflow with the **Recurrence** trigger followed by an **Initialize variable** action. The action is set up to create a variable where the initial value is an array that has some sample integer values.
- ![Screenshot showing the Azure portal and the designer with a sample Standard workflow for the "Join" action.](./media/logic-apps-perform-data-operations/sample-start-join-action-standard.png)
-
-1. In your workflow where you want to create the filtered array, follow one of these steps:
+ | Operation | Properties and values |
+ |--|--|
+ | **Initialize variable** | - **Name**: myIntegerArray <br>- **Type**: Array <br>- **Value**: `[1,2,3,4]` |
- * To add an action under the last step, select the plus sign (**+**), and then select **Add an action**.
+ :::image type="content" source="media/logic-apps-perform-data-operations/sample-start-join-action-standard.png" alt-text="Screenshot shows Azure portal, Standard workflow designer, and example workflow for the action named Join." lightbox="media/logic-apps-perform-data-operations/sample-start-join-action-standard.png":::
- * To add an action between steps, select the plus sign (**+**) between those steps, and then select **Add an action**.
+1. [Follow these general steps to find the **Data Operations** action named **Join**](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
-1. After the connector gallery opens, [follow these general steps to find the **Data Operations** action named **Join**](create-workflow-with-trigger-or-action.md?tabs=standard#add-an-action-to-run-a-task).
-
-1. After the action information box appears, in the **From** box, enter the array that has the items you want to join as a string.
+1. On the designer, select the **Join** action, if not already selected. In the **From** box, enter the array that has the items that you want to join as a string.
For this example, select inside the **From** box, and then select the lightning icon, which opens the dynamic content list. From that list, select the previously created variable:
- ![Screenshot showing the designer for a Standard workflow, the "Join" action, and the selected input to use.](./media/logic-apps-perform-data-operations/configure-join-action-standard.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/configure-join-action.png" alt-text="Screenshot shows Standard workflow, action named Join, and selected array output to join as a string." lightbox="media/logic-apps-perform-data-operations/configure-join-action.png":::
-1. In the **Join with** box, enter the character to use for separating each array item.
+1. In the **Join With** box, enter the character to use for separating each array item.
- This example uses a colon (**:**) as the separator.
+ This example uses a colon (**:**) as the separator for the **Join With** property.
- ![Screenshot showing the designer for a Standard workflow and the finished example for the "Join" action.](./media/logic-apps-perform-data-operations/finished-join-action-standard.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/finished-join-action.png" alt-text="Screenshot shows Standard workflow and the finished example for the action named Join." lightbox="media/logic-apps-perform-data-operations/finished-join-action.png":::
1. Save your workflow. On the designer toolbar, select **Save**.
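For reference, here's a minimal code-view sketch of the example **Join** action; the `runAfter` dependency on the **Initialize variable** action reflects the sample workflow:

```json
"Join": {
   "type": "Join",
   "inputs": {
      "from": "@variables('myIntegerArray')",
      "joinWith": ":"
   },
   "runAfter": {
      "Initialize_variable": ["Succeeded"]
   }
}
```

With the sample array `[1,2,3,4]` and a colon separator, the **Join** output is the string `1:2:3:4`.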
To try the **Join** action, follow these steps by using the workflow designer. O
To confirm whether the **Join** action creates the expected results, send yourself a notification that includes output from the **Join** action.
-#### [Consumption](#tab/consumption)
1. In your workflow, add an action that can send you the results from the **Join** action.

   This example continues by using the Office 365 Outlook action named **Send an email**.
-1. In this action, for each box where you want the results to appear, select inside each box, which opens the dynamic content list. From that list, under the **Join** action, select **Output**.
+1. In this action, for each box where you want the results to appear, select inside each box, and then select the lightning icon, which opens the dynamic content list. From that list, under the **Join** action, select **Output**.
- ![Screenshot showing a Consumption workflow with the finished "Send an email" action for the "Join" action.](./media/logic-apps-perform-data-operations/send-email-join-action-complete-consumption.png)
-
- > [!NOTE]
- >
- > If the dynamic content list shows the message that **We can't find any outputs to match this input format**,
- > select **See more*