Updates from: 08/01/2024 01:19:00
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Configure Security Analytics Sentinel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-security-analytics-sentinel.md
Previously updated : 03/06/2023 Last updated : 07/31/2024 #Customer intent: As an IT professional, I want to gather logs and audit data using Microsoft Sentinel and Azure Monitor to secure applications that use Azure Active Directory B2C.
active-directory-b2c Partner Whoiam Rampart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-whoiam-rampart.md
Previously updated : 05/02/2023 Last updated : 07/31/2024
active-directory-b2c Tenant Management Directory Quota https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tenant-management-directory-quota.md
Previously updated : 06/15/2023 Last updated : 07/31/2024
# Manage directory size quota of your Azure Active Directory B2C tenant
-It's important that you monitor how you use your Azure AD B2C directory quota. Directory quota has a given size that is expressed in number of objects. These objects include user accounts, app registrations, groups, etc. When the number of objects in your tenant reach quota size, the directory will generate an error when trying to create a new object.
+It's important that you monitor how you use your Azure AD B2C directory quota. The directory quota has a size that's expressed as a number of objects. These objects include user accounts, app registrations, groups, and so on. When the number of objects in your tenant reaches the quota size, the directory generates an error when you try to create a new object.
## Monitor directory quota usage in your Azure AD B2C tenant
The response from the API call looks similar to the following JSON:
- The attribute `used` is the number of objects you already have in the directory.
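The JSON response itself isn't reproduced in this digest. As a rough illustration (the shape and numbers are placeholders, not taken from the article), a Microsoft Graph response that reports the quota might look like this:

```json
{
  "directorySizeQuota": {
    "used": 211000,
    "total": 300000
  }
}
```

Dividing `used` by `total` gives the usage percentage that's compared against the 80% guideline below.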
-If your tenant usage is higher that 80%, you can remove inactive users or request for a quota increase.
+If your tenant usage is higher than 80%, you can remove inactive users or request a quota size increase.
-## Request increase directory quota size
+## Increase directory quota size
-You can request to increase the quota size by [contacting support](find-help-open-support-ticket.md)
+You can request to increase the quota size by [contacting support](find-help-open-support-ticket.md).
active-directory-b2c Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/whats-new-docs.md
Title: "What's new in Azure Active Directory business-to-customer (B2C)" description: "New and updated documentation for the Azure Active Directory business-to-customer (B2C)." Previously updated : 07/01/2024 Last updated : 07/31/2024
Welcome to what's new in Azure Active Directory B2C documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the B2C service, see [What's new in Microsoft Entra ID](../active-directory/fundamentals/whats-new.md), [Azure AD B2C developer release notes](custom-policy-developer-notes.md) and [What's new in Microsoft Entra External ID](/entra/external-id/whats-new-docs).
+## July 2024
+
+### Updated articles
+
+- [Developer notes for Azure Active Directory B2C](custom-policy-developer-notes.md) - Updated Twitter to X
+- [Custom email verification with SendGrid](custom-email-sendgrid.md) - Updated the localization script
+## June 2024
+
+### Updated articles
- [Set up sign-up and sign-in with a LinkedIn account using Azure Active Directory B2C](identity-provider-linkedin.md) - Updated LinkedIn instructions
- [Page layout versions](page-layout.md) - Updated page layout versions
-## February 2024
-
-### New articles
-- [Enable CAPTCHA in Azure Active Directory B2C](add-captcha.md)
-- [Define a CAPTCHA technical profile in an Azure Active Directory B2C custom policy](captcha-technical-profile.md)
-- [Verify CAPTCHA challenge string using CAPTCHA display control](display-control-captcha.md)
-### Updated articles
-- [Enable custom domains in Azure Active Directory B2C](custom-domain.md) - Updated steps to block the default B2C domain
-- [Manage Azure AD B2C custom policies with Microsoft Graph PowerShell](manage-custom-policies-powershell.md) - Microsoft Graph PowerShell updates
-- [Localization string IDs](localization-string-ids.md) - CAPTCHA updates
-- [Page layout versions](page-layout.md) - CAPTCHA updates
ai-services Jailbreak Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/concepts/jailbreak-detection.md
### Language availability
-Currently, the Prompt Shields API supports the English language. While our API doesn't restrict the submission of non-English content, we can't guarantee the same level of quality and accuracy in the analysis of such content. We recommend users to primarily submit content in English to ensure the most reliable and accurate results from the API.
+Prompt Shields have been specifically trained and tested on the following languages: Chinese, English, French, German, Italian, Japanese, and Portuguese. The feature can work in many other languages, but the quality might vary. In all cases, you should do your own testing to ensure that it works for your application.
### Text length limitations
ai-services Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/conversational-language-understanding/concepts/best-practices.md
Title: Conversational language understanding best practices
-description: Apply best practices when using conversational language understanding
+description: Learn how to apply best practices when you use conversational language understanding.
#
# Best practices for conversational language understanding
-Use the following guidelines to create the best possible projects in conversational language understanding.
+Use the following guidelines to create the best possible projects in conversational language understanding.
## Choose a consistent schema
-Schema is the definition of your intents and entities. There are different approaches you could take when defining what you should create as an intent versus an entity. There are some questions you need to ask yourself:
+Schema is the definition of your intents and entities. There are different approaches you could take when you define what you should create as an intent versus an entity. Ask yourself these questions:
- What actions or queries am I trying to capture from my user?
- What pieces of information are relevant in each action?
-You can typically think of actions and queries as _intents_, while the information required to fulfill those queries as _entities_.
+You can typically think of actions and queries as _intents_, while the information required to fulfill those queries is represented by _entities_.
-For example, assume you want your customers to cancel subscriptions for various products that you offer through your chatbot. You can create a _Cancel_ intent with various examples like _"Cancel the Contoso service,"_ or _"stop charging me for the Fabrikam subscription."_ The user's intent here is to _cancel,_ the _Contoso service_ or _Fabrikam subscription_ are the subscriptions they would like to cancel. Therefore, you can create an entity for _subscriptions_. You can then model your entire project to capture actions as intents and use entities to fill in those actions. This allows you to cancel anything you define as an entity, such as other products. You can then have intents for signing up, renewing, upgrading, etc. that all make use of the _subscriptions_ and other entities.
+For example, assume that you want your customers to cancel subscriptions for various products that you offer through your chatbot. You can create a _cancel_ intent with various examples like "Cancel the Contoso service" or "Stop charging me for the Fabrikam subscription." The user's intent here is to _cancel_, and the _Contoso service_ or _Fabrikam subscription_ are the subscriptions they want to cancel.
-The above schema design makes it easy for you to extend existing capabilities (canceling, upgrading, signing up) to new targets by creating a new entity.
+To proceed, you create an entity for _subscriptions_. Then you can model your entire project to capture actions as intents and use entities to fill in those actions. This approach allows you to cancel anything you define as an entity, such as other products. You can then have intents for signing up, renewing, and upgrading that all make use of the _subscriptions_ and other entities.
-Another approach is to model the _information_ as intents and _actions_ as entities. Let's take the same example, allowing your customers to cancel subscriptions through your chatbot. You can create an intent for each subscription available, such as _Contoso_ with utterances like _"cancel Contoso,"_ _"stop charging me for contoso services,"_ _"Cancel the Contoso subscription."_ You would then create an entity to capture the action, _cancel._ You can define different entities for each action or consolidate actions as one entity with a list component to differentiate between actions with different keys.
+The preceding schema design makes it easy for you to extend existing capabilities (canceling, upgrading, or signing up) to new targets by creating a new entity.
+
+Another approach is to model the _information_ as intents and the _actions_ as entities. Let's take the same example of allowing your customers to cancel subscriptions through your chatbot.
+
+You can create an intent for each subscription available, such as _Contoso_, with utterances like "Cancel Contoso," "Stop charging me for Contoso services," and "Cancel the Contoso subscription." You then create an entity to capture the _cancel_ action. You can define different entities for each action or consolidate actions as one entity with a list component to differentiate between actions with different keys.
This schema design makes it easy for you to extend new actions to existing targets by adding new action entities or entity components.
-Make sure to avoid trying to funnel all the concepts into just intents, for example don't try to create a _Cancel Contoso_ intent that only has the purpose of that one specific action. Intents and entities should work together to capture all the required information from the customer.
+Make sure to avoid trying to funnel all the concepts into intents. For example, don't try to create a _Cancel Contoso_ intent that only has the purpose of that one specific action. Intents and entities should work together to capture all the required information from the customer.
-You also want to avoid mixing different schema designs. Do not build half of your application with actions as intents and the other half with information as intents. Ensure it is consistent to get the possible results.
+You also want to avoid mixing different schema designs. Don't build half of your application with actions as intents and the other half with information as intents. To get the best possible results, ensure that your schema is consistent.
[!INCLUDE [Balance training data](../includes/balance-training-data.md)]
## Use standard training before advanced training
-[Standard training](../how-to/train-model.md#training-modes) is free and faster than Advanced training, making it useful to quickly understand the effect of changing your training set or schema while building the model. Once you're satisfied with the schema, consider using advanced training to get the best AIQ out of your model.
+[Standard training](../how-to/train-model.md#training-modes) is free and faster than advanced training. It can help you quickly understand the effect of changing your training set or schema while you build the model. After you're satisfied with the schema, consider using advanced training to get the best AIQ out of your model.
## Use the evaluation feature
-
-When you build an app, it's often helpful to catch errors early. It's usually a good practice to add a test set when building the app, as training and evaluation results are very useful in identifying errors or issues in your schema.
-
-## Machine-learning components and composition
-
-See [Component types](./entity-components.md#component-types).
-## Using the "none" score Threshold
+When you build an app, it's often helpful to catch errors early. It's usually a good practice to add a test set when you build the app. Training and evaluation results are useful in identifying errors or issues in your schema.
-If you see too many false positives, such as out-of-context utterances being marked as valid intents, See [confidence threshold](./none-intent.md) for information on how it affects inference.
-
-* Non machine-learned entity components like lists and regex are by definition not contextual. If you see list or regex entities in unintended places, try labeling the list synonyms as the machine-learned component.
+## Machine-learning components and composition
-* For entities, you can use learned component as the 'Required' component, to restrict when a composed entity should fire.
+For more information, see [Component types](./entity-components.md#component-types).
-For example, suppose you have an entity called "*ticket quantity*" that attempts to extract the number of tickets you want to reserve for booking flights, for utterances such as "*Book two tickets tomorrow to Cairo.*"
+## Use the None score threshold
+If you see too many false positives, such as out-of-context utterances being marked as valid intents, see [Confidence threshold](./none-intent.md) for information on how it affects inference.
-Typically, you would add a prebuilt component for `Quantity.Number` that already extracts all numbers in utterances. However if your entity was only defined with the prebuilt component, it would also extract other numbers as part of the *ticket quantity* entity, such as "*Book two tickets tomorrow to Cairo at 3 PM.*"
+* Non-machine-learned entity components, like lists and regex, are by definition not contextual. If you see list or regex entities in unintended places, try labeling the list synonyms as the machine-learned component.
+* For entities, you can use the learned component as the required component to restrict when a composed entity should fire.
-To resolve this, you would label a learned component in your training data for all the numbers that are meant to be a *ticket quantity*. The entity now has two components:
-* The prebuilt component that can interpret all numbers, and
-* The learned component that predicts where the *ticket quantity* is in a sentence.
+For example, suppose you have an entity called **Ticket Quantity** that attempts to extract the number of tickets you want to reserve for booking flights, for utterances such as "Book two tickets tomorrow to Cairo."
-If you require the learned component, make sure that *ticket quantity* is only returned when the learned component predicts it in the right context. If you also require the prebuilt component, you can then guarantee that the returned *ticket quantity* entity is both a number and in the correct position.
+Typically, you add a prebuilt component for `Quantity.Number` that already extracts all numbers in utterances. However, if your entity was only defined with the prebuilt component, it also extracts other numbers as part of the **Ticket Quantity** entity, such as "Book two tickets tomorrow to Cairo at 3 PM."
+To resolve this issue, you label a learned component in your training data for all the numbers that are meant to be a ticket quantity. The entity now has two components:
-## Addressing model inconsistencies
+* The prebuilt component that can interpret all numbers.
+* The learned component that predicts where the ticket quantity is located in a sentence.
-If your model is overly sensitive to small grammatical changes, like casing or diacritics, you can systematically manipulate your dataset directly in the Language Studio. To use these features, click on the Settings tab on the left toolbar and locate the **Advanced project settings** section.
+If you require the learned component, make sure that **Ticket Quantity** is only returned when the learned component predicts it in the right context. If you also require the prebuilt component, you can then guarantee that the returned **Ticket Quantity** entity is both a number and in the correct position.
+## Address model inconsistencies
-First, you can ***Enable data transformation for casing***, which normalizes the casing of utterances when training, testing, and implementing your model. If you've migrated from LUIS, you might recognize that LUIS did this normalization by default. To access this feature via the API, set the `"normalizeCasing"` parameter to `true`. See an example below:
+If your model is overly sensitive to small grammatical changes, like casing or diacritics, you can systematically manipulate your dataset directly in Language Studio. To use these features, select the **Settings** tab on the left pane and locate the **Advanced project settings** section.
+First, you can enable the setting for **Enable data transformation for casing**, which normalizes the casing of utterances when training, testing, and implementing your model. If you migrated from LUIS, you might recognize that LUIS did this normalization by default. To access this feature via the API, set the `normalizeCasing` parameter to `true`. See the following example:
```json
{
  ...
}
```
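The project settings JSON is elided in this diff. A minimal sketch of how the setting might look (the surrounding `settings` object is an assumption; only the `normalizeCasing` parameter name comes from the article):

```json
{
  "settings": {
    "normalizeCasing": true
  }
}
```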
-Second, you can also leverage the **Advanced project settings** to ***Enable data augmentation for diacritics*** to generate variations of your training data for possible diacritic variations used in natural language. This feature is available for all languages, but it is especially useful for Germanic and Slavic languages, where users often write words using classic English characters instead of the correct characters. For example, the phrase "Navigate to the sports channel" in French is "Accédez à la chaîne sportive". When this feature is enabled, the phrase "Accedez a la chaine sportive" (without diacritic characters) is also included in the training dataset. If you enable this feature, please note that the utterance count of your training set will increase, and you may need to adjust your training data size accordingly. The current maximum utterance count after augmentation is 25,000. To access this feature via the API, set the `"augmentDiacritics"` parameter to `true`. See an example below:
+
+Second, you can also enable the setting for **Enable data augmentation for diacritics** to generate variations of your training data for possible diacritic variations used in natural language. This feature is available for all languages. It's especially useful for Germanic and Slavic languages, where users often write words by using classic English characters instead of the correct characters. For example, the phrase "Navigate to the sports channel" in French is "Accédez à la chaîne sportive." When this feature is enabled, the phrase "Accedez a la chaine sportive" (without diacritic characters) is also included in the training dataset.
+
+If you enable this feature, the utterance count of your training set increases. For this reason, you might need to adjust your training data size accordingly. The current maximum utterance count after augmentation is 25,000. To access this feature via the API, set the `augmentDiacritics` parameter to `true`. See the following example:
```json
{
  ...
}
```
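The JSON body is elided here as well. A comparable sketch, again assuming the parameter lives in the project's `settings` object (only the `augmentDiacritics` name comes from the article):

```json
{
  "settings": {
    "augmentDiacritics": true
  }
}
```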
-## Addressing model overconfidence
+## Address model overconfidence
-Customers can use the LoraNorm recipe version in case the model is being incorrectly overconfident. An example of this can be like the below (note that the model predicts the incorrect intent with 100% confidence). This makes the confidence threshold project setting unusable.
+Customers can use the LoraNorm recipe version if the model is being incorrectly overconfident. An example of this behavior is the following scenario, where the model predicts the incorrect intent with 100% confidence. This score makes the confidence threshold project setting unusable.
| Text | Predicted intent | Confidence score |
|-|-|-|
-| "*Who built the Eiffel Tower?*" | `Sports` | 1.00 |
-| "*Do I look good to you today?*" | `QueryWeather` | 1.00 |
-| "*I hope you have a good evening.*" | `Alarm` | 1.00 |
+| "Who built the Eiffel Tower?" | `Sports` | 1.00 |
+| "Do I look good to you today?" | `QueryWeather` | 1.00 |
+| "I hope you have a good evening." | `Alarm` | 1.00 |
-To address this, use the `2023-04-15` configuration version that normalizes confidence scores. The confidence threshold project setting can then be adjusted to achieve the desired result.
+To address this scenario, use the `2023-04-15` configuration version that normalizes confidence scores. The confidence threshold project setting can then be adjusted to achieve the desired result.
```console
curl --location 'https://<your-resource>.cognitiveservices.azure.com/language/authoring/analyze-conversations/projects/<your-project>/:train?api-version=2022-10-01-preview' \
...
```
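The request body is elided in this diff. A hedged sketch of what the training payload might look like (only the endpoint and the `2023-04-15` configuration version come from the article; the other field names are assumptions about the authoring train request):

```json
{
  "modelLabel": "<model-name>",
  "trainingMode": "advanced",
  "trainingConfigVersion": "2023-04-15",
  "evaluationOptions": {
    "kind": "percentage",
    "testingSplitPercentage": 20,
    "trainingSplitPercentage": 80
  }
}
```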
-Once the request is sent, you can track the progress of the training job in Language Studio as usual.
+After the request is sent, you can track the progress of the training job in Language Studio as usual.
> [!NOTE]
-> You have to retrain your model after updating the `confidenceThreshold` project setting. Afterwards, you'll need to republish the app for the new threshold to take effect.
+> You have to retrain your model after you update the `confidenceThreshold` project setting. Afterward, you need to republish the app for the new threshold to take effect.
### Normalization in model version 2023-04-15
-Model version 2023-04-15, conversational language understanding provides normalization in the inference layer that doesn't affect training.
+With model version 2023-04-15, conversational language understanding provides normalization in the inference layer that doesn't affect training.
-The normalization layer normalizes the classification confidence scores to a confined range. The range selected currently is from `[-a,a]` where "a" is the square root of the number of intents. As a result, the normalization depends on the number of intents in the app. If there's a very low number of intents, the normalization layer has a very small range to work with. With a fairly large number of intents, the normalization is more effective.
+The normalization layer normalizes the classification confidence scores to a confined range. The range currently selected is `[-a,a]`, where `a` is the square root of the number of intents. As a result, the normalization depends on the number of intents in the app. If the number of intents is low, the normalization layer has a small range to work with. With a large number of intents, the normalization is more effective.
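For example, an app with 16 intents normalizes scores into the range `[-4,4]` because the square root of 16 is 4, while an app with only 4 intents is confined to `[-2,2]`.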
-If this normalization doesn't seem to help intents that are out of scope to the extent that the confidence threshold can be used to filter out of scope utterances, it might be related to the number of intents in the app. Consider adding more intents to the app, or if you're using an orchestrated architecture, consider merging apps that belong to the same domain together.
+If this normalization doesn't seem to help intents that are out of scope to the extent that the confidence threshold can be used to filter out-of-scope utterances, it might be related to the number of intents in the app. Consider adding more intents to the app. Or, if you're using an orchestrated architecture, consider merging apps that belong to the same domain together.
-## Debugging composed entities
+## Debug composed entities
-Entities are functions that emit spans in your input with an associated type. The function is defined by one or more components. You can mark components as needed, and you can decide whether to enable the *combine components* setting. When you combine components, all spans that overlap will be merged into a single span. If the setting isn't used, each individual component span will be emitted.
-
-To better understand how individual components are performing, you can disable the setting and set each component to "not required". This lets you inspect the individual spans that are emitted, and experiment with removing components so that only problematic components are generated.
+Entities are functions that emit spans in your input with an associated type. One or more components define the function. You can mark components as needed, and you can decide whether to enable the **Combine components** setting. When you combine components, all spans that overlap are merged into a single span. If the setting isn't used, each individual component span is emitted.
+
+To better understand how individual components are performing, you can disable the setting and set each component to **Not required**. This setting lets you inspect the individual spans that are emitted and experiment with removing components so that only problematic components are generated.
-## Evaluate a model using multiple test sets
+## Evaluate a model by using multiple test sets
-Data in a conversational language understanding project can have two data sets. A "testing" set, and a "training" set. If you want to use multiple test sets to evaluate your model, you can:
+Data in a conversational language understanding project can have two datasets: a testing set and a training set. If you want to use multiple test sets to evaluate your model, you can:
* Give your test sets different names (for example, "test1" and "test2").
* Export your project to get a JSON file with its parameters and configuration.
-* Use the JSON to import a new project, and rename your second desired test set to "test".
-* Train the model to run the evaluation using your second test set.
+* Use the JSON to import a new project. Rename your second desired test set to "test" (see the sketch after these steps).
+* Train the model to run the evaluation by using your second test set.
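In the exported project JSON, each utterance carries a dataset assignment that you can rename before import. A hedged sketch of one utterance entry (these field names follow the common project export format and are assumptions, not reproduced from the article):

```json
{
  "text": "Cancel the Contoso service",
  "language": "en-us",
  "intent": "Cancel",
  "entities": [],
  "dataset": "test2"
}
```

Changing the `dataset` value of your second set to "test" before import makes the service treat those utterances as the evaluation set.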
## Custom parameters for target apps and child apps
-If you're using [orchestrated apps](./app-architecture.md), you may want to send custom parameter overrides for various child apps. The `targetProjectParameters` field allows users to send a dictionary representing the parameters for each target project. For example, consider an orchestrator app named `Orchestrator` orchestrating between a conversational language understanding app named `CLU1` and a custom question answering app named `CQA1`. If you want to send a parameter named "top" to the question answering app, you can use the above parameter.
+If you're using [orchestrated apps](./app-architecture.md), you might want to send custom parameter overrides for various child apps. The `targetProjectParameters` field allows users to send a dictionary representing the parameters for each target project. For example, consider an orchestrator app named `Orchestrator` orchestrating between a conversational language understanding app named `CLU1` and a custom question answering app named `CQA1`. If you want to send a parameter named "top" to the question answering app, you can pass it through the `targetProjectParameters` field.
```console
curl --request POST \
...
```
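The body of that request is elided in this diff. A hedged sketch of how the `targetProjectParameters` dictionary might be shaped for this scenario (the project names come from the article; the surrounding request fields and the `callingOptions` placement are assumptions):

```json
{
  "kind": "Conversation",
  "analysisInput": {
    "conversationItem": {
      "id": "1",
      "participantId": "1",
      "text": "How many items can I return?"
    }
  },
  "parameters": {
    "projectName": "Orchestrator",
    "deploymentName": "<deployment-name>",
    "targetProjectParameters": {
      "CQA1": {
        "targetProjectKind": "QuestionAnswering",
        "callingOptions": {
          "top": 1
        }
      }
    }
  }
}
```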
## Copy projects across language resources
-Often you can copy conversational language understanding projects from one resource to another using the **copy** button in Azure Language Studio. However in some cases, it might be easier to copy projects using the API.
+Often you can copy conversational language understanding projects from one resource to another by using the **Copy** button in Language Studio. In some cases, it might be easier to copy projects by using the API.
-First, identify the:
- * source project name
- * target project name
- * source language resource
- * target language resource, which is where you want to copy it to.
+First, identify the:
+
+ * Source project name.
+ * Target project name.
+ * Source language resource.
+ * Target language resource, which is where you want to copy it to.
-Call the API to authorize the copy action, and get the `accessTokens` for the actual copy operation later.
+Call the API to authorize the copy action and get `accessTokens` for the actual copy operation later.
```console
curl --request POST \
  ...
  --data '{"projectKind":"Conversation","allowOverwrite":false}'
```
-Call the API to complete the copy operation. Use the response you got earlier as the payload.
+Call the API to complete the copy operation. Use the response you got earlier as the payload.
```console
curl --request POST \
...
```
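Both request bodies are largely elided in this diff. As a rough sketch, the authorization response that you pass back as the copy payload might carry fields along these lines (every field name here is an assumption about the copy-authorization response, not reproduced from the article):

```json
{
  "targetProjectName": "<target-project-name>",
  "accessToken": "<access-token>",
  "expiresAt": "<timestamp>",
  "targetResourceId": "<target-language-resource-id>",
  "targetResourceRegion": "<target-region>"
}
```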
+## Address out-of-domain utterances
-## Addressing out of domain utterances
-
-Customers can use the new recipe version '2024-06-01-preview' in case the model has poor AIQ on out of domain utterances. An example of this with the default recipe can be like the below where the model has 3 intents Sports, QueryWeather and Alarm. The test utterances are out of domain utterances and the model classifies them as InDomain with a relatively high confidence score.
+Customers can use the new recipe version `2024-06-01-preview` if the model has poor AIQ on out-of-domain utterances. An example of this scenario with the default recipe is the following table, where the model has three intents: `Sports`, `QueryWeather`, and `Alarm`. The test utterances are out-of-domain utterances, and the model classifies them as `InDomain` with a relatively high confidence score.
| Text | Predicted intent | Confidence score |
|-|-|-|
-| "*Who built the Eiffel Tower?*" | `Sports` | 0.90 |
-| "*Do I look good to you today?*" | `QueryWeather` | 1.00 |
-| "*I hope you have a good evening.*" | `Alarm` | 0.80 |
+| "Who built the Eiffel Tower?" | `Sports` | 0.90 |
+| "Do I look good to you today?" | `QueryWeather` | 1.00 |
+| "I hope you have a good evening." | `Alarm` | 0.80 |
-To address this, use the `2024-06-01-preview` configuration version that is built specifically to address this issue while also maintaining reasonably good quality on In Domain utterances.
+To address this scenario, use the `2024-06-01-preview` configuration version that's built specifically to address this issue while also maintaining reasonably good quality on `InDomain` utterances.
```console
curl --location 'https://<your-resource>.cognitiveservices.azure.com/language/authoring/analyze-conversations/projects/<your-project>/:train?api-version=2022-10-01-preview' \
...
```
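As with the earlier training call, the body is elided; under the same assumptions about the train request fields, the only change from the `2023-04-15` sketch would be the configuration version:

```json
{
  "modelLabel": "<model-name>",
  "trainingMode": "advanced",
  "trainingConfigVersion": "2024-06-01-preview"
}
```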
-Once the request is sent, you can track the progress of the training job in Language Studio as usual.
+After the request is sent, you can track the progress of the training job in Language Studio as usual.
Caveats:
-- The None Score threshold for the app (confidence threshold below which the topIntent is marked as None) when using this recipe should be set to 0. This is because this new recipe attributes a certain portion of the in domain probabilities to out of domain so that the model isn't incorrectly overconfident about in domain utterances. As a result, users may see slightly reduced confidence scores for in domain utterances as compared to the prod recipe.
-- This recipe isn't recommended for apps with just two (2) intents, such as IntentA and None, for example.
-- This recipe isn't recommended for apps with low number of utterances per intent. A minimum of 25 utterances per intent is highly recommended.
+- The None score threshold for the app (confidence threshold below which `topIntent` is marked as `None`) should be set to 0 when you use this recipe (a sketch of the setting follows these caveats). This setting is used because this new recipe attributes a certain portion of the in-domain probabilities to out of domain so that the model isn't incorrectly overconfident about in-domain utterances. As a result, users might see slightly reduced confidence scores for in-domain utterances as compared to the prod recipe.
+- We don't recommend this recipe for apps with only two intents, such as `IntentA` and `None`, for example.
+- We don't recommend this recipe for apps with a low number of utterances per intent. We highly recommend a minimum of 25 utterances per intent.
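A minimal sketch of the project setting that the first caveat refers to (placement inside a `settings` object is an assumption; only the `confidenceThreshold` name comes from the article):

```json
{
  "settings": {
    "confidenceThreshold": 0
  }
}
```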
ai-services Entity Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/conversational-language-understanding/concepts/entity-components.md
Title: Entity components in Conversational Language Understanding
+ Title: Entity components in conversational language understanding
-description: Learn how Conversational Language Understanding extracts entities from text
+description: Learn how conversational language understanding extracts entities from text.
#
# Entity components
-In Conversational Language Understanding, entities are relevant pieces of information that are extracted from your utterances. An entity can be extracted by different methods. They can be learned through context, matched from a list, or detected by a prebuilt recognized entity. Every entity in your project is composed of one or more of these methods, which are defined as your entity's components. When an entity is defined by more than one component, their predictions can overlap. You can determine the behavior of an entity prediction when its components overlap by using a fixed set of options in the **Entity options**.
+In conversational language understanding, entities are relevant pieces of information that are extracted from your utterances. An entity can be extracted by different methods. They can be learned through context, matched from a list, or detected by a prebuilt recognized entity. Every entity in your project is composed of one or more of these methods, which are defined as your entity's components.
+
+When an entity is defined by more than one component, their predictions can overlap. You can determine the behavior of an entity prediction when its components overlap by using a fixed set of options in the *entity options*.
## Component types
-An entity component determines a way you can extract the entity. An entity can contain one component, which would determine the only method that would be used to extract the entity, or multiple components to expand the ways in which the entity is defined and extracted.
+An entity component determines a way that you can extract the entity. An entity can contain one component, which determines the only method to be used to extract the entity. An entity can also contain multiple components to expand the ways in which the entity is defined and extracted.
### Learned component
-The learned component uses the entity tags you label your utterances with to train a machine learned model. The model learns to predict where the entity is, based on the context within the utterance. Your labels provide examples of where the entity is expected to be present in an utterance, based on the meaning of the words around it and as the words that were labeled. This component is only defined if you add labels by tagging utterances for the entity. If you do not tag any utterances with the entity, it will not have a learned component.
+The learned component uses the entity tags you label your utterances with to train a machine-learned model. The model learns to predict where the entity is based on the context within the utterance. Your labels provide examples of where the entity is expected to be present in an utterance, based on the meaning of the words around it and as the words that were labeled.
+This component is only defined if you add labels by tagging utterances for the entity. If you don't tag any utterances with the entity, it doesn't have a learned component.
-### List component
-The list component represents a fixed, closed set of related words along with their synonyms. The component performs an exact text match against the list of values you provide as synonyms. Each synonym belongs to a "list key", which can be used as the normalized, standard value for the synonym that will return in the output if the list component is matched. List keys are **not** used for matching.
+### List component
-In multilingual projects, you can specify a different set of synonyms for each language. While using the prediction API, you can specify the language in the input request, which will only match the synonyms associated to that language.
+The list component represents a fixed, closed set of related words along with their synonyms. The component performs an exact text match against the list of values you provide as synonyms. Each synonym belongs to a *list key*, which can be used as the normalized, standard value for the synonym that returns in the output if the list component is matched. List keys *aren't* used for matching.
+In multilingual projects, you can specify a different set of synonyms for each language. When you use the prediction API, you can specify the language in the input request, which only matches the synonyms associated to that language.
### Prebuilt component
-The prebuilt component allows you to select from a library of common types such as numbers, datetimes, and names. When added, a prebuilt component is automatically detected. You can have up to five prebuilt components per entity. See [the list of supported prebuilt components](../prebuilt-component-reference.md) for more information.
-
+The prebuilt component allows you to select from a library of common types such as numbers, datetimes, and names. When added, a prebuilt component is automatically detected. You can have up to five prebuilt components per entity. For more information, see [the list of supported prebuilt components](../prebuilt-component-reference.md).
### Regex component
-The regex component matches regular expressions to capture consistent patterns. When added, any text that matches the regular expression will be extracted. You can have multiple regular expressions within the same entity, each with a different key identifier. A matched expression will return the key as part of the prediction response.
+The regex component matches regular expressions to capture consistent patterns. When added, any text that matches the regular expression is extracted. You can have multiple regular expressions within the same entity, each with a different key identifier. A matched expression returns the key as part of the prediction response.
-In multilingual projects, you can specify a different expression for each language. While using the prediction API, you can specify the language in the input request, which will only match the regular expression associated to that language.
-
+In multilingual projects, you can specify a different expression for each language. When you use the prediction API, you can specify the language in the input request, which only matches the regular expression associated to that language.
## Entity options
-When multiple components are defined for an entity, their predictions may overlap. When an overlap occurs, each entity's final prediction is determined by one of the following options.
+When multiple components are defined for an entity, their predictions might overlap. When an overlap occurs, each entity's final prediction is determined by one of the following options.
### Combine components

Combine components as one entity when they overlap by taking the union of all the components.
-Use this to combine all components when they overlap. When components are combined, you get all the extra information that's tied to a list or prebuilt component when they are present.
+Use this option to combine all components when they overlap. When components are combined, you get all the extra information that's tied to a list or prebuilt component when they're present.
#### Example
-Suppose you have an entity called Software that has a list component, which contains "Proseware OS" as an entry. In your utterance data, you have "I want to buy Proseware OS 9" with "Proseware OS 9" tagged as Software:
-
+Suppose you have an entity called **Software** that has a list component, which contains "Proseware OS" as an entry. In your utterance data, you have "I want to buy Proseware OS 9" with "Proseware OS 9" tagged as **Software**:
-By using combine components, the entity will return with the full context as "Proseware OS 9" along with the key from the list component:
+By using combined components, the entity returns with the full context as "Proseware OS 9" along with the key from the list component:
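As an illustration of the idea, here's a hedged sketch of how such a prediction might surface the list key in the response (the field names follow the common conversation-analysis response shape and are assumptions, not reproduced from the article):

```json
{
  "category": "Software",
  "text": "Proseware OS 9",
  "offset": 14,
  "length": 14,
  "confidenceScore": 1,
  "extraInformation": [
    {
      "extraInformationKind": "ListKey",
      "key": "Proseware OS"
    }
  ]
}
```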
-Suppose you had the same utterance but only "OS 9" was predicted by the learned component:
+Suppose you had the same utterance, but only "OS 9" was predicted by the learned component:
-With combine components, the entity will still return as "Proseware OS 9" with the key from the list component:
+With combined components, the entity still returns as "Proseware OS 9" with the key from the list component:
-### Do not combine components
+### Don't combine components
-Each overlapping component will return as a separate instance of the entity. Apply your own logic after prediction with this option.
+Each overlapping component returns as a separate instance of the entity. Apply your own logic after prediction with this option.
#### Example
-Suppose you have an entity called Software that has a list component, which contains "Proseware Desktop" as an entry. In your utterance data, you have "I want to buy Proseware Desktop Pro" with "Proseware Desktop Pro" tagged as Software:
+Suppose you have an entity called **Software** that has a list component, which contains "Proseware Desktop" as an entry. In your utterance data, you have "I want to buy Proseware Desktop Pro" with "Proseware Desktop Pro" tagged as **Software**:
-When you do not combine components, the entity will return twice:
+When you don't combine components, the entity returns twice:
### Required components
-An entity can sometimes be defined by multiple components but requires one or more of them to be present. Every component can be set as **required**, which means the entity will **not** be returned if that component wasn't present. For example, if you have an entity with a list component and a required learned component, it is guaranteed that any returned entity includes a learned component; if it doesn't, the entity will not be returned.
+Sometimes an entity can be defined by multiple components but requires one or more of them to be present. Every component can be set as *required*, which means the entity *won't* be returned if that component wasn't present. For example, if you have an entity with a list component and a required learned component, it's guaranteed that any returned entity includes a learned component. If it doesn't, the entity isn't returned.
-Required components are most frequently used with learned components, as they can restrict the other component types to a specific context, which is commonly associated to **roles**. You can also require all components to make sure that every component is present for an entity.
+Required components are most frequently used with learned components because they can restrict the other component types to a specific context, which is commonly associated to *roles*. You can also require all components to make sure that every component is present for an entity.
-In the Language Studio, every component in an entity has a toggle next to it that allows you to set it as required.
+In Language Studio, every component in an entity has a toggle next to it that allows you to set it as required.
#### Example
-Suppose you have an entity called **Ticket Quantity** that attempts to extract the number of tickets you want to reserve for flights, for utterances such as _"Book **two** tickets tomorrow to Cairo"_.
+Suppose you have an entity called **Ticket Quantity** that attempts to extract the number of tickets you want to reserve for flights, for utterances such as "Book **two** tickets tomorrow to Cairo."
-Typically, you would add a prebuilt component for _Quantity.Number_ that already extracts all numbers. However if your entity was only defined with the prebuilt, it would also extract other numbers as part of the **Ticket Quantity** entity, such as _"Book **two** tickets tomorrow to Cairo at **3** PM"_.
+Typically, you add a prebuilt component for `Quantity.Number` that already extracts all numbers. If your entity was only defined with the prebuilt component, it also extracts other numbers as part of the **Ticket Quantity** entity, such as "Book **two** tickets tomorrow to Cairo at **3** PM."
-To resolve this, you would label a learned component in your training data for all the numbers that are meant to be **Ticket Quantity**. The entity now has 2 components, the prebuilt that knows all numbers, and the learned one that predicts where the Ticket Quantity is in a sentence. If you require the learned component, you make sure that Ticket Quantity only returns when the learned component predicts it in the right context. If you also require the prebuilt component, you can then guarantee that the returned Ticket Quantity entity is both a number and in the correct position.
+To resolve this scenario, you label a learned component in your training data for all the numbers that are meant to be **Ticket Quantity**. The entity now has two components: the prebuilt component that knows all numbers, and the learned one that predicts where the ticket quantity is in a sentence. If you require the learned component, you make sure that **Ticket Quantity** only returns when the learned component predicts it in the right context. If you also require the prebuilt component, you can then guarantee that the returned **Ticket Quantity** entity is both a number and in the correct position.
+## Use components and options
-## How to use components and options
+Components give you the flexibility to define your entity in more than one way. When you combine components, you make sure that each component is represented and you reduce the number of entities returned in your predictions.
-Components give you the flexibility to define your entity in more than one way. When you combine components, you make sure that each component is represented and you reduce the number of entities returned in your predictions.
+A common practice is to extend a prebuilt component with a list of values that the prebuilt might not support. For example, if you have an **Organization** entity, which has a `General.Organization` prebuilt component added to it, the entity might not predict all the organizations specific to your domain. You can use a list component to extend the values of the **Organization** entity and extend the prebuilt component with your own organizations.
-A common practice is to extend a prebuilt component with a list of values that the prebuilt might not support. For example, if you have an **Organization** entity, which has a _General.Organization_ prebuilt component added to it, the entity may not predict all the organizations specific to your domain. You can use a list component to extend the values of the Organization entity and thereby extending the prebuilt with your own organizations.
+Other times, you might be interested in extracting an entity through context, such as a **Product** in a retail project. You label the learned component of the product to learn _where_ a product is based on its position within the sentence. You might also have a list of products that you already know beforehand that you want to always extract. Combining both components in one entity allows you to get both options for the entity.
-Other times you may be interested in extracting an entity through context such as a **Product** in a retail project. You would label for the learned component of the product to learn _where_ a product is based on its position within the sentence. You may also have a list of products that you already know before hand that you'd like to always extract. Combining both components in one entity allows you to get both options for the entity.
-
-When you do not combine components, you allow every component to act as an independent entity extractor. One way of using this option is to separate the entities extracted from a list to the ones extracted through the learned or prebuilt components to handle and treat them differently.
+When you don't combine components, you allow every component to act as an independent entity extractor. One way of using this option is to separate the entities extracted from a list to the ones extracted through the learned or prebuilt components to handle and treat them differently.
> [!NOTE]
-> Previously during the public preview of the service, there were 4 available options: **Longest overlap**, **Exact overlap**, **Union overlap**, and **Return all separately**. **Longest overlap** and **exact overlap** are deprecated and will only be supported for projects that previously had those options selected. **Union overlap** has been renamed to **Combine components**, while **Return all separately** has been renamed to **Do not combine components**.
-
+> Previously during the public preview of the service, there were four available options: **Longest overlap**, **Exact overlap**, **Union overlap**, and **Return all separately**. **Longest overlap** and **Exact overlap** are deprecated and are only supported for projects that previously had those options selected. **Union overlap** has been renamed to **Combine components**, while **Return all separately** has been renamed to **Do not combine components**.
-## Next steps
+## Related content
-[Supported prebuilt components](../prebuilt-component-reference.md)
+- [Supported prebuilt components](../prebuilt-component-reference.md)
ai-services Multiple Languages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/conversational-language-understanding/concepts/multiple-languages.md
Title: Multilingual projects
-description: Learn about which how to make use of multilingual projects in conversational language understanding
+description: Learn about how to make use of multilingual projects in conversational language understanding.
#
# Multilingual projects
-Conversational language understanding makes it easy for you to extend your project to several languages at once. When you enable multiple languages in projects, you'll be able to add language specific utterances and synonyms to your project, and get multilingual predictions for your intents and entities.
+Conversational language understanding makes it easy for you to extend your project to several languages at once. When you enable multiple languages in projects, you can add language-specific utterances and synonyms to your project. You can get multilingual predictions for your intents and entities.
## Multilingual intent and learned entity components
-When you enable multiple languages in a project, you can train the project primarily in one language and immediately get predictions in others.
+When you enable multiple languages in a project, you can train the project primarily in one language and immediately get predictions in other languages.
-For example, you can train your project entirely with English utterances, and query it in: French, German, Mandarin, Japanese, Korean, and others. Conversational language understanding makes it easy for you to scale your projects to multiple languages by using multilingual technology to train your models.
+For example, you can train your project entirely with English utterances and query it in French, German, Mandarin, Japanese, Korean, and others. Conversational language understanding makes it easy for you to scale your projects to multiple languages by using multilingual technology to train your models.
-Whenever you identify that a particular language is not performing as well as other languages, you can add utterances for that language in your project. In the [tag utterances](../how-to/tag-utterances.md) page in Language Studio, you can select the language of the utterance you're adding. When you introduce examples for that language to the model, it is introduced to more of the syntax of that language, and learns to predict it better.
+Whenever you identify that a particular language isn't performing as well as other languages, you can add utterances for that language in your project. In the [tag utterances](../how-to/tag-utterances.md) page in Language Studio, you can select the language of the utterance you're adding. When you introduce examples for that language to the model, it's introduced to more of the syntax of that language and learns to predict it better.
-You aren't expected to add the same amount of utterances for every language. You should build the majority of your project in one language, and only add a few utterances in languages you observe aren't performing well. If you create a project that is primarily in English, and start testing it in French, German, and Spanish, you might observe that German doesn't perform as well as the other two languages. In that case, consider adding 5% of your original English examples in German, train a new model and test in German again. You should see better results for German queries. The more utterances you add, the more likely the results are going to get better.
+You aren't expected to add the same number of utterances for every language. You should build most of your project in one language and only add a few utterances in languages that you observe aren't performing well. If you create a project that's primarily in English and start testing it in French, German, and Spanish, you might observe that German doesn't perform as well as the other two languages. In that case, consider adding 5% of your original English examples in German, train a new model, and test in German again. You should see better results for German queries. The more utterances you add, the more likely the results are going to get better.
-When you add data in another language, you shouldn't expect it to negatively affect other languages.
+When you add data in another language, you shouldn't expect it to negatively affect other languages.
## List and prebuilt components in multiple languages
-Projects with multiple languages enabled will allow you to specify synonyms **per language** for every list key. Depending on the language you query your project with, you will only get matches for the list component with synonyms of that language. When you query your project, you can specify the language in the request body:
+Projects with multiple languages enabled allow you to specify synonyms *per language* for every list key. Depending on the language you query your project with, you only get matches for the list component with synonyms of that language. When you query your project, you can specify the language in the request body:
```json "query": "{query}" "language": "{language code}" ```
-If you do not provide a language, it will fall back to the default language of your project. See the [language support](../language-support.md) article for a list of different language codes.
+If you don't provide a language, it falls back to the default language of your project. For a list of different language codes, see [Language support](../language-support.md).
-Prebuilt components are similar, where you should expect to get predictions for prebuilt components that are available in specific languages. The request's language again determines which components are attempting to be predicted. See the [prebuilt components](../prebuilt-component-reference.md) reference article for the language support of each prebuilt component.
+Prebuilt components are similar, where you should expect to get predictions for prebuilt components that are available in specific languages. The request's language again determines which components are attempting to be predicted. For information on the language support of each prebuilt component, see the [Supported prebuilt entity components](../prebuilt-component-reference.md).
-## Next steps
+## Related content
* [Tag utterances](../how-to/tag-utterances.md) * [Train a model](../how-to/train-model.md)
ai-services Model Retirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/model-retirements.md
description: Learn about the model deprecations and retirements in Azure OpenAI. Previously updated : 07/18/2024 Last updated : 07/30/2024
These models are currently available for use in Azure OpenAI Service.
| `gpt-35-turbo` | 0125 | No earlier than Feb 22, 2025 | | `gpt-4`<br>`gpt-4-32k` | 0314 | **Deprecation:** October 1, 2024 <br> **Retirement:** June 6, 2025 | | `gpt-4`<br>`gpt-4-32k` | 0613 | **Deprecation:** October 1, 2024 <br> **Retirement:** June 6, 2025 |
-| `gpt-4` | 1106-preview | To be upgraded to `gpt-4` Version: `turbo-2024-04-09`, starting on August 15, 2024, or later **<sup>1</sup>** |
-| `gpt-4` | 0125-preview |To be upgraded to `gpt-4` Version: `turbo-2024-04-09`, starting on August 15, 2024, or later **<sup>1</sup>** |
-| `gpt-4` | vision-preview | To be upgraded to `gpt-4` Version: `turbo-2024-04-09`, starting on August 15, 2024, or later **<sup>1</sup>** |
+| `gpt-4` | 1106-preview | To be upgraded to `gpt-4` Version: `turbo-2024-04-09`, starting on November 15, 2024, or later **<sup>1</sup>** |
+| `gpt-4` | 0125-preview |To be upgraded to `gpt-4` Version: `turbo-2024-04-09`, starting on November 15, 2024, or later **<sup>1</sup>** |
+| `gpt-4` | vision-preview | To be upgraded to `gpt-4` Version: `turbo-2024-04-09`, starting on November 15, 2024, or later **<sup>1</sup>** |
| `gpt-3.5-turbo-instruct` | 0914 | No earlier than Sep 14, 2025 | | `text-embedding-ada-002` | 2 | No earlier than April 3, 2025 | | `text-embedding-ada-002` | 1 | No earlier than April 3, 2025 |
If you're an existing customer looking for information about these models, see [
## Retirement and deprecation history
-## July 18, 2024
+### July 30, 2024
+
+* Updated `gpt-4` preview model upgrade date to November 15, 2024 or later for the following versions:
+ * 1106-preview
+ * 0125-preview
+ * vision-preview
+
+### July 18, 2024
* Updated `gpt-4` 0613 deprecation date to October 1, 2024 and the retirement date to June 6, 2025.
-## June 19, 2024
+### June 19, 2024
* Updated `gpt-35-turbo` 0301 retirement date to no earlier than October 1, 2024. * Updated `gpt-35-turbo` & `gpt-35-turbo-16k`0613 retirement date to October 1, 2024.
ai-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/models.md
description: Learn about the different model capabilities that are available with Azure OpenAI. Previously updated : 07/18/2024 Last updated : 07/31/2024
Azure OpenAI Service is powered by a diverse set of models with different capabi
| Models | Description | |--|--|
-| [GPT-4o & GPT-4 Turbo](#gpt-4o-and-gpt-4-turbo) | The latest most capable Azure OpenAI models with multimodal versions, which can accept both text and images as input. |
+| [GPT-4o & GPT-4o mini & GPT-4 Turbo](#gpt-4o-and-gpt-4-turbo) | The latest most capable Azure OpenAI models with multimodal versions, which can accept both text and images as input. |
| [GPT-4](#gpt-4) | A set of models that improve on GPT-3.5 and can understand and generate natural language and code. | | [GPT-3.5](#gpt-35) | A set of models that improve on GPT-3 and can understand and generate natural language and code. | | [Embeddings](#embeddings-models) | A set of models that can convert text into numerical vector form to facilitate text similarity. |
Azure OpenAI Service is powered by a diverse set of models with different capabi
GPT-4o integrates text and images in a single model, enabling it to handle multiple data types simultaneously. This multimodal approach enhances accuracy and responsiveness in human-computer interactions. GPT-4o matches GPT-4 Turbo in English text and coding tasks while offering superior performance in non-English languages and vision tasks, setting new benchmarks for AI capabilities.
-### Early access playground
+### How do I access the GPT-4o and GPT-4o mini models?
-Existing Azure OpenAI customers can test out the **NEW GPT-4o mini** model in the **Azure OpenAI Studio Early Access Playground (Preview)**.
-
-To test the latest model:
-
-> [!NOTE]
-> The GPT-4o mini early access playground is currently only available for resources in **West US3** and **East US**, and is limited to 10 requests every five minutes per subscription. Azure OpenAI content filters are enabled at the default configuration and cannot be modified. GPT-4o mini is a preview model and is currently not available for deployment/direct API access.
-
-1. Navigate to Azure OpenAI Studio at https://oai.azure.com/ and sign-in with credentials that have access to your OpenAI resources.
-2. Select an Azure OpenAI resource in the **West US3** or **East US** regions. If you don't have a resource in one of these regions you will need to [create a resource](../how-to/create-resource.md).
-3. From the main [Azure OpenAI Studio](https://oai.azure.com/) page select the **Early Access Playground (Preview)** button from under the **Get started** section. (This button will only be visible when a resource in **West US3** or **East US** is selected.)
-4. Now you can start asking the model questions just as you would before in the existing [chat playground](../chatgpt-quickstart.md).
-
-### How do I access the GPT-4o model?
-
-GPT-4o is available for **standard** and **global-standard** model deployment.
+GPT-4o and GPT-4o mini are available for **standard** and **global-standard** model deployment.
You need to [create](../how-to/create-resource.md) or use an existing resource in a [supported standard](#gpt-4-and-gpt-4-turbo-model-availability) or [global standard](#global-standard-model-availability) region where the model is available.
-When your resource is created, you can [deploy](../how-to/create-resource.md#deploy-a-model) the GPT-4o model. If you are performing a programmatic deployment, the **model** name is `gpt-4o`, and the **version** is `2024-05-13`.
+When your resource is created, you can [deploy](../how-to/create-resource.md#deploy-a-model) the GPT-4o models. If you are performing a programmatic deployment, the **model** names are:
+
+- `gpt-4o`, **Version** `2024-05-13`
+- `gpt-4o-mini`, **Version** `2024-07-18`
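After the deployment exists, you call it by its deployment name. The following is a minimal sketch using the `openai` Python package against an Azure OpenAI resource; the environment variable names, API version, and deployment name are placeholders rather than required values.

```python
import os
from openai import AzureOpenAI

# Endpoint, key, API version, and deployment name below are placeholders.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # your deployment name, which might differ from the model name
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```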
### GPT-4 Turbo
See [model versions](../concepts/model-versions.md) to learn about how Azure Ope
| Model ID | Description | Max Request (tokens) | Training Data (up to) | | | : |: |:: |
-|`gpt-4o` (2024-05-13) <br> **GPT-4o (Omni)** | **Latest GA model** <br> - Text, image processing <br> - JSON Mode <br> - parallel function calling <br> - Enhanced accuracy and responsiveness <br> - Parity with English text and coding tasks compared to GPT-4 Turbo with Vision <br> - Superior performance in non-English languages and in vision tasks <br> - **Does not support enhancements** |Input: 128,000 <br> Output: 4,096| Oct 2023 |
+|`gpt-4o-mini` (2024-07-18) <br> **GPT-4o mini** | **Latest small GA model** <br> - Fast, inexpensive, capable model ideal for replacing GPT-3.5 Turbo series models. <br> - Text, image processing <br>- JSON Mode <br> - parallel function calling <br> - **Does not support enhancements** | Input: 128,000 <br> Output: 16,384 | Oct 2023 |
+|`gpt-4o` (2024-05-13) <br> **GPT-4o (Omni)** | **Latest large GA model** <br> - Text, image processing <br> - JSON Mode <br> - parallel function calling <br> - Enhanced accuracy and responsiveness <br> - Parity with English text and coding tasks compared to GPT-4 Turbo with Vision <br> - Superior performance in non-English languages and in vision tasks <br> - **Does not support enhancements** |Input: 128,000 <br> Output: 4,096| Oct 2023 |
| `gpt-4` (turbo-2024-04-09) <br>**GPT-4 Turbo with Vision** | **New GA model** <br> - Replacement for all previous GPT-4 preview models (`vision-preview`, `1106-Preview`, `0125-Preview`). <br> - [**Feature availability**](#gpt-4o-and-gpt-4-turbo) is currently different depending on method of input, and deployment type. <br> - **Does not support enhancements**. | Input: 128,000 <br> Output: 4,096 | Dec 2023 | | `gpt-4` (0125-Preview)*<br>**GPT-4 Turbo Preview** | **Preview Model** <br> -Replaces 1106-Preview <br>- Better code generation performance <br> - Reduces cases where the model doesn't complete a task <br> - JSON Mode <br> - parallel function calling <br> - reproducible output (preview) | Input: 128,000 <br> Output: 4,096 | Dec 2023 | | `gpt-4` (vision-preview)<br>**GPT-4 Turbo with Vision Preview** | **Preview model** <br> - Accepts text and image input. <br> - Supports enhancements <br> - JSON Mode <br> - parallel function calling <br> - reproducible output (preview) | Input: 128,000 <br> Output: 4,096 | Apr 2023 |
For more information on Provisioned deployments, see our [Provisioned guidance](
### Global standard model availability
-**Supported models:**
--- `gpt-4o` **Version:** `2024-05-13`
+`gpt-4o` **Version:** `2024-05-13`
**Supported regions:**
For more information on Provisioned deployments, see our [Provisioned guidance](
- westus - westus3
+`gpt-4o-mini` **Version:** `2024-07-18`
+
+**Supported regions:**
+
+- eastus
+ ### GPT-4 and GPT-4 Turbo model availability #### Public cloud regions [!INCLUDE [GPT-4](../includes/model-matrix/standard-gpt-4.md)] -- #### Select customer access In addition to the regions above which are available to all Azure OpenAI customers, some select pre-existing customers have been granted access to versions of GPT-4 in additional regions:
These models can only be used with Embedding API requests.
| `gpt-35-turbo` (0613) | East US2 <br> North Central US <br> Sweden Central <br> Switzerland West | 4,096 | Sep 2021 | | `gpt-35-turbo` (1106) | East US2 <br> North Central US <br> Sweden Central <br> Switzerland West | Input: 16,385<br> Output: 4,096 | Sep 2021| | `gpt-35-turbo` (0125) | East US2 <br> North Central US <br> Sweden Central <br> Switzerland West | 16,385 | Sep 2021 |
-| `gpt-4` (0613) <sup>**1**<sup> | North Central US <br> Sweden Central | 8192 | Sep 2021 |
+| `gpt-4` (0613) <sup>**1**</sup> | North Central US <br> Sweden Central | 8192 | Sep 2021 |
-**<sup>1<sup>** GPT-4 fine-tuning is currently in public preview. See our [GPT-4 fine-tuning safety evaluation guidance](/azure/ai-services/openai/how-to/fine-tuning?tabs=turbo%2Cpython-new&pivots=programming-language-python#safety-evaluation-gpt-4-fine-tuningpublic-preview) for more information.
+**<sup>1</sup>** GPT-4 fine-tuning is currently in public preview. See our [GPT-4 fine-tuning safety evaluation guidance](/azure/ai-services/openai/how-to/fine-tuning?tabs=turbo%2Cpython-new&pivots=programming-language-python#safety-evaluation-gpt-4-fine-tuningpublic-preview) for more information.
### Whisper models
ai-services Fine Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/fine-tuning.md
Previously updated : 05/16/2024 Last updated : 07/25/2024 zone_pivot_groups: openai-fine-tuning-new
ai-services Function Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/function-calling.md
At a high level you can break down working with functions into three steps:
* `gpt-4` (vision-preview) * `gpt-4` (2024-04-09) * `gpt-4o` (2024-05-13)
+* `gpt-4o-mini` (2024-07-18)
Support for parallel function was first added in API version [`2023-12-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
ai-services Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/quotas-limits.md
- ignite-2023 - references_regions Previously updated : 07/24/2024 Last updated : 07/31/2024
The following sections provide you with a quick guide to the default quotas and
## gpt-4o rate limits
-`gpt-4o` introduces rate limit tiers with higher limits for certain customer types.
+`gpt-4o` and `gpt-4o-mini` have rate limit tiers with higher limits for certain customer types.
### gpt-4o global standard
-|Tier| Quota Limit in tokens per minute (TPM) | Requests per minute |
-||::|::|
-|Enterprise agreement | 30 M | 180 K |
-|Default | 450 K | 2.7 K |
+| Model|Tier| Quota Limit in tokens per minute (TPM) | Requests per minute |
+|||::|::|
+|`gpt-4o`|Enterprise agreement | 30 M | 180 K |
+|`gpt-4o-mini` | Enterprise agreement | 50 M | 300 K |
+|`gpt-4o` |Default | 450 K | 2.7 K |
+|`gpt-4o-mini` | Default | 2 M | 12 K |
M = million | K = thousand ### gpt-4o standard
-|Tier| Quota Limit in tokens per minute (TPM) | Requests per minute |
-||::|::|
-|Enterprise agreement | 1 M | 6 K |
-|Default | 150 K | 900 |
+| Model|Tier| Quota Limit in tokens per minute (TPM) | Requests per minute |
+|||::|::|
+|`gpt-4o`|Enterprise agreement | 1 M | 6 K |
+|`gpt-4o-mini` | Enterprise agreement | 2 M | 12 K |
+|`gpt-4o`|Default | 150 K | 900 |
+|`gpt-4o-mini` | Default | 450 K | 2.7 K |
M = million | K = thousand
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whats-new.md
- ignite-2023 - references_regions Previously updated : 07/18/2024 Last updated : 07/31/2024 recommendations: false
This article provides a summary of the latest releases and major documentation u
## July 2024
-### GPT-4o mini preview model available for early access
+### GPT-4o mini model available for deployment
-GPT-4o mini is the latest model from OpenAI [launched on July 18, 2024](https://openai.com/index/gpt-4o-mini-advancing-cost-efficient-intelligence/).
+GPT-4o mini is the latest Azure OpenAI model first [announced on July 18, 2024](https://azure.microsoft.com/blog/openais-fastest-model-gpt-4o-mini-is-now-available-on-azure-ai/):
-From OpenAI:
+*"GPT-4o mini allows customers to deliver stunning applications at a lower cost with blazing speed. GPT-4o mini is significantly smarter than GPT-3.5 TurboΓÇöscoring 82% on Measuring Massive Multitask Language Understanding (MMLU) compared to 70%ΓÇöand is more than 60% cheaper.1 The model delivers an expanded 128K context window and integrates the improved multilingual capabilities of GPT-4o, bringing greater quality to languages from around the world."*
-*"GPT-4o mini surpasses GPT-3.5 Turbo and other small models on academic benchmarks across both textual intelligence and multimodal reasoning, and supports the same range of languages as GPT-4o. It also demonstrates strong performance in function calling, which can enable developers to build applications that fetch data or take actions with external systems, and improved long-context performance compared to GPT-3.5 Turbo."*
+The model is currently available for both [standard and global standard deployment](./how-to/deployment-types.md) in the East US region.
-To start testing out the model today in Azure OpenAI, see the [**Azure OpenAI Studio early access playground**](./concepts/models.md#early-access-playground).
+For information on model quota, consult the [quota and limits page](./quotas-limits.md) and for the latest info on model availability refer to the [models page](./concepts/models.md).
### New Responsible AI default content filtering policy
ai-services How To Pronunciation Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-pronunciation-assessment.md
zone_pivot_groups: programming-languages-ai-services
In this article, you learn how to evaluate pronunciation with speech to text through the Speech SDK. Pronunciation assessment evaluates speech pronunciation and gives speakers feedback on the accuracy and fluency of spoken audio.
+> [!NOTE]
+> Pronunciation assessment uses a specific version of the speech to text model that's different from the standard speech to text model, to ensure consistent and accurate pronunciation assessment.
+ ## Use pronunciation assessment in streaming mode Pronunciation assessment supports uninterrupted streaming mode. The recording time can be unlimited through the Speech SDK. As long as you don't stop recording, the evaluation process doesn't finish and you can pause and resume evaluation conveniently.
For how to use Pronunciation Assessment in streaming mode in your own applicatio
::: zone-end
+### Continuous recognition
++
+If your audio file exceeds 30 seconds, use continuous mode for processing. The sample code for continuous mode can be found on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_recognition_samples.cs) under the function `PronunciationAssessmentContinuousWithFile`.
+++
+If your audio file exceeds 30 seconds, use continuous mode for processing.
+++
+If your audio file exceeds 30 seconds, use continuous mode for processing. The sample code for continuous mode can be found on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/java/jre/console/src/com/microsoft/cognitiveservices/speech/samples/console/SpeechRecognitionSamples.java) under the function `pronunciationAssessmentContinuousWithFile`.
+++
+If your audio file exceeds 30 seconds, use continuous mode for processing. The sample code for continuous mode can be found on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/261160e26dfcae4c3aee93308d58d74e36739b6f/samples/python/console/speech_sample.py) under the function `pronunciation_assessment_continuous_from_file`.
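As a rough illustration of that pattern, a continuous pronunciation assessment session in Python looks approximately like the following sketch. The key, region, audio file, and reference text are placeholders, and the linked GitHub sample remains the authoritative version.

```python
import time
import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials and audio file; replace with your own values.
speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")
audio_config = speechsdk.audio.AudioConfig(filename="reading_passage.wav")

pron_config = speechsdk.PronunciationAssessmentConfig(
    reference_text="Read this passage aloud.",
    grading_system=speechsdk.PronunciationAssessmentGradingSystem.HundredMark,
    granularity=speechsdk.PronunciationAssessmentGranularity.Phoneme,
)

recognizer = speechsdk.SpeechRecognizer(
    speech_config=speech_config, language="en-US", audio_config=audio_config
)
pron_config.apply_to(recognizer)

done = False

def on_recognized(evt):
    # Each recognized segment carries its own pronunciation scores.
    result = speechsdk.PronunciationAssessmentResult(evt.result)
    print(evt.result.text, result.accuracy_score, result.fluency_score)

def on_stopped(evt):
    global done
    done = True

recognizer.recognized.connect(on_recognized)
recognizer.session_stopped.connect(on_stopped)
recognizer.canceled.connect(on_stopped)

recognizer.start_continuous_recognition()
while not done:
    time.sleep(0.5)
recognizer.stop_continuous_recognition()
```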
+++
+If your audio file exceeds 30 seconds, use continuous mode for processing. The sample code for continuous mode can be found on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/261160e26dfcae4c3aee93308d58d74e36739b6f/samples/js/node/pronunciationAssessmentContinue.js).
+++
+If your audio file exceeds 30 seconds, use continuous mode for processing. The sample code for continuous mode can be found on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/objective-c/ios/speech-samples/speech-samples/ViewController.m) under the function `pronunciationAssessFromFile`.
+++
+If your audio file exceeds 30 seconds, use continuous mode for processing. The sample code for continuous mode can be found on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/swift/ios/speech-samples/speech-samples/ViewController.swift) under the function `continuousPronunciationAssessment`.
+++++ ## Set configuration parameters ::: zone pivot="programming-language-go"
This table lists some of the optional methods you can set for the `Pronunciation
> Content and prosody assessments are only available in the [en-US](./language-support.md?tabs=pronunciation-assessment) locale. > > To explore the content and prosody assessments, upgrade to the SDK version 1.35.0 or later.
+>
+> There is no length limit for the topic parameter.
| Method | Description | |--|-|
You can get pronunciation assessment scores for:
- Syllable groups - Phonemes in [SAPI](/previous-versions/windows/desktop/ee431828(v=vs.85)#american-english-phoneme-table) or [IPA](https://en.wikipedia.org/wiki/IPA) format
-### Supported features per locale
+## Supported features per locale
The following table summarizes which features each locale supports. For more specifics, see the following sections. If the locales you require aren't listed in the following table for the supported feature, fill out this [intake form](https://aka.ms/speechpa/intake) for further assistance.
pronunciationAssessmentConfig?.phonemeAlphabet = "IPA"
::: zone-end
-## Assess spoken phonemes
+### Assess spoken phonemes
With spoken phonemes, you can get confidence scores that indicate how likely the spoken phonemes matched the expected phonemes.
pronunciationAssessmentConfig?.nbestPhonemeCount = 5
::: zone-end
+## Pronunciation score calculation
+
+Pronunciation scores are calculated by weighting accuracy, prosody, fluency, and completeness scores based on specific formulas for reading and speaking scenarios.
+
+Sort the available scores for accuracy, prosody, fluency, and completeness from low to high, and denote them s0 (lowest) through s3 (highest). The pronunciation score is then calculated as follows:
+
+For the reading scenario:
+ - With prosody score: PronScore = 0.4 * s0 + 0.2 * s1 + 0.2 * s2 + 0.2 * s3
+ - Without prosody score: PronScore = 0.6 * s0 + 0.2 * s1 + 0.2 * s2
+
+For the speaking scenario (the completeness score isn't applicable):
+ - With prosody score: PronScore = 0.6 * s0 + 0.2 * s1 + 0.2 * s2
+ - Without prosody score: PronScore = 0.6 * s0 + 0.4 * s1
+
+These formulas provide a weighted calculation based on the importance of each score, ensuring a comprehensive evaluation of pronunciation.
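As a quick check of those formulas, the following helper computes the overall score from the individual scores. The function is illustrative only and isn't part of the Speech SDK.

```python
def pronunciation_score(scores, scenario="reading", has_prosody=True):
    """Combine the available 0-100 scores using the weights described above.
    For reading, pass accuracy, prosody, fluency, and completeness;
    for speaking, completeness doesn't apply."""
    s = sorted(scores)  # s[0] is the lowest available score
    if scenario == "reading":
        if has_prosody:
            return 0.4 * s[0] + 0.2 * s[1] + 0.2 * s[2] + 0.2 * s[3]
        return 0.6 * s[0] + 0.2 * s[1] + 0.2 * s[2]
    if has_prosody:
        return 0.6 * s[0] + 0.2 * s[1] + 0.2 * s[2]
    return 0.6 * s[0] + 0.4 * s[1]

# Example: reading scenario with accuracy=80, prosody=70, fluency=90, completeness=100
print(pronunciation_score([80, 70, 90, 100]))  # 0.4*70 + 0.2*(80 + 90 + 100) = 82.0
```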
+ ## Related content - Learn about quality [benchmark](https://aka.ms/pronunciationassessment/techblog).
ai-services Language Learning With Pronunciation Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/language-learning-with-pronunciation-assessment.md
+
+ Title: Interactive language learning with pronunciation assessment
+description: Interactive language learning with pronunciation assessment gives you instant feedback on pronunciation, fluency, prosody, grammar, and vocabulary through interactive chats.
++++ Last updated : 8/1/2024+++
+# Interactive language learning with pronunciation assessment
++
+Learning a new language is an exciting journey. Interactive language learning can make your learning experience more engaging and effective. By using pronunciation assessment, you get instant feedback on pronunciation accuracy, fluency, prosody, grammar, and vocabulary throughout your interactive language learning experience.
+
+> [!NOTE]
+> The language learning feature currently supports only `en-US`. For available regions, refer to [available regions for pronunciation assessment](regions.md#speech-service). If you turn on the **Avatar** button to interact with a text to speech avatar, refer to the available [regions](regions.md#speech-service) for text to speech avatar.
+>
+> If you have any feedback on the language learning feature, fill out [this form](https://aka.ms/speechpa/intake).
+
+## Common use cases
+
+Here are some common scenarios where you can make use of the language learning feature to improve your language skills:
+
+- **Assess pronunciations:** Practice your pronunciation and receive scores with detailed feedback to identify areas for improvement.
+- **Improve speaking skills:** Engage in conversations with a native speaker (or a simulated one) to enhance your speaking skills and build confidence.
+- **Learn new vocabulary:** Expand your vocabulary and work on advanced pronunciation by interacting with AI-driven language models.
+
+## Getting started
+
+In this section, you can learn how to immerse yourself in dynamic conversations with a GPT-powered voice assistant to enhance your speaking skills.
+
+To get started with language learning through chatting, follow these steps:
+
+1. Go to **Language learning** in the [Speech Studio](https://aka.ms/speechstudio).
+
+1. Decide on a scenario or context in which you'd like to interact with the voice assistant. This can be a casual conversation, a specific topic, or a language learning exercise.
+
+ :::image type="content" source="media/pronunciation-assessment/language-learning.png" alt-text="Screenshot of choosing chatting scenario to interact with the voice assistant." lightbox="media/pronunciation-assessment/language-learning.png":::
+
+ If you want to interact with an avatar, toggle the **Avatar** button in the upper right corner to **On**.
+
+1. Press the microphone icon to start speaking naturally, as if you were talking to a real person.
+
+ :::image type="content" source="media/pronunciation-assessment/language-learning-selecting-mic-icon.png" alt-text="Screenshot of selecting the microphone icon to interact with the voice assistant." lightbox="media/pronunciation-assessment/language-learning-selecting-mic-icon.png":::
+
+ For accurate vocabulary and grammar scores, speak at least 3 sentences before assessment.
+
+1. Press the stop button or the **Assess my response** button to finish speaking. This action triggers the assessment process.
+
+ :::image type="content" source="media/pronunciation-assessment/language-learning-assess-response.png" alt-text="Screenshot of selecting the stop button to assess your response." lightbox="media/pronunciation-assessment/language-learning-assess-response.png":::
+
+1. Wait a moment to get a detailed assessment report.
+
+ :::image type="content" source="media/pronunciation-assessment/language-learning-assess-report.png" alt-text="Screenshot of a detailed assessment report.":::
+
+ The assessment report may include feedback on:
+ - **Accuracy:** Accuracy indicates how closely the phonemes match a native speaker's pronunciation.
+ - **Fluency:** Fluency indicates how closely the speech matches a native speaker's use of silent breaks between words.
+ - **Prosody:** Prosody indicates the nature of the given speech, including stress, intonation, speaking speed, and rhythm.
+ - **Grammar:** Grammar considers lexical accuracy, grammatical accuracy, and diversity of sentence structures, providing a more comprehensive evaluation of language proficiency.
+ - **Vocabulary:** Vocabulary evaluates the speaker's effective usage of words and their appropriateness within the given context to express ideas accurately, as well as the level of lexical complexity.
+
+ When recording your speech for pronunciation assessment, ensure your recording time falls within the recommended range of 20 seconds (equivalent to more than 50 words) to 10 minutes per session. This time range is optimal for evaluating the content of your speech accurately. Whether you have a short and focused conversation or a more extended dialogue, as long as the total recorded time falls within this range, you'll receive comprehensive feedback on your pronunciation, fluency, and content.
+
+ To get feedback on how to improve for each aspect of the assessment, select **Get feedback on how to improve**.
+
+ :::image type="content" source="media/pronunciation-assessment/language-learning-feedback-improve.png" alt-text="Screenshot of selecting the button to get feedback on how to improve for each aspect of the assessment.":::
+
+ When you have completed the conversation, you can also download your chat audio. You can clear the current conversation by selecting **Clear chat**.
+
+## Next steps
+
+- Use [pronunciation assessment with the Speech SDK](how-to-pronunciation-assessment.md)
+- Try [pronunciation assessment in the studio](pronunciation-assessment-tool.md).
ai-services Personal Voice How To Use https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/personal-voice-how-to-use.md
Here's example SSML in a request for text to speech with the voice name and the
You can use the SSML via the [Speech SDK](./get-started-text-to-speech.md) or [REST API](rest-text-to-speech.md). * **Real-time speech synthesis**: Use the [Speech SDK](./get-started-text-to-speech.md) or [REST API](rest-text-to-speech.md) to convert text to speech.
- * When you use Speech SDK, don't set Endpoint Id, just like prebuild voice.
+ * When you use the Speech SDK, don't set an Endpoint ID, just as with prebuilt voices.
 * When you use the REST API, use the prebuilt neural voices endpoint.
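For example, a minimal Python sketch of synthesizing with a personal voice through the Speech SDK might look like the following. The base model voice name, speaker profile ID, and SSML content are placeholders, and as noted above no endpoint ID is set.

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder key, region, voice name, and speaker profile ID.
speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")
# No endpoint_id is set for personal voice, unlike a custom neural voice deployment.
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config, audio_config=None)

ssml = """
<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis'
       xmlns:mstts='http://www.w3.org/2001/mstts' xml:lang='en-US'>
  <voice name='<personal-voice-base-model-name>'>
    <mstts:ttsembedding speakerProfileId='<your-speaker-profile-id>'>
      Hello, this is my personal voice speaking.
    </mstts:ttsembedding>
  </voice>
</speak>
"""

result = synthesizer.speak_ssml_async(ssml).get()
if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    with open("personal_voice_output.wav", "wb") as f:
        f.write(result.audio_data)
```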
+## Supported and unsupported SSML elements for personal voice
+
+For detailed information on the supported and unsupported SSML elements for Phoenix and Dragon models, refer to the following table. For instructions on how to use SSML elements, refer to the [SSML document structure and events](speech-synthesis-markup-structure.md).
+
+| Element | Description | Supported in Phoenix | Supported in Dragon |
+|-|--|-||
+| `<voice>` | Specifies the voice and optional effects (`eq_car` and `eq_telecomhp8k`). | Yes | Yes |
+| `<mstts:express-as>` | Specifies speaking styles and roles. | No | No |
+| `<mstts:ttsembedding>` | Specifies the `speakerProfileId` property for a personal voice. | Yes | No |
+| `<lang xml:lang>` | Specifies the speaking language. | Yes | Yes |
+| `<prosody>` | Adjusts pitch, contour, range, rate, and volume. | | |
+|&nbsp;&nbsp;&nbsp;`pitch` | Indicates the baseline pitch for the text. | No | No |
+| &nbsp;&nbsp;&nbsp;`contour`| Represents changes in pitch. | No | No |
+| &nbsp;&nbsp;&nbsp;`range` | Represents the range of pitch for the text. | No | No |
+| &nbsp;&nbsp;&nbsp;`rate` | Indicates the speaking rate of the text. | Yes | Yes |
+| &nbsp;&nbsp;&nbsp;`volume`| Indicates the volume level of the speaking voice. | No | No |
+| `<emphasis>` | Adds or removes word-level stress for the text. | No | No |
+| `<audio>` | Embeds prerecorded audio into an SSML document. | Yes | No |
+| `<mstts:audioduration>` | Specifies the duration of the output audio. | No | No |
+| `<mstts:backgroundaudio>`| Adds background audio to your SSML documents or mixes an audio file with text to speech. | Yes | No |
+| `<phoneme>` | Specifies phonetic pronunciation in SSML documents. | | |
+| &nbsp;&nbsp;&nbsp;`ipa` | One of the phonetic alphabets. | Yes | No |
+| &nbsp;&nbsp;&nbsp;`sapi` | One of the phonetic alphabets. | No | No |
+| &nbsp;&nbsp;&nbsp;`ups` | One of the phonetic alphabets. | Yes | No |
+| &nbsp;&nbsp;&nbsp;`x-sampa`| One of the phonetic alphabets. | Yes | No |
+| `<lexicon>` | Defines how multiple entities are read in SSML. | Yes | Yes (only support alias) |
+| `<say-as>` | Indicates the content type, such as number or date, of the element's text. | Yes | Yes |
+| `<sub>` | Indicates that the alias attribute's text value should be pronounced instead of the element's enclosed text. | Yes | Yes |
+| `<math>` | Uses the MathML as input text to properly pronounce mathematical notations in the output audio. | Yes | No |
+| `<bookmark>` | Gets the offset of each marker in the audio stream. | Yes | No |
+| `<break>` | Overrides the default behavior of breaks or pauses between words. | Yes | Yes |
+| `<mstts:silence>` | Inserts pauses before or after text, or between two adjacent sentences. | Yes | No |
+| `<mstts:viseme>` | Defines the position of the face and mouth while a person is speaking. | Yes | No |
+| `<p>` | Denotes paragraphs in SSML documents. | Yes | Yes |
+| `<s>` | Denotes sentences in SSML documents. | Yes | Yes |
+ ## Reference documentation > [!div class="nextstepaction"]
ai-services Speech Synthesis Markup Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-synthesis-markup-voice.md
The following table describes the usage of the `prosody` element's attributes:
| Attribute | Description | Required or optional | | - | - | - | | `contour` | Contour represents changes in pitch. These changes are represented as an array of targets at specified time positions in the speech output. Sets of parameter pairs define each target. For example: <br/><br/>`<prosody contour="(0%,+20Hz) (10%,-2st) (40%,+10Hz)">`<br/><br/>The first value in each set of parameters specifies the location of the pitch change as a percentage of the duration of the text. The second value specifies the amount to raise or lower the pitch by using a relative value or an enumeration value for pitch (see `pitch`). | Optional |
-| `pitch` | Indicates the baseline pitch for the text. Pitch changes can be applied at the sentence level. The pitch changes should be within 0.5 to 1.5 times the original audio. You can express the pitch as:<ul><li>An absolute value: Expressed as a number followed by "Hz" (Hertz). For example, `<prosody pitch="600Hz">some text</prosody>`.</li><li>A relative value:<ul><li>As a relative number: Expressed as a number preceded by "+" or "-" and followed by "Hz" or "st" that specifies an amount to change the pitch. For example: `<prosody pitch="+80Hz">some text</prosody>` or `<prosody pitch="-2st">some text</prosody>`. The "st" indicates the change unit is semitone, which is half of a tone (a half step) on the standard diatonic scale.<li>As a percentage: Expressed as a number preceded by "+" (optionally) or "-" and followed by "%", indicating the relative change. For example: `<prosody pitch="50%">some text</prosody>` or `<prosody pitch="-50%">some text</prosody>`.</li></ul></li><li>A constant value:<ul><li>x-low</li><li>low</li><li>medium</li><li>high</li><li>x-high</li><li>default</li></ul></li></ul> | Optional |
+| `pitch` | Indicates the baseline pitch for the text. Pitch changes can be applied at the sentence level. The pitch changes should be within 0.5 to 1.5 times the original audio. You can express the pitch as:<ul><li>An absolute value: Expressed as a number followed by "Hz" (Hertz). For example, `<prosody pitch="600Hz">some text</prosody>`.</li><li>A relative value:<ul><li>As a relative number: Expressed as a number preceded by "+" or "-" and followed by "Hz" or "st" that specifies an amount to change the pitch. For example: `<prosody pitch="+80Hz">some text</prosody>` or `<prosody pitch="-2st">some text</prosody>`. The "st" indicates the change unit is semitone, which is half of a tone (a half step) on the standard diatonic scale.<li>As a percentage: Expressed as a number preceded by "+" (optionally) or "-" and followed by "%", indicating the relative change. For example: `<prosody pitch="50%">some text</prosody>` or `<prosody pitch="-50%">some text</prosody>`.</li></ul></li><li>A constant value:<ul><li>`x-low` (equivalently 0.55,-45%)</li><li>`low` (equivalently 0.8, -20%)</li><li>`medium` (equivalently 1, default value)</li><li>`high` (equivalently 1.2, +20%)</li><li>`x-high` (equivalently 1.45, +45%)</li></ul></li></ul> | Optional |
| `range`| A value that represents the range of pitch for the text. You can express `range` by using the same absolute values, relative values, or enumeration values used to describe `pitch`.| Optional |
-| `rate` | Indicates the speaking rate of the text. Speaking rate can be applied at the word or sentence level. The rate changes should be within `0.5` to `2` times the original audio. You can express `rate` as:<ul><li>A relative value: <ul><li>As a relative number: Expressed as a number that acts as a multiplier of the default. For example, a value of `1` results in no change in the original rate. A value of `0.5` results in a halving of the original rate. A value of `2` results in twice the original rate.</li><li>As a percentage: Expressed as a number preceded by "+" (optionally) or "-" and followed by "%", indicating the relative change. For example: `<prosody rate="50%">some text</prosody>` or `<prosody rate="-50%">some text</prosody>`.</li></ul><li>A constant value:<ul><li>x-slow</li><li>slow</li><li>medium</li><li>fast</li><li>x-fast</li><li>default</li></ul></li></ul> | Optional |
-| `volume` | Indicates the volume level of the speaking voice. Volume changes can be applied at the sentence level. You can express the volume as:<ul><li>An absolute value: Expressed as a number in the range of `0.0` to `100.0`, from *quietest* to *loudest*, such as `75`. The default value is `100.0`.</li><li>A relative value: <ul><li>As a relative number: Expressed as a number preceded by "+" or "-" that specifies an amount to change the volume. Examples are `+10` or `-5.5`.</li><li>As a percentage: Expressed as a number preceded by "+" (optionally) or "-" and followed by "%", indicating the relative change. For example: `<prosody volume="50%">some text</prosody>` or `<prosody volume="+3%">some text</prosody>`.</li></ul><li>A constant value:<ul><li>silent</li><li>x-soft</li><li>soft</li><li>medium</li><li>loud</li><li>x-loud</li><li>default</li></ul></li></ul> | Optional |
+| `rate` | Indicates the speaking rate of the text. Speaking rate can be applied at the word or sentence level. The rate changes should be within `0.5` to `2` times the original audio. You can express `rate` as:<ul><li>A relative value: <ul><li>As a relative number: Expressed as a number that acts as a multiplier of the default. For example, a value of `1` results in no change in the original rate. A value of `0.5` results in a halving of the original rate. A value of `2` results in twice the original rate.</li><li>As a percentage: Expressed as a number preceded by "+" (optionally) or "-" and followed by "%", indicating the relative change. For example: `<prosody rate="50%">some text</prosody>` or `<prosody rate="-50%">some text</prosody>`.</li></ul><li>A constant value:<ul><li>`x-slow` (equivalently 0.5, -50%)</li><li>`slow` (equivalently 0.64, -46%)</li><li>`medium` (equivalently 1, default value)</li><li>`fast` (equivalently 1.55, +55%)</li><li>`x-fast` (equivalently 2, +100%)</li></ul></li></ul> | Optional |
+| `volume` | Indicates the volume level of the speaking voice. Volume changes can be applied at the sentence level. You can express the volume as:<ul><li>An absolute value: Expressed as a number in the range of `0.0` to `100.0`, from *quietest* to *loudest*, such as `75`. The default value is `100.0`.</li><li>A relative value: <ul><li>As a relative number: Expressed as a number preceded by "+" or "-" that specifies an amount to change the volume. Examples are `+10` or `-5.5`.</li><li>As a percentage: Expressed as a number preceded by "+" (optionally) or "-" and followed by "%", indicating the relative change. For example: `<prosody volume="50%">some text</prosody>` or `<prosody volume="+3%">some text</prosody>`.</li></ul><li>A constant value:<ul><li>`silent` (equivalently 0)</li><li>`x-soft` (equivalently 0.2)</li><li>`soft` (equivalently 0.4)</li><li>`medium` (equivalently 0.6)</li><li>`loud` (equivalently 0.8)</li><li>`x-loud` (equivalently 1, default value)</li></ul></li></ul> | Optional |
### Prosody examples
The following table describes the `emphasis` element's attributes:
| Attribute | Description | Required or optional | | - | - | - |
-| `level` | Indicates the strength of emphasis to be applied:<ul><li>`reduced`</li><li>`none`</li><li>`moderate`</li><li>`strong`</li></ul>.<br>When the `level` attribute isn't specified, the default level is `moderate`. For details on each attribute, see [emphasis element](https://www.w3.org/TR/speech-synthesis11/#S3.2.2). | Optional |
+| `level` | Indicates the strength of emphasis to be applied:<ul><li>`reduced`</li><li>`none`</li><li>`moderate`</li><li>`strong`</li></ul><br>When the `level` attribute isn't specified, the default level is `moderate`. For details on each attribute, see [emphasis element](https://www.w3.org/TR/speech-synthesis11/#S3.2.2). | Optional |
### Emphasis examples
ai-studio Data Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/data-add.md
- build-2024 Last updated 5/21/2024---++ # How to add and manage data in your Azure AI Studio project
ai-studio Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/whats-new.md
- Title: What's new in Azure AI Studio?-
-description: This article provides you with information about new releases and features.
-
-keywords: Release notes
-- Previously updated : 5/21/2024-----
-# What's new in Azure AI Studio?
-
-Azure AI Studio is updated on an ongoing basis. To stay up-to-date with recent developments, this article provides you with information about new releases and features.
-
-## May 2024
-
-### Azure AI Studio (GA)
-
-Azure AI Studio is now generally available. Azure AI Studio is a unified platform that brings together various Azure AI capabilities that were previously available as standalone Azure services. Azure AI Studio provides a seamless experience for developers, data scientists, and AI engineers to build, deploy, and manage AI models and applications. With Azure AI Studio, you can access a wide range of AI capabilities, including language models, speech, vision, and more, all in one place.
-
-> [!NOTE]
-> Some features are still in public preview and might not be available in all regions. Please refer to the feature level documentation for more information.
-
-### New UI
-
-We've updated the AI Studio navigation experience to help you work more efficiently and seamlessly move through the platform. Get to know the new navigation below:
-
-#### Quickly transition between hubs and projects
-
-Easily navigate between the global, hub, and project scopes.
-- Go back to the previous scope at any time by using the back button at the top of the navigation. -- Tools and resources change dynamically based on whether you are working at the global, hub, or project level. --
-#### Navigate with breadcrumbs
-
-We have added breadcrumbs to prevent you from getting lost in the product.
-- Breadcrumbs are consistently shown on the top navigation, regardless of what page you are on. -- Use these breadcrumbs to quickly move through the platform. --
-#### Customize your navigation
-
-The new navigation can be modified and customized to fit your needs.
-- Collapse and expand groupings as needed to easily access the tools you need the most. -- Collapse the navigation at any time to save screen space. All tools and capabilities will still be available. --
-#### Easily switch between your recent hubs and projects
-
-Switch between recently used hubs and projects at any time using the picker at the top of the navigation.
-- While in a hub, use the picker to access and switch to any of your recently used hubs. -- While in a project, use the picker to access and switch to any of your recently used projects. --
-### View and track your evaluators in a centralized way
-
-Evaluator is a new asset in Azure AI Studio. You can define a new evaluator in SDK and use it to run evaluation that generates scores of one or more metrics. You can view and manage both Microsoft curated evaluators and your own customized evaluators in the evaluator library. For more information, see [Evaluate with the prompt flow SDK](./how-to/develop/flow-evaluate-sdk.md).
-
-### Perform continuous monitoring for generative AI applications
-
-Azure AI Monitoring for Generative AI Applications enables you to continuously track the overall health of your production Prompt Flow deployments. With this feature, you can monitor the quality of LLM responses in addition to obtaining full visibility into the performance of your application, thus, helping you maintain trust and compliance. For more information, see [Monitor quality and safety of deployed prompt flow applications](./how-to/monitor-quality-safety.md).
-
-#### View embeddings benchmarks
-
-You can now compare benchmarks across embeddings models. For more information, see [Explore model benchmarks in Azure AI Studio](./how-to/model-benchmarks.md).
-
-### Fine-tune and deploy Azure OpenAI models
-
-Learn how to customize Azure OpenAI models with fine-tuning. You can train models on more examples and get higher quality results. For more information, see [Fine-tune and deploy Azure OpenAI models](../ai-services/openai/how-to/fine-tuning.md?context=/azure/ai-studio/context/context) and [Deploy Azure OpenAI models](./how-to/deploy-models-openai.md).
-
-### Service-side encryption of metadata
-
-We release simplified management when using customer-managed key encryption for workspaces, with less resources hosted in your Azure subscription. This reduces operational cost, and mitigates policy conflicts compared to the current offering.
-
-### Azure AI model Inference API
-
-The Azure AI Model Inference is an API that exposes a common set of capabilities for foundational models and that can be used by developers to consume predictions from a diverse set of models in a uniform and consistent way. Developers can talk with different models deployed in Azure AI without changing the underlying code they're using. For more information, see [Azure AI Model Inference API](./reference/reference-model-inference-api.md).
-
-### Perform tracing and debugging for GenAI applications
-
-Tracing is essential for providing detailed visibility into the performance and behavior of GenAI applications' inner workings. It plays a vital role in enhancing the debugging process, increasing observability, and promoting optimization.
-With this new capability, you can now efficiently monitor and rectify issues in your GenAI application during testing, fostering a more collaborative and efficient development process.
-
-### Use evaluators in the prompt flow SDK
-
-Evaluators in the prompt flow SDK offer a streamlined, code-based experience for evaluating and improving your generative AI apps. You can now easily use Microsoft curated quality and safety evaluators or define custom evaluators tailored to assess generative AI systems for the specific metrics you value. For more information about evaluators via the prompt flow SDK, see [Evaluate with the prompt flow SDK](./how-to/develop/flow-evaluate-sdk.md).
-
-Microsoft curated evaluators are also available in the AI Studio evaluator library, where you can view and manage them. However, custom evaluators are currently only available in the prompt flow SDK. For more information about evaluators in AI Studio, see [How to evaluate generative AI apps with Azure AI Studio](./how-to/evaluate-generative-ai-app.md#view-and-manage-the-evaluators-in-the-evaluator-library).
-
-### Use Prompty for engineering and sharing prompts
-
-Prompty is a new prompt template part of the prompt flow SDK that can be run standalone and integrated into your code. You can download a Prompty from the AI Studio playground, continue iterating on it in your local development environment, and check it into your git repo to share and collaborate on prompts with others. The Prompty format is supported in Semantic Kernel, C#, and LangChain as a community extension.
-
-### Mistral Small
-
-Mistral Small is available in the Azure AI model catalog. Mistral Small is Mistral AI's smallest proprietary Large Language Model (LLM). It can be used on any language-based task that requires high efficiency and low latency. Developers can access Mistral Small through Models as a Service (MaaS), enabling seamless API-based interactions.
-
-Mistral Small is:
--- A small model optimized for low latency: Efficient for high volume and low latency workloads. Mistral Small is Mistral's smallest proprietary model.-- Specialized in RAG: Crucial information isn't lost in the middle of long context windows. Supports up to 32K tokens.-- Strong in coding: Code generation, review, and comments with support for all mainstream coding languages.-- Multi-lingual by design: Best-in-class performance in French, German, Spanish, and Italian - in addition to English. Dozens of other languages are supported.-- Efficient guardrails baked in the model, with another safety layer with safe prompt option.-
-For more information about Mistral Small, see the [blog announcement](https://techcommunity.microsoft.com/t5/ai-machine-learning-blog/introducing-mistral-small-empowering-developers-with-efficient/ba-p/4127678).
-
-## April 2024
-
-### Phi-3
-
-The Phi-3 family of models developed by Microsoft is available in the Azure AI model catalog. Phi-3 models are the most capable and cost-effective small language models (SLMs) available, outperforming models of the same size and next size up across various language, reasoning, coding, and math benchmarks. This release expands the selection of high-quality models for customers, offering more practical choices as they compose and build generative AI applications.
--- Phi-3-mini is available in two context-length variantsΓÇö4K and 128K tokens. It's the first model in its class to support a context window of up to 128K tokens, with little effect on quality.-- It's instruction-tuned, meaning that it's trained to follow different types of instructions reflecting how people normally communicate. This ensures the model is ready to use out-of-the-box.-- It's available on Azure AI to take advantage of the deploy > evaluate > fine-tune toolchain, and is available on Ollama for developers to run locally on their laptops.-- It has been optimized for ONNX Runtime with support for Windows DirectML along with cross-platform support across graphics processing unit (GPU), CPU, and even mobile hardware.-- It's also available as an NVIDIA NIM microservice with a standard API interface that can be deployed anywhere. And has been optimized for NVIDIA GPUs. -
-For more information about Phi-3, see the [blog announcement](https://azure.microsoft.com/blog/introducing-phi-3-redefining-whats-possible-with-slms/).
-
-### Meta Llama 3
-
-In collaboration with Meta, Meta Llama 3 models are available in the Azure AI model catalog.
--- Meta-Llama-3-8B pretrained and instruction fine-tuned models are recommended for scenarios with limited computational resources, offering faster training times and suitability for edge devices. It's appropriate for use cases like text summarization, classification, sentiment analysis, and translation.-- Meta-Llama-3-70B pretrained and instruction fine-tuned models are geared towards content creation and conversational AI, providing deeper language understanding for more nuanced tasks, like R&D and enterprise applications requiring nuanced text summarization, classification, language modeling, dialog systems, code generation and instruction following.-
-## February 2024
-
-### Azure AI Studio hub
-
-Azure AI resource is renamed hub. For additional information about the hub, check out [the hub documentation](./concepts/ai-resources.md).
-
-## January 2024
-
-### Benchmarks
-
-New models, datasets, and metrics are released for benchmarks. For additional information about the benchmarks experience, check out [the model catalog documentation](./how-to/model-catalog-overview.md).
-
-Added models:
-- `microsoft-phi-2`-- `mistralai-mistral-7b-instruct-v01`-- `mistralai-mistral-7b-v01`-- `codellama-13b-hf`-- `codellama-13b-instruct-hf`-- `codellama-13b-python-hf`-- `codellama-34b-hf`-- `codellama-34b-instruct-hf`-- `codellama-34b-python-hf`-- `codellama-7b-hf`-- `codellama-7b-instruct-hf`-- `codellama-7b-python-hf`-
-Added datasets:
-- `truthfulqa_generation`-- `truthfulqa_mc1`-
-Added metrics:
-- `Coherence`-- `Fluency`-- `GPTSimilarity`-
-## November 2023
-
-### Benchmarks
-
-Benchmarks are released as public preview in Azure AI Studio. For additional information about the Benchmarks experience, check out [Model benchmarks](how-to/model-benchmarks.md).
-
-Added models:
-- `gpt-35-turbo-0301`-- `gpt-4-0314`-- `gpt-4-32k-0314`-- `llama-2-13b-chat`-- `llama-2-13b`-- `llama-2-70b-chat`-- `llama-2-70b`-- `llama-2-7b-chat`-- `llama-2-7b`-
-Added datasets:
-- `boolq`-- `gsm8k`-- `hellaswag`-- `human_eval`-- `mmlu_humanities`-- `mmlu_other`-- `mmlu_social_sciences`-- `mmlu_stem`-- `openbookqa`-- `piqa`-- `social_iqa`-- `winogrande`-
-Added tasks:
-- `Question Answering`-- `Text Generation`-
-Added metrics:
-- `Accuracy`-
-## Related content
--- Learn more about the [Azure AI Studio](./what-is-ai-studio.md).-- Learn about [what's new in Azure OpenAI Service](../ai-services/openai/whats-new.md).
aks Gpu Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/gpu-cluster.md
The NVIDIA GPU Operator automates the management of all NVIDIA software componen
> [!WARNING] > We don't recommend manually installing the NVIDIA device plugin daemon set with clusters using the AKS GPU image.
+> [!NOTE]
+> There might be additional considerations when you use the NVIDIA GPU Operator and deploy on Spot instances. For more information, see <https://github.com/NVIDIA/gpu-operator/issues/577>.
++ ### Use the AKS GPU image (preview) AKS provides a fully configured AKS image containing the [NVIDIA device plugin for Kubernetes][nvidia-github]. The AKS GPU image is currently only supported for Ubuntu 18.04.
api-center Add Metadata Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/add-metadata-properties.md
Title: Tutorial - Define custom metadata for API governance description: In this tutorial, define custom metadata in your API center. Use custom and built-in metadata to organize and govern your APIs. -+ Last updated 04/19/2024
api-center Check Minimal Api Permissions Dev Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/check-minimal-api-permissions-dev-proxy.md
Title: Check app's API calls for minimal permissions with Dev Proxy description: Learn how to use Dev Proxy to check if your app is calling APIs using minimal permissions defined in Azure API Center. -+ Last updated 07/17/2024
api-center Configure Environments Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/configure-environments-deployments.md
Title: Tutorial - Add environments and deployments for APIs description: In this tutorial, augment the API inventory in your API center by adding information about API environments and deployments. -+ Last updated 04/22/2024
api-center Discover Shadow Apis Dev Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/discover-shadow-apis-dev-proxy.md
Title: Discover shadow APIs using Dev Proxy description: Learn how to discover shadow APIs in your apps using Dev Proxy and onboard them to API Center. -+ Last updated 07/15/2024
api-center Enable Api Analysis Linting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/enable-api-analysis-linting.md
Title: Perform API linting and analysis - Azure API Center description: Configure linting of API definitions in your API center to analyze compliance of APIs with the organization's API style guide.-+ Last updated 06/29/2024
api-center Enable Api Center Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/enable-api-center-portal.md
Title: Self-host the API Center portal description: How to self-host the API Center portal, a customer-managed website that enables discovery of the API inventory in your Azure API center. -+ Last updated 04/29/2024
api-center Find Nonproduction Api Requests Dev Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/find-nonproduction-api-requests-dev-proxy.md
Title: Find nonproduction API requests with Dev Proxy description: Learn how to check if your app is using production-level APIs defined in Azure API Center using Dev Proxy. -+ Last updated 07/17/2024
api-center Import Api Management Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/import-api-management-apis.md
Title: Import APIs from Azure API Management - Azure API Center description: Add APIs to your Azure API center inventory from your API Management instance. -+ Last updated 06/28/2024
api-center Key Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/key-concepts.md
Title: Azure API Center - Key concepts description: Key concepts of Azure API Center. API Center inventories an organization's APIs for discovery, reuse, and governance at scale. -+ Last updated 04/23/2024
api-center Manage Apis Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/manage-apis-azure-cli.md
Title: Manage API inventory in Azure API Center - Azure CLI description: Use the Azure CLI to create and update APIs, API versions, and API definitions in your Azure API center. -+ Last updated 06/28/2024
api-center Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/metadata.md
Title: Use metadata to organize and govern APIs description: Learn about metadata in Azure API Center. Use built in and custom metadata to organize your inventory and enforce governance standards. -+ Last updated 04/19/2024
api-center Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/overview.md
Title: Azure API Center - Overview
description: Introduction to key scenarios and capabilities of Azure API Center. API Center inventories an organization's APIs for discovery, reuse, and governance at scale. -+ Last updated 04/15/2024
api-center Register Apis Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/register-apis-github-actions.md
Title: Register APIs using GitHub Actions - Azure API Center description: Learn how to automate the registration of APIs in your API center using a CI/CD workflow based on GitHub Actions.-+ Last updated 07/24/2024
api-center Register Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/register-apis.md
Title: Tutorial - Start your API inventory description: In this tutorial, start the API inventory in your API center by registering APIs using the Azure portal. -+ Last updated 04/19/2024
api-center Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/resources.md
Title: Azure API Center - Code samples and labs
description: Find code samples, reference implementations, labs, and deployment templates to create, populate, and govern your Azure API center. -+ Last updated 06/11/2024
api-center Set Up Api Center Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/set-up-api-center-arm-template.md
Title: Quickstart - Create your Azure API center - ARM template description: In this quickstart, use an Azure Resource Manager template to set up an API center for API discovery, reuse, and governance. -+ Last updated 05/13/2024
api-center Set Up Api Center Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/set-up-api-center-azure-cli.md
Title: Quickstart - Create your Azure API center - Azure CLI description: In this quickstart, use the Azure CLI to set up an API center for API discovery, reuse, and governance. -+ ms.date: 06/27/2024
api-center Set Up Api Center Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/set-up-api-center-bicep.md
Title: Quickstart - Create your Azure API center - Bicep description: In this quickstart, use Bicep to set up an API center for API discovery, reuse, and governance. -+ Last updated 05/13/2024
api-center Set Up Api Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/set-up-api-center.md
Title: Quickstart - Create your Azure API center - portal description: In this quickstart, use the Azure portal to set up an API center for API discovery, reuse, and governance. -+ Last updated 04/19/2024
api-center Use Vscode Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/use-vscode-extension.md
Title: Interact with API inventory using VS Code extension description: Build, discover, try, and consume APIs from your Azure API center using the Azure API Center extension for Visual Studio Code. -+ Last updated 07/15/2024
api-management Api Management Howto Deploy Multi Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-deploy-multi-region.md
description: Learn how to deploy a Premium tier Azure API Management instance to
Previously updated : 05/15/2024 Last updated : 07/29/2024
This section provides considerations for multi-region deployments when the API M
### IP addresses
-* A public virtual IP address is created in every region added with a virtual network. For virtual networks in either [external mode](api-management-using-with-vnet.md) or [internal mode](api-management-using-with-internal-vnet.md), this public IP address is required for management traffic on port `3443`.
+* A public virtual IP address is created in every region added with a virtual network. For virtual networks in either [external mode](api-management-using-with-vnet.md) or [internal mode](api-management-using-with-internal-vnet.md), this public IP address is used for management traffic on port `3443`.
* **External VNet mode** - The public IP addresses are also required to route public HTTP traffic to the API gateways.
This section provides considerations for multi-region deployments when the API M
* **External VNet mode** - Routing of public HTTP traffic to the regional gateways is handled automatically, in the same way it is for a non-networked API Management instance.
-* **Internal VNet mode** - Private HTTP traffic isn't routed or load-balanced to the regional gateways by default. Users own the routing and are responsible for bringing their own solution to manage routing and private load balancing across multiple regions. Example solutions include Azure Application Gateway and Azure Traffic Manager.
+* **Internal VNet mode** - Private HTTP traffic isn't routed or load-balanced to the regional gateways by default. Users own the routing and are responsible for bringing their own solution to manage routing and private load balancing across multiple regions.
## Next steps
app-service Language Support Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/language-support-policy.md
To learn more about specific timelines for the language support policy, see the
- [PHP](https://aka.ms/phprelease) - [Go](https://aka.ms/gorelease)
+## Support status
+
+App Service supports languages on both Linux and Windows operating systems. See the following resources for the list of OS support for each language:
+
+- [.NET](https://github.com/Azure/app-service-linux-docs/blob/master/Runtime_Support/dot_net_core.md#support-timeline)
+- [Java](#jdk-versions-and-maintenance)
+- [Node](https://github.com/Azure/app-service-linux-docs/blob/master/Runtime_Support/node_support.md#support-timeline)
+- [Python](https://github.com/Azure/app-service-linux-docs/blob/master/Runtime_Support/python_support.md#support-timeline)
+- [PHP](https://github.com/Azure/app-service-linux-docs/blob/master/Runtime_Support/php_support.md#support-timeline)
++ ## Configure language versions To learn more about how to update language versions for your App Service applications, see the following resources: - [.NET](https://github.com/Azure/app-service-linux-docs/blob/master/Runtime_Support/dot_net_core.md#how-to-update-your-app-to-target-a-different-version-of-net-or-net-core)-- [Node](https://github.com/Azure/app-service-linux-docs/blob/master/Runtime_Support/node_support.md#node-on-linux-app-service) - [Java](https://github.com/Azure/app-service-linux-docs/blob/master/Runtime_Support/java_support.md#java-on-app-service)
+- [Node](https://github.com/Azure/app-service-linux-docs/blob/master/Runtime_Support/node_support.md#node-on-linux-app-service)
- [Python](https://github.com/Azure/app-service-linux-docs/blob/master/Runtime_Support/python_support.md#how-to-update-your-app-to-target-a-different-version-of-python) - [PHP](https://github.com/Azure/app-service-linux-docs/blob/master/Runtime_Support/php_support.md#how-to-update-your-app-to-target-a-different-version-of-php)
app-service Scenario Secure App Access Microsoft Graph As App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scenario-secure-app-access-microsoft-graph-as-app.md
-+ Last updated 04/05/2023
app-service Scenario Secure App Access Microsoft Graph As User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scenario-secure-app-access-microsoft-graph-as-user.md
-+ Last updated 09/15/2023
app-service Scenario Secure App Access Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scenario-secure-app-access-storage.md
description: In this tutorial, you learn how to access Azure Storage for a .NET
-+ Last updated 07/31/2023
app-service Scenario Secure App Authentication App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scenario-secure-app-authentication-app-service.md
-+ Last updated 05/16/2024
app-service Scenario Secure App Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scenario-secure-app-overview.md
-+ Last updated 12/10/2021
app-service Tutorial Connect App Access Microsoft Graph As App Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-app-access-microsoft-graph-as-app-javascript.md
-+ Last updated 03/14/2023
app-service Tutorial Connect App Access Microsoft Graph As User Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-app-access-microsoft-graph-as-user-javascript.md
-+ Last updated 03/08/2022
app-service Tutorial Connect App Access Storage Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-app-access-storage-javascript.md
description: In this tutorial, you learn how to access Azure Storage for a JavaS
-+ Last updated 07/31/2023
automation Automation Tutorial Runbook Textual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/learn/automation-tutorial-runbook-textual.md
Start by creating a simple [PowerShell Workflow runbook](../automation-runbook-t
You can either type code directly into the runbook, or you can select cmdlets, runbooks, and assets from the Library control and add them to the runbook with any related parameters. For this tutorial, you type code directly into the runbook.
-Your runbook is currently empty with only the required `Workflow` keyword, the name of the runbook, and the braces that encase the entire workflow.
+Your runbook is currently empty with only the required `workflow` keyword, the name of the runbook, and the braces that encase the entire workflow.
```powershell
-Workflow MyFirstRunbook-Workflow
+workflow MyFirstRunbook-Workflow
{ } ```
Workflow MyFirstRunbook-Workflow
1. You can use the `Parallel` keyword to create a script block with multiple commands that will run concurrently. Enter the following code *between* the braces: ```powershell
- Parallel {
- Write-Output "Parallel"
- Get-Date
- Start-Sleep -s 3
- Get-Date
- }
-
- Write-Output " `r`n"
- Write-Output "Non-Parallel"
- Get-Date
- Start-Sleep -s 3
- Get-Date
+ parallel
+ {
+ Write-Output "Parallel"
+ Get-Date
+ Start-Sleep -Seconds 3
+ Get-Date
+ }
+
+ Write-Output " `r`n"
+ Write-Output "Non-Parallel"
+ Get-Date
+ Start-Sleep -Seconds 3
+ Get-Date
``` 1. Save the runbook by selecting **Save**.
You've tested and published your runbook, but so far it doesn't do anything usef
```powershell workflow MyFirstRunbook-Workflow {
- $resourceGroup = "resourceGroupName"
-
- # Ensures you do not inherit an AzContext in your runbook
- Disable-AzContextAutosave -Scope Process
-
- # Connect to Azure with system-assigned managed identity
- Connect-AzAccount -Identity
-
- # set and store context
- $AzureContext = Set-AzContext –SubscriptionId "<SubscriptionID>"
+ $resourceGroup = "resourceGroupName"
+
+ # Ensures you do not inherit an AzContext in your runbook
+ Disable-AzContextAutosave -Scope Process
+
+ # Connect to Azure with system-assigned managed identity
+ Connect-AzAccount -Identity
+
+ # set and store context
+ $AzureContext = Set-AzContext -SubscriptionId "<SubscriptionID>"
} ```
You can use the `ForEach -Parallel` construct to process commands for each item
```powershell workflow MyFirstRunbook-Workflow {
- Param(
- [string]$resourceGroup,
- [string[]]$VMs,
- [string]$action
- )
-
- # Ensures you do not inherit an AzContext in your runbook
- Disable-AzContextAutosave -Scope Process
-
- # Connect to Azure with system-assigned managed identity
- Connect-AzAccount -Identity
-
- # set and store context
- $AzureContext = Set-AzContext –SubscriptionId "<SubscriptionID>"
-
- # Start or stop VMs in parallel
- if ($action -eq "Start") {
- ForEach -Parallel ($vm in $VMs)
- {
- Start-AzVM -Name $vm -ResourceGroupName $resourceGroup -DefaultProfile $AzureContext
- }
- }
- elseif ($action -eq "Stop") {
- ForEach -Parallel ($vm in $VMs)
- {
- Stop-AzVM -Name $vm -ResourceGroupName $resourceGroup -DefaultProfile $AzureContext -Force
- }
- }
- else {
- Write-Output "`r`n Action not allowed. Please enter 'stop' or 'start'."
- }
- }
+ param
+ (
+ [string]$resourceGroup,
+ [string[]]$VMs,
+ [string]$action
+ )
+
+ # Ensures you do not inherit an AzContext in your runbook
+ Disable-AzContextAutosave -Scope Process
+
+ # Connect to Azure with system-assigned managed identity
+ Connect-AzAccount -Identity
+
+ # set and store context
+ $AzureContext = Set-AzContext -SubscriptionId "<SubscriptionID>"
+
+ # Start or stop VMs in parallel
+ if ($action -eq "Start")
+ {
+ ForEach -Parallel ($vm in $VMs)
+ {
+ Start-AzVM -Name $vm -ResourceGroupName $resourceGroup -DefaultProfile $AzureContext
+ }
+ }
+ elseif ($action -eq "Stop")
+ {
+ ForEach -Parallel ($vm in $VMs)
+ {
+ Stop-AzVM -Name $vm -ResourceGroupName $resourceGroup -DefaultProfile $AzureContext -Force
+ }
+ }
+ else
+ {
+ Write-Output "`r`n Action not allowed. Please enter 'stop' or 'start'."
+ }
+ }
``` 1. If you want the runbook to execute with the system-assigned managed identity, leave the code as-is. If you prefer to use a user-assigned managed identity, then:
automation Update Agent Issues Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/update-agent-issues-linux.md
Last updated 11/01/2021
+ # Troubleshoot Linux update agent issues
automation Update Agent Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/update-agent-issues.md
Last updated 01/25/2020 + # Troubleshoot Windows update agent issues
automation Update Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/update-management.md
Last updated 06/29/2024 + # Troubleshoot Update Management issues
automation Configure Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/configure-alerts.md
Last updated 07/15/2024 + # How to create alerts for Update Management
automation Configure Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/configure-groups.md
Last updated 07/15/2024 + # Use dynamic groups with Update Management
automation Configure Wuagent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/configure-wuagent.md
Last updated 07/15/2024 + # Configure Windows Update settings for Azure Automation Update Management
automation Deploy Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/deploy-updates.md
Last updated 07/15/2024 + # How to deploy updates and review results
automation Enable From Automation Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/enable-from-automation-account.md
Last updated 07/15/2024 + # Enable Update Management from an Automation account
automation Enable From Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/enable-from-portal.md
Last updated 07/15/2024 + # Enable Update Management from the Azure portal
automation Enable From Runbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/enable-from-runbook.md
Last updated 07/15/2024 + # Enable Update Management from a runbook
automation Enable From Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/enable-from-template.md
Last updated 07/15/2024+ # Enable Update Management using Azure Resource Manager template
automation Enable From Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/enable-from-vm.md
Last updated 07/15/2024 + # Enable Update Management for an Azure VM
automation Manage Updates For Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/manage-updates-for-vm.md
Last updated 07/15/2024+ # Manage updates and patches for your VMs
automation Mecmintegration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/mecmintegration.md
Last updated 07/15/2024 + # Integrate Update Management with Microsoft Configuration Manager
automation Operating System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/operating-system-requirements.md
Last updated 07/15/2024 + # Operating systems supported by Update Management
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/overview.md
Last updated 07/15/2024 + # Update Management overview
automation Plan Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/plan-deployment.md
Last updated 07/15/2024 + # Plan your Update Management deployment
automation Pre Post Scripts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/pre-post-scripts.md
Last updated 07/15/2024 + # Manage pre-scripts and post-scripts
automation Query Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/query-logs.md
Last updated 07/15/2024 + # Query Update Management logs
automation View Update Assessments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/view-update-assessments.md
Last updated 07/15/2024 + # View update assessments in Update Management
avere-vfxt Avere Vfxt Add Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/avere-vfxt-add-storage.md
Title: Configure Avere vFXT storage - Azure description: Learn how to add a back-end storage system for a cluster in Avere vFXT for Azure. If you created an Azure Blob container with the cluster, it is ready to use. -+ Last updated 01/13/2020
avere-vfxt Avere Vfxt Additional Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/avere-vfxt-additional-resources.md
Title: Additional links about Avere vFXT for Azure description: Use these resources for additional information about Avere vFXT for Azure, including Avere cluster documentation and vFXT management documentation. -+ Last updated 01/13/2020
avere-vfxt Avere Vfxt Cluster Gui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/avere-vfxt-cluster-gui.md
Title: Access the Avere vFXT control panel - Azure description: How to connect to the vFXT cluster and the browser-based Avere Control Panel to configure the Avere vFXT -+ Last updated 12/14/2019
avere-vfxt Avere Vfxt Configure Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/avere-vfxt-configure-dns.md
Title: Avere vFXT DNS - Azure description: Configuring a DNS server for round-robin load balancing with Avere vFXT for Azure -+ Last updated 10/07/2021
avere-vfxt Avere Vfxt Data Ingest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/avere-vfxt-data-ingest.md
Title: Moving data to Avere vFXT for Azure description: How to add data to a new storage volume for use with the Avere vFXT for Azure -+ Last updated 12/16/2019
avere-vfxt Avere Vfxt Demo Links https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/avere-vfxt-demo-links.md
Title: Avere vFXT for Azure demo projects description: "These samples show key features and use cases for Avere vFXT for Azure: video rendering, high-performance computing, vFXT performance, and client setup." -+ Last updated 12/19/2019
avere-vfxt Avere Vfxt Deploy Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/avere-vfxt-deploy-overview.md
Title: Deployment overview - Avere vFXT for Azure description: Learn how to deploy an Avere vFXT for Azure cluster with this overview. Related articles have specific deployment instructions. -+ Last updated 01/13/2020
avere-vfxt Avere Vfxt Deploy Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/avere-vfxt-deploy-plan.md
Title: Plan your Avere vFXT system - Azure description: Plan an Avere vFXT for Azure cluster that is right for your needs. Learn questions to ask before going to the Azure Marketplace or creating virtual machines. -+ Last updated 01/21/2020
avere-vfxt Avere Vfxt Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/avere-vfxt-deploy.md
Title: Deploy Avere vFXT for Azure description: Learn how to use the deployment wizard available from the Azure Marketplace to deploy a cluster with Avere vFXT for Azure. -+ Last updated 01/13/2020
avere-vfxt Avere Vfxt Enable Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/avere-vfxt-enable-support.md
Title: Enable support for Avere vFXT - Azure description: Learn how to enable automatic upload of support data about your cluster from Avere vFXT for Azure to help Support provide customer service. -+ Last updated 12/14/2019
avere-vfxt Avere Vfxt Manage Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/avere-vfxt-manage-cluster.md
Title: Manage the Avere vFXT cluster - Azure description: How to manage Avere cluster - add or remove nodes, reboot, stop, or destroy the vFXT cluster -+ Last updated 01/13/2020
avere-vfxt Avere Vfxt Mount Clients https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/avere-vfxt-mount-clients.md
Title: Mount the Avere vFXT - Azure description: Learn how to connect clients to your vFXT cluster in Avere vFXT for Azure and how to load-balance client traffic among your cluster nodes. -+ Last updated 12/16/2019
avere-vfxt Avere Vfxt Non Owner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/avere-vfxt-non-owner.md
Title: Avere vFXT non-owner workaround - Azure description: Workaround to allow users without subscription owner permission to deploy Avere vFXT for Azure -+ Last updated 12/19/2019
avere-vfxt Avere Vfxt Open Ticket https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/avere-vfxt-open-ticket.md
Title: How to get support for Avere vFXT for Azure description: Learn how to address issues that may arise while deploying or using Avere vFXT for Azure by creating a support ticket through the Azure portal. -+ Last updated 01/13/2020
avere-vfxt Avere Vfxt Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/avere-vfxt-overview.md
Title: Avere vFXT for Azure description: Learn about Avere vFXT for Azure, a cloud-based filesystem caching solution for data-intensive high-performance computing tasks. -+ Last updated 03/15/2024
avere-vfxt Avere Vfxt Prereqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/avere-vfxt-prereqs.md
Title: Avere vFXT prerequisites - Azure description: Learn about tasks to perform before you create a cluster in Avere vFXT for Azure, including dealing with subscriptions, quotas, and storage service endpoints. -+ Last updated 01/21/2020
avere-vfxt Avere Vfxt Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/avere-vfxt-tuning.md
Title: Avere vFXT cluster tuning - Azure description: Learn about some of the custom tuning for vFXT clusters in Avere vFXT for Azure that you can do, working with a support representative. -+ Last updated 12/19/2019
avere-vfxt Avere Vfxt Whitepapers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/avere-vfxt-whitepapers.md
Title: Whitepapers and case studies - Avere vFXT for Azure description: Links to downloadable whitepapers, case studies, and other articles that illustrate Avere vFXT for Azure and how it can be used. -+
avere-vfxt Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/disaster-recovery.md
Title: Disaster recovery guidance for Avere vFXT for Azure description: How to protect data in Avere vFXT for Azure from accidental deletion or outages -+ Last updated 12/10/2019
azure-arc Billing Extended Security Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/billing-extended-security-updates.md
Licenses that are provisioned after the End of Support (EOS) date of October 10,
If you deactivate and then later reactivate a license, you're billed for the window during which the license was deactivated. It isn't possible to evade charges by deactivating a license before a critical security patch and reactivating it shortly before.
-If the region or the tenant of an ESU license is changed, this will be subject to back-billing charges.
+If the region or the tenant of an ESU license is changed, this is subject to back-billing charges.
> [!NOTE] > The back-billing cost appears as a separate line item in invoicing. If you acquired a discount for your core WS2012 ESUs enabled by Azure Arc, the same discount may or may not apply to back-billing. You should verify that the same discounting, if applicable, has been applied to back-billing charges as well. >
-Please note that estimates in the Azure Cost Management forecast may not accurately project monthly costs. Due to the episodic nature of back-billing charges, the projection of monthly costs may appear as overestimated during initial months.
+Note that estimates in the Azure Cost Management forecast may not accurately project monthly costs. Due to the episodic nature of back-billing charges, the projection of monthly costs may appear as overestimated during initial months.
## Billing associated with modifications to an Azure Arc ESU license
Please note that estimates in the Azure Cost Management forecast may not accurat
> If you previously provisioned a Datacenter Virtual Core license, it will be charged with and offer the virtualization benefits associated with the pricing of a Datacenter edition license. > -- **Core modification:** If cores are added to an existing ESU license, they're subject to back-billing (that is, charges for the time elapsed since EOS) and regularly billed from the calendar month in which they were added. If cores are reduced or decremented to an existing ESU license, the billing rate will reflect the reduced number of cores within 5 business days of the change.
+- **Core modification:** If cores are added to an existing ESU license, they're subject to back-billing (that is, charges for the time elapsed since EOS) and regularly billed from the calendar month in which they were added. If cores are reduced or decremented on an existing ESU license, the billing rate reflects the reduced number of cores within 5 days of the change.
- **Activation:** Licenses are billed for their number and edition of cores from the point at which they're activated. The activated license doesn't need to be linked to any Azure Arc-enabled servers to initiate billing. Activation and reactivation are subject to back-billing. Note that licenses that were activated but not linked to any servers may be back-billed if they weren't billed upon creation. Customers are responsible for deletion of any activated but unlinked ESU licenses.
Please note that estimates in the Azure Cost Management forecast may not accurat
## Services included with WS2012 ESUs enabled by Azure Arc
-Purchase of Windows Server 2012/R2 ESUs enabled by Azure Arc provides you with the benefit of access to additional Azure management services at no additional cost for enrolled servers. See [Access to Azure services](prepare-extended-security-updates.md#access-to-azure-services) to learn more.
+Purchase of Windows Server 2012/R2 ESUs enabled by Azure Arc provides you with the benefit of access to more Azure management services at no additional cost for enrolled servers. See [Access to Azure services](prepare-extended-security-updates.md#access-to-azure-services) to learn more.
Azure Arc-enabled servers allow you the flexibility to evaluate and operationalize Azure's robust security, monitoring, and governance capabilities for your non-Azure infrastructure, delivering key value beyond the observability, ease of enrollment, and financial flexibility of WS2012 ESUs enabled by Azure Arc. ## Additional notes -- You'll be billed if you connect an activated Azure Arc ESU license to environments like Azure Stack HCI or Azure VMware Solution. These environments are eligible for free Windows Server 2012 ESUs enabled by Azure Arc and should not be activated through Azure Arc.
+- You'll be billed if you connect an activated Azure Arc ESU license to environments like Azure Stack HCI or Azure VMware Solution. These environments are eligible for free Windows Server 2012 ESUs enabled by Azure Arc and shouldn't be activated through Azure Arc.
+- You'll be billed for all of the cores provisioned in the license. If you provision licenses for free ESU usage, like Visual Studio development environments, you shouldn't provision additional cores for the scope of licensing applied to non-paid ESU coverage.
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/overview.md
Azure Arc-enabled SCVMM doesn't store/process customer data outside the region t
## Next steps
-[Create an Azure Arc VM](create-virtual-machine.md).
+
+- Plan your Arc-enabled SCVMM deployment by reviewing the [support matrix](support-matrix-for-system-center-virtual-machine-manager.md).
+- Once ready, [connect your SCVMM management server to Azure Arc using the onboarding script](quickstart-connect-system-center-virtual-machine-manager-to-arc.md).
azure-functions Opentelemetry Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/opentelemetry-howto.md
Java worker optimizations aren't yet available for OpenTelemetry, so there's not
npm install @opentelemetry/api npm install @opentelemetry/auto-instrumentations-node npm install @azure/monitor-opentelemetry-exporter
+ npm install @azure/functions-opentelemetry-instrumentation
``` ### [OTLP Exporter](#tab/otlp-export)
Java worker optimizations aren't yet available for OpenTelemetry, so there's not
npm install @opentelemetry/api npm install @opentelemetry/auto-instrumentations-node npm install @opentelemetry/exporter-logs-otlp-http
+ npm install @azure/functions-opentelemetry-instrumentation
```
azure-government Documentation Government Csp List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-csp-list.md
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[Perizer Corp.](https://perizer.com)| |[Perrygo Consulting Group, LLC](https://perrygo.com)| |Phacil (By Light) |
-|[Pharicode LLC](https://pharicode.com)|
+|[Pharicode LLC](https://glidefast.com/)|
|Philistin & Heller Group, Inc.| |[Picis Envision](https://www.picis.com/en/)| |[Pinao Consulting LLC](https://www.pcg-msp.com)|
azure-monitor Diagnostics Extension Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/diagnostics-extension-overview.md
Azure Diagnostics extension is an [agent in Azure Monitor](../agents/agents-over
Use Azure Diagnostics extension if you need to: -- Send data to Azure Storage for archiving or to analyze it with tools such as [Azure Storage Explorer](../../vs-azure-tools-storage-manage-with-storage-explorer.md).-- Send data to [Azure Monitor Metrics](../essentials/data-platform-metrics.md) to analyze it with [metrics explorer](../essentials/metrics-getting-started.md) and to take advantage of features such as near-real-time [metric alerts](../alerts/alerts-metric-overview.md) and [autoscale](../autoscale/autoscale-overview.md) (Windows only).-- Send data to third-party tools by using [Azure Event Hubs](./diagnostics-extension-stream-event-hubs.md).-- Collect [boot diagnostics](/troubleshoot/azure/virtual-machines/boot-diagnostics) to investigate VM boot issues.
+* Send data to Azure Storage for archiving or to analyze it with tools such as [Azure Storage Explorer](../../vs-azure-tools-storage-manage-with-storage-explorer.md).
+* Send data to [Azure Monitor Metrics](../essentials/data-platform-metrics.md) to analyze it with [metrics explorer](../essentials/metrics-getting-started.md) and to take advantage of features such as near-real-time [metric alerts](../alerts/alerts-metric-overview.md) and [autoscale](../autoscale/autoscale-overview.md) (Windows only).
+* Send data to third-party tools by using [Azure Event Hubs](./diagnostics-extension-stream-event-hubs.md).
+* Collect [boot diagnostics](/troubleshoot/azure/virtual-machines/boot-diagnostics) to investigate VM boot issues.
Limitations of Azure Diagnostics extension: -- It can only be used with Azure resources.-- It has limited ability to send data to Azure Monitor Logs.
+* It can only be used with Azure resources.
+* It has limited ability to send data to Azure Monitor Logs.
## Comparison to Log Analytics agent
The Log Analytics agent in Azure Monitor can also be used to collect monitoring
The key differences to consider are: -- Azure Diagnostics Extension can be used only with Azure virtual machines. The Log Analytics agent can be used with virtual machines in Azure, other clouds, and on-premises.-- Azure Diagnostics extension sends data to Azure Storage, [Azure Monitor Metrics](../essentials/data-platform-metrics.md) (Windows only) and Azure Event Hubs. The Log Analytics agent collects data to [Azure Monitor Logs](../logs/data-platform-logs.md).-- The Log Analytics agent is required for retired [solutions](/previous-versions/azure/azure-monitor/insights/solutions), [VM insights](../vm/vminsights-overview.md), and other services such as [Microsoft Defender for Cloud](../../security-center/index.yml).
+* Azure Diagnostics Extension can be used only with Azure virtual machines. The Log Analytics agent can be used with virtual machines in Azure, other clouds, and on-premises.
+* Azure Diagnostics extension sends data to Azure Storage, [Azure Monitor Metrics](../essentials/data-platform-metrics.md) (Windows only) and Azure Event Hubs. The Log Analytics agent collects data to [Azure Monitor Logs](../logs/data-platform-logs.md).
+* The Log Analytics agent is required for retired [solutions](/previous-versions/azure/azure-monitor/insights/solutions), [VM insights](../vm/vminsights-overview.md), and other services such as [Microsoft Defender for Cloud](../../security-center/index.yml).
## Costs
The following tables list the data that can be collected by the Windows and Linu
### Windows diagnostics extension (WAD)
-| Data source | Description |
-| | |
-| Windows event logs | Events from Windows event log. |
-| Performance counters | Numerical values measuring performance of different aspects of operating system and workloads. |
-| IIS logs | Usage information for IIS websites running on the guest operating system. |
-| Application logs | Trace messages written by your application. |
-| .NET EventSource logs |Code writing events using the .NET [EventSource](/dotnet/api/system.diagnostics.tracing.eventsource) class. |
-| [Manifest-based ETW logs](/windows/desktop/etw/about-event-tracing) |Event tracing for Windows events generated by any process. |
-| Crash dumps (logs) | Information about the state of the process if an application crashes. |
-| File-based logs | Logs created by your application or service. |
-| Agent diagnostic logs | Information about Azure Diagnostics itself. |
+| Data source | Description |
+||-|
+| Windows event logs | Events from Windows event log. |
+| Performance counters | Numerical values measuring performance of different aspects of operating system and workloads. |
+| IIS logs | Usage information for IIS websites running on the guest operating system. |
+| Application logs | Trace messages written by your application. |
+| .NET EventSource logs | Code writing events using the .NET [EventSource](/dotnet/api/system.diagnostics.tracing.eventsource) class. |
+| [Manifest-based ETW logs](/windows/desktop/etw/about-event-tracing) | Event tracing for Windows events generated by any process. |
+| Crash dumps (logs) | Information about the state of the process if an application crashes. |
+| File-based logs | Logs created by your application or service. |
+| Agent diagnostic logs | Information about Azure Diagnostics itself. |
### Linux diagnostics extension (LAD)
-| Data source | Description |
-| | |
-| Syslog | Events sent to the Linux event logging system |
-| Performance counters | Numerical values measuring performance of different aspects of operating system and workloads |
-| Log files | Entries sent to a file-based log |
+| Data source | Description |
+|-|--|
+| Syslog | Events sent to the Linux event logging system |
+| Performance counters | Numerical values measuring performance of different aspects of operating system and workloads |
+| Log files | Entries sent to a file-based log |
## Data destinations
Configure one or more *data sinks* to send data to other destinations. The follo
### Windows diagnostics extension (WAD)
-| Destination | Description |
-|:|:|
-| Azure Monitor Metrics | Collect performance data to Azure Monitor Metrics. See [Send Guest OS metrics to the Azure Monitor metric database](../essentials/collect-custom-metrics-guestos-resource-manager-vm.md). |
-| Event hubs | Use Azure Event Hubs to send data outside of Azure. See [Streaming Azure Diagnostics data to Azure Event Hubs](diagnostics-extension-stream-event-hubs.md). |
-| Azure Storage blobs | Write data to blobs in Azure Storage in addition to tables. |
-| Application Insights | Collect data from applications running in your VM to Application Insights to integrate with other application monitoring. See [Send diagnostic data to Application Insights](diagnostics-extension-to-application-insights.md). |
+| Destination | Description |
+|:-|:--|
+| Azure Monitor Metrics | Collect performance data to Azure Monitor Metrics. See [Send Guest OS metrics to the Azure Monitor metric database](../essentials/collect-custom-metrics-guestos-resource-manager-vm.md). |
+| Event hubs | Use Azure Event Hubs to send data outside of Azure. See [Streaming Azure Diagnostics data to Azure Event Hubs](diagnostics-extension-stream-event-hubs.md). |
+| Azure Storage blobs | Write data to blobs in Azure Storage in addition to tables. |
+| Application Insights | Collect data from applications running in your VM to Application Insights to integrate with other application monitoring. See [Send diagnostic data to Application Insights](diagnostics-extension-to-application-insights.md). |
-You can also collect WAD data from storage into a Log Analytics workspace to analyze it with Azure Monitor Logs, although the Log Analytics agent is typically used for this functionality. It can send data directly to a Log Analytics workspace and supports solutions and insights that provide more functionality. See [Collect Azure diagnostic logs from Azure Storage](../agents/diagnostics-extension-logs.md).
+You can also collect WAD data from storage into a Log Analytics workspace to analyze it with Azure Monitor Logs, although the Log Analytics agent is typically used for this functionality. It can send data directly to a Log Analytics workspace and supports solutions and insights that provide more functionality. See [Collect Azure diagnostic logs from Azure Storage](diagnostics-extension-logs.md).
### Linux diagnostics extension (LAD) LAD writes data to tables in Azure Storage. It supports the sinks in the following table.
-| Destination | Description |
-|:|:|
-| Event hubs | Use Azure Event Hubs to send data outside of Azure. |
-| Azure Storage blobs | Write data to blobs in Azure Storage in addition to tables. |
-| Azure Monitor Metrics | Install the Telegraf agent in addition to LAD. See [Collect custom metrics for a Linux VM with the InfluxData Telegraf agent](../essentials/collect-custom-metrics-linux-telegraf.md).
+| Destination | Description |
+|:-|:|
+| Event hubs | Use Azure Event Hubs to send data outside of Azure. |
+| Azure Storage blobs | Write data to blobs in Azure Storage in addition to tables. |
+| Azure Monitor Metrics | Install the Telegraf agent in addition to LAD. See [Collect custom metrics for a Linux VM with the InfluxData Telegraf agent](../essentials/collect-custom-metrics-linux-telegraf.md). |
## Installation and configuration
You can also install and configure both the Windows and Linux diagnostics extens
See the following articles for information on installing and configuring the diagnostics extension for Windows and Linux: -- [Install and configure Azure Diagnostics extension for Windows](diagnostics-extension-windows-install.md)-- [Use Linux diagnostics extension to monitor metrics and logs](../../virtual-machines/extensions/diagnostics-linux.md)
+* [Install and configure Azure Diagnostics extension for Windows](diagnostics-extension-windows-install.md)
+* [Use Linux diagnostics extension to monitor metrics and logs](../../virtual-machines/extensions/diagnostics-linux.md)
+
+## Supported operating systems
+
+The following tables list the operating systems that are supported by WAD and LAD. See the documentation for each agent for unique considerations and for the installation process. See the Telegraf documentation for its supported operating systems. All operating systems are assumed to be x64. x86 is not supported for any operating system.
+
+### Windows
+
+| Operating system | Support |
+|:|:-:|
+| Windows Server 2022 | ❌ |
+| Windows Server 2022 Core | ❌ |
+| Windows Server 2019 | ✅ |
+| Windows Server 2019 Core | ❌ |
+| Windows Server 2016 | ✅ |
+| Windows Server 2016 Core | ✅ |
+| Windows Server 2012 R2 | ✅ |
+| Windows Server 2012 | ✅ |
+| Windows 11 Client & Pro | ❌ |
+| Windows 11 Enterprise (including multi-session) | ❌ |
+| Windows 10 1803 (RS4) and higher | ❌ |
+| Windows 10 Enterprise (including multi-session) and Pro (Server scenarios only) | ✅ |
+
+### Linux
+
+| Operating system | Support |
+|:-|:-:|
+| CentOS Linux 9 | ❌ |
+| CentOS Linux 8 | ❌ |
+| CentOS Linux 7 | ✅ |
+| Debian 12 | ❌ |
+| Debian 11 | ❌ |
+| Debian 10 | ❌ |
+| Debian 9 | ✅ |
+| Debian 8 | ❌ |
+| Oracle Linux 9 | ❌ |
+| Oracle Linux 8 | ❌ |
+| Oracle Linux 7 | ✅ |
+| Oracle Linux 6.4+ | ✅ |
+| Red Hat Enterprise Linux Server 9 | ❌ |
+| Red Hat Enterprise Linux Server 8\* | ✅ |
+| Red Hat Enterprise Linux Server 7 | ✅ |
+| SUSE Linux Enterprise Server 15 | ❌ |
+| SUSE Linux Enterprise Server 12 | ✅ |
+| Ubuntu 22.04 LTS | ❌ |
+| Ubuntu 20.04 LTS | ✅ |
+| Ubuntu 18.04 LTS | ✅ |
+| Ubuntu 16.04 LTS | ✅ |
+| Ubuntu 14.04 LTS | ✅ |
+
+\* Requires Python 2 to be installed on the machine and aliased to the python command.
## Other documentation
See the following articles for more information.
### Azure Cloud Services (classic) web and worker roles -- [Introduction to Azure Cloud Services monitoring](../../cloud-services/cloud-services-how-to-monitor.md)-- [Enabling Azure Diagnostics in Azure Cloud Services](../../cloud-services/cloud-services-dotnet-diagnostics.md)-- [Application Insights for Azure Cloud Services](../app/azure-web-apps-net-core.md)<br>-- [Trace the flow of an Azure Cloud Services application with Azure Diagnostics](../../cloud-services/cloud-services-dotnet-diagnostics-trace-flow.md)
+* [Introduction to Azure Cloud Services monitoring](../../cloud-services/cloud-services-how-to-monitor.md)
+* [Enabling Azure Diagnostics in Azure Cloud Services](../../cloud-services/cloud-services-dotnet-diagnostics.md)
+* [Application Insights for Azure Cloud Services](../app/azure-web-apps-net-core.md)
+* [Trace the flow of an Azure Cloud Services application with Azure Diagnostics](../../cloud-services/cloud-services-dotnet-diagnostics-trace-flow.md)
### Azure Service Fabric
azure-monitor Monitor Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/monitor-kubernetes.md
Previously updated : 09/14/2023 Last updated : 07/30/2024 # Monitor Kubernetes clusters using Azure services and cloud native tools
The *platform engineer*, also known as the cluster administrator, is responsible
:::image type="content" source="media/monitor-kubernetes/layers-platform-engineer.png" alt-text="Diagram of layers of Kubernetes environment for platform engineer." lightbox="media/monitor-kubernetes/layers-platform-engineer.png" border="false":::
-Large organizations may also have a *fleet architect*, which is similar to the platform engineer but is responsible for multiple clusters. They need visibility across the entire environment and must perform administrative tasks at scale. At scale recommendations are included in the guidance below. See [What is Azure Kubernetes Fleet Manager (preview)?](../../kubernetes-fleet/overview.md) for details on creating a Fleet resource for multi-cluster and at-scale scenarios.
+Large organizations may also have a *fleet architect*, which is similar to the platform engineer but is responsible for multiple clusters. They need visibility across the entire environment and must perform administrative tasks at scale. At-scale recommendations are included in the guidance below. See [What is Azure Kubernetes Fleet Manager?](../../kubernetes-fleet/overview.md) for details on creating a Fleet resource for multi-cluster and at-scale scenarios.
### Azure services for platform engineer
The following table lists the Azure services for the platform engineer to monito
| Service | Description | |:|:|
-| [Container Insights](container-insights-overview.md) | Azure service for AKS and Azure Arc-enabled Kubernetes clusters that use a containerized version of the [Azure Monitor agent](../agents/agents-overview.md) to collect stdout/stderr logs, performance metrics, and Kubernetes events from each node in your cluster. It also collects metrics from the Kubernetes control plane and stores them in the workspace. You can view the data in the Azure portal or query it using [Log Analytics](../logs/log-analytics-overview.md). |
+| [Container Insights](container-insights-overview.md) | Azure service for AKS and Azure Arc-enabled Kubernetes clusters that use a containerized version of the [Azure Monitor agent](../agents/agents-overview.md) to collect stdout/stderr logs, performance metrics, and Kubernetes events from each node in your cluster. You can view the data in the Azure portal or query it using [Log Analytics](../logs/log-analytics-overview.md). Configure the [Prometheus experience](./container-insights-experience-v2.md) to use Container insights views with Prometheus data. |
| [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md) | [Prometheus](https://prometheus.io) is a cloud-native metrics solution from the Cloud Native Compute Foundation and the most common tool used for collecting and analyzing metric data from Kubernetes clusters. Azure Monitor managed service for Prometheus is a fully managed solution that's compatible with the Prometheus query language (PromQL) and Prometheus alerts and integrates with Azure Managed Grafana for visualization. This service supports your investment in open source tools without the complexity of managing your own Prometheus environment. | | [Azure Arc-enabled Kubernetes](container-insights-enable-arc-enabled-clusters.md) | Allows you to attach to Kubernetes clusters running in other clouds so that you can manage and configure them in Azure. With the Arc agent installed, you can monitor AKS and hybrid clusters together using the same methods and tools, including Container insights and Prometheus. |
-| [Azure Managed Grafana](../../managed-grafan) | Fully managed implementation of [Grafana](https://grafana.com/), which is an open-source data visualization platform commonly used to present Prometheus and other data. Multiple predefined Grafana dashboards are available for monitoring Kubernetes and full-stack troubleshooting. |
+| [Azure Managed Grafana](../../managed-grafan) | Fully managed implementation of [Grafana](https://grafana.com/), which is an open-source data visualization platform commonly used to present Prometheus and other data. Multiple predefined Grafana dashboards are available for monitoring Kubernetes and full-stack troubleshooting. You may choose to use Grafana for performance monitoring of your cluster, or you can use Container insights by enabling the [Prometheus experience](./container-insights-experience-v2.md). |
### Configure monitoring for platform engineer The sections below identify the steps for complete monitoring of your Kubernetes environment using the Azure services in the above table. Functionality and integration options are provided for each to help you determine where you may need to modify this configuration to meet your particular requirements.
+Onboarding Container insights and Managed Prometheus can be part of the same experience as described in [Enable monitoring for Kubernetes clusters](../containers/kubernetes-monitoring-enable.md). The following sections describe each separately so you can consider all of your onboarding and configuration options for each.
#### Enable scraping of Prometheus metrics
Enable scraping of Prometheus metrics by Azure Monitor managed service for Prome
- Select the option **Enable Prometheus metrics** when you [create an AKS cluster](../../aks/learn/quick-kubernetes-deploy-portal.md). - Select the option **Enable Prometheus metrics** when you enable Container insights on an existing [AKS cluster](container-insights-enable-aks.md) or [Azure Arc-enabled Kubernetes cluster](container-insights-enable-arc-enabled-clusters.md).-- Enable for an existing [AKS cluster](../containers/kubernetes-monitoring-enable.md#enable-prometheus-and-grafana) or [Arc-enabled Kubernetes cluster (preview)](../containers/kubernetes-monitoring-enable.md#enable-prometheus-and-grafana).
+- Enable for an existing [AKS cluster](../containers/kubernetes-monitoring-enable.md#enable-prometheus-and-grafana) or [Arc-enabled Kubernetes cluster](../containers/kubernetes-monitoring-enable.md#enable-prometheus-and-grafana).
If you already have a Prometheus environment that you want to use for your AKS clusters, then enable Azure Monitor managed service for Prometheus and then use remote-write to send data to your existing Prometheus environment. You can also [use remote-write to send data from your existing self-managed Prometheus environment to Azure Monitor managed service for Prometheus](../essentials/prometheus-remote-write.md).
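For the second scenario, the following is a minimal sketch of the `remote_write` block in a self-managed `prometheus.yml`. This is an illustration only: the ingestion URL placeholder comes from your Azure Monitor workspace's data collection rule, the managed identity client ID is an assumption about how you authenticate, and the `azuread` setting requires a recent Prometheus version. See the linked remote-write article for the supported configuration.

```yaml
remote_write:
  - url: "<metrics ingestion endpoint of your Azure Monitor workspace's data collection rule>"  # placeholder value
    azuread:
      cloud: AzurePublic               # Azure cloud to authenticate against
      managed_identity:
        client_id: "<client ID of a managed identity with permission to publish metrics>"  # assumption: managed identity auth
```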
See [Default Prometheus metrics configuration in Azure Monitor](../essentials/pr
#### Enable Grafana for analysis of Prometheus data
+> [!NOTE]
+> Use Grafana for monitoring your Kubernetes environment if you have an existing investment in Grafana or if you prefer to use Grafana dashboards instead of Container insights to analyze your Prometheus data. If you don't want to use Grafana, then enable the [Prometheus experience in Container insights](./container-insights-experience-v2.md) so that you can use Container insights views with your Prometheus data.
+ [Create an instance of Managed Grafana](../../managed-grafan#use-out-of-the-box-dashboards). Predefined dashboards are available for monitoring Kubernetes clusters, including several that present similar information as Container insights views. If you have an existing Grafana environment, then you can continue to use it and add Azure Monitor managed service for [Prometheus as a data source](https://grafana.com/docs/grafana/latest/datasources/prometheus/). You can also [add the Azure Monitor data source to Grafana](https://grafana.com/docs/grafana/latest/datasources/azure-monitor/) to use data collected by Container insights in custom Grafana dashboards. Perform this configuration if you want to focus on Grafana dashboards rather than using the Container insights views and reports.
See [Enable Container insights](../containers/container-insights-onboard.md) for
Once Container insights is enabled for a cluster, perform the following actions to optimize your installation. -- Container insights collects many of the same metric values as [Prometheus](#enable-scraping-of-prometheus-metrics). You can disable collection of these metrics by configuring Container insights to only collect **Logs and events** as described in [Enable cost optimization settings in Container insights](../containers/container-insights-cost-config.md#enable-cost-settings). This configuration disables the Container insights experience in the Azure portal, but you can use Grafana to visualize Prometheus metrics and Log Analytics to analyze log data collected by Container insights.-- Reduce your cost for Container insights data ingestion by reducing the amount of data that's collected.
+- Enable the [Prometheus experience in Container insights](./container-insights-experience-v2.md) so that you can use Container insights views with your Prometheus data.
- To improve your query experience with data collected by Container insights and to reduce collection costs, [enable the ContainerLogV2 schema](container-insights-logs-schema.md) for each cluster. If you only use logs for occasional troubleshooting, then consider configuring this table as [basic logs](../logs/logs-table-plans.md).
+- Use cost presets described in [Enable cost optimization settings in Container insights](../containers/container-insights-cost-config.md#enable-cost-settings) to reduce your cost for Container insights data ingestion by reducing the amount of data that's collected. You can disable collection of metrics by configuring Container insights to only collect **Logs and events**, since Container insights collects many of the same metric values as [Prometheus](#enable-scraping-of-prometheus-metrics).
If you have an existing solution for collection of logs, then follow the guidance for that tool or enable Container insights and use the [data export feature of Log Analytics workspace](../logs/logs-data-export.md) to send data to [Azure Event Hubs](../../event-hubs/event-hubs-about.md) to forward to an alternate system.
azure-monitor Prometheus Metrics Scrape Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-scrape-configuration.md
If you are using `basic_auth` setting in your prometheus configuration, please f
1. Create a secret in the **kube-system** namespace named **ama-metrics-mtls-secret**
-The value for password1 is **base64encoded**
+The value for password1 is **base64encoded**.
+ The key *password1* can be anything, but just needs to match your scrapeconfig *password_file* filepath. ```yaml
data:
``` The **ama-metrics-mtls-secret** secret is mounted on the ama-metrics containers at the path **/etc/prometheus/certs/** and is made available to the process that scrapes Prometheus metrics. The key in the above example (for example, *password1*) becomes the file name, and the value is base64 decoded and written as the contents of that file within the container. The Prometheus scraper then uses the contents of this file as the password when scraping the endpoint.
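For illustration, a minimal sketch of the Secret described in step 1 might look like the following. The secret name, namespace, and key come from the steps above; the value shown is a placeholder and must be the base64-encoded form of your actual password.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ama-metrics-mtls-secret   # secret name expected by the ama-metrics pods
  namespace: kube-system
type: Opaque
data:
  # The key becomes the file name under /etc/prometheus/certs/ and must match the password_file path in your scrape config
  password1: <base64-encoded password>
```

Apply it with `kubectl apply -f <file>.yaml` before deploying the custom scrape configuration.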
-2. In the configmap for the custom scrape configuration use the following setting -
+2. In the configmap for the custom scrape configuration, use the following setting. The `username` field should contain the actual username string. The `password_file` field should contain the path to a file that contains the password.
+ ```yaml basic_auth:
- username: admin
+ username: <username string>
password_file: /etc/prometheus/certs/password1 ```
azure-monitor Activity Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/activity-log.md
For details on how to create a diagnostic setting, see [Create diagnostic settin
> [!TIP] > * Sending logs to a Log Analytics workspace is free of charge for the default retention period. > * Send to Azure Monitor Logs for more complex querying and alerting and for longer retention of up to 12 years.
-> * Logs exported to a Log Analytics workspace can be [shown in Power BI](https://learn.microsoft.com/power-bi/transform-model/log-analytics/desktop-log-analytics-overview)
+> * Logs exported to a Log Analytics workspace can be [shown in Power BI](/power-bi/transform-model/log-analytics/desktop-log-analytics-overview)
> * [Insights](./activity-log-insights.md) are provided for Activity Logs exported to Log Analytics. > [!NOTE]
azure-netapp-files Cross Region Replication Requirements Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-requirements-considerations.md
This article describes requirements and considerations about [using the volume c
* The replication destination volume is read-only until you [fail over to the destination region](cross-region-replication-manage-disaster-recovery.md#fail-over-to-destination-volume) to enable the destination volume for read and write. >[!IMPORTANT] >Failover is a manual process. When you need to activate the destination volume (for example, when you want to fail over to the destination region), you need to break replication peering then mount the destination volume. For more information, see [fail over to the destination volume](cross-region-replication-manage-disaster-recovery.md#fail-over-to-destination-volume)
+ >[!IMPORTANT]
+ > A volume with an active backup policy enabled can't be the destination volume in a reverse resync operation. You must suspend the backup policy on the volume prior to starting the reverse resync, then resume it when the reverse resync completes.
* Azure NetApp Files replication doesn't currently support multiple subscriptions; all replications must be performed under a single subscription. * See [resource limits](azure-netapp-files-resource-limits.md) for the maximum number of cross-region replication destination volumes. You can open a support ticket to [request a limit increase](azure-netapp-files-resource-limits.md#request-limit-increase) in the default quota of replication destination volumes (per subscription in a region). * There can be a delay up to five minutes for the interface to reflect a newly added snapshot on the source volume.
azure-resource-manager Azure Services Resource Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-services-resource-providers.md
The resource providers for database services are:
| Resource provider namespace | Azure service | | | - |
-| Microsoft.AzureData | SQL Server registry |
| Microsoft.Cache | [Azure Cache for Redis](../../azure-cache-for-redis/index.yml) | | Microsoft.DBforMariaDB | [Azure Database for MariaDB](../../mariadb/index.yml) | | Microsoft.DBforMySQL | [Azure Database for MySQL](../../mysql/index.yml) |
The resource providers for database services are:
| Microsoft.DocumentDB | [Azure Cosmos DB](../../cosmos-db/index.yml) | | Microsoft.Sql | [Azure SQL Database](/azure/azure-sql/database/index)<br /> [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/index) <br />[Azure Synapse Analytics](/azure/sql-data-warehouse/) | | Microsoft.SqlVirtualMachine | [SQL Server on Azure Virtual Machines](/azure/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview) |
+| Microsoft.AzureData | [SQL Server enabled by Azure Arc](/sql/sql-server/azure-arc/overview) |
## Developer tools resource providers
The resource providers for hybrid services are:
| Resource provider namespace | Azure service | | | - |
-| Microsoft.AzureArcData | Azure Arc-enabled data services |
+| Microsoft.AzureArcData | [Azure Arc-enabled data services](/azure/azure-arc/data/overview) |
| Microsoft.AzureStackHCI | [Azure Stack HCI](/azure-stack/hci/overview) | | Microsoft.HybridCompute | [Azure Arc-enabled servers](../../azure-arc/servers/index.yml) | | Microsoft.Kubernetes | [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/index.yml) |
azure-sql-edge Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/connect.md
To connect to an Azure SQL Edge Database Engine from a network machine, you need
} ``` -- **SA password for the Azure SQL Edge instance**: This is the value specified for the `SA_PASSWORD` environment variable during deployment of Azure SQL Edge.
+- **SA password for the Azure SQL Edge instance**: This is the value specified for the `MSSQL_SA_PASSWORD` environment variable during deployment of Azure SQL Edge.
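For illustration only, when Azure SQL Edge runs as a standalone Docker container (rather than as an IoT Edge module), this variable is typically passed at container startup; the password shown is a placeholder:

```bash
# Example standalone deployment of Azure SQL Edge for testing.
# Replace the placeholder with your own strong SA password.
docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=<YourStrong!Passw0rd>" \
  -p 1433:1433 --name azuresqledge -d mcr.microsoft.com/azure-sql-edge
```

The same value is what you later supply as the SA password when connecting from a network machine.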
## Connect to the Database Engine from within the container
azure-sql-edge Onnx Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/onnx-overview.md
Last updated 09/14/2023-+ keywords: deploy SQL Edge
azure-vmware Azure Vmware Solution Horizon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-horizon.md
To understand the Azure virtual machine sizes that are required for the Horizon
## Next steps
-To learn more about VMware Horizon on Azure VMware Solution, read the [VMware Horizon FAQ](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/products/horizon/vmw-horizon-on-microsoft-azure-vmware-solution-faq.pdf).
+To learn more about VMware Horizon on Azure VMware Solution, read the [VMware Horizon FAQ](https://www.vmware.com/docs/vmw-horizon-faqs).
backup Backup Support Matrix Iaas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-iaas.md
Managed disks | Supported.
Encrypted disks | Supported.<br/><br/> Azure VMs enabled with Azure Disk Encryption can be backed up (with or without the Microsoft Entra app).<br/><br/> Encrypted VMs can't be recovered at the file or folder level. You must recover the entire VM.<br/><br/> You can enable encryption on VMs that Azure Backup is already protecting. <br><br> You can back up and restore disks encrypted via platform-managed keys or customer-managed keys. You can also assign a disk-encryption set while restoring in the same region. That is, providing a disk-encryption set while performing cross-region restore is currently not supported. However, you can assign the disk-encryption set to the restored disk after the restore is complete. Disks with a write accelerator enabled | Azure VMs with disk backup for a write accelerator became available in all Azure public regions on May 18, 2022. If disk backup for a write accelerator is not required as part of VM backup, you can choose to remove it by using the [selective disk feature](selective-disk-backup-restore.md). <br><br>**Important** <br> Virtual machines with write accelerator disks need internet connectivity for a successful backup, even though those disks are excluded from the backup. Disks enabled for access with a private endpoint | Supported.
+Disks with both public and private access disabled | Supported.
Backup and restore of deduplicated VMs or disks | Azure Backup doesn't support deduplication. For more information, see [this article](./backup-support-matrix.md#disk-deduplication-support). <br/> <br/> Azure Backup doesn't deduplicate across VMs in the Recovery Services vault. <br/> <br/> If there are VMs in a deduplication state during restore, the files can't be restored because the vault doesn't understand the format. However, you can successfully perform the full VM restore. Adding a disk to a protected VM | Supported. Resizing a disk on a protected VM | Supported.
backup Blob Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/blob-backup-support-matrix.md
Title: Support matrix for Azure Blobs backup description: Provides a summary of support settings and limitations when backing up Azure Blobs. Previously updated : 07/24/2024 Last updated : 07/31/2024
Operational backup for blobs is available in all public cloud regions, except Fr
# [Vaulted backup](#tab/vaulted-backup)
-Vaulted backup for blobs is currently available in all public regions **except** South Africa West, Sweden Central, Sweden South, Israel Central, Poland Central, India Central, Italy North and Malaysia South.
+Vaulted backup for blobs is available in all public regions.
chaos-studio Chaos Studio Tutorial Aks Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-aks-cli.md
az rest --method get --uri https://management.azure.com/subscriptions/$SUBSCRIPT
```azurecli-interactive
-az role assignment create --role "Azure Kubernetes Service Cluster Admin Role" --assignee-object-id $EXPERIMENT_PRINCIPAL_ID --assignee-principal-type "ServicePrincipal" --scope subscriptions/$SUBSCRIPTION_ID/resourceGroups/$resourceGroupName/providers/Microsoft.ContainerService/managedClusters/$AKS_CLUSTER_NAME
+az role assignment create --role "Azure Kubernetes Service Cluster Admin Role" --assignee-principal-type "ServicePrincipal" --assignee-object-id $EXPERIMENT_PRINCIPAL_ID --scope subscriptions/$SUBSCRIPTION_ID/resourceGroups/$resourceGroupName/providers/Microsoft.ContainerService/managedClusters/$AKS_CLUSTER_NAME
``` ## Run your experiment
chaos-studio Experiment Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/experiment-examples.md
Last updated 05/07/2024 -+
Here's an example of where you would copy and paste the Azure portal parameter i
[![Screenshot that shows Azure portal parameter location.](images/azure-portal-parameter-examples.png)](images/azure-portal-parameter-examples.png#lightbox)
+To save one of the "experiment.json" examples shown below, type *nano experiment.json* in your Cloud Shell, copy and paste any of the following experiment examples, save the file (Ctrl+O), exit nano (Ctrl+X), and then run the following command:
+ ```AzCLI
+az rest --method put --uri https://management.azure.com/subscriptions/6b052e15-03d3-4f17-b2e1-be7f07588291/resourceGroups/exampleRG/providers/Microsoft.Chaos/experiments/exampleExperiment?api-version=2024-01-01 --body @experiment.json
+```
+> [!NOTE]
+> This is the generic command you would use to create any experiment from the Azure CLI.
+ > [!NOTE]
-> Make sure your experiment has permission to operate on **ALL** resources within the experiment. These examples exclusively use **System-assigned managed identity**, but we also support User-assigned managed identity. For more information, see [Experiment permissions](chaos-studio-permissions-security.md).
+> Make sure your experiment has permission to operate on **ALL** resources within the experiment. These examples exclusively use **System-assigned managed identity**, but we also support User-assigned managed identity. For more information, see [Experiment permissions](chaos-studio-permissions-security.md). These experiments will **NOT** run without granting the experiment permission to run on the target resources.
><br> ><br> >View all available role assignments [here](chaos-studio-fault-providers.md) to determine which permissions are required for your target resources. ++
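For example, granting an experiment's system-assigned identity access to an AKS cluster target can be done with a role assignment similar to the sketch below; the role, shell variable names, and scope are illustrative and depend on your target resource:

```azurecli
az role assignment create \
  --role "Azure Kubernetes Service Cluster Admin Role" \
  --assignee-principal-type "ServicePrincipal" \
  --assignee-object-id $EXPERIMENT_PRINCIPAL_ID \
  --scope subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.ContainerService/managedClusters/$AKS_CLUSTER_NAME
```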
-Azure Kubernetes Service (AKS) Network Delay
+Azure Kubernetes Service (AKS) - Network Delay
+**Experiment Description** This experiment delays network communication by 200ms.
-### [Azure CLI](#tab/azure-CLI)
-```AzCLI
-PUT https://management.azure.com/subscriptions/6b052e15-03d3-4f17-b2e1-be7f07588291/resourceGroups/exampleRG/providers/Microsoft.Chaos/experiments/exampleExperiment?api-version=2024-01-01
+### [Azure CLI Experiment.JSON](#tab/azure-CLI)
+```AzCLI
{ "identity": { "type": "SystemAssigned",
- "principalId": "35g5795t-8sd4-5b99-a7c8-d5asdh9as7",
- "tenantId": "asd79ash-7daa-95hs-0as8-f3md812e3md"
}, "tags": {}, "location": "westus", "properties": {
- "provisioningState": "Succeeded",
"selectors": [ { "type": "List",
PUT https://management.azure.com/subscriptions/6b052e15-03d3-4f17-b2e1-be7f07588
``` - ### [Azure portal parameters](#tab/azure-portal) ```Azure portal {"action":"delay","mode":"all","selector":{"namespaces":["default"]},"delay":{"latency":"200ms","correlation":"100","jitter":"0ms"}} ```
-Azure Kubernetes Service (AKS) Pod Failure
+Azure Kubernetes Service (AKS) - Pod Failure
+**Experiment Description** This experiment takes down all pods in the cluster for 10 minutes.
-### [Azure CLI](#tab/azure-CLI)
+### [Azure CLI Experiment.JSON](#tab/azure-CLI)
```AzCLI
-PUT https://management.azure.com/subscriptions/6b052e15-03d3-4f17-b2e1-be7f07588291/resourceGroups/exampleRG/providers/Microsoft.Chaos/experiments/exampleExperiment?api-version=2024-01-01
- { "identity": { "type": "SystemAssigned",
- "principalId": "35g5795t-8sd4-5b99-a7c8-d5asdh9as7",
- "tenantId": "asd79ash-7daa-95hs-0as8-f3md812e3md"
}, "tags": {}, "location": "westus", "properties": {
- "provisioningState": "Succeeded",
"selectors": [ { "type": "List",
PUT https://management.azure.com/subscriptions/6b052e15-03d3-4f17-b2e1-be7f07588
{"action":"pod-failure","mode":"all","duration":"600s","selector":{"namespaces":["autoinstrumentationdemo"]}} ```
-Azure Kubernetes Service (AKS) Memory Stress
+Azure Kubernetes Service (AKS) - Memory Stress
+**Experiment Description** This experiment stresses the memory of 4 AKS pods to 95% for 10 minutes.
### [Azure CLI](#tab/azure-CLI) ```AzCLI
-PUT https://management.azure.com/subscriptions/6b052e15-03d3-4f17-b2e1-be7f07588291/resourceGroups/exampleRG/providers/Microsoft.Chaos/experiments/exampleExperiment?api-version=2024-01-01
- { "identity": { "type": "SystemAssigned",
- "principalId": "35g5795t-8sd4-5b99-a7c8-d5asdh9as7",
- "tenantId": "asd79ash-7daa-95hs-0as8-f3md812e3md"
}, "tags": {}, "location": "westus", "properties": {
- "provisioningState": "Succeeded",
"selectors": [ { "type": "List",
PUT https://management.azure.com/subscriptions/6b052e15-03d3-4f17-b2e1-be7f07588
```Azure portal {"mode":"all","selector":{"namespaces":["autoinstrumentationdemo"]},"stressors":{"memory":{"workers":4,"size":"95%"}}} ```
-
++
+Azure Kubernetes Service (AKS) - CPU Stress
+
+**Experiment Description** This experiment stresses the CPU of four pods in the AKS cluster to 95%.
+
+### [Azure CLI](#tab/azure-CLI)
+```AzCLI
+{
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ "tags": {},
+ "location": "westus",
+ "properties": {
+ "selectors": [
+ {
+ "type": "List",
+ "targets": [
+ {
+ "id": "/subscriptions/123hdq8-123d-89d7-5670-123123/resourceGroups/aks_memory_stress_experiment/providers/Microsoft.ContainerService/managedClusters/nikhilAKScluster/providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh",
+ "type": "ChaosTarget"
+ }
+ ],
+ "id": "Selector1"
+ }
+ ],
+ "steps": [
+ {
+ "name": "AKS CPU stress",
+ "branches": [
+ {
+ "name": "AKS CPU stress",
+ "actions": [
+ {
+ "type": "continuous",
+ "selectorId": "Selector1",
+ "duration": "PT10M",
+ "parameters": [
+ {
+ "key": "jsonSpec",
+ "value": "{\"mode\":\"all\",\"selector\":{\"namespaces\":[\"autoinstrumentationdemo\"]},\"stressors\":{\"cpu\":{\"workers\":4,\"load\":95}}}"
+ }
+ ],
+ "name": "urn:csci:microsoft:azureKubernetesServiceChaosMesh:stressChaos/2.1"
+ }
+ ]
+ }
+ ]
+ }
+ ]
+ }
+}
+```
+
+### [Azure portal parameters](#tab/azure-portal)
+
+```Azure portal
+{"mode":"all","selector":{"namespaces":["autoinstrumentationdemo"]},"stressors":{"cpu":{"workers":4,"load":95}}}
+```
+
+Azure Kubernetes Service (AKS) - Network Emulation
+
+**Experiment Description** This experiment applies a network emulation to all pods in the specified namespace, adding a latency of 100ms and a packet loss of 0.1% for 5 minutes.
++
+### [Azure CLI Experiment.JSON](#tab/azure-CLI)
+```AzCLI
+{
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ "tags": {},
+ "location": "westus",
+ "properties": {
+ "selectors": [
+ {
+ "type": "List",
+ "targets": [
+ {
+ "id": "/subscriptions/123hdq8-123d-89d7-5670-123123/resourceGroups/aks_network_emulation_experiment/providers/Microsoft.ContainerService/managedClusters/nikhilAKScluster/providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh",
+ "type": "ChaosTarget"
+ }
+ ],
+ "id": "Selector1"
+ }
+ ],
+ "steps": [
+ {
+ "name": "AKS network emulation",
+ "branches": [
+ {
+ "name": "AKS network emulation",
+ "actions": [
+ {
+ "type": "continuous",
+ "selectorId": "Selector1",
+ "duration": "PT5M",
+ "parameters": [
+ {
+ "key": "jsonSpec",
+ "value": "{\"action\":\"netem\",\"mode\":\"all\",\"selector\":{\"namespaces\":[\"default\"]},\"netem\":{\"latency\":\"100ms\",\"loss\":\"0.1\",\"correlation\":\"25\"}}"
+ }
+ ],
+ "name": "urn:csci:microsoft:azureKubernetesServiceChaosMesh:networkChaos/2.1"
+ }
+ ]
+ }
+ ]
+ }
+ ]
+ }
+}
+
+```
+
+### [Azure portal parameters](#tab/azure-portal)
+
+```Azure portal
+{"action":"netem","mode":"all","selector":{"namespaces":["default"]},"netem":{"latency":"100ms","loss":"0.1","correlation":"25"}}
+```
+
+Azure Kubernetes Service (AKS) - Network Partition
+
+**Experiment Description** This experiment partitions the network for all pods in the specified namespace, simulating a network split in the 'to' direction for 5 minutes.
++
+### [Azure CLI Experiment.JSON](#tab/azure-CLI)
+```AzCLI
+{
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ "tags": {},
+ "location": "westus",
+ "properties": {
+ "selectors": [
+ {
+ "type": "List",
+ "targets": [
+ {
+ "id": "/subscriptions/123hdq8-123d-89d7-5670-123123/resourceGroups/aks_partition_experiment/providers/Microsoft.ContainerService/managedClusters/nikhilAKScluster/providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh",
+ "type": "ChaosTarget"
+ }
+ ],
+ "id": "Selector1"
+ }
+ ],
+ "steps": [
+ {
+ "name": "AKS network partition",
+ "branches": [
+ {
+ "name": "AKS network partition",
+ "actions": [
+ {
+ "type": "continuous",
+ "selectorId": "Selector1",
+ "duration": "PT5M",
+ "parameters": [
+ {
+ "key": "jsonSpec",
+ "value": "{\"action\":\"partition\",\"mode\":\"all\",\"selector\":{\"namespaces\":[\"default\"]},\"partition\":{\"direction\":\"to\"}}"
+ }
+ ],
+ "name": "urn:csci:microsoft:azureKubernetesServiceChaosMesh:networkChaos/2.1"
+ }
+ ]
+ }
+ ]
+ }
+ ]
+ }
+}
+
+```
+
+### [Azure portal parameters](#tab/azure-portal)
+
+```Azure portal
+{"action":"partition","mode":"all","selector":{"namespaces":["default"]},"partition":{"direction":"to"}}
+```
+
+Azure Kubernetes Service (AKS) - Network Bandwidth Limitation
+
+**Experiment Description** This experiment limits the network bandwidth for all pods in the specified namespace to 1mbps, with additional parameters for limit, buffer, peak rate, and burst for 5 minutes.
++
+### [Azure CLI Experiment.JSON](#tab/azure-CLI)
+```AzCLI
+{
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ "tags": {},
+ "location": "westus",
+ "properties": {
+ "selectors": [
+ {
+ "type": "List",
+ "targets": [
+ {
+ "id": "/subscriptions/123hdq8-123d-89d7-5670-123123/resourceGroups/aks_bandwidth_experiment/providers/Microsoft.ContainerService/managedClusters/nikhilAKScluster/providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh",
+ "type": "ChaosTarget"
+ }
+ ],
+ "id": "Selector1"
+ }
+ ],
+ "steps": [
+ {
+ "name": "AKS network bandwidth",
+ "branches": [
+ {
+ "name": "AKS network bandwidth",
+ "actions": [
+ {
+ "type": "continuous",
+ "selectorId": "Selector1",
+ "duration": "PT5M",
+ "parameters": [
+ {
+ "key": "jsonSpec",
+ "value": "{\"action\":\"bandwidth\",\"mode\":\"all\",\"selector\":{\"namespaces\":[\"default\"]},\"bandwidth\":{\"rate\":\"1mbps\",\"limit\":\"50mb\",\"buffer\":\"10kb\",\"peakrate\":\"1mbps\",\"minburst\":\"0\"}}"
+ }
+ ],
+ "name": "urn:csci:microsoft:azureKubernetesServiceChaosMesh:networkChaos/2.1"
+ }
+ ]
+ }
+ ]
+ }
+ ]
+ }
+}
+
+```
+
+### [Azure portal parameters](#tab/azure-portal)
+
+```Azure portal
+{"action":"bandwidth","mode":"all","selector":{"namespaces":["default"]},"bandwidth":{"rate":"1mbps","limit":"50mb","buffer":"10kb","peakrate":"1mbps","minburst":"0"}}
+```
+
+Azure Kubernetes Service (AKS) - Network Packet Re-order
+
+**Experiment Description** This experiment reorders network packets for all pods in the specified namespace, with a gap of 5 packets and a reorder percentage of 25% for 5 minutes.
++
+### [Azure CLI Experiment.JSON](#tab/azure-CLI)
+```AzCLI
+{
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ "tags": {},
+ "location": "westus",
+ "properties": {
+ "selectors": [
+ {
+ "type": "List",
+ "targets": [
+ {
+ "id": "/subscriptions/123hdq8-123d-89d7-5670-123123/resourceGroups/aks_reorder_experiment/providers/Microsoft.ContainerService/managedClusters/nikhilAKScluster/providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh",
+ "type": "ChaosTarget"
+ }
+ ],
+ "id": "Selector1"
+ }
+ ],
+ "steps": [
+ {
+ "name": "AKS network reorder",
+ "branches": [
+ {
+ "name": "AKS network reorder",
+ "actions": [
+ {
+ "type": "continuous",
+ "selectorId": "Selector1",
+ "duration": "PT5M",
+ "parameters": [
+ {
+ "key": "jsonSpec",
+ "value": "{\"action\":\"reorder\",\"mode\":\"all\",\"selector\":{\"namespaces\":[\"default\"]},\"reorder\":{\"gap\":\"5\",\"reorder\":\"25\",\"correlation\":\"50\"}}"
+ }
+ ],
+ "name": "urn:csci:microsoft:azureKubernetesServiceChaosMesh:networkChaos/2.1"
+ }
+ ]
+ }
+ ]
+ }
+ ]
+ }
+}
+
+```
+
+### [Azure portal parameters](#tab/azure-portal)
+
+```Azure portal
+{"action":"reorder","mode":"all","selector":{"namespaces":["default"]},"reorder":{"gap":"5","reorder":"25","correlation":"50"}}
+```
+
+Azure Kubernetes Service (AKS) - Network Packet Loss
+
+**Experiment Description** This experiment simulates a packet loss of 10% for all pods in the specified namespace for 5 minutes.
++
+### [Azure CLI Experiment.JSON](#tab/azure-CLI)
+```AzCLI
+{
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ "tags": {},
+ "location": "westus",
+ "properties": {
+ "selectors": [
+ {
+ "type": "List",
+ "targets": [
+ {
+ "id": "/subscriptions/123hdq8-123d-89d7-5670-123123/resourceGroups/aks_loss_experiment/providers/Microsoft.ContainerService/managedClusters/nikhilAKScluster/providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh",
+ "type": "ChaosTarget"
+ }
+ ],
+ "id": "Selector1"
+ }
+ ],
+ "steps": [
+ {
+ "name": "AKS network loss",
+ "branches": [
+ {
+ "name": "AKS network loss",
+ "actions": [
+ {
+ "type": "continuous",
+ "selectorId": "Selector1",
+ "duration": "PT5M",
+ "parameters": [
+ {
+ "key": "jsonSpec",
+ "value": "{\"action\":\"loss\",\"mode\":\"all\",\"selector\":{\"namespaces\":[\"default\"]},\"loss\":{\"loss\":\"10\",\"correlation\":\"25\"}}"
+ }
+ ],
+ "name": "urn:csci:microsoft:azureKubernetesServiceChaosMesh:networkChaos/2.1"
+ }
+ ]
+ }
+ ]
+ }
+ ]
+ }
+}
+
+```
+
+### [Azure portal parameters](#tab/azure-portal)
+
+```Azure portal
+{"action":"loss","mode":"all","selector":{"namespaces":["default"]},"loss":{"loss":"10","correlation":"25"}}
+```
+
+Azure Kubernetes Service (AKS) - Network Packet Duplication
+
+**Experiment Description** This experiment duplicates 50% of the network packets for all pods in the specified namespace for 5 minutes.
++
+### [Azure CLI Experiment.JSON](#tab/azure-CLI)
+```AzCLI
+{
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ "tags": {},
+ "location": "westus",
+ "properties": {
+ "selectors": [
+ {
+ "type": "List",
+ "targets": [
+ {
+ "id": "/subscriptions/123hdq8-123d-89d7-5670-123123/resourceGroups/aks_duplicate_experiment/providers/Microsoft.ContainerService/managedClusters/nikhilAKScluster/providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh",
+ "type": "ChaosTarget"
+ }
+ ],
+ "id": "Selector1"
+ }
+ ],
+ "steps": [
+ {
+ "name": "AKS network duplicate",
+ "branches": [
+ {
+ "name": "AKS network duplicate",
+ "actions": [
+ {
+ "type": "continuous",
+ "selectorId": "Selector1",
+ "duration": "PT5M",
+ "parameters": [
+ {
+ "key": "jsonSpec",
+ "value": "{\"action\":\"duplicate\",\"mode\":\"all\",\"selector\":{\"namespaces\":[\"default\"]},\"duplicate\":{\"duplicate\":\"50\",\"correlation\":\"50\"}}"
+ }
+ ],
+ "name": "urn:csci:microsoft:azureKubernetesServiceChaosMesh:networkChaos/2.1"
+ }
+ ]
+ }
+ ]
+ }
+ ]
+ }
+}
+
+```
+
+### [Azure portal parameters](#tab/azure-portal)
+
+```Azure portal
+{"action":"duplicate","mode":"all","selector":{"namespaces":["default"]},"duplicate":{"duplicate":"50","correlation":"50"}}
+```
+
+Azure Kubernetes Service (AKS) - Network Packet Corruption
+
+**Experiment Description** This experiment corrupts 50% of the network packets for all pods in the specified namespace for 5 minutes.
++
+### [Azure CLI Experiment.JSON](#tab/azure-CLI)
+```AzCLI
+{
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ "tags": {},
+ "location": "westus",
+ "properties": {
+ "selectors": [
+ {
+ "type": "List",
+ "targets": [
+ {
+ "id": "/subscriptions/123hdq8-123d-89d7-5670-123123/resourceGroups/aks_corrupt_experiment/providers/Microsoft.ContainerService/managedClusters/nikhilAKScluster/providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh",
+ "type": "ChaosTarget"
+ }
+ ],
+ "id": "Selector1"
+ }
+ ],
+ "steps": [
+ {
+ "name": "AKS network corrupt",
+ "branches": [
+ {
+ "name": "AKS network corrupt",
+ "actions": [
+ {
+ "type": "continuous",
+ "selectorId": "Selector1",
+ "duration": "PT5M",
+ "parameters": [
+ {
+ "key": "jsonSpec",
+ "value": "{\"action\":\"corrupt\",\"mode\":\"all\",\"selector\":{\"namespaces\":[\"default\"]},\"corrupt\":{\"corrupt\":\"50\",\"correlation\":\"50\"}}"
+ }
+ ],
+ "name": "urn:csci:microsoft:azureKubernetesServiceChaosMesh:networkChaos/2.1"
+ }
+ ]
+ }
+ ]
+ }
+ ]
+ }
+}
+
+```
+
+### [Azure portal parameters](#tab/azure-portal)
+
+```Azure portal
+{"action":"corrupt","mode":"all","selector":{"namespaces":["default"]},"corrupt":{"corrupt":"50","correlation":"50"}}
+```
+
+Azure Load Test - Start/Stop Load Test (With Delay)
+
+**Experiment Description** This experiment starts an existing Azure load test, then waits for 10 minutes using the "delay" action before stopping the load test.
++
+### [Azure CLI Experiment.JSON](#tab/azure-CLI)
+```AzCLI
+{
+  "identity": {
+    "type": "SystemAssigned"
+  },
+ "tags": {},
+ "location": "eastus",
+ "properties": {
+ "selectors": [
+ {
+ "type": "List",
+ "targets": [
+ {
+ "id": "/subscriptions/123hdq8-123d-89d7-5670-123123/resourceGroups/nikhilLoadTest/providers/microsoft.loadtestservice/loadtests/Nikhil-Demo-Load-Test/providers/Microsoft.Chaos/targets/microsoft-azureloadtest",
+ "type": "ChaosTarget"
+ }
+ ],
+ "id": "66e5124c-12db-4f7e-8549-7299c5828bff"
+ },
+ {
+ "type": "List",
+ "targets": [
+ {
+ "id": "/subscriptions/123hdq8-123d-89d7-5670-123123/resourceGroups/builddemo/providers/microsoft.loadtestservice/loadtests/Nikhil-Demo-Load-Test/providers/Microsoft.Chaos/targets/microsoft-azureloadtest",
+ "type": "ChaosTarget"
+ }
+ ],
+ "id": "9dc23b43-81ca-42c3-beae-3fe8ac80c30b"
+ }
+ ],
+ "steps": [
+ {
+ "name": "Step 1 - Start Load Test",
+ "branches": [
+ {
+ "name": "Branch 1",
+ "actions": [
+ {
+ "selectorId": "66e5124c-12db-4f7e-8549-7299c5828bff",
+ "type": "discrete",
+ "parameters": [
+ {
+ "key": "testId",
+ "value": "ae24e6z9-d88d-4752-8552-c73e8a9adebc"
+ }
+ ],
+ "name": "urn:csci:microsoft:azureLoadTest:start/1.0"
+ },
+ {
+ "type": "delay",
+ "duration": "PT10M",
+ "name": "urn:csci:microsoft:chaosStudio:TimedDelay/1.0"
+ }
+ ]
+ }
+ ]
+ },
+ {
+ "name": "Step 2 - End Load test",
+ "branches": [
+ {
+ "name": "Branch 1",
+ "actions": [
+ {
+ "selectorId": "9dc23b43-81ca-42c3-beae-3fe8ac80c30b",
+ "type": "discrete",
+ "parameters": [
+ {
+ "key": "testId",
+ "value": "ae24e6z9-d88d-4752-8552-c73e8a9adebc"
+ }
+ ],
+ "name": "urn:csci:microsoft:azureLoadTest:stop/1.0"
+ }
+ ]
+ }
+ ]
+ }
+ ]
+ }
+}
+
+```
+
+### [Azure portal parameters](#tab/azure-portal)
+
+```Azure portal
+ae24e6z9-d88d-4752-8552-c73e8a9adebc
+```
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md
Title: List of updates applied to the Azure Guest OS | Microsoft Docs
description: This article lists the Microsoft Security Response Center updates applied to different Azure Guest OS. See if an update applies to your Guest OS. -+ ms.assetid: d0a272a9-ed01-4f4c-a0b3-bd5e841bdd77 Previously updated : 07/23/2024- Last updated : 07/31/2024+ # Azure Guest OS The following tables show the Microsoft Security Response Center (MSRC) updates applied to the Azure Guest OS. Search this article to determine if a particular update applies to your Guest OS. Updates always carry forward for the particular [family][family-explain] they were introduced in.
+## July 2024 Guest OS
+
+| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
+| | | | | |
+| Rel 24-07 | 5040430 | Latest Cumulative Update (LCU) | [6.73] | Jul 09, 2024 |
+| Rel 24-07 | 5040437 | Latest Cumulative Update (LCU) | [7.43] | Jul 09, 2024 |
+| Rel 24-07 | 5040434 | Latest Cumulative Update (LCU) | [5.97] | Jul 09, 2024 |
+| Rel 24-07 | 5039909 | .NET Framework 3.5 Security and Quality Rollup | [2.153] | Jul 09, 2024 |
+| Rel 24-07 | 5039882 | .NET Framework 4.7.2 Cumulative Update LKG | [2.153] | Jul 09, 2024 |
+| Rel 24-07 | 5039910 | .NET Framework 3.5 Security and Quality Rollup LKG |[4.133] | Jul 09, 2024 |
+| Rel 24-07 | 5039881 | .NET Framework 4.7.2 Cumulative Update LKG |[4.133] | Jul 09, 2024 |
+| Rel 24-07 | 5039908 | .NET Framework 3.5 Security and Quality Rollup LKG | [3.141] | Jul 09, 2024 |
+| Rel 24-07 | 5039880 | .NET Framework 4.7.2 Cumulative Update LKG | [3.141] | Jul 09, 2024 |
+| Rel 24-07 | 5039879 | .NET Framework DotNet | [6.73] | Jul 09, 2024 |
+| Rel 24-07 | 5039889 | .NET Framework 4.8 Security and Quality Rollup LKG | [7.43] | Jul 09, 2024 |
+| Rel 24-07 | 5040497 | Monthly Rollup | [2.153] | Jul 09, 2024 |
+| Rel 24-07 | 5040485 | Monthly Rollup | [3.141] | Jul 09, 2024 |
+| Rel 24-07 | 5040456 | Monthly Rollup | [4.133] | Jul 09, 2024 |
+| Rel 24-07 | 5040570 | Servicing Stack Update | [3.141] | Jul 09, 2024 |
+| Rel 24-07 | 5040569 | Servicing Stack Update | [4.133] | Jul 09, 2024 |
+| Rel 24-07 | 5040562 | Servicing Stack Update | [5.97] | Jul 09, 2024 |
+| Rel 24-07 | 5039339 | Servicing Stack Update LKG | [2.153] | Jul 09, 2024 |
+| Rel 24-07 | 5040571 | Servicing Stack Update | [7.43] | Jul 09, 2024 |
+| Rel 24-07 | 5040563 | Servicing Stack Update | [6.73] | Jul 09, 2024 |
+| Rel 24-07 | 4494175 | January '20 Microcode | [5.97] | Sep 1, 2020 |
+| Rel 24-07 | 4494175 | January '20 Microcode | [6.73] | Sep 1, 2020 |
+
+[5040430]: https://support.microsoft.com/kb/5040430
+[5040437]: https://support.microsoft.com/kb/5040437
+[5040434]: https://support.microsoft.com/kb/5040434
+[5039909]: https://support.microsoft.com/kb/5039909
+[5039882]: https://support.microsoft.com/kb/5039882
+[5039910]: https://support.microsoft.com/kb/5039910
+[5039881]: https://support.microsoft.com/kb/5039881
+[5039908]: https://support.microsoft.com/kb/5039908
+[5039880]: https://support.microsoft.com/kb/5039880
+[5039879]: https://support.microsoft.com/kb/5039879
+[5039889]: https://support.microsoft.com/kb/5039889
+[5040497]: https://support.microsoft.com/kb/5040497
+[5040485]: https://support.microsoft.com/kb/5040485
+[5040456]: https://support.microsoft.com/kb/5040456
+[5040570]: https://support.microsoft.com/kb/5040570
+[5040569]: https://support.microsoft.com/kb/5040569
+[5040562]: https://support.microsoft.com/kb/5040562
+[5039339]: https://support.microsoft.com/kb/5039339
+[5040571]: https://support.microsoft.com/kb/5040571
+[5040563]: https://support.microsoft.com/kb/5040563
+[4494175]: https://support.microsoft.com/kb/4494175
+[2.153]: ./cloud-services-guestos-update-matrix.md#family-2-releases
+[3.141]: ./cloud-services-guestos-update-matrix.md#family-3-releases
+[4.133]: ./cloud-services-guestos-update-matrix.md#family-4-releases
+[5.97]: ./cloud-services-guestos-update-matrix.md#family-5-releases
+[6.73]: ./cloud-services-guestos-update-matrix.md#family-6-releases
+[7.43]: ./cloud-services-guestos-update-matrix.md#family-7-releases
+ ## June 2024 Guest OS | Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
cloud-services Cloud Services Guestos Update Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-update-matrix.md
Title: Learn about the latest Azure Guest OS Releases | Microsoft Docs
description: The latest release news and SDK compatibility for Azure Cloud Services Guest OS. -+ ms.assetid: 6306cafe-1153-44c7-8554-623b03d59a34 Previously updated : 07/23/2024- Last updated : 07/31/2024+ # Azure Guest OS releases and SDK compatibility matrix
Unsure about how to update your Guest OS? Check [this][cloud updates] out.
## News updates
+###### **July 31, 2024**
+The July Guest OS released.
+ ###### **June 27, 2024** The June Guest OS released.
The September Guest OS released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-7.43_202407-01 | July 31, 2024 | Post 7.46 |
| WA-GUEST-OS-7.42_202406-01 | June 27, 2024 | Post 7.45 | | WA-GUEST-OS-7.41_202405-01 | June 1, 2024 | Post 7.44 |
-| WA-GUEST-OS-7.40_202404-01 | April 19, 2024 | Post 7.43 |
+|~~WA-GUEST-OS-7.40_202404-01~~| April 19, 2024 | July 31, 2024 |
|~~WA-GUEST-OS-7.39_202403-02~~| April 9, 2024 | June 27, 2024 | |~~WA-GUEST-OS-7.38_202402-01~~| February 24, 2024 | June 1, 2024 | |~~WA-GUEST-OS-7.37_202401-01~~| January 22, 2024 | April 19, 2024 |
The September Guest OS released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-6.73_202407-01 | July 31, 2024 | Post 6.76 |
| WA-GUEST-OS-6.72_202406-01 | June 27, 2024 | Post 6.75 | | WA-GUEST-OS-6.71_202405-01 | June 1, 2024 | Post 6.74 |
-| WA-GUEST-OS-6.70_202404-01 | April 19, 2024 | Post 6.73 |
+|~~WA-GUEST-OS-6.70_202404-01~~| April 19, 2024 | July 31, 2024 |
|~~WA-GUEST-OS-6.69_202403-02~~| April 9, 2024 | June 27, 2024 | |~~WA-GUEST-OS-6.68_202402-01~~| February 24, 2024 | June 1, 2024 | |~~WA-GUEST-OS-6.67_202401-01~~| January 22, 2024 | April 19, 2024 |
The September Guest OS released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-5.97_202407-01 | July 31, 2024 | Post 5.100 |
| WA-GUEST-OS-5.96_202406-01 | June 27, 2024 | Post 5.99 | | WA-GUEST-OS-5.95_202405-01 | June 1, 2024 | Post 5.98 |
-| WA-GUEST-OS-5.94_202404-01 | April 19, 2024 | Post 5.97 |
+|~~WA-GUEST-OS-5.94_202404-01~~| April 19, 2024 | July 31, 2024 |
|~~WA-GUEST-OS-5.93_202403-02~~| April 9, 2024 | June 27, 2024 | |~~WA-GUEST-OS-5.92_202402-01~~| February 24, 2024 | June 1, 2024 | |~~WA-GUEST-OS-5.91_202401-01~~| January 22, 2024 | April 19, 2024 |
The September Guest OS released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-4.133_202407-01 | July 31, 2024 | Post 4.136 |
| WA-GUEST-OS-4.132_202406-01 | June 27, 2024 | Post 4.135 | | WA-GUEST-OS-4.131_202405-01 | June 1, 2024 | Post 4.134 |
-| WA-GUEST-OS-4.130_202404-01 | April 19, 2024 | Post 4.133 |
+|~~WA-GUEST-OS-4.130_202404-01~~| April 19, 2024 | July 31, 2024 |
|~~WA-GUEST-OS-4.129_202403-02~~| April 9, 2024 | June 27, 2024 | |~~WA-GUEST-OS-4.128_202402-01~~| February 24, 2024 | June 1, 2024 | |~~WA-GUEST-OS-4.127_202401-01~~| January 22, 2024 | April 19, 2024 |
The September Guest OS released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-3.141_202407-01 | July 31, 2024 | Post 3.144 |
| WA-GUEST-OS-3.140_202406-01 | June 27, 2024 | Post 3.143 | | WA-GUEST-OS-3.139_202405-01 | June 1, 2024 | Post 3.142 |
-| WA-GUEST-OS-3.138_202404-01 | April 19, 2024 | Post 3.141 |
+|~~WA-GUEST-OS-3.138_202404-01~~| April 19, 2024 | July 31, 2024 |
|~~WA-GUEST-OS-3.137_202403-02~~| April 9, 2024 | June 27, 2024 | |~~WA-GUEST-OS-3.136_202402-01~~| February 24, 2024 | June 1, 2024 | |~~WA-GUEST-OS-3.135_202401-01~~| January 22, 2024 | April 19, 2024 |
The September Guest OS released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-2.153_202407-01 | July 31, 2024 | Post 2.156 |
| WA-GUEST-OS-2.152_202406-01 | June 27, 2024 | Post 2.155 | | WA-GUEST-OS-2.151_202405-01 | June 1, 2024 | Post 2.154 |
-| WA-GUEST-OS-2.150_202404-01 | April 19, 2024 | Post 2.153 |
+|~~WA-GUEST-OS-2.150_202404-01~~| April 19, 2024 | July 31, 2024 |
|~~WA-GUEST-OS-2.149_202403-02~~| April 9, 2024 | June 27, 2024 | |~~WA-GUEST-OS-2.148_202402-01~~| February 24, 2024 | June 1, 2024 | |~~WA-GUEST-OS-2.147_202401-01~~| January 22, 2024 | April 19, 2024 |
cloud-services Resource Health For Cloud Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/resource-health-for-cloud-services.md
Title: Resource Health for Cloud Services (Classic) description: This article talks about Resource Health Check (RHC) Support for Microsoft Azure Cloud Services (Classic) --++ Last updated 07/24/2024
cloud-shell Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/vnet/deployment.md
Fill out the form with the following information:
| **Nsg Name** | Enter the name of the NSG. The deployment creates this NSG and assigns an access rule to it. | | **Azure Container Instance OID** | Fill in the value from the prerequisite information that you gathered.<br>The example in this article uses `8fe7fd25-33fe-4f89-ade3-0e705fcf4370`. | | **Container Subnet Name** | Defaults to `cloudshellsubnet`. Enter the name of the subnet for your container. |
-| **Container Subnet Address Prefix** | The example in this article uses `10.1.0.0/16`, which provides 65,543 IP addresses for Cloud Shell instances. |
+| **Container Subnet Address Prefix** | The example in this article uses `10.0.1.0/24`, which provides 254 IP addresses for Cloud Shell instances. |
| **Relay Subnet Name** | Defaults to `relaysubnet`. Enter the name of the subnet that contains your relay. | | **Relay Subnet Address Prefix** | The example in this article uses `10.0.2.0/24`. | | **Storage Subnet Name** | Defaults to `storagesubnet`. Enter the name of the subnet that contains your storage. |
communication-services Ai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/ai.md
+
+ Title: AI in Azure Communication Services
+
+description: Learn about Communication Services AI concepts
++++ Last updated : 07/10/2024++++
+# Artificial intelligence (AI) overview
+
+Artificial intelligence (AI) technologies can be useful for a wide variety of communication experiences. This concept page summarizes availability of AI and AI-adjacent features in Azure Communication Services. AI features can be split into three categories:
+
+- **Accessors.** APIs that allow you to access Azure Communication data for the purposes of integrating your own separate transformations and bots.
+- **Transformers.** APIs that provide a built-in transformation of communication data using a machine learning or language model.
+- **Bots.** APIs that implement bots that directly communicate with end-users, typically blending structured programming with language models.
+
+Typical communication scenarios integrating these capabilities:
+
+- Transforming audio speech content into text transcriptions
+- Transforming a video feed to blur the user's background
+- Operating a chat or voice bot that responds to human conversation
+- Transforming a corpus of text chat and meeting transcriptions into summaries. This experience might involve a generative AI interface in which a user asks, "summarize all conversations between me and user Joe."
+
+## Messaging: SMS, Chat, Email, WhatsApp
+
+Azure Communication Services capabilities for asynchronous messaging share common patterns for integrating AI listed here.
+
+| Feature | Accessor | Transformer | Bot | Description |
+|--|--|--|--|--|
+| REST APIs and SDKs| ✅ | | | The messaging services center around REST APIs and server-oriented SDKs. You can use these SDKs to export content to an external datastore and attach a language model to summarize conversations. Or you can use the SDKs to integrate a bot that directly engages with human users. |
+| WhatsApp Message Analysis | | ✅ | | The Azure Communication Service messaging APIs for WhatsApp provide a built-in integration with Azure OpenAI that analyzes and annotates messages. This integration can detect the user’s language, recognize their intent, and extract key phrases. |
+| [Azure Bot – Chat Channel Integration](../quickstarts/chat/quickstart-botframework-integration.md) | | | ✅ | The Azure Communication Service chat system is directly integrated with Azure Bot services. This integration simplifies creating chat bots that engage with human users.|
+
+## Voice, Video, and Telephony
+
+The patterns for integrating AI into the voice and video system are summarized here.
+
+| Feature | Accessor | Transformer | Bot | Description |
+|--|--|--|--|--|
+| [Call Automation REST APIs and SDKs](../concepts/call-automation/call-automation.md) | ✅ | ✅ | | Call Automation APIs include both accessors and transformers, with REST APIs for playing audio files and recognizing a user’s response. The `recognize` APIs integrate Azure Bot Services to transform users’ audio content into text for easier processing by your service. The most common scenario for these APIs is implementing voice bots, sometimes called interactive voice response (IVR). |
+| [Microsoft Copilot Studio](https://learn.microsoft.com/microsoft-copilot-studio/voice-overview) | | ✅ | ✅ | Copilot studio is directly integrated with Azure Communication Services telephony. This integration is designed for voice bots and IVR. |
+| [Azure Portal Copilot](https://learn.microsoft.com/microsoft-copilot-studio/voice-overview) | | ✅ | ✅ | Copilot studio is directly integrated with Azure Communication Services telephony. This integration is designed for voice bots and IVR. |
+| [Client Raw Audio and Video](../concepts/voice-video-calling/media-access.md) | ✅ | | | The Calling client SDK provides APIs for accessing and modifying the raw audio and video feed. An example scenario is taking the video feed, detecting the human speaker and their background, and customizing that background. |
+| [Client Background effects](../quickstarts/voice-video-calling/get-started-video-effects.md?pivots=platform-web)| | ✅ | | The Calling client SDK provides APIs for blurring or replacing a user’s background. |
+| [Client Captions](../concepts/voice-video-calling/closed-captions.md) | | ✅ | | The Calling client SDK provides APIs for real-time closed captions. These APIs internally integrate Azure Cognitive Services to transform audio content from the call into text in real time. |
+| [Client Noise Enhancement and Effects](../tutorials/audio-quality-enhancements/add-noise-supression.md?pivots=platform-web) | | ✅ | | The Calling client SDK integrates a [DeepVQE](https://arxiv.org/abs/2306.03177) machine learning model to improve audio quality through echo cancellation, noise suppression, and dereverberation. This transformation is toggled on and off using the client SDK. |
confidential-computing Confidential Computing Enclaves https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-computing-enclaves.md
Title: Build with SGX enclaves - Azure Virtual Machines description: Learn about Intel SGX hardware to enable your confidential computing workloads. -+ Last updated 11/01/2021
confidential-computing Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-portal.md
Title: Quickstart - Create Intel SGX VM in the Azure Portal description: Get started with your deployments by learning how to quickly create an Intel SGX VM in the Azure Portal -+ Last updated 11/1/2021
cosmos-db Troubleshoot Common Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/troubleshoot-common-issues.md
Title: Troubleshoot common errors in the Azure Cosmos DB for Apache Cassandra description: This article discusses common issues in the Azure Cosmos DB for Apache Cassandra and how to troubleshoot them. -+ Last updated 03/02/2021
cosmos-db Quickstart Rag Chatbot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gen-ai/quickstart-rag-chatbot.md
Title: Quickstart - Build a RAG Chatbot description: Learn how to build a RAG chatbot in Python -+ Last updated 06/26/2024
cosmos-db Rag https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gen-ai/rag.md
Title: Retrieval Augmented Generation (RAG) in Azure Cosmos DB description: Learn about Retrieval Augmented Generation (RAG) in Azure Cosmos DB -+ Last updated 07/09/2024
cosmos-db Change Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/change-log.md
description: Notifies our customers of any minor/medium updates that were pushed
-+ Last updated 07/30/2024
cosmos-db Distribute Throughput Across Partitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/distribute-throughput-across-partitions.md
description: Learn how to redistribute throughput across partitions
-+
cosmos-db Estimate Ru Capacity Planner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/estimate-ru-capacity-planner.md
Title: Estimate costs using the Azure Cosmos DB capacity planner - API for MongoDB description: The Azure Cosmos DB capacity planner allows you to estimate the throughput (RU/s) required and cost for your workload. This article describes how to use the capacity planner to estimate the throughput and cost required when using Azure Cosmos DB for MongoDB. -+ Last updated 06/20/2024
cosmos-db Feature Support 50 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/feature-support-50.md
description: Learn about Azure Cosmos DB for MongoDB 5.0 server version supporte
-+ Last updated 04/24/2024
cosmos-db Feature Support 60 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/feature-support-60.md
description: Learn about Azure Cosmos DB for MongoDB 6.0 server version supporte
-+ Last updated 04/24/2024
cosmos-db Feature Support 70 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/feature-support-70.md
description: Learn about Azure Cosmos DB for MongoDB 7.0 server version supporte
-+ Last updated 07/30/2024
cosmos-db Prevent Rate Limiting Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/prevent-rate-limiting-errors.md
Title: Prevent rate-limiting errors for Azure Cosmos DB for MongoDB operations. description: Learn how to prevent your Azure Cosmos DB for MongoDB operations from hitting rate limiting errors with the SSR (server-side retry) feature.-+
cosmos-db Programmatic Database Migration Assistant Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/programmatic-database-migration-assistant-legacy.md
description: This doc provides an overview of the Database Migration Assistant legacy utility. -+ Last updated 04/20/2023
cosmos-db Troubleshoot Query Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/troubleshoot-query-performance.md
Title: Troubleshoot query issues when using the Azure Cosmos DB for MongoDB description: Learn how to identify, diagnose, and troubleshoot Azure Cosmos DB's API for MongoDB query issues.-+ Last updated 04/02/2024
cosmos-db Tutorial Mongotools Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/tutorial-mongotools-cosmos-db.md
description: Learn how MongoDB native tools can be used to migrate small dataset
-+ Last updated 08/26/2021
cosmos-db Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/compatibility.md
description: Review Azure Cosmos DB for MongoDB vCore supported features and syn
-+ Last updated 10/21/2023
Below are the list of operators currently supported on Azure Cosmos DB for Mongo
<tr><td><code>$text</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr> <tr><td><code>$where</code></td><td><img src="media/compatibility/no-icon.svg" alt="No"></td><td><img src="media/compatibility/no-icon.svg" alt="No"></td><td><img src="media/compatibility/no-icon.svg" alt="No"></td></tr>
-<tr><td rowspan="11">Geospatial Operators</td><td><code>$geoIntersects</code></td><td rowspan="11" colspan="3"><img src="media/compatibility/yes-icon.svg" alt="Yes">In Preview*</td></tr>
-<tr><td><code>$geoWithin</code></td></tr>
-<tr><td><code>$box</code></td></tr>
-<tr><td><code>$center</code></td></tr>
-<tr><td><code>$centerSphere</code></td></tr>
-<tr><td><code>$geometry</code></td></tr>
-<tr><td><code>$maxDistance</code></td></tr>
-<tr><td><code>$minDistance</code></td></tr>
-<tr><td><code>$polygon</code></td></tr>
-<tr><td><code>$near</code></td></tr>
-<tr><td><code>$nearSphere</code></td></tr>
+<tr><td rowspan="11">Geospatial Operators</td><td><code>$geoIntersects</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr>
+<tr><td><code>$geoWithin</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr>
+<tr><td><code>$box</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr>
+<tr><td><code>$center</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr>
+<tr><td><code>$centerSphere</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr>
+<tr><td><code>$geometry</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr>
+<tr><td><code>$maxDistance</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr>
+<tr><td><code>$minDistance</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr>
+<tr><td><code>$polygon</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr>
+<tr><td><code>$near</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr>
+<tr><td><code>$nearSphere</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr>
<tr><td rowspan="3">Array Query Operators</td><td><code>$all</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr> <tr><td><code>$elemMatch</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr>
Azure Cosmos DB for MongoDB vCore supports the following indexes and index prope
<tr><td>Multikey Index</td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr> <tr><td>Text Index</td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr> <tr><td>Wildcard Index</td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr>
-<tr><td>Geospatial Index</td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">In Preview*</td></tr>
+<tr><td>Geospatial Index</td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr>
<tr><td>Hashed Index</td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr> <tr><td>Vector Index (only available in Cosmos DB)</td><td><img src="medi>vector search</a></td></tr> </table>
cosmos-db Geospatial Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/geospatial-support.md
+
+ Title: Support for Geospatial Queries
+
+description: Introducing support for geospatial queries on vCore based Azure Cosmos DB for MongoDB.
++++++ Last updated : 07/31/2024++
+# Support for Geospatial Queries
++
+Geospatial data can now be stored and queried using vCore-based Azure Cosmos DB for MongoDB. This enhancement provides powerful tools to manage and analyze spatial data, enabling a wide range of applications such as real-time location tracking, route optimization, and spatial analytics.
+
+Here's a quick overview of the geospatial commands and operators now supported:
+
+## Geospatial Query Operators
+
+### **$geoIntersects**
+Selects documents where a specified geometry intersects with the documents' geometry. Useful for finding documents that share any portion of space with a given geometry.
+
+ ```json
+ db.collection.find({
+ location: {
+ $geoIntersects: {
+ $geometry: {
+ type: "<GeoJSON object type>",
+ coordinates: [[[...], [...], [...], [...]]]
+ }
+ }
+ }
+ })
+ ```
+
+### **$geoWithin**
+Selects documents with geospatial data that exists entirely within a specified shape. This operator is used to find documents within a defined area.
+
+ ```json
+ db.collection.find({
+ location: {
+ $geoWithin: {
+ $geometry: {
+ type: "Polygon",
+ coordinates: [[[...], [...], [...], [...]]]
+ }
+ }
+ }
+ })
+ ```
+
+### **$box**
+Defines a rectangular area using two coordinate pairs (bottom-left and top-right corners). Used with the `$geoWithin` operator to find documents within this rectangle. For example, finding all locations within a rectangular region on a map.
+
+ ```json
+ db.collection.find({
+ location: {
+ $geoWithin: {
+ $box: [[lowerLeftLong, lowerLeftLat], [upperRightLong, upperRightLat]]
+ }
+ }
+ })
+ ```
+
+### **$center**
+Defines a circular area using a center point and a radius, specified in the same units as the coordinate system (unlike `$centerSphere`, which uses radians). Used with the `$geoWithin` operator to find documents within this circle.
+
+ ```json
+ db.collection.find({
+ location: {
+ $geoWithin: {
+ $center: [[longitude, latitude], radius]
+ }
+ }
+ })
+ ```
+
+### **$centerSphere**
+Similar to `$center`, but defines a spherical area using a center point and a radius in radians. Useful for spherical geometry calculations.
+
+ ```json
+ db.collection.find({
+ location: {
+ $geoWithin: {
+ $centerSphere: [[longitude, latitude], radius]
+ }
+ }
+ })
+ ```
+
+### **$geometry**
+Specifies a GeoJSON object to define a geometry. Used with geospatial operators to perform queries based on complex shapes.
+
+ ```json
+ db.collection.find({
+ location: {
+ $geoIntersects: {
+ $geometry: {
+ type: "<GeoJSON object type>",
+ coordinates: [longitude, latitude]
+ }
+ }
+ }
+ })
+ ```
+
+### **$maxDistance**
+Specifies the maximum distance from a point for a geospatial query. Used with `$near` and `$nearSphere` operators. For example, finding all locations within 2 km of a given point.
+
+ ```json
+ db.collection.find({
+ location: {
+ $near: {
+ $geometry: {
+ type: "Point",
+ coordinates: [longitude, latitude]
+ },
+ $maxDistance: distance
+ }
+ }
+ })
+ ```
+
+### **$minDistance**
+Specifies the minimum distance from a point for a geospatial query. Used with `$near` and `$nearSphere` operators.
+
+ ```json
+ db.collection.find({
+ location: {
+ $near: {
+ $geometry: {
+ type: "Point",
+ coordinates: [longitude, latitude]
+ },
+ $minDistance: distance
+ }
+ }
+ })
+ ```
+
+### **$polygon**
+Defines a polygon using an array of coordinate pairs. Used with the `$geoWithin` operator to find documents within this polygon.
+
+ ```json
+ db.collection.find({
+ location: {
+ $geoWithin: {
+ $geometry: {
+ type: "Polygon",
+ coordinates: [[[...], [...], [...], [...]]]
+ }
+ }
+ }
+ })
+ ```
+
+### **$near**
+Finds documents that are near a specified point. Returns documents sorted by distance from the point. For example, finding the nearest restaurants to a user's location.
+
+ ```json
+ db.collection.find({
+ location: {
+ $near: {
+ $geometry: {
+ type: "Point",
+ coordinates: [longitude, latitude]
+ },
+ $maxDistance: distance
+ }
+ }
+ })
+ ```
++
+### **$nearSphere**
+Similar to `$near`, but performs calculations on a spherical surface. Useful for more accurate distance calculations on the Earth's surface.
+
+ ```json
+ db.collection.find({
+ location: {
+ $nearSphere: {
+ $geometry: {
+ type: "Point",
+ coordinates: [longitude, latitude]
+ },
+ $maxDistance: distance
+ }
+ }
+ })
+ ```
+
+## Geospatial Aggregation Stage
+
+### **$geoNear**
+Performs a geospatial query to return documents sorted by distance from a specified point. Can include additional query criteria and return distance information.
+
+ ```json
+ db.collection.aggregate([
+ {
+ $geoNear: {
+ near: {
+ type: "Point",
+ coordinates: [longitude, latitude]
+ },
+      distanceField: "distance",
+      key: "location",
+      spherical: true
+ }
+ }
+ ])
+ ```
++
+## Considerations and Unsupported Capabilities
++
+* Currently, querying with a single-ringed GeoJSON polygon whose area exceeds a single hemisphere isn't supported. In such cases, Mongo vCore returns the following error message:
+ ```json
+ Error: Custom CRS for big polygon is not supported yet.
+ ```
+* A composite index using a regular index and geospatial index isn't allowed. For example:
+ ```json
+ db.collection.createIndex({a: "2d", b: 1});
+ Error: Compound 2d indexes are not supported yet
+ ```
+* Polygons with holes are currently not supported with `$geoWithin` queries. Although inserting a polygon with holes isn't restricted, a `$geoWithin` query against it fails with the following error message:
+
+ ```json
+ Error: $geoWithin currently doesn't support polygons with holes
+ ```
+* The `key` field is always required in the `$geoNear` aggregation stage. If the `key` field is missing, the following error occurs:
+
+ ```json
+ Error: $geoNear requires a 'key' option as a String
+ ```
+* The `$geoNear` stage and the `$near` and `$nearSphere` operators don't have strict index requirements, so these queries don't fail if an index is missing.
+
+## Related content
+
+- Read more about [feature compatibility with MongoDB.](compatibility.md)
+- Review options for [migrating from MongoDB to Azure Cosmos DB for MongoDB vCore.](how-to-migrate-native-tools.md)
cosmos-db How To Assess Plan Migration Readiness https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/how-to-assess-plan-migration-readiness.md
description: Assess an existing MongoDB installation to determine if it's suitab
-+ - ignite-2023
cosmos-db Migration Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/migration-options.md
description: Review various options to migrate your data from other MongoDB sour
-+ Last updated 11/17/2023
cosmos-db Bulk Executor Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/bulk-executor-java.md
Title: Use bulk executor Java library in Azure Cosmos DB to perform bulk import
description: Bulk import and update Azure Cosmos DB documents using bulk executor Java library -+ ms.devlang: java
cosmos-db Client Metrics Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/client-metrics-java.md
description: Learn how to consume Micrometer metrics in the Java SDK for Azure C
-+ Last updated 12/14/2023
cosmos-db How To Delete By Partition Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-delete-by-partition-key.md
Title: Delete items by partition key value using the Azure Cosmos DB SDK (previ
description: Learn how to delete items by partition key value using the Azure Cosmos DB SDKs -+ Last updated 05/23/2023
cosmos-db How To Migrate From Bulk Executor Library Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-migrate-from-bulk-executor-library-java.md
Title: Migrate from the bulk executor library to the bulk support in Azure Cosmos DB Java V4 SDK description: Learn how to migrate your application from using the bulk executor library to the bulk support in Azure Cosmos DB Java V4 SDK -+ Last updated 05/13/2022
cosmos-db Manage With Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/manage-with-terraform.md
Title: Create and manage Azure Cosmos DB with terraform description: Use terraform to create and configure Azure Cosmos DB for NoSQL -+
cosmos-db Materialized Views https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/materialized-views.md
description: Learn how to efficiently query a base container by using predefined
-+
cosmos-db Migrate Relational Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/migrate-relational-data.md
description: Learn how to perform a complex data migration for one-to-few relationships from a relational database into Azure Cosmos DB for NoSQL. -+ ms.devlang: python
cosmos-db Multi Tenancy Vector Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/multi-tenancy-vector-search.md
Title: Multitenancy in Azure Cosmos DB description: Learn concepts for building multitenant gen-ai apps in Azure Cosmos DB -+ Last updated 06/26/2024
cosmos-db Query Metrics Performance Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query-metrics-performance-python.md
Title: Get NoSQL query performance and execution metrics in Azure Cosmos DB using Python SDK description: Learn how to retrieve NoSQL query execution metrics and profile NoSQL query performance of Azure Cosmos DB requests. -+ Last updated 05/15/2023
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-dotnet.md
Parse the paginated results of the query by looping through each page of results
## Related content -- [JavaScript/Node.js Quickstart](quickstart-nodejs.md)
+- [Node.js Quickstart](quickstart-nodejs.md)
- [Java Quickstart](quickstart-java.md) - [Python Quickstart](quickstart-python.md) - [Go Quickstart](quickstart-go.md)
cosmos-db Quickstart Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-go.md
Parse the paginated results of the query by looping through each page of results
## Related content - [.NET Quickstart](quickstart-dotnet.md)-- [JavaScript/Node.js Quickstart](quickstart-nodejs.md)
+- [Node.js Quickstart](quickstart-nodejs.md)
- [Java Quickstart](quickstart-java.md) - [Python Quickstart](quickstart-python.md)
cosmos-db Quickstart Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-java.md
Fetch all of the results of the query using `repository.getItemsByCategory`. Loo
## Related content - [.NET Quickstart](quickstart-dotnet.md)-- [JavaScript/Node.js Quickstart](quickstart-nodejs.md)
+- [Node.js Quickstart](quickstart-nodejs.md)
- [java Quickstart](quickstart-java.md) - [Go Quickstart](quickstart-go.md)
cosmos-db Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-python.md
Loop through the results of the query.
## Related content - [.NET Quickstart](quickstart-dotnet.md)-- [JavaScript/Node.js Quickstart](quickstart-nodejs.md)
+- [Node.js Quickstart](quickstart-nodejs.md)
- [Java Quickstart](quickstart-java.md) - [Go Quickstart](quickstart-go.md)
cosmos-db Quickstart Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-terraform.md
tags: azure-resource-manager, terraform -+ Last updated 09/22/2022
cosmos-db Samples Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/samples-terraform.md
Title: Terraform samples for Azure Cosmos DB for NoSQL description: Use Terraform to create and configure Azure Cosmos DB for NoSQL. -+
cosmos-db Throughput Control Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/throughput-control-spark.md
Title: 'Azure Cosmos DB Spark connector: Throughput control' description: Learn how you can control throughput for bulk data movements in the Azure Cosmos DB Spark connector. -+ Last updated 06/22/2022
cosmos-db Troubleshoot Java Sdk V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/troubleshoot-java-sdk-v4.md
Title: Diagnose and troubleshoot Azure Cosmos DB Java SDK v4 description: Use features like client-side logging and other third-party tools to identify, diagnose, and troubleshoot Azure Cosmos DB issues in Java SDK v4. -+ Last updated 04/01/2022 ms.devlang: java
cosmos-db Tutorial Deploy App Bicep Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/tutorial-deploy-app-bicep-aks.md
Title: 'Tutorial: Deploy an ASP.NET web application using Azure Cosmos DB for NoSQL, managed identity, and Azure Kubernetes Service via Bicep' description: Learn how to deploy an ASP.NET MVC web application with Azure Cosmos DB for NoSQL, managed identity, and Azure Kubernetes Service by using Bicep.-+
cosmos-db Vector Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/vector-search.md
The container vector policy can be described as JSON objects. Here are two examp
| **`quantizedFlat`** | Quantizes (compresses) vectors before storing on the index. This can improve latency and throughput at the cost of a small amount of accuracy. | 4096 |
| **`diskANN`** | Creates an index based on DiskANN for fast and efficient approximate search. | 4096 |
+> [!NOTE]
+> The `quantizedFlat` and `diskANN` indexes require at least 1,000 vectors to be inserted. This is to ensure accuracy of the quantization process. If there are fewer than 1,000 vectors, a full scan is executed instead, which leads to higher RU charges for a vector search query.
+ A few points to note: - The `flat` and `quantizedFlat` index types use Azure Cosmos DB's index to store and read each vector when performing a vector search. Vector searches with a `flat` index are brute-force searches and produce 100% accuracy or recall. That is, it's guaranteed to find the most similar vectors in the dataset. However, there's a limitation of `505` dimensions for vectors on a flat index.
Here are examples of valid vector index policies:
"excludedPaths": [ { "path": "/_etag/?"
+ },
+ {
+ "path": "/vector1"
} ], "vectorIndexes": [
Here are examples of valid vector index policies:
"excludedPaths": [ { "path": "/_etag/?"
+ },
+ {
+ "path": "/vector1"
+ },
+ {
+ "path": "/vector2"
} ], "vectorIndexes": [
Here are examples of valid vector index policies:
] } ```
-> [!NOTE]
-> The Quantized Flat and DiskANN indexes requires that at least 1,000 vectors to be inserted. This is to ensure accuracy of the quantization process. If there are fewer than 1,000 vectors, a full scan is executed instead, and will lead to higher RU charges for a vector search query.
+
+> [!IMPORTANT]
+> The vector path should be added to the "excludedPaths" section of the indexing policy to ensure optimized performance for insertion. Not adding the vector path to "excludedPaths" results in higher RU charges and latency for vector insertions.
> [!IMPORTANT] > At this time in the vector search preview, don't use nested paths or wildcard characters in the path of the vector policy. Replace operations on the vector policy are currently not supported. + ## Perform vector search with queries using VectorDistance() Once you've created a container with the desired vector policy and inserted vector data into the container, you can conduct a vector search using the [Vector Distance](query/vectordistance.md) system function in a query. An example of a NoSQL query that projects the similarity score as the alias `SimilarityScore`, and sorts in order of most-similar to least-similar:
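A minimal sketch of such a query, issued through the JavaScript SDK; the database and container names, the vector path `/contentVector`, and the query embedding are illustrative assumptions rather than values taken from this article:

```javascript
import { CosmosClient } from "@azure/cosmos";

async function searchSimilarItems() {
  // Illustrative names; replace with your own account, database, and container.
  const client = new CosmosClient(process.env.COSMOS_CONNECTION_STRING);
  const container = client.database("mydb").container("mycontainer");

  // The embedding to compare against, with the same dimensionality as the
  // vectors stored at /contentVector.
  const queryEmbedding = [0.02, -0.17, 0.45 /* ...remaining dimensions... */];

  const { resources } = await container.items
    .query({
      // Projects the similarity score as SimilarityScore and orders results
      // from most similar to least similar.
      query: `SELECT TOP 10 c.title, VectorDistance(c.contentVector, @embedding) AS SimilarityScore
              FROM c
              ORDER BY VectorDistance(c.contentVector, @embedding)`,
      parameters: [{ name: "@embedding", value: queryEmbedding }]
    })
    .fetchAll();

  console.log(resources);
}

searchSimilarItems().catch(console.error);
```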
cosmos-db Partial Document Update Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/partial-document-update-getting-started.md
description: Learn how to use the partial document update feature with the .NET, Java, and Node SDKs for Azure Cosmos DB for NoSQL. -+
cosmos-db Concepts Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-customer-managed-keys.md
Title: Concepts of customer-managed keys in Azure Cosmos DB for PostgreSQL.
description: Concepts of customer-managed keys. -+ Last updated 04/06/2023
cosmos-db How To Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/how-to-customer-managed-keys.md
Title: How to enable encryption with customer-managed keys in Azure Cosmos DB fo
description: Steps to enable data encryption with customer-managed keys. -+ Last updated 01/03/2024
cosmos-db How To Enable Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/how-to-enable-audit.md
Title: Audit logging - Azure Cosmos DB for PostgreSQL
description: How to enable pgAudit logging in Azure Cosmos DB for PostgreSQL. -+ Last updated 10/01/2023
cosmos-db Howto Ingest Azure Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-ingest-azure-data-factory.md
Title: Using Azure Data Factory for data ingestion - Azure Cosmos DB for Postgre
description: See a step-by-step guide for using Azure Data Factory for ingestion on Azure Cosmos DB for PostgreSQL. -+ Last updated 12/13/2023
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/autoscale.md
description: Use Azure CLI to create a API for Table account and table with auto
-+ Last updated 06/22/2022
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/create.md
description: Create a API for Table table for Azure Cosmos DB
-+
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/lock.md
description: Use Azure CLI to create, list, show properties for, and delete reso
-+ Last updated 06/16/2022
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/serverless.md
description: Use Azure CLI to create a API for Table serverless account and tabl
-+ Last updated 06/16/2022
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/throughput.md
description: Azure CLI scripts for throughput (RU/s) operations for Azure Cosmos
-+
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/table/autoscale.md
Title: PowerShell script to create a table with autoscale in Azure Cosmos DB for Table description: PowerShell script to create a table with autoscale in Azure Cosmos DB for Table -+
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/table/create.md
Title: PowerShell script to create a table in Azure Cosmos DB for Table description: Learn how to use a PowerShell script to update the throughput for a database or a container in Azure Cosmos DB for Table -+
cosmos-db List Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/table/list-get.md
Title: PowerShell script to list and get Azure Cosmos DB for Table operations description: Azure PowerShell script - Azure Cosmos DB list and get operations for API for Table -+ Last updated 07/31/2020
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/table/lock.md
description: Create resource lock for Azure Cosmos DB Table API table
-+ Last updated 06/12/2020
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/table/throughput.md
Title: PowerShell scripts for throughput (RU/s) operations for Azure Cosmos DB for Table description: PowerShell scripts for throughput (RU/s) operations for Azure Cosmos DB for Table -+
cosmos-db Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/cli-samples.md
Title: Azure CLI Samples for Azure Cosmos DB for Table description: Azure CLI Samples for Azure Cosmos DB for Table -+ Last updated 08/19/2022
cosmos-db Find Request Unit Charge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/find-request-unit-charge.md
Title: Find request unit (RU) charge for a API for Table queries in Azure Cosmos DB description: Learn how to find the request unit (RU) charge for API for Table queries executed against an Azure Cosmos DB container. You can use the Azure portal, .NET, Java, Python, and Node.js languages to find the RU charge. -+ Last updated 10/14/2020
cosmos-db How To Create Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-create-account.md
Title: Create an Azure Cosmos DB for Table account
description: Learn how to create a new Azure Cosmos DB for Table account -+ ms.devlang: csharp
cosmos-db How To Create Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-create-container.md
Title: Create a container in Azure Cosmos DB for Table description: Learn how to create a container in Azure Cosmos DB for Table by using Azure portal, .NET, Java, Python, Node.js, and other SDKs. -+
cosmos-db How To Dotnet Create Item https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-dotnet-create-item.md
Title: Create an item in Azure Cosmos DB for Table using .NET
description: Learn how to create an item in your Azure Cosmos DB for Table account using the .NET SDK -+ ms.devlang: csharp
cosmos-db How To Dotnet Create Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-dotnet-create-table.md
Title: Create a table in Azure Cosmos DB for Table using .NET
description: Learn how to create a table in your Azure Cosmos DB for Table account using the .NET SDK -+ ms.devlang: csharp
cosmos-db How To Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-dotnet-get-started.md
Title: Get started with Azure Cosmos DB for Table using .NET
description: Get started developing a .NET application that works with Azure Cosmos DB for Table. This article helps you learn how to set up a project and configure access to an Azure Cosmos DB for Table endpoint. -+ ms.devlang: csharp
cosmos-db How To Dotnet Read Item https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-dotnet-read-item.md
Title: Read an item in Azure Cosmos DB for Table using .NET
description: Learn how to read an item in your Azure Cosmos DB for Table account using the .NET SDK -+ ms.devlang: csharp
cosmos-db How To Use C Plus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-use-c-plus.md
Title: Use Azure Table Storage and Azure Cosmos DB for Table with C++ description: Store structured data in the cloud using Azure Table storage or the Azure Cosmos DB for Table by using C++.-+ ms.devlang: cpp
cosmos-db How To Use Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-use-go.md
Title: Use the Azure Table client library for Go description: Store structured data in the cloud using the Azure Table client library for Go.-+ ms.devlang: golang
cosmos-db How To Use Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-use-java.md
Title: Use the Azure Tables client library for Java description: Store structured data in the cloud using the Azure Tables client library for Java.-+ ms.devlang: java
cosmos-db How To Use Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-use-nodejs.md
Title: Use Azure Table storage or Azure Cosmos DB for Table from Node.js description: Store structured data in the cloud using Azure Tables client library for Node.js.-+ ms.devlang: javascript
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/introduction.md
description: Use Azure Cosmos DB for Table to store, manage, and query massive volumes of key-value typed NoSQL data. -+ Last updated 02/28/2023
cosmos-db Manage With Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/manage-with-bicep.md
Title: Create and manage Azure Cosmos DB for Table with Bicep description: Use Bicep to create and configure Azure Cosmos DB for Table. -+
cosmos-db Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/powershell-samples.md
Title: Azure PowerShell samples for Azure Cosmos DB for Table description: Get the Azure PowerShell samples to perform common tasks in Azure Cosmos DB for Table -+
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/quickstart-dotnet.md
Title: Quickstart - Azure Cosmos DB for Table for .NET
description: Learn how to build a .NET app to manage Azure Cosmos DB for Table resources in this quickstart. -+ ms.devlang: csharp
cosmos-db Quickstart Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/quickstart-java.md
Title: Use the API for Table and Java to build an app - Azure Cosmos DB
description: This quickstart shows how to use the Azure Cosmos DB for Table to create an application with the Azure portal and Java -+ ms.devlang: java
cosmos-db Quickstart Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/quickstart-nodejs.md
Title: 'Quickstart: API for Table with Node.js - Azure Cosmos DB'
description: This quickstart shows how to use the Azure Cosmos DB for Table to create an application with the Azure portal and Node.js -+ ms.devlang: javascript
cosmos-db Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/quickstart-python.md
Title: 'Quickstart: API for Table with Python - Azure Cosmos DB' description: This quickstart shows how to access the Azure Cosmos DB for Table from a Python application using the Azure Data Tables SDK -+ ms.devlang: python
cosmos-db Resource Manager Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/resource-manager-templates.md
Title: Resource Manager templates for Azure Cosmos DB for Table description: Use Azure Resource Manager templates to create and configure Azure Cosmos DB for Table. -+
cosmos-db Samples Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/samples-dotnet.md
Title: Examples for Azure Cosmos DB for Table SDK for .NET
description: Find .NET SDK examples on GitHub for common tasks using the Azure Cosmos DB for Table. -+ ms.devlang: csharp
cosmos-db Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/support.md
Title: Azure Table Storage support in Azure Cosmos DB description: Learn how Azure Cosmos DB for Table and Azure Table Storage work together by sharing the same table data model and operations.-+ Last updated 03/07/2023
cosmos-db Tutorial Global Distribution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/tutorial-global-distribution.md
Title: Azure Cosmos DB global distribution tutorial for API for Table
description: Learn how global distribution works in Azure Cosmos DB for Table accounts and how to configure the preferred list of regions -+ Last updated 01/30/2020
cosmos-db Tutorial Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/tutorial-query.md
Title: 'Tutorial: Query Azure Cosmos DB by using the API for Table'
description: Learn how to query data stored in the Azure Cosmos DB for Table account by using OData filters and LINQ queries. -+ Last updated 03/14/2023
cost-management-billing Change Credit Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/change-credit-card.md
Previously updated : 09/13/2023 Last updated : 07/31/2024
If your payment method is being used by an MCA billing profile, the following me
To detach a payment method, a list of conditions must be met. If any conditions aren't met, instructions appear explaining how to meet the condition. A link also appears that takes you to the location where you can resolve the condition.
-When all the conditions are all satisfied, you can detach the payment method from the billing profile.
+When all the conditions are fully satisfied, you can detach the payment method from the billing profile.
> [!NOTE] > When the default payment method is detached, the billing profile is put into an _inactive_ state. Anything deleted in this process will not be able to be recovered. After a billing profile is set to inactive, you must sign up for a new Azure subscription to create new resources.
+#### Detach payment method errors
+
+There are several reasons why trying to detach a payment method might fail. If you're having problems trying to detach (remove) a payment method, it's most likely caused by one of the following reasons.
+
+##### Outstanding charges (past due charges)
+
+You can view your outstanding charges by navigating to **Cost Management + Billing** > select a billing account > under Billing, select **Invoices**. Then, in the list of invoices, you can view the **Status**. Invoices with **Past Due** status must be paid.
+
+Here's an example of past due charges.
++
+After you pay outstanding charges, you can detach your payment method.
+
+##### Recurring charges set to auto renew
+You can view recurring charges on the Recurring charges page. Navigate to **Cost Management + Billing** > select your billing account > under Billing, select **Recurring charges**. To stop charges from automatically renewing, on the Recurring charges page, select a charge. Then, on the right side of the row, select the ellipsis symbol (**…**) and select **Cancel**.
+
+Here's an example of the Recurring charges page with items that must be canceled.
++
+Examples of recurring charges include:
+
+- Azure support agreements
+- Active Azure subscriptions
+- Reservations set to auto renew
+- Savings plans set to auto renew
+
+After all recurring charges are removed, you can detach your payment method.
+
+##### Pending charges
+
+You can't detach your payment method if there are any pending charges. In the Azure portal, pending charges appear with **Due on *date*** status on the Cost Management + Billing > Billing > Invoices page. Let's look at a typical pending charges example.
+
+1. Assume that a billing cycle begins on June 1.
+2. You use Azure services from June 1 to June 10.
+3. You cancel your subscription on June 10.
+4. You pay your invoice on June 12 for the month of May and are paid in full.
+5. However, you still have pending charges for June 1 to June 10.
+
+In this example, you aren't billed for your June usage until the following month (August). So, you can't detach your payment method until you pay the invoice for June, which isn't available until August.
+
+Here's an example of a pending charge.
++
+After you pay all pending charges, you can detach your payment method.
+ #### To detach a payment method 1. In the Delete a payment method area, select the **Detach the current payment method** link. 1. If all conditions are met, select **Detach**. Otherwise, continue to the next step. 1. If Detach is unavailable, a list of conditions is shown. Take the actions listed. Select the link shown in the Detach the default payment method area. Here's an example of a corrective action that explains the actions you need to take. :::image type="content" source="./media/change-credit-card/azure-subscriptions.png" alt-text="Example screenshot showing a corrective action needed to detach a payment method for MCA." :::
-1. When you select the corrective action link, you're redirected to the Azure page where you take action. Take whatever correction action is needed.
+1. When you select the corrective action link, you get redirected to the Azure page where you take action. Take whatever correction action is needed.
1. If necessary, complete all other corrective actions. 1. Navigate back to **Cost Management + Billing** > **Billing profiles** > **Payment methods**. Select **Detach**. At the bottom of the Detach the default payment method page, select **Detach**.
If your payment method is in use by a subscription, do the following steps.
1. In the Delete a payment method area, select **Delete** if all conditions are met. If Delete is unavailable, continue to the next step. 1. A list of conditions is shown. Take the actions listed. Select the link shown in the Delete a payment method area. :::image type="content" source="./media/change-credit-card/payment-method-in-use-mosp.png" alt-text="Example screenshot showing that a payment method is in use by a pay-as-you-go subscription." :::
-1. When you select the corrective action link, you're redirected to the Azure page where you take action. Take whatever correction action is needed.
+1. When you select the corrective action link, you get redirected to the Azure page where you take action. Take whatever correction action is needed.
1. If necessary, complete all other corrective actions. 1. Navigate back to **Cost Management + Billing** > **Billing profiles** > **Payment methods** and delete the payment method.
The following sections answer commonly asked questions about changing your credi
### Why do I keep getting a "session has expired" error message?
-If you get the `Your login session has expired. Please click here to log back in` error message even if you've already logged out and back in, try again with a private browsing session.
+If you already tried signing out and back in, yet you get the error message `Your login session has expired. Please click here to log back in`, try using a private browsing session.
### How do I use a different card for each subscription?
cost-management-billing Analyze Unexpected Charges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/analyze-unexpected-charges.md
Try the following steps:
- Verify that you have the Owner, Contributor, or Cost Management Contributor role on the subscription. - If you got an error message indicating that you reached the limit of five alerts per subscription, consider editing an existing anomaly alert rule. Add yourself as a recipient instead of creating a new rule in case you exhausted the limit.
+- Anomaly alerts are currently available only in the Azure public cloud. If you are using a government cloud or any of the sovereign clouds, this service is not yet available.
+
+### How can I automate the creation of an anomaly alert rule?
+
+You can automate the creation of anomaly alert rules using the [Scheduled Action API](/rest/api/cost-management/scheduled-actions/create-or-update-by-scope?view=rest-cost-management-2023-11-01&tabs=HTTP), specifying the scheduled action kind as **`InsightAlert`**.
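For example, a rule could be created with a direct REST call along the following lines. This is a hedged sketch only: the rule name, notification details, schedule values, and the built-in anomaly view ID are assumptions, and the request body should be verified against the linked Scheduled Action API reference before use.

```javascript
// Hedged sketch: create an anomaly alert rule at subscription scope by calling
// the Scheduled Actions API with kind "InsightAlert". Property names and the
// anomaly view ID below are assumptions to verify against the API reference.
async function createAnomalyAlertRule(subscriptionId, accessToken) {
  const scope = `subscriptions/${subscriptionId}`;
  const ruleName = "dailyAnomalyAlert"; // illustrative name
  const url = `https://management.azure.com/${scope}/providers/Microsoft.CostManagement/scheduledActions/${ruleName}?api-version=2023-11-01`;

  const body = {
    kind: "InsightAlert",
    properties: {
      displayName: "Daily anomaly alert",
      status: "Enabled",
      // Assumed built-in anomaly view for the subscription scope.
      viewId: `/${scope}/providers/Microsoft.CostManagement/views/ms:DailyAnomalyByResourceGroup`,
      schedule: {
        frequency: "Daily",
        startDate: "2024-08-01T00:00:00Z",
        endDate: "2025-08-01T00:00:00Z"
      },
      notification: {
        to: ["finops@contoso.com"],
        subject: "Cost anomaly detected"
      }
    }
  };

  const response = await fetch(url, {
    method: "PUT",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/json"
    },
    body: JSON.stringify(body)
  });
  return response.json();
}
```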
+ ## Get help to identify charges If you used the preceding strategies and you still don't understand why you received a charge, or if you need other help with billing issues, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458).
data-factory Connector Dynamics Crm Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-dynamics-crm-office-365.md
Previously updated : 07/11/2024 Last updated : 07/31/2024 # Copy and transform data in Dynamics 365 (Microsoft Dataverse) or Dynamics CRM using Azure Data Factory or Azure Synapse Analytics
Additional properties that compare to Dynamics online are **hostName** and **por
| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. If no value is specified, the property uses the default Azure integration runtime. | No | >[!Note]
->Due to the sunset of Idf authentication type by **August 31, 2024**, please upgrade to Active Directory Authentication type before the date if you are currently using it.
+>Due to the sunset of the IFD authentication type by **September 15, 2024**, upgrade to the Active Directory authentication type before that date if you're currently using IFD.
#### Example: Dynamics on-premises with IFD using Active Directory authentication
data-factory Connector Quickbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-quickbooks.md
description: Learn how to copy data from QuickBooks Online to supported sink dat
-+
The following properties are supported for QuickBooks linked service:
When you use the QuickBooks Online connector in a linked service, it's important to manage OAuth 2.0 refresh tokens from QuickBooks correctly. The linked service uses a refresh token to obtain new access tokens. However, QuickBooks Online periodically updates the refresh token, invalidating the previous one. The linked service does not automatically update the refresh token in Azure Key Vault, so you need to manage updating the refresh token to ensure uninterrupted connectivity. Otherwise you might encounter authentication failures once the refresh token expires.
-You can manually update the refresh token in Azure Key Vault based on QuickBooks Online's refresh token expiry policy. But another approach is to automate updates with a scheduled task or [Azure Function](/samples/azure/azure-quickstart-templates/functions-keyvault-secret-rotation) that checks for a new refresh token and updates it in Azure Key Vault.
+You can manually update the refresh token in Azure Key Vault based on QuickBooks Online's refresh token expiry policy. But another approach is to automate updates with a scheduled task or [Azure Function](https://github.com/Azure-Samples/serverless-keyvault-secret-rotation-handling) that checks for a new refresh token and updates it in Azure Key Vault.
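One possible shape for that automation is sketched below. This is a hedged sketch, not an official sample: the Key Vault URL, secret name, environment variables, and the Intuit OAuth 2.0 token endpoint are assumptions to verify against the QuickBooks Online documentation.

```javascript
// Sketch of a scheduled refresh-token rotation job (Node.js 18+, ESM).
// Assumed names: KEY_VAULT_URL, REFRESH_TOKEN_SECRET, QUICKBOOKS_CLIENT_ID,
// QUICKBOOKS_CLIENT_SECRET; the token endpoint is the assumed Intuit OAuth URL.
import { DefaultAzureCredential } from "@azure/identity";
import { SecretClient } from "@azure/keyvault-secrets";

async function rotateQuickBooksRefreshToken() {
  const secrets = new SecretClient(process.env.KEY_VAULT_URL, new DefaultAzureCredential());
  const current = await secrets.getSecret(process.env.REFRESH_TOKEN_SECRET);

  // Exchange the current refresh token for a new access/refresh token pair.
  const basicAuth = Buffer.from(
    `${process.env.QUICKBOOKS_CLIENT_ID}:${process.env.QUICKBOOKS_CLIENT_SECRET}`
  ).toString("base64");

  const response = await fetch("https://oauth.platform.intuit.com/oauth2/v1/tokens/bearer", {
    method: "POST",
    headers: {
      Authorization: `Basic ${basicAuth}`,
      "Content-Type": "application/x-www-form-urlencoded"
    },
    body: new URLSearchParams({
      grant_type: "refresh_token",
      refresh_token: current.value
    })
  });
  const tokens = await response.json();

  // QuickBooks can rotate the refresh token on each exchange, so store the
  // latest value back in Key Vault for the linked service to read at run time.
  await secrets.setSecret(process.env.REFRESH_TOKEN_SECRET, tokens.refresh_token);
}

rotateQuickBooksRefreshToken().catch(console.error);
```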
## Dataset properties
data-factory Data Factory Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-factory-troubleshoot-guide.md
Title: General Troubleshooting
description: Learn how to troubleshoot external control activities in Azure Data Factory and Azure Synapse Analytics pipelines. --+ Last updated 05/15/2024
data-factory Data Movement Security Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-movement-security-considerations.md
Title: Security considerations
description: Describes basic security infrastructure that data movement services in Azure Data Factory use to help secure your data. -+ Last updated 01/05/2024
data-factory Enable Customer Managed Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/enable-customer-managed-key.md
Title: Encrypt Azure Data Factory with customer-managed key
description: Enhance Data Factory security with Bring Your Own Key (BYOK) -+ Last updated 10/20/2023
data-factory How To Manage Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-manage-settings.md
Title: Managing Azure Data Factory settings and preferences
description: Learn how to manage Azure Data Factory settings and preferences. --+ Last updated 01/05/2024
data-factory How To Manage Studio Preview Exp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-manage-studio-preview-exp.md
Title: Managing Azure Data Factory studio preview experience
description: Learn more about the Azure Data Factory studio preview experience. --+ Last updated 01/05/2024
data-factory Quota Increase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quota-increase.md
Title: Request quota increases from support description: How to create a support request in the Azure portal for Azure Data Factory to request quota increases or get problem resolution support.-+ - Last updated 05/15/2024
data-factory Solution Template Extract Data From Pdf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-extract-data-from-pdf.md
Title: Extract data from PDF
description: Learn how to use a solution template to extract data from a PDF source using Azure Data Factory. --+ Last updated 05/15/2024
data-factory Solution Template Pii Detection And Masking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-pii-detection-and-masking.md
Title: PII detection and masking
description: Learn how to use a solution template to detect and mask PII data using Azure Data Factory. --+ Last updated 01/05/2024
data-factory Tutorial Bulk Copy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-bulk-copy-portal.md
Title: Copy data in bulk using Azure portal
description: Use Azure Data Factory and Copy Activity to copy data from a source data store to a destination data store in bulk. --+ Last updated 05/15/2024
data-factory Tutorial Bulk Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-bulk-copy.md
Title: Copy data in bulk with PowerShell
description: Use Azure Data Factory with Copy Activity to copy data from a source data store to a destination data store in bulk. --+ Last updated 05/15/2024
data-factory Tutorial Copy Data Dot Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-copy-data-dot-net.md
Title: Copy data from Azure Blob Storage to Azure SQL Database description: 'This tutorial provides step-by-step instructions for copying data from Azure Blob Storage to Azure SQL Database.' --+ Last updated 05/15/2024
data-factory Tutorial Copy Data Portal Private https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-copy-data-portal-private.md
Title: Use private endpoints to create an Azure Data Factory pipeline description: This tutorial provides step-by-step instructions for using the Azure portal to create a data factory with a pipeline. The pipeline uses the copy activity to copy data from Azure Blob storage to an Azure SQL database. --+ Last updated 05/15/2024
data-factory Tutorial Copy Data Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-copy-data-portal.md
Title: Use the Azure portal to create a data factory pipeline description: This tutorial provides step-by-step instructions for using the Azure portal to create a data factory with a pipeline. The pipeline uses the copy activity to copy data from Azure Blob storage to Azure SQL Database. --+ Last updated 05/15/2024
data-factory Tutorial Copy Data Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-copy-data-tool.md
Title: Copy data from Azure Blob storage to SQL using Copy Data tool
description: Create an Azure Data Factory and then use the Copy Data tool to copy data from Azure Blob storage to a SQL Database. --+ Last updated 11/02/2023
data-factory Tutorial Push Lineage To Purview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-push-lineage-to-purview.md
Title: Push Data Factory lineage data to Microsoft Purview
description: Learn about how to push Data Factory lineage data to Microsoft Purview --+ Last updated 05/15/2024
databox Data Box Disk Deploy Upload Verify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-disk-deploy-upload-verify.md
To verify that the data has uploaded into Azure, take the following steps:
## Erasure of data from Data Box Disk
-Once the upload to Azure is complete, the Data Box Disk service erases the data on its disks as per the [NIST SP 800-88](https://csrc.nist.gov/News/2014/Released-SP-800-88-Revision-1,-Guidelines-for-Medi) standard.
+Once the upload to Azure is complete, the Data Box Disk service erases the data on its disks as per the [NIST SP 800-88](https://csrc.nist.gov/News/2014/Released-SP-800-88-Revision-1,-Guidelines-for-Medi) standard. After the erasure is complete, you can [Download the order history](data-box-portal-admin.md#download-order-history).
+ ::: zone target="docs"
defender-for-cloud Concept Data Security Posture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-data-security-posture.md
Title: Data security posture management
-description: Learn how Defender for Cloud helps improve data security posture in a multicloud environment.
-
+description: Explore how Microsoft Defender for Cloud enhances data security posture management across multicloud environments, ensuring comprehensive protection.
+ - Previously updated : 01/28/2024+ Last updated : 07/30/2024
+#customer intent: As a security professional, I want to understand how Defender for Cloud enhances data security in a multicloud environment so that I can effectively protect sensitive data.
+ # About data security posture management As digital transformation accelerates, organizations move data to the cloud at an exponential rate using multiple data stores such as object stores and managed/hosted databases. The dynamic and complex nature of the cloud increases data threat surfaces and risks. This causes challenges for security teams around data visibility and protecting the cloud data estate.
When you enable data security posture management capabilities with the sensitive
Changes in sensitivity settings take effect the next time that resources are discovered.
-## Next steps
+## Sensitive data discovery
+
+Sensitive data discovery identifies sensitive resources and their related risk and then helps to prioritize and remediate those risks.
+
+Defender for Cloud considers a resource sensitive if a Sensitive Information Type (SIT) is detected in it and the customer has configured the SIT to be considered sensitive. Defender for Cloud detects SITs that are considered sensitive by default.
+
+The sensitive data discovery process operates by taking samples of the resource's data. The sample data is then used to identify sensitive resources with high confidence without performing a full scan of all assets in the resource.
+
+The sensitive data discovery process is powered by the Microsoft Purview classification engine that uses a common set of SITs and labels for all datastores, regardless of their type or hosting cloud vendor.
+
+Sensitive data discovery detects the existence of sensitive data at the cloud workload level. Sensitive data discovery aims to identify various types of sensitive information, but it might not detect all types.
+
+To get complete data cataloging scanning results with all SITs available in the cloud resource, we recommend you use the scanning features from Microsoft Purview.
+
+### For cloud storage
+
+Defender for Cloud's scanning algorithm selects containers that might contain sensitive information and samples up to 20 MB for each file scanned within the container.
+
+### For cloud databases
+
+Defender for Cloud selects certain tables and samples between 300 and 1,024 rows using nonblocking queries.
+
+## Next step
-- [Prepare and review requirements](concept-data-security-posture-prepare.md) for data security posture management.-- [Understanding data security posture management - Defender for Cloud in the Field video](episode-thirty-one.md).
+> [!div class="nextstepaction"]
+> [Prepare and review requirements for data security posture management.](concept-data-security-posture-prepare.md)
defender-for-cloud Connect Azure Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/connect-azure-subscription.md
Microsoft Defender for Cloud is a cloud-native application protection platform (
- A cloud security posture management (CSPM) solution that surfaces actions that you can take to prevent breaches - A cloud workload protection platform (CWPP) with specific protections for servers, containers, storage, databases, and other workloads
-Defender for Cloud includes Foundational CSPM capabilities and access to [Microsoft Defender XDR](/microsoft-365/security/defender/microsoft-365-defender) for free. You can add additional paid plans to secure all aspects of your cloud resources. You can try Defender for Cloud for free for the first 30 days. After 30 days charges begin in accordance with the plans enabled in your environment. To learn more about these plans and their costs, see the Defender for Cloud [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
+Defender for Cloud includes Foundational CSPM capabilities and access to [Microsoft Defender XDR](/microsoft-365/security/defender/microsoft-365-defender) for free. You can add additional paid plans to secure all aspects of your cloud resources. You can try Defender for Cloud for free for the first 30 days. After 30 days, charges begin in accordance with the plans enabled in your environment. To learn more about these plans and their costs, see the Defender for Cloud [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
> [!IMPORTANT] > Malware scanning in Defender for Storage is not included for free in the first 30 day trial and will be charged from the first day in accordance with the pricing scheme available on the Defender for Cloud [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
defender-for-cloud Defender For Sql Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-sql-alerts.md
+
+ Title: Explore and investigate Defender for SQL security alerts
+description: Learn how to explore and investigate Defender for SQL security alerts in Microsoft Defender for Cloud.
+++ Last updated : 07/08/2024++
+# Explore and investigate Defender for SQL security alerts
+
+There are several ways to view Microsoft Defender for SQL alerts in Microsoft Defender for Cloud:
+
+- The **Alerts** page.
+
+- The machine's security page.
+
+- The [workload protections dashboard](workload-protections-dashboard.md).
+
+- Through the direct link provided in the alert's email.
+
+## How to view alerts
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Search for and select **Microsoft Defender for Cloud**.
+
+1. Select **Security alerts**.
+
+1. Select an alert.
+
+Alerts are designed to be self-contained, with detailed remediation steps and investigation information in each one. You can investigate further by using other Microsoft Defender for Cloud and Microsoft Sentinel capabilities for a broader view:
+
+- Enable SQL Server's auditing feature for further investigations. If you're a Microsoft Sentinel user, you can upload the SQL auditing logs from the Windows Security Log events to Sentinel and enjoy a rich investigation experience. [Learn more about SQL Server Auditing](/sql/relational-databases/security/auditing/create-a-server-audit-and-server-audit-specification?preserve-view=true&view=sql-server-ver15).
+
+- To improve your security posture, use Defender for Cloud's recommendations for the host machine indicated in each alert to reduce the risks of future attacks.
+
+[Learn more about managing and responding to alerts](managing-and-responding-alerts.yml).
+
+## Related content
+
+For related information, see these resources:
+
+- [Security alerts for SQL Database and Azure Synapse Analytics](alerts-sql-database-and-azure-synapse-analytics.md)
+- [Set up email notifications for security alerts](configure-email-notifications.md)
+- [Learn more about Microsoft Sentinel](../sentinel/index.yml)
defender-for-cloud Defender For Sql Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-sql-usage.md
Title: How to enable Microsoft Defender for SQL servers on machines
+ Title: Enable Microsoft Defender for SQL servers on machines
description: Learn how to protect your Microsoft SQL servers on Azure VMs, on-premises, and in hybrid and multicloud environments with Microsoft Defender for Cloud.
Defender for SQL servers on machines protects your SQL servers hosted in Azure,
|Protected SQL versions:|SQL Server version: 2012, 2014, 2016, 2017, 2019, 2022 <br>- [SQL on Azure virtual machines](/azure/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview)<br>- [SQL Server on Azure Arc-enabled servers](/sql/sql-server/azure-arc/overview)<br><br>| |Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Microsoft Azure operated by 21Vianet **(Advanced Threat Protection Only)**|
-## Set up Microsoft Defender for SQL servers on machines
+## Enable Defender for SQL on non-Azure machines using the AMA agent
-The Defender for SQL server on machines plan requires Microsoft Monitoring Agent (MMA) or Azure Monitoring Agent (AMA) to prevent attacks and detect misconfigurations. The planΓÇÖs autoprovisioning process is automatically enabled with the plan and is responsible for the configuration of all of the agent components required for the plan to function. This includes installation and configuration of MMA/AMA, workspace configuration, and the installation of the planΓÇÖs VM extension/solution.
+### Prerequisites for enabling Defender for SQL on non-Azure machines
-Microsoft Monitoring Agent (MMA) is set to be retired in August 2024. Defender for Cloud [updated its strategy](upcoming-changes.md#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation) and released a SQL Server-targeted Azure Monitoring Agent (AMA) autoprovisioning process to replace the Microsoft Monitoring Agent (MMA) process which is set to be deprecated. Learn more about the [AMA for SQL server on machines autoprovisioning process](defender-for-sql-autoprovisioning.md) and how to migrate to it.
+- An active Azure subscription.
+- **Subscription owner** permissions on the subscription in which you wish to assign the policy.
-> [!NOTE]
-> Customers who are currently using the **Log Analytics agent/Azure Monitor agent** processes will be asked to [migrate to the AMA for SQL server on machines autoprovisioning process](defender-for-sql-autoprovisioning.md).
+- SQL Server on machines prerequisites:
+ - **Permissions**: the Windows user operating the SQL server must have the **Sysadmin** role on the database.
+ - **Extensions**: The following extensions should be added to the allowlist:
+ - Defender for SQL (IaaS and Arc):
+ - Publisher: Microsoft.Azure.AzureDefenderForSQL
+ - Type: AdvancedThreatProtection.Windows
+ - SQL IaaS Extension (IaaS):
+ - Publisher: Microsoft.SqlServer.Management
+ - Type: SqlIaaSAgent
+ - SQL IaaS Extension (Arc):
+ - Publisher: Microsoft.AzureData
+ - Type: WindowsAgent.SqlServer
+ - AMA extension (IaaS and Arc):
+ - Publisher: Microsoft.Azure.Monitor
+ - Type: AzureMonitorWindowsAgent
-**To enable the plan on a subscription**:
+### Naming conventions in the Deny policy allowlist
+
+- Defender for SQL uses the following naming convention when creating its resources:
+
+ - DCR: `MicrosoftDefenderForSQL--dcr`
+ - DCRA: `/Microsoft.Insights/MicrosoftDefenderForSQL-RulesAssociation`
+ - Resource group: `DefaultResourceGroup-`
+ - Log analytics workspace: `D4SQL--`
+
+- Defender for SQL uses *MicrosoftDefenderForSQL* as a *createdBy* database tag.
+
+### Steps to enable Defender for SQL on non-Azure machines
+
+1. Connect SQL server to Azure Arc. For more information on the supported operating systems, connectivity configuration, and required permissions, see the following documentation:
+
+ - [Plan and deploy Azure Arc-enabled servers](/azure/azure-arc/servers/plan-at-scale-deployment)
+ - [Connected Machine agent prerequisites](/azure/azure-arc/servers/prerequisites)
+ - [Connected Machine agent network requirements](/azure/azure-arc/servers/network-requirements)
+ - [Roles specific to SQL Server enabled by Azure Arc](/sql/relational-databases/security/authentication-access/server-level-roles#roles-specific-to-sql-server-enabled-by-azure-arc)
+
+1. Once Azure Arc is installed, the Azure extension for SQL Server is installed automatically on the database server. For more information, see [Manage automatic connection for SQL Server enabled by Azure Arc](/sql/sql-server/azure-arc/manage-autodeploy).
+
+### Enable Defender for SQL
1. Sign in to the [Azure portal](https://portal.azure.com).
Microsoft Monitoring Agent (MMA) is set to be retired in August 2024. Defender f
1. Select **Save**.
-1. **(Optional)** Configure advanced autoprovisioning settings:
- 1. Navigate to the **Environment settings** page.
+1. Once enabled, we use one of the following policy initiatives:
+ - Configure SQL VMs and Arc-enabled SQL servers to install Microsoft Defender for SQL and AMA with a Log analytics workspace (LAW) for a default LAW. This creates resource groups with data collection rules and a default Log analytics workspace. For more information about the Log analytics workspace, see [Log Analytics workspace overview](/azure/azure-monitor/logs/log-analytics-workspace-overview).
+
+ :::image type="content" source="media/defender-for-sql-usage/default-log-analytics-workspace.png" alt-text="Screenshot of how to configure default log analytics workspace." lightbox="media/defender-for-sql-usage/default-log-analytics-workspace.png":::
+
+ - Configure SQL VMs and Arc-enabled SQL servers to install Microsoft Defender for SQL and AMA with a user-defined LAW. This creates a resource group with data collection rules and a custom Log analytics workspace in the predefined region. During this process, we install the Azure monitoring agent. For more information about the options to install the AMA agent, see [Azure Monitor Agent prerequisites](/azure/azure-monitor/agents/azure-monitor-agent-manage#prerequisites).
- 1. Select **Settings & monitoring**.
- - For customers using the new autoprovisioning process, select **Edit configuration** for the **Azure Monitoring Agent for SQL server on machines** component.
- - For customers using the previous autoprovisioning process, select **Edit configuration** for the **Log Analytics agent/Azure Monitor agent** component.
+ :::image type="content" source="media/defender-for-sql-usage/user-defined-log-analytics-workspace.png" alt-text="Screenshot of how to configure user-defined log analytics workspace." lightbox="media/defender-for-sql-usage/user-defined-log-analytics-workspace.png":::
-**To enable the plan on a SQL VM/Arc-enabled SQL Server**:
+1. To complete the installation process, a restart of the SQL server (instance) is necessary for versions 2017 and older.
+
+## Enable Defender for SQL on Azure virtual machines using the AMA agent
+
+### Prerequisites for enabling Defender for SQL on Azure virtual machines
+
+- An active Azure subscription.
+- **Subscription owner** permissions on the subscription in which you wish to assign the policy.
+- SQL Server on machines prerequisites:
+ - **Permissions**: the Windows user operating the SQL server must have the **Sysadmin** role on the database.
+ - **Extensions**: The following extensions should be added to the allowlist:
+ - Defender for SQL (IaaS and Arc):
+ - Publisher: Microsoft.Azure.AzureDefenderForSQL
+ - Type: AdvancedThreatProtection.Windows
+ - SQL IaaS Extension (IaaS):
+ - Publisher: Microsoft.SqlServer.Management
+ - Type: SqlIaaSAgent
+ - SQL IaaS Extension (Arc):
+ - Publisher: Microsoft.AzureData
+ - Type: WindowsAgent.SqlServer
+ - AMA extension (IaaS and Arc):
+ - Publisher: Microsoft.Azure.Monitor
+ - Type: AzureMonitorWindowsAgent
+- Since we create a resource group in *East US* as part of the autoprovisioning enablement process, this region needs to be allowed; otherwise, Defender for SQL can't complete the installation process successfully.
+
+### Steps to enable Defender for SQL on Azure virtual machines
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Navigate to your SQL VM/Arc-enabled SQL Server.
+1. Search for and select **Microsoft Defender for Cloud**.
-1. In the SQL VM/Arc-enabled SQL Server menu, under Security, selectΓÇ»**Microsoft Defender for Cloud**.
+1. In the Defender for Cloud menu, select **Environment settings**.
-1. In the Microsoft Defender for SQL server on machines section, select **Enable**.
+1. Select the relevant subscription.
-## Explore and investigate security alerts
+1. On the Defender plans page, locate the Databases plan and select **Select types**.
-There are several ways to view Microsoft Defender for SQL alerts in Microsoft Defender for Cloud:
+ :::image type="content" source="media/tutorial-enabledatabases-plan/select-types.png" alt-text="Screenshot that shows you where to select types on the Defender plans page." lightbox="media/tutorial-enabledatabases-plan/select-types.png":::
-- The Alerts page.
+1. In the Resource types selection window, toggle the **SQL servers on machines** plan to **On**.
-- The machine's security page.
+1. Select **Continue**.
-- The [workload protections dashboard](workload-protections-dashboard.md).
+1. Select **Save**.
-- Through the direct link provided in the alert's email.
+1. Once enabled, we use one of the following policy initiatives:
+ - Configure SQL VMs and Arc-enabled SQL servers to install Microsoft Defender for SQL and AMA with a Log analytics workspace (LAW) for a default LAW. This creates a resource group in *East US* and a managed identity. For more information about the use of the managed identity, see [Resource Manager template samples for agents in Azure Monitor](/azure/azure-monitor/agents/resource-manager-agent). It also creates a resource group that includes a data collection rule (DCR) and a default LAW. All resources are consolidated under this single resource group. The DCR and LAW are created to align with the region of the virtual machine (VM).
-**To view alerts**:
+ :::image type="content" source="media/defender-for-sql-usage/default-log-analytics-workspace.png" alt-text="Screenshot of how to configure default log analytics workspace." lightbox="media/defender-for-sql-usage/default-log-analytics-workspace.png":::
-1. Sign in to the [Azure portal](https://portal.azure.com).
+ - Configure SQL VMs and Arc-enabled SQL servers to install Microsoft Defender for SQL and AMA with a user-defined LAW. This creates a resource group in *East US* and a managed identity. For more information about the use of the managed identity, see [Resource Manager template samples for agents in Azure Monitor](/azure/azure-monitor/agents/resource-manager-agent). It also creates a resource group with a DCR and a custom LAW in the predefined region.
-1. Search for and select **Microsoft Defender for Cloud**.
+ :::image type="content" source="media/defender-for-sql-usage/user-defined-log-analytics-workspace.png" alt-text="Screenshot of how to configure user-defined log analytics workspace." lightbox="media/defender-for-sql-usage/user-defined-log-analytics-workspace.png":::
+
+1. For SQL Server versions 2017 and older, a restart of the SQL Server instance is necessary to complete the installation process.
+
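If you prefer to enable the plan programmatically instead of through the portal, the following is a minimal sketch that uses the Az.Security module. It assumes the pricing name `SqlServerVirtualMachines` maps to the SQL servers on machines plan; verify the name with `Get-AzSecurityPricing` in your environment before relying on it.

```powershell
# Minimal sketch: enable the SQL servers on machines plan for a subscription.
# Assumes the Az.Security module is installed and you're signed in with Connect-AzAccount.
Set-AzContext -SubscriptionId "<SubscriptionID>"

# "SqlServerVirtualMachines" is assumed to be the pricing name for SQL servers on machines.
Set-AzSecurityPricing -Name "SqlServerVirtualMachines" -PricingTier "Standard"

# Confirm the resulting tier.
Get-AzSecurityPricing -Name "SqlServerVirtualMachines" | Select-Object Name, PricingTier
```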
+## Common questions
+
+### Once the deployment is done, how long do we need to wait to see a successful deployment?
+
+Assuming all the prerequisites are fulfilled, the SQL IaaS extension takes approximately 30 minutes to update the protection status.
+
+### How do I verify that my deployment ended successfully and that my database is now protected?
+
+1. In the Azure portal, use the search bar at the top to locate the database.
+1. Under the **Security** tab, select **Defender for Cloud**.
+1. Check the **Protection status**. If the status is **Protected**, the deployment was successful.
++
+### What is the purpose of the managed identity created during the installation process on Azure SQL VMs?
+
+The managed identity is part of the Azure Policy, which pushes out the AMA. It's used by the AMA to access the database to collect the data and send it via the Log Analytics Workspace (LAW) to Defender for Cloud. For more information about the use of the managed identity, see [Resource Manager template samples for agents in Azure Monitor](/azure/azure-monitor/agents/resource-manager-agent).
+
+### Can I use my own DCR or managed-identity instead of Defender for Cloud creating a new one?
+
+Yes. You can bring your own identity or DCR, but only by using the enablement script referenced below. For more information, see [Enable Microsoft Defender for SQL servers on machines at scale](enable-defender-sql-at-scale.md).
+
+### How can I enable SQL servers on machines with AMA at scale?
+
+See [Enable Microsoft Defender for SQL servers on machines at scale](enable-defender-sql-at-scale.md) to learn how to enable Microsoft Defender for SQL's autoprovisioning across multiple subscriptions simultaneously. It's applicable to SQL servers hosted on Azure Virtual Machines, on-premises environments, and Azure Arc-enabled SQL servers.
+
+### Which tables are used in LAW with AMA?
-1. Select **Security alerts**.
+Defender for SQL on SQL VMs and Arc-enabled SQL servers uses the Log Analytics Workspace (LAW) to transfer data from the database to the Defender for Cloud portal. This means that no data is saved locally at the LAW. The *SQLAtpStatus* and *SqlVulnerabilityAssessmentScanStatus* tables in the LAW will be retired [when MMA is deprecated](/azure/azure-monitor/agents/azure-monitor-agent-migration). ATP and VA status can be viewed in the Defender for Cloud portal.
-1. Select an alert.
+### How does Defender for SQL collect logs from the SQL server?
-Alerts are designed to be self-contained, with detailed remediation steps and investigation information in each one. You can investigate further by using other Microsoft Defender for Cloud and Microsoft Sentinel capabilities for a broader view:
+Beginning with SQL Server 2017, Defender for SQL uses extended events (XEvents). On earlier versions of SQL Server, Defender for SQL collects the logs by using SQL Server audit logs.
-- Enable SQL Server's auditing feature for further investigations. If you're a Microsoft Sentinel user, you can upload the SQL auditing logs from the Windows Security Log events to Sentinel and enjoy a rich investigation experience. [Learn more about SQL Server Auditing](/sql/relational-databases/security/auditing/create-a-server-audit-and-server-audit-specification?preserve-view=true&view=sql-server-ver15).
+### I see a parameter named enableCollectionOfSqlQueriesForSecurityResearch in the policy initiative. Does this mean that my data is collected for analysis?
-- To improve your security posture, use Defender for Cloud's recommendations for the host machine indicated in each alert to reduce the risks of future attacks.
-
-[Learn more about managing and responding to alerts](managing-and-responding-alerts.yml).
+This parameter isn't in use today. Its default value is *false*, meaning that unless you proactively change the value, it remains false. There's no effect from this parameter.
-## Next steps
+## Related content
For related information, see these resources: - [How Microsoft Defender for Azure SQL can protect SQL servers anywhere](https://www.youtube.com/watch?v=V7RdB6RSVpc). - [Security alerts for SQL Database and Azure Synapse Analytics](alerts-sql-database-and-azure-synapse-analytics.md)-- [Set up email notifications for security alerts](configure-email-notifications.md)-- [Learn more about Microsoft Sentinel](../sentinel/index.yml) - Check out [common questions](faq-defender-for-databases.yml) about Defender for Databases.
defender-for-cloud Enable Defender Sql At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-defender-sql-at-scale.md
+
+ Title: How to enable Microsoft Defender for SQL servers on machines at scale
+description: Learn how to protect your Microsoft SQL servers on Azure VMs, on-premises, and in hybrid and multicloud environments with Microsoft Defender for Cloud at scale.
+++ Last updated : 07/31/2024
+#customer intent: As a user, I want to learn how to enable Defender for SQL servers at scale so that I can protect my SQL servers efficiently.
++
+# Enable Microsoft Defender for SQL servers on machines at scale
+
+The SQL servers on machines component of Microsoft Defender for Cloud's Defender for Databases plan protects SQL servers by using the SQL IaaS and Defender for SQL extensions. The component identifies and mitigates potential database vulnerabilities and detects anomalous activity that could indicate threats to your databases.
+
+When you enable the [SQL servers on machines](tutorial-enable-databases-plan.md#enable-specific-plans-database-protections) component of the Defender for Databases plan, the auto-provision process is automatically initiated. The auto-provision process installs and configures all the components necessary for the plan to function, such as the Azure Monitor Agent (AMA), the SQL IaaS extension, and the Defender for SQL extensions. It also sets up the workspace configuration, data collection rules, identity (if needed), and the SQL IaaS extension.
+
+This article explains how to enable the auto-provision process for Defender for SQL across multiple subscriptions simultaneously by using a PowerShell script. The process applies to SQL servers hosted on Azure VMs, on-premises environments, and Azure Arc-enabled SQL servers. The article also discusses how to use extra functionality to accommodate various configurations, such as:
+
+- Custom data collection rules
+
+- Custom identity management
+
+- Default workspace integration
+
+- Custom workspace configuration
+
+## Prerequisites
+
+- Gain knowledge on:
+ - [SQL server on VMs](https://azure.microsoft.com/products/virtual-machines/sql-server/)
+ - [SQL Server enabled by Azure Arc](/sql/sql-server/azure-arc/overview)
+ - [How to migrate to Azure Monitor Agent from Log Analytics agent](../azure-monitor/agents/azure-monitor-agent-migration.md)
+
+- [Connect AWS accounts to Microsoft Defender for Cloud](quickstart-onboard-aws.md)
+- [Connect your GCP project to Microsoft Defender for Cloud](quickstart-onboard-gcp.md)
+
+- Install PowerShell on [Windows](/powershell/scripting/install/installing-powershell-on-windows), [Linux](/powershell/scripting/install/installing-powershell-on-linux), [macOS](/powershell/scripting/install/installing-powershell-on-macos), or [ARM-based processors](/powershell/scripting/install/powershell-on-arm).
+- [Install the following PowerShell modules](/powershell/module/powershellget/install-module) (an installation sketch follows this list):
+ - Az.Resources
+ - Az.OperationalInsights
+ - Az.Accounts
+ - Az
+ - Az.PolicyInsights
+ - Az.Security
+
+- Permissions: requires the VM Contributor, Contributor, or Owner role.
+
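The modules listed above are all available from the PowerShell Gallery. The following is a minimal sketch for installing them in the current user scope; adjust the scope or repository to match your environment.

```powershell
# Minimal sketch: install the prerequisite modules from the PowerShell Gallery.
$modules = @("Az", "Az.Resources", "Az.OperationalInsights", "Az.Accounts", "Az.PolicyInsights", "Az.Security")

foreach ($module in $modules) {
    Install-Module -Name $module -Scope CurrentUser -Repository PSGallery -Force
}
```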
+## PowerShell script parameters and samples
+
+The PowerShell script that enables Microsoft Defender for SQL on Machines on a given subscription has several parameters that you can customize to fit your needs. The following table lists the parameters and their descriptions:
+
+| Parameter name | Required | Description |
+|--|--|--|
+| SubscriptionId | Required | The Azure subscription ID that you want to enable Defender for SQL servers on machines for. |
+| RegisterSqlVmAgnet | Required | A flag indicating whether to register the SQL VM Agent in bulk. <br><br> Learn more about [registering multiple SQL VMs in Azure with the SQL IaaS Agent extension](/azure/azure-sql/virtual-machines/windows/sql-agent-extension-manually-register-vms-bulk?view=azuresql). |
+| WorkspaceResourceId | Optional | The resource ID of the Log Analytics workspace, if you want to use a custom workspace instead of the default one. |
+| DataCollectionRuleResourceId | Optional | The resource ID of the data collection rule, if you want to use a custom DCR instead of the default one. |
+| UserAssignedIdentityResourceId | Optional | The resource ID of the user assigned identity, if you want to use a custom user assigned identity instead of the default one. |
+
+The following sample script is applicable when you use a default Log Analytics workspace, data collection rule, and managed identity.
+
+```powershell
+# Example: use the default Log Analytics workspace, DCR, and managed identity.
+Write-Host " Enable Defender for SQL on Machines example "
+$SubscriptionId = "<SubscriptionID>"
+$RegisterSqlVmAgnet = "false"
+.\EnableDefenderForSqlOnMachines.ps1 -SubscriptionId $SubscriptionId -RegisterSqlVmAgnet $RegisterSqlVmAgnet
+```
+
+The following sample script is applicable when you use a custom Log Analytics workspace, data collection rule, and managed identity.
+
+```powershell
+Write-Host " Enable Defender for SQL on Machines example "
+$SubscriptionId = "<SubscriptionID>"
+$RegisterSqlVmAgnet = "false"
+$WorkspaceResourceId = "/subscriptions/<SubscriptionID>/resourceGroups/someResourceGroup/providers/Microsoft.OperationalInsights/workspaces/someWorkspace"
+$DataCollectionRuleResourceId = "/subscriptions/<SubscriptionID>/resourceGroups/someOtherResourceGroup/providers/Microsoft.Insights/dataCollectionRules/someDcr"
+$UserAssignedIdentityResourceId = "/subscriptions/<SubscriptionID>/resourceGroups/someElseResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/someManagedIdentity"
+.\EnableDefenderForSqlOnMachines.ps1 -SubscriptionId $SubscriptionId -RegisterSqlVmAgnet $RegisterSqlVmAgnet -WorkspaceResourceId $WorkspaceResourceId -DataCollectionRuleResourceId $DataCollectionRuleResourceId -UserAssignedIdentityResourceId $UserAssignedIdentityResourceId
+```
+
+## Enable Defender for SQL servers on machines at scale
+
+You can enable Defender for SQL servers on machines at scale by following these steps.
+
+1. Open a PowerShell window.
+
+1. Copy the [EnableDefenderForSqlOnMachines.ps1](https://github.com/Azure/Microsoft-Defender-for-Cloud/blob/fd04330a79a4bcd48424bf7a4058f44216bc40e4/Powershell%20scripts/Enable%20Defender%20for%20SQL%20servers%20on%20machines/EnableDefenderForSqlOnMachines.ps1) script.
+
+1. Paste the script into PowerShell.
+
+1. Enter parameter information as needed.
+
+1. Run the script.
+
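To run the enablement across several subscriptions in one pass, you can wrap the script in a loop. The following is a minimal sketch; it assumes the script is saved in the current folder, that your account holds the required role on every listed subscription, and that you keep the script's own parameter spelling (`RegisterSqlVmAgnet`).

```powershell
# Minimal sketch: run EnableDefenderForSqlOnMachines.ps1 against multiple subscriptions.
$subscriptionIds = @("<SubscriptionID1>", "<SubscriptionID2>")
$RegisterSqlVmAgnet = "false"

foreach ($subscriptionId in $subscriptionIds) {
    # Switch context, then run the script for that subscription.
    Set-AzContext -SubscriptionId $subscriptionId
    .\EnableDefenderForSqlOnMachines.ps1 -SubscriptionId $subscriptionId -RegisterSqlVmAgnet $RegisterSqlVmAgnet
}
```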
+## Next step
+
+> [!div class="nextstepaction"]
+> [Scan your SQL servers for vulnerabilities](defender-for-sql-on-machines-vulnerability-assessment.md)
defender-for-cloud Recommendations Reference Ai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference-ai.md
This recommendation replaces the old recommendation *Cognitive Services accounts
**Severity**: Medium
+### [Azure AI Services resources should use Azure Private Link](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/54f53ddf-6ebd-461e-a247-394c542bc5d1)
+
+**Description**: Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform reduces data leakage risks by handling the connectivity between the consumer and services over the Azure backbone network.
+
+Learn more about private links at: [https://aka.ms/AzurePrivateLink/Overview](https://aka.ms/AzurePrivateLink/Overview)
+
+This recommendation replaces the old recommendation *Cognitive Services should use private link*. It was formerly in category Data recommendations, and was updated to comply with the Azure AI Services naming format and align with the relevant resources.
+
+**Severity**: Medium
++ ### [(Enable if required) Azure AI Services resources should encrypt data at rest with a customer-managed key (CMK)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/18bf29b3-a844-e170-2826-4e95d0ba4dc9/showSecurityCenterCommandBar~/false) **Description**: Using customer-managed keys to encrypt data at rest provides more control over the key lifecycle, including rotation and management. This is particularly relevant for organizations with related compliance requirements.
This recommendation replaces the old recommendation *Cognitive services accounts
**Severity**: Low
+### [Diagnostic logs in Azure AI services resources should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dea5192e-1bb3-101b-b70c-4646546f5e1e)
+
+**Description**: Enable logs for Azure AI services resources. This enables you to recreate activity trails for investigation purposes, when a security incident occurs or your network is compromised.
+
+This recommendation replaces the old recommendation *Diagnostic logs in Search services should be enabled*. It was formerly in the category Cognitive Services and Cognitive Search, and was updated to comply with the Azure AI Services naming format and align with the relevant resources.
+
+**Severity**: Low
+ ### Resource logs in Azure Machine Learning Workspaces should be enabled (Preview) **Description & related policy**: Resource logs enable recreating activity trails to use for investigation purposes when a security incident occurs or when your network is compromised.
This recommendation replaces the old recommendation *Cognitive services accounts
**Severity**: Medium
-### [Diagnostic logs in Azure AI services resources should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dea5192e-1bb3-101b-b70c-4646546f5e1e)
-
-**Description**: Enable logs for Azure AI services resources. This enables you to recreate activity trails for investigation purposes, when a security incident occurs or your network is compromised.
-
-This recommendation replaces the old recommendation *Diagnostic logs in Search services should be enabled*. It was formerly in the category Cognitive Services and Cognitive Search, and was updated to comply with the Azure AI Services naming format and align with the relevant resources.
-
-**Severity**: Low
- ### Resource logs in Azure Databricks Workspaces should be enabled (Preview) **Description & related policy**: Resource logs enable recreating activity trails to use for investigation purposes when a security incident occurs or when your network is compromised.
defender-for-cloud Recommendations Reference Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference-data.md
Secure your storage account with greater flexibility using customer-managed keys
**Severity**: Low
-### [Cognitive Services should use private link](recommendations-reference-data.md#cognitive-services-should-use-private-link)
-
-**Description**: Azure Private Link lets you connect your virtual networks to Azure services without a public IP address at the source or destination. The Azure Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Cognitive Services, you'll reduce the potential for data leakage. Learn more about [private links](../private-link/private-link-overview.md). (Related policy: Cognitive Services should use private link).
-
-**Severity**: Medium
-- ### [Diagnostic logs in Azure Data Lake Store should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/ad5bbaeb-7632-5edf-f1c2-752075831ce8) **Description**: Enable logs and retain them for up to a year. This enables you to recreate activity trails for investigation purposes when a security incident occurs or your network is compromised.
defender-for-cloud Release Notes Recommendations Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-recommendations-alerts.md
This article summarizes what's new in security recommendations and alerts in Mic
- Review a complete list of multicloud security recommendations and alerts: - [AI recommendations](/azure/defender-for-cloud/recommendations-reference-ai)
-
- [Compute recommendations](recommendations-reference-compute.md)
-
+ - [Container recommendations](recommendations-reference-container.md) - [Data recommendations](recommendations-reference-data.md) - [DevOps recommendations](recommendations-reference-devops.md)
New and updated recommendations and alerts are added to the table in date order.
| **Date** | **Type** | **State** | **Name** | | -- | | | |
-| July 30 | Recommendation | Preview | [AWS Bedrock should use AWS PrivateLink](recommendations-reference-ai.md#aws-bedrock-should-use-aws-privatelink) |
+|July 31|Recommendation|Update|[Azure AI Services resources should use Azure Private Link](/azure/defender-for-cloud/recommendations-reference-ai)|
+|July 31|Recommendation|GA|[EDR solution should be installed on Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/06e3a6db-6c0c-4ad9-943f-31d9d73ecf6c)|
+|July 31|Recommendation|GA|[EDR solution should be installed on EC2s](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/77d09952-2bc2-4495-8795-cc8391452f85)|
+|July 31|Recommendation|GA|[EDR solution should be installed on GCP Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/68e595c1-a031-4354-b37c-4bdf679732f1)|
+|July 31|Recommendation|GA|[EDR configuration issues should be resolved on virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dc5357d0-3858-4d17-a1a3-072840bff5be)|
+|July 31|Recommendation|GA|[EDR configuration issues should be resolved on EC2s](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/695abd03-82bd-4d7f-a94c-140e8a17666c)|
+|July 31|Recommendation|GA|[EDR configuration issues should be resolved on GCP virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/f36a15fb-61a6-428c-b719-6319538ecfbc)|
+| July 31 | Recommendation | Upcoming deprecation | [Adaptive network hardening recommendations should be applied on internet facing virtual machines](recommendations-reference-networking.md#adaptive-network-hardening-recommendations-should-be-applied-on-internet-facing-virtual-machines) |
+| July 31 | Alert | Upcoming deprecation | [Traffic detected from IP addresses recommended for blocking](alerts-azure-network-layer.md#traffic-detected-from-ip-addresses-recommended-for-blocking) |
+| July 30 |Recommendation | Preview | [AWS Bedrock should use AWS PrivateLink](recommendations-reference-ai.md#aws-bedrock-should-use-aws-privatelink) |
|July 22|Recommendation|Update|[(Enable if required) Azure AI Services resources should encrypt data at rest with a customer-managed key (CMK)](/azure/defender-for-cloud/recommendations-reference-ai)| | June 28 | Recommendation | GA | [Azure DevOps repositories should require minimum two-reviewer approval for code pushes](recommendations-reference-devops.md#preview-azure-devops-repositories-should-require-minimum-two-reviewer-approval-for-code-pushes) | | June 28 | Recommendation | GA | [Azure DevOps repositories should not allow requestors to approve their own Pull Requests](recommendations-reference-devops.md#preview-azure-devops-repositories-should-not-allow-requestors-to-approve-their-own-pull-requests) |
New and updated recommendations and alerts are added to the table in date order.
## Related content For information about new features, see [What's new in Defender for Cloud features](release-notes.md).+
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
This article summarizes what's new in Microsoft Defender for Cloud. It includes
| Date | Category | Update | | - | | |
+| July 31 | GA | [General availability of enhanced discovery and configuration recommendations for endpoint protection](#general-availability-of-enhanced-discovery-and-configuration-recommendations-for-endpoint-protection) |
+| July 31 | Upcoming update | [Adaptive network hardening deprecation](#adaptive-network-hardening-deprecation) |
| July 22 | Preview | [Security assessments for GitHub no longer requires additional licensing](#preview-security-assessments-for-github-no-longer-requires-additional-licensing) | | July 18 | Upcoming update | [Updated timelines toward MMA deprecation in Defender for Servers Plan 2](#updated-timelines-toward-mma-deprecation-in-defender-for-servers-plan-2) | | July 18 | Upcoming update | [Deprecation of MMA-related features as part of agent retirement](#deprecation-of-mma-related-features-as-part-of-agent-retirement) |
This article summarizes what's new in Microsoft Defender for Cloud. It includes
| July 9 | Upcoming update | [Inventory experience improvement](#inventory-experience-improvement) | | July 8 | Upcoming update | [Container mapping tool to run by default in GitHub](#container-mapping-tool-to-run-by-default-in-github) |
+### General availability of enhanced discovery and configuration recommendations for endpoint protection
+
+July 31, 2024
+
+Improved discovery features for endpoint protection solutions and enhanced identification of configuration issues are now GA and available for multicloud servers. These updates are included in the Defender for Servers Plan 2 and Defender Cloud Security Posture Management (CSPM).
+
+The enhanced recommendations feature uses [agentless machine scanning](/azure/defender-for-cloud/concept-agentless-data-collection), enabling comprehensive discovery and assessment of the configuration of [supported endpoint detection and response solutions](/azure/defender-for-cloud/endpoint-detection-response). When configuration issues are identified, remediation steps are provided.
+
+With this general availability release, the list of [supported solutions](/azure/defender-for-cloud/endpoint-detection-response) is expanded to include two more endpoint detection and response tools:
+
+- Singularity Platform by SentinelOne
+- Cortex XDR
+
+### Adaptive network hardening deprecation
+
+July 31, 2024
+
+**Estimated date for change: August 31, 2024**
+
+The Defender for Servers adaptive network hardening feature is being deprecated.
+
+The feature deprecation includes the following experiences:
+
+- **Recommendation**: [Adaptive network hardening recommendations should be applied on internet facing virtual machines](recommendations-reference-networking.md#adaptive-network-hardening-recommendations-should-be-applied-on-internet-facing-virtual-machines) [assessment Key: f9f0eed0-f143-47bf-b856-671ea2eeed62]
+- **Alert**: [Traffic detected from IP addresses recommended for blocking](alerts-azure-network-layer.md#traffic-detected-from-ip-addresses-recommended-for-blocking)
+ ### Preview: Security assessments for GitHub no longer requires additional licensing July 22, 2024
July 18, 2024
**Estimated date for change**: August 2024 - With the [upcoming deprecation of Log Analytics agent in August](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/microsoft-defender-for-cloud-strategy-and-plan-towards-log/ba-p/3883341), all security value for server protection in Defender for Cloud will rely on integration with Microsoft Defender for Endpoint (MDE) as a single agent and on agentless capabilities provided by the cloud platform and agentless machine scanning.
-The following capabilities have updated timelines and plans, thus the support for them over MMA will be extended for Defender for Cloud customers to the end of November 2024:
+The following capabilities have updated timelines and plans, thus the support for them over MMA will be extended for Defender for Cloud customers to the end of November 2024:
-- **File Integrity Monitoring (FIM):** Public preview release for FIM new version over MDE is planned for __August 2024__. The GA version of FIM powered by Log Analytics agent will continue to be supported for existing customers until the end of __November 2024__.
+- **File Integrity Monitoring (FIM):** Public preview release for FIM new version over MDE is planned for **August 2024**. The GA version of FIM powered by Log Analytics agent will continue to be supported for existing customers until the end of **November 2024**.
-- **Security Baseline:** as an alternative to the version based on MMA, the current preview version based on Guest Configuration will be released to general availability in __September 2024.__ OS Security Baselines powered by Log Analytics agent will continue to be supported for existing customers until the end of **November 2024.**
+- **Security Baseline:** as an alternative to the version based on MMA, the current preview version based on Guest Configuration will be released to general availability in **September 2024.** OS Security Baselines powered by Log Analytics agent will continue to be supported for existing customers until the end of **November 2024.**
For more information, see [Prepare for retirement of the Log Analytics agent](prepare-deprecation-log-analytics-mma-agent.md).
dev-box Concept Dev Box Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/concept-dev-box-role-based-access-control.md
+
+ Title: Azure role-based access control
+
+description: Learn how Microsoft Dev Box provides protection with Azure role-based access control (Azure RBAC) integration.
++++ Last updated : 07/31/2024+
+#Customer intent: As a platform engineer, I want to understand how to assign permissions in Dev Box so that I can give dev managers and developers only the permissions they need.
+
+# Azure role-based access control in Microsoft Dev Box
+
+This article describes the different built-in roles that Microsoft Dev
+Box supports, and how they map to organizational roles like platform
+engineer and dev manager.
+
+Azure role-based access control (RBAC) specifies built-in role
+definitions that outline the permissions to be applied. You assign a
+user or group this role definition via a role assignment for a
+particular scope. The scope can be an individual resource, a resource
+group, or across the subscription. In the next section, you learn which
+[built-in roles](#built-in-roles) Microsoft Dev Box supports.
+
+For more information, see [What is Azure role-based access control (Azure RBAC)?](/azure/role-based-access-control/overview)
+
+> [!Note]
+> When you make role assignment changes, it can take a few minutes for these updates to propagate.
+
+## Built-in roles
+
+In this article, the Azure built-in roles are logically grouped into
+three organizational role types, based on their scope of influence:
+
+- Platform engineer roles: influence permissions for dev centers,
+ catalogs, and projects
+
+- Dev manager roles: influence permissions for projects and dev boxes
+
+- Developer roles: influence permissions for users
+
+The following are the built-in roles supported by Microsoft Dev Box:
+
+| Organizational role type | Built-in role | Description |
+|--|--||
+| Platform engineer | Owner | Grant full control to create/manage dev centers, catalogs, and projects, and grant permissions to other users. Learn more about the [Owner role](#owner-role). |
+| Platform engineer | Contributor | Grant full control to create/manage dev centers, catalogs, and projects, except for assigning roles to other users. Learn more about the [Contributor role](#contributor-role). |
+| Dev Manager | DevCenter Project Admin | Grant permission to manage certain aspects of projects and dev boxes. Learn more about the [DevCenter Project Admin role](#devcenter-project-admin-role). |
+| Developer | Dev Box User | Grant permission to create dev boxes and have full control over the dev boxes that they create. Learn more about the [Dev Box User role](#dev-box-user). |
+
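To inspect how these built-in roles are defined in your tenant, you can query the role definitions with the Az.Resources module. The following is a minimal sketch; the name filter is an assumption (for example, the Dev Box User role may appear as *DevCenter Dev Box User*), so adjust it to match the role names you see.

```powershell
# Minimal sketch: list Dev Box-related built-in role definitions and their IDs.
# Assumes the Az.Resources module is installed and you're signed in with Connect-AzAccount.
Get-AzRoleDefinition |
    Where-Object { $_.Name -like "DevCenter*" -or $_.Name -like "*Dev Box*" } |
    Select-Object Name, Id, Description
```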
+## Role assignment scope
+
+In Azure RBAC, *scope* is the set of resources that access applies to.
+When you assign a role, it's important to understand scope so that you
+grant just the access that is needed.
+
+In Azure, you can specify a scope at four levels: management group,
+subscription, resource group, and resource. Scopes are structured in a
+parent-child relationship. Each level of hierarchy makes the scope more
+specific. You can assign roles at any of these levels of scope. The
+level you select determines how widely the role is applied. Lower levels
+inherit role permissions from higher levels. Learn more about [scope for Azure RBAC](/azure/role-based-access-control/scope-overview).
+
+For Microsoft Dev Box, consider the following scopes:
+
+ | Scope | Description |
+ |--||
+ | Subscription | Used to manage billing and security for all Azure resources and services. Typically, only Platform engineers have subscription-level access because this role assignment grants access to all resources in the subscription. |
+ | Resource group | A logical container for grouping together resources. Role assignment for the resource group grants permission to the resource group and all resources within it, such as dev centers, dev box definitions, dev box pools, projects, and dev boxes. |
+ | Dev center (resource) | A collection of projects that require similar settings. Role assignment for the dev center grants permission to the dev center itself. Permissions assigned for the dev centers aren't inherited by other dev box resources. |
+ | Project (resource) | An Azure resource used to apply common configuration settings when you create a dev box. Role assignment for the project grants permission only to that specific project. |
+ | Dev box pool (resource) | A collection of dev boxes that you manage together and to which you apply similar settings. Role assignment for the dev box pool grants permission only to that specific dev box pool. |
+ | Dev box definition (resource) | An Azure resource that specifies a source image and size, including compute size and storage size. Role assignment for the dev box definition grants permission only to that specific dev box definition. |
++
+## Roles for common Dev Box activities
+
+The following table shows common Dev Box activities and the role needed for a user to perform that activity.
+
+| Activity | Role type | Role | Scope |
+|--||-|-|
+| Grant permission to create a resource group. | Platform engineer| Owner or Contributor | Subscription |
+| Grant permission to submit a Microsoft support ticket, including to request capacity. | Platform engineer| Owner, Contributor, Support Request Contributor | Subscription |
+| Grant permission to create virtual networks and subnets. | Platform engineer| Network Contributor | Resource group |
+| Grant permission to create a network connection. | Platform engineer| Owner or Contributor | Resource group |
+| Grant permission to assign roles to other users. | Platform engineer| Owner | Resource group |
+| Grant permission to: </br> - Create / manage dev centers. </br> - Add / remove network connections. </br> - Add / remove Azure compute galleries. </br> - Create / manage dev box definitions. </br> - Create / manage projects. </br> - Attach / manage catalog to a dev center or project (project-level catalogs must be enabled on the dev center). </br> - Configure dev box limits. | Platform engineer| Contributor | Resource group |
+| Grant permission to add or remove a network connection for a dev center. | Platform engineer| Contributor | Dev center |
+| Grant permission to enable / disable project catalogs. | Dev Manager | Contributor | Dev center |
+| Grant permission to: </br> - Add, sync, remove catalog (project-level catalogs must be enabled on the dev center). </br> - Create dev box pools. </br> - Stop, start, delete dev boxes in pools. | Dev Manager | DevCenter Project Admin | Project |
+| Create and manage your own dev boxes in a project. | User | Dev Box User | Project |
+| Create and manage catalogs in a GitHub or Azure Repos repository. | Dev Manager | Not governed by RBAC. </br> - The user must be assigned permissions through Azure DevOps or GitHub. | Repository |
+
+> [!Important]
+> An organization's subscription is used to manage billing and security for all Azure resources and services. You
+> can assign the Owner or Contributor role on the subscription.
+> Typically, only Platform engineers have subscription-level access because this includes full access to all resources in the subscription.
+
+## Platform engineer roles
+
+To grant users permission to manage Microsoft Dev Box within your
+organization's subscription, you should assign them the
+[Owner](#owner-role) or [Contributor](#contributor-role) role.
+
+Assign these roles to the *resource group*. The dev centers, network
+connections, dev box definitions, dev box pools, and projects within the
+resource group inherit these role assignments.
++
+### Owner role
+
+Assign the Owner role to give a user full control to create or manage
+Dev Box resources and grant permissions to other users. When a user has
+the Owner role in the resource group, they can do the following
+activities across all resources within the resource group:
+
+- Assign roles to platform engineers, so they can manage Dev Box
+ resources.
+
+- Create dev centers, network connections, dev box definitions, dev
+ box pools, and projects.
+
+- View, delete, and change settings for all dev centers, network
+ connections, dev box definitions, dev box pools, and projects.
+
+- Attach and detach catalogs.
+
+> [!Caution]
+> When you assign the Owner or Contributor role on the resource group, then these permissions also apply to non-Dev Box related resources that exist in the resource group.
+
+### Contributor role
+
+Assign the Contributor role to give a user full control to create or
+manage dev centers and projects within a resource group. The Contributor
+role has the same permissions as the Owner role, *except* for:
+
+- Performing role assignments.
+
+## Dev Manager role
+
+There's one dev manager role: DevCenter Project Admin. This role has
+more restricted permissions at lower-level scopes than the platform
+engineer roles. You can assign this role to dev managers to enable them
+to perform administrative tasks for their team.
++
+### DevCenter Project Admin role
+
+Assign the DevCenter Project Admin role to enable a dev manager to:
+
+- Add, sync, remove catalog (project-level catalogs must be enabled on
+ the dev center).
+
+- Create dev box pools.
+
+- Stop, start, delete dev boxes in pools.
+
+## Developer role
+
+There's one developer role: Dev Box User. This role enables developers
+to create and manage their own dev boxes.
++
+### Dev Box User
+
+Assign the Dev Box User role to give users permission to create dev
+boxes and have full control over the dev boxes that they create.
+Developers can perform the following actions on any dev box they create:
+
+- Create
+- Start / stop
+- Restart
+- Delay scheduled shutdown
+- Delete
+
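As a concrete illustration of assigning a developer role at project scope, the following is a minimal sketch that uses New-AzRoleAssignment from the Az.Resources module. The group object ID, subscription, resource group, and project names are placeholders, and the role name shown (*DevCenter Dev Box User*) is an assumption to confirm against the built-in roles in your tenant.

```powershell
# Minimal sketch: grant a developer group the Dev Box User role on a single project.
# All IDs and names below are placeholders.
$projectScope = "/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroupName>/providers/Microsoft.DevCenter/projects/<ProjectName>"

New-AzRoleAssignment `
    -ObjectId "<DeveloperGroupObjectId>" `
    -RoleDefinitionName "DevCenter Dev Box User" `
    -Scope $projectScope
```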
+## Identity and access management (IAM)
+
+The **Access control (IAM)** page in the Azure portal is used to
+configure Azure role-based access control on Microsoft Dev Box
+resources. You can use built-in roles for individuals and groups in
+Active Directory. The following screenshot shows Active Directory
+integration (Azure RBAC) using access control (IAM) in the Azure portal:
++
+For detailed steps, see [Assign Azure roles using the Azure portal](/azure/role-based-access-control/role-assignments-portal).
+
+## Dev center, resource group, and project structure
+
+Your organization should invest time up front to plan the placement of
+your dev centers, and the structure of resource groups and projects.
+
+**Dev centers:** Organize dev centers by the set of projects you would
+like to manage together, applying similar settings, and providing
+similar templates.
+
+Organizations can use one or more dev centers. Typically, each sub-organization within the organization has its own dev center. You might consider creating multiple dev centers in the following cases:
+
+- If you want specific configurations to be available to a subset of
+ projects.
+
+- If different teams need to own and maintain the dev center resource
+ in Azure.
+
+**Projects:** Associated with each dev team or group of people working
+on one app or product.
+
+Planning is especially important when you assign roles to the resource
+group because it also applies permissions to all resources in the
+resource group, including dev centers, network connections, dev box
+definitions, dev box pools, and projects.
+
+To ensure that users are only granted permission to the appropriate
+resources:
+
+- Create resource groups that only contain Dev Box resources.
+
+- Organize projects according to the dev box definition and dev box
+ pools required and the developers who should have access. It's
+ important to note that dev box pools determine the location of dev
+ box creation. Developers should create dev boxes in a location close
+ to them for the least latency.
+
+For example, you might create separate projects for different developer
+teams to isolate each team's resources. Dev Managers in a project can
+then be assigned to the Project Admin role, which only grants them
+access to the resources of their team.
+
+> [!Important]
+> Plan the structure upfront because it's not possible to move Dev Box resources like projects to a different resource group after they're created.
+
+## Catalog structure
+
+Microsoft Dev Box uses catalogs to enable developers to deploy
+customizations for dev boxes by using a catalog of tasks and a
+configuration file to install software, add extensions, clone
+repositories, and more. 
+
+Microsoft Dev Box stores catalogs in either a [GitHub repository](https://docs.github.com/repositories/creating-and-managing-repositories/about-repositories) or an [Azure DevOps Services repository](/azure/devops/repos/get-started/what-is-repos). You can attach a catalog to a dev center or to a project.
+
+You can attach one or more catalogs to your dev center and manage all
+customizations at that level. To provide more granularity in how
+developers access customizations, you can attach catalogs at the project
+level. In planning where to attach catalogs, you should consider the
+needs of each development team.
+
+## Related content
+
+- [What is Azure role-based access control (Azure RBAC)](/azure/role-based-access-control/overview)
+- [Understand scope for Azure RBAC](/azure/role-based-access-control/scope-overview)
dev-box How To Configure Intune Conditional Access Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-intune-conditional-access-policies.md
After creating your device group and validated your dev box devices are members,
| | | | | Windows 365 | 0af06dc6-e4b5-4f28-818e-e78e62d137a5 | Used when retrieving the list of resources for the user and when users initiate actions on their dev box like Restart. | | Azure Virtual Desktop | 9cdead84-a844-4324-93f2-b2e6bb768d07 | Used to authenticate to the Gateway during the connection and when the client sends diagnostic information to the service. <br>Might also appear as Windows Virtual Desktop. |
- | Microsoft Remote Desktop | a4a365df-50f1-4397-bc59-1a1564b8bb9c | Used to authenticate users to the dev box. <br>Only needed when you configure single sign-on in a provisioning policy. |
+ | Microsoft Remote Desktop | a4a365df-50f1-4397-bc59-1a1564b8bb9c | Used to authenticate users to the dev box. <br>Only needed when you configure single sign-on in a provisioning policy. </br> |
+ | Windows Cloud Login | 270efc09-cd0d-444b-a71f-39af4910ec45 | Used to authenticate users to the dev box. This app replaces the `Microsoft Remote Desktop` app. <br>Only needed when you configure single sign-on in a provisioning policy. </br> |
| Microsoft Developer Portal | 0140a36d-95e1-4df5-918c-ca7ccd1fafc9 | Used to manage the Dev box portal. | 1. You should match your conditional access policies between these apps, which ensures that the policy applies to the developer portal, the connection to the Gateway, and the dev box for a consistent experience. If you want to exclude apps, you must also choose all of these apps.
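To keep the policy consistent across all of these apps, you can target their application IDs together when you create the conditional access policy. The following is a minimal sketch that uses the Microsoft Graph PowerShell SDK; the policy name, group ID, and grant control (MFA) are placeholder assumptions, and the policy is created in a disabled state so you can review it before enforcement.

```powershell
# Minimal sketch: create a conditional access policy targeting the Dev Box-related apps.
# Assumes the Microsoft Graph PowerShell SDK and a connection made with
# Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess".
$policy = @{
    DisplayName = "Dev Box - example policy"
    State       = "disabled"   # review before enabling
    Conditions  = @{
        ClientAppTypes = @("all")
        Applications   = @{
            IncludeApplications = @(
                "0af06dc6-e4b5-4f28-818e-e78e62d137a5", # Windows 365
                "9cdead84-a844-4324-93f2-b2e6bb768d07", # Azure Virtual Desktop
                "a4a365df-50f1-4397-bc59-1a1564b8bb9c", # Microsoft Remote Desktop
                "270efc09-cd0d-444b-a71f-39af4910ec45", # Windows Cloud Login
                "0140a36d-95e1-4df5-918c-ca7ccd1fafc9"  # Microsoft Developer Portal
            )
        }
        Users = @{
            IncludeGroups = @("<GroupObjectId>")
        }
    }
    GrantControls = @{
        Operator        = "OR"
        BuiltInControls = @("mfa")
    }
}

New-MgIdentityConditionalAccessPolicy -BodyParameter $policy
```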
dev-box How To Use Dev Home Customize Dev Box https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-use-dev-home-customize-dev-box.md
- build-2024 Previously updated : 06/05/2024 Last updated : 07/30/2024 #customer intent: As a developer, I want to use the Dev Home app to create customizations for my dev boxes, so that I can manage my customizations.
To complete the steps in this article, you must:
## Install or update Dev Home
+You might see Dev Home in the Start menu. If you see it there, you can select it to open the app.
+ Dev Home is available in the Microsoft Store. To install or update Dev Home, go to the Dev Home (Preview) page in the [Microsoft Store](https://aka.ms/devhome) and select **Get** or **Update**.
-You might also see Dev Home in the Start menu. If you see it there, you can select it to open the app.
+## Sign in to Dev Home
+
+Dev Home allows you to work with many different services, like Microsoft Hyper-V, Windows Subsystem for Linux (WSL), and Microsoft Dev Box. To access your chosen service, you must sign in with your Microsoft account or your work or school account.
+
+To sign in:
+
+1. Open Dev Home.
+1. From the left menu, select **Settings**.
+
+ :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-settings.png" alt-text="Screenshot of Dev Home, showing the home page with Settings highlighted.":::
+
+1. Select **Accounts**.
+
+ :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-accounts.png" alt-text="Screenshot of Dev Home, showing the Settings page with Accounts highlighted.":::
+
+1. Select **Add account** and follow the prompts to sign in.
+
+ :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-sign-in.png" alt-text="Screenshot of Dev Home, showing the Accounts page with Add account highlighted.":::
## Add extensions
Dev Home uses extensions to provide more functionality. To support the Dev Box f
To add an extension:
-1. Open Dev Home.
-1. From the left menu, select **Extensions**, then in the list of extensions **Available in the Microsoft Store**, on the **Dev Home Azure Extension (Preview)**, select **Get**.
+1. In Dev Home, from the left menu, select **Extensions**.
+
+ :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-extensions.png" alt-text="Screenshot of Dev Home, showing the Extensions page.":::
+
+1. In the list of extensions **Available in the Microsoft Store**, on the **Dev Home Azure Extension (Preview)**, select **Get**.
+
+ :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-get-extension.png" alt-text="Screenshot of Dev Home, showing the Extensions page with the Dev Home Azure Extension highlighted.":::
+
+1. In the Microsoft Store dialog, select **Get** to install the extension.
+
+ :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-get-extension-store.png" alt-text="Screenshot of the Microsoft Store dialog with the Get button highlighted.":::
## Create a dev box
Dev Home provides a guided way for you to create a new dev box.
To create a new dev box:
-1. Open **Dev Home**.
-1. From the left menu, select **Environments**, and then select **Create Environment**.
+1. In **Dev Home**, from the left menu, select **Environments**.
+
+ :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-environments.png" alt-text="Screenshot of Dev Home, showing the Environments page.":::
+
+1. Select **Create Environment**.
- :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-create-environment.png" alt-text="Screenshot of Dev Home, showing the Environments page with Create Environment highlighted." lightbox="media/how-to-use-dev-home-customize-dev-box/dev-home-create-environment.png":::
+ :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-create-environment.png" alt-text="Screenshot of Dev Home, showing the Environments page with Create Environment highlighted.":::
1. On the **Select environment** page, select **Microsoft DevBox**, and then select **Next**.
- :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-create-dev-box.png" alt-text="Screenshot of Dev Home, showing the Select environment page with Microsoft Dev Box highlighted." lightbox="media/how-to-use-dev-home-customize-dev-box/dev-home-create-dev-box.png":::
+ :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-create-dev-box.png" alt-text="Screenshot of Dev Home, showing the Select environment page with Microsoft Dev Box highlighted.":::
1. On the **Configure your environment** page: - Enter a name for your dev box.
To create a new dev box:
- Select the **DevBox Pool** where you want to create the dev box. Select a pool located close to you to reduce latency. - Select **Next**.
- :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-configure-environment.png" alt-text="Screenshot showing the Configure your environment page." lightbox="media/how-to-use-dev-home-customize-dev-box/dev-home-configure-environment.png":::
+ :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-configure-environment.png" alt-text="Screenshot showing the Configure your environment page.":::
1. On the **Review your environment** page, review the details and select **Create environment**.
+
+ :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-review-environment.png" alt-text="Screenshot showing the Review your environment page.":::
+
1. Select **Go to Environments** to see the status of your dev box.
+ :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-go-to-environments.png" alt-text="Screenshot showing the Go to Environments button.":::
++ ## Connect to your dev box Dev Home provides a seamless way for you to use the Windows App to connect to your Dev Box from any device of your choice. You can customize the look and feel of the Windows App to suit the way you work, and switch between multiple services across accounts.
If the Windows App isn't installed, selecting Launch takes you to the web client
### Launch your dev box
-1. Open **Dev Home**.
-1. From the left menu, select **Environments**.
-1. Select the dev box you want to launch.
-1. Select **Launch**.
+1. In **Dev Home**, from the left menu, select **Environments**.
+1. For the dev box you want to launch, select **Launch**.
:::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-launch.png" alt-text="Screenshot showing a dev box with the Launch menu highlighted."::: 1. You can also start and stop the dev box from the **Launch** menu.
- :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-start-stop.png" alt-text="Screenshot of the Launch menu with Start and Stop options." lightbox="media/how-to-use-dev-home-customize-dev-box/dev-home-start-stop.png":::
+ :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-start-stop.png" alt-text="Screenshot of the Launch menu with Start and Stop options.":::
For more information on the Windows App, see [Windows App](https://aka.ms/windowsapp).
-### Access your dev box from the start menu or task bar
+### Manage your dev box
-Dev home enables you to pin your dev box to the start menu or task bar.
+Dev Home enables you to pin your dev box to the Start menu or taskbar, and to delete your dev box.
1. Open **Dev Home**. 1. From the left menu, select **Environments**.
-1. Select the dev box you want to pin or unpin.
-1. Select **Pin to start** or **Pin to taskbar**.
+1. Select the dev box you want to manage.
+1. Select **Pin to start**, **Pin to taskbar**, or **Delete**.
- :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-menu.png" alt-text="Screenshot showing a dev box with the Pin to start and Pin to taskbar options highlighted." lightbox="media/how-to-use-dev-home-customize-dev-box/dev-home-menu.png":::
+ :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-options-menu.png" alt-text="Screenshot showing a dev box with the Pin to start, Pin to taskbar, and Delete options highlighted.":::
## Customize an existing dev box Dev home gives you the opportunity to clone repositories and add software to your existing dev box. Dev home uses the Winget catalog to provide a list of software that you can install on your dev box. 1. Open **Dev Home**.
-1. From the left menu, select **Machine configuration**, and then select **Set up an environment**.
+1. From the left menu, select **Machine configuration**.
+
+ :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-machine-configuration.png" alt-text="Screenshot showing Dev Home with Machine configuration highlighted.":::
+
+1. Select **Set up an environment**.
+
+ :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-set-up-environment.png" alt-text="Screenshot showing the Machine configuration page with Set up environment highlighted.":::
+ 1. Select the environment you want to customize, and then select **Next**.
+
+ :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-select-environment.png" alt-text="Screenshot showing the Select environment page.":::
+
1. On the **Set up an environment** page, if you want to clone a repository to your dev box, select **Add repository**. +
+ :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-add-repository.png" alt-text="Screenshot showing the Add repository button.":::
+ 1. In the **Add repository** dialog, enter the source and destination paths for the repository you want to clone, and then select **Add**.
- :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-clone-repository.png" alt-text="Screenshot showing the Add repository dialog box." lightbox="media/how-to-use-dev-home-customize-dev-box/dev-home-clone-repository.png":::
+ :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-clone-repository.png" alt-text="Screenshot showing the Add repository dialog box.":::
1. When you finish adding repositories, select **Next**.
-1. From the list of application Winget provides, choose the software you want to install on your dev box, and then select **Next**. You can also search for software by name.
- :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-software-install.png" alt-text="Screenshot showing the Add software page.":::
+   :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-repository-next.png" alt-text="Screenshot showing the repositories to add, with the Next button highlighted.":::
+
+1. Next, you can choose software to install. From the list of applications Winget provides, choose the software you want to install on your dev box. You can also search for software by name.
+
+ :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-software-select.png" alt-text="Screenshot showing the Add software page with Visual Studio Community and PowerShell highlighted.":::
+
+1. When you finish selecting software, select **Next**.
+
+ :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-software-install.png" alt-text="Screenshot showing the Add software page with Next highlighted.":::
1. On the **Review and finish** page, under **See details**: 1. Select the **Environment** tab to see the virtual machine you're configuring. 1. Select the **Applications** tab to see a list of the software you're installing.
- 1. Select the **Repositories** tab to see the list of public GitHub repositories you're cloning
+ 1. Select the **Repositories** tab to see the list of public GitHub repositories you're cloning.
1. Select **I agree and want to continue**, and then select **Set up**.
+
+ :::image type="content" source="media/how-to-use-dev-home-customize-dev-box/dev-home-review-finish.png" alt-text="Screenshot showing the Review and finish page with the I agree and want to continue button highlighted.":::
+
+
+Notice that you can also generate a configuration file based on your selected repositories and software to use in the future to create dev boxes with the same customizations.
++
-You can also generate a configuration file based on your selected repositories and software to use in the future to create dev boxes with the same customizations.
## Related content
digital-twins Concepts Event Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-event-notifications.md
Last updated 11/10/2022 -+ # Optional fields. Don't forget to remove # if you need a field. #
An example message body, populated in AMQP's *data* section:
## Digital twin telemetry messages
-Digital twins can use the [SendTelemetry API](/rest/api/digital-twins/dataplane/twins/digitaltwins_sendtelemetry) to emit *telemetry messages* and send them to egress endpoints.
+Digital twins can use the [SendTelemetry API](/rest/api/digital-twins/dataplane/twins/digital-twins-send-telemetry) to emit *telemetry messages* and send them to egress endpoints.
### Properties
digital-twins Concepts Query Units https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-query-units.md
Last updated 03/01/2022 -+ # Optional fields. Don't forget to remove # if you need a field. #
To learn more about querying Azure Digital Twins, visit:
* [Query language](concepts-query-language.md) * [Query the twin graph](how-to-query-graph.md)
-* [Query API reference documentation](/rest/api/digital-twins/dataplane/query/querytwins)
+* [Query API reference documentation](/rest/api/digital-twins/dataplane/query/query-twins)
You can find Azure Digital Twins query-related limits in [Azure Digital Twins service limits](reference-service-limits.md).
digital-twins How To Create Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-create-endpoints.md
Last updated 02/08/2023 -+ # Optional fields. Don't forget to remove # if you need a field.
Next, create a SAS token for your storage account that the endpoint can use to a
# [Portal](#tab/portal)
-To create an endpoint with dead-lettering enabled, you must use the [CLI commands](/cli/azure/dt) or [control plane APIs](/rest/api/digital-twins/controlplane/endpoints/digitaltwinsendpoint_createorupdate) to create your endpoint, rather than the Azure portal.
+To create an endpoint with dead-lettering enabled, you must use the [CLI commands](/cli/azure/dt) or [control plane APIs](/rest/api/digital-twins/controlplane/endpoints/digital-twins-endpoint-create-or-update) to create your endpoint, rather than the Azure portal.
For instructions on how to create this type of endpoint with the Azure CLI, switch to the CLI tab for this section.
The value for the parameter is the dead letter SAS URI made up of the storage ac
>[!TIP]
>To create a dead-letter endpoint with identity-based authentication, add both the dead-letter parameter from this section and the appropriate [managed identity parameter](#3-create-the-endpoint-with-identity-based-authentication) to the same command.
-You can also create dead letter endpoints using the [Azure Digital Twins control plane APIs](concepts-apis-sdks.md#control-plane-apis) instead of the CLI. To do so, view the [DigitalTwinsEndpoint documentation](/rest/api/digital-twins/controlplane/endpoints/digitaltwinsendpoint_createorupdate) to see how to structure the request and add the dead letter parameters.
+You can also create dead letter endpoints using the [Azure Digital Twins control plane APIs](concepts-apis-sdks.md#control-plane-apis) instead of the CLI. To do so, view the [DigitalTwinsEndpoint documentation](/rest/api/digital-twins/controlplane/endpoints/digital-twins-endpoint-create-or-update) to see how to structure the request and add the dead letter parameters.
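For context, the dead-letter SAS URI combines the storage account, container, and SAS token. A minimal sketch (all values are placeholders) that composes such a URI for the dead-letter parameter:

```powershell
# Compose a dead-letter SAS URI from placeholder values; pass the result to the
# dead-letter parameter of your endpoint creation command or API request.
$storageAccount = "<storage-account-name>"
$container      = "<container-name>"
$sasToken       = "<SAS-token>"   # the token string without a leading '?'

$deadletterSasUri = "https://${storageAccount}.blob.core.windows.net/${container}?${sasToken}"
```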
digital-twins How To Create Routes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-create-routes.md
Last updated 1/3/2024 -+ # Optional fields. Don't forget to remove # if you need a field.
# Create event routes and filters in Azure Digital Twins
-This article walks you through the process of creating *event routes* using the [Azure portal](https://portal.azure.com), [Azure CLI az dt route commands](/cli/azure/dt/route), [Event Routes data plane APIs](/rest/api/digital-twins/dataplane/eventroutes), and the [.NET (C#) SDK](/dotnet/api/overview/azure/digitaltwins.core-readme).
+This article walks you through the process of creating *event routes* using the [Azure portal](https://portal.azure.com), [Azure CLI az dt route commands](/cli/azure/dt/route), [Event Routes data plane APIs](/rest/api/digital-twins/dataplane/event-routes), and the [.NET (C#) SDK](/dotnet/api/overview/azure/digitaltwins.core-readme).
Routing [event notifications](concepts-event-notifications.md) from Azure Digital Twins to downstream services or connected compute resources is a two-step process: create endpoints, then create event routes to send data to those endpoints. This article covers the second step, setting up routes to control which events are delivered to which Azure Digital Twin endpoints. To proceed with this article, you should have [endpoints](how-to-create-endpoints.md) already created.
If there's no route name, no messages are routed outside of Azure Digital Twins.
If there's a route name and the filter is `true`, all messages are routed to the endpoint. If there's a route name and a different filter is added, messages will be filtered based on the filter.
-Event routes can be created with the [Azure portal](https://portal.azure.com), [EventRoutes data plane APIs](/rest/api/digital-twins/dataplane/eventroutes), or [az dt route CLI commands](/cli/azure/dt/route). The rest of this section walks through the creation process.
+Event routes can be created with the [Azure portal](https://portal.azure.com), [EventRoutes data plane APIs](/rest/api/digital-twins/dataplane/event-routes), or [az dt route CLI commands](/cli/azure/dt/route). The rest of this section walks through the creation process.
# [Portal](#tab/portal2)
To create an event route with advanced filter options, toggle the switch for the
# [API](#tab/api)
-You can use the [Event Routes data plane APIs](/rest/api/digital-twins/dataplane/eventroutes) to write custom filters. To add a filter, you can use a PUT request to `https://<Your-Azure-Digital-Twins-host-name>/eventRoutes/<event-route-name>?api-version=2020-10-31` with the following body:
+You can use the [Event Routes data plane APIs](/rest/api/digital-twins/dataplane/event-routes) to write custom filters. To add a filter, you can use a PUT request to `https://<Your-Azure-Digital-Twins-host-name>/eventRoutes/<event-route-name>?api-version=2020-10-31` with the following body:
:::code language="json" source="~/digital-twins-docs-samples/api-requests/filter.json":::
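For orientation only, here's a sketch of that PUT request issued from PowerShell instead of a raw HTTP client. The host name, route name, endpoint name, and filter expression are placeholders, and the body mirrors the linked sample (an `endpointName` plus a `filter`); the Azure CLI is assumed for the access token:

```powershell
# Minimal sketch: add or update an event route with a custom filter via the data plane API.
$hostName  = "<Your-Azure-Digital-Twins-host-name>"   # placeholder
$routeName = "<event-route-name>"                     # placeholder
$token     = az account get-access-token --resource "https://digitaltwins.azure.net" --query accessToken -o tsv

$body = @{
    endpointName = "<endpoint-name>"                             # placeholder
    filter       = "type = 'Microsoft.DigitalTwins.Twin.Update'" # example filter expression
} | ConvertTo-Json

Invoke-RestMethod -Method Put `
    -Uri "https://$hostName/eventRoutes/${routeName}?api-version=2020-10-31" `
    -Headers @{ Authorization = "Bearer $token" } `
    -ContentType "application/json" -Body $body
```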
digital-twins How To Manage Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-manage-model.md
Last updated 06/11/2024 -+ # Optional fields. Don't forget to remove # if you need a field. #
To decommission a model, you can use the [DecommissionModel](/dotnet/api/azure.d
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/model_operations.cs" id="DecommissionModel":::
-You can also decommission a model using the REST API call [DigitalTwinModels Update](/rest/api/digital-twins/dataplane/models/digitaltwinmodels_update). The `decommissioned` property is the only property that can be replaced with this API call. The JSON Patch document will look something like this:
+You can also decommission a model using the REST API call [DigitalTwinModels Update](/rest/api/digital-twins/dataplane/models/digital-twin-models-update). The `decommissioned` property is the only property that can be replaced with this API call. The JSON Patch document will look something like this:
:::code language="json" source="~/digital-twins-docs-samples/models/patch-decommission-model.json":::
To delete a model, you can use the [DeleteModel](/dotnet/api/azure.digitaltwins.
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/model_operations.cs" id="DeleteModel":::
-You can also delete a model with the [DigitalTwinModels Delete](/rest/api/digital-twins/dataplane/models/digitaltwinmodels_delete) REST API call.
+You can also delete a model with the [DigitalTwinModels Delete](/rest/api/digital-twins/dataplane/models/digital-twin-models-delete) REST API call.
#### After deletion: Twins without models
digital-twins How To Manage Twin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-manage-twin.md
Last updated 1/3/2024 -+ # Optional fields. Don't forget to remove # if you need a field. #
The result of calling `object result = await client.GetDigitalTwinAsync("my-moon
The defined properties of the digital twin are returned as top-level properties on the digital twin. Metadata or system information that isn't part of the DTDL definition is returned with a `$` prefix. Metadata properties include the following values: * `$dtId`: The ID of the digital twin in this Azure Digital Twins instance
-* `$etag`: A standard HTTP field assigned by the web server. This is updated to a new value every time the twin is updated, which can be useful to determine whether the twin's data has been updated on the server since a previous check. You can use `If-Match` to perform updates and deletes that only complete if the entity's etag matches the etag provided. For more information on these operations, see the documentation for [DigitalTwins Update](/rest/api/digital-twins/dataplane/twins/digitaltwins_update) and [DigitalTwins Delete](/rest/api/digital-twins/dataplane/twins/digitaltwins_delete).
+* `$etag`: A standard HTTP field assigned by the web server. This is updated to a new value every time the twin is updated, which can be useful to determine whether the twin's data has been updated on the server since a previous check. You can use `If-Match` to perform updates and deletes that only complete if the entity's etag matches the etag provided. For more information on these operations, see the documentation for [DigitalTwins Update](/rest/api/digital-twins/dataplane/twins/digital-twins-update) and [DigitalTwins Delete](/rest/api/digital-twins/dataplane/twins/digital-twins-delete).
* `$metadata`: A set of metadata properties, which might include the following:
  - `$model`, the DTMI of the model of the digital twin.
  - `lastUpdateTime` for twin properties. This is a timestamp indicating the date and time that Azure Digital Twins processed the property update message.
digital-twins How To Use Postman With Digital Twins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-postman-with-digital-twins.md
description: Learn how to authorize, configure, and use Postman to call the Azure Digital Twins APIs. This article shows you how to use both the control and data plane APIs. -+ Last updated 01/23/2023
You can now view your request under the collection, and select it to pull up its
To make a Postman request to one of the Azure Digital Twins APIs, you'll need the URL of the API and information about what details it requires. You can find this information in the [Azure Digital Twins REST API reference documentation](/rest/api/azure-digitaltwins/).
-To proceed with an example query, this article will use the [Azure Digital Twins Query API](/rest/api/digital-twins/dataplane/query/querytwins) to query for all the digital twins in an instance.
+To proceed with an example query, this article will use the [Azure Digital Twins Query API](/rest/api/digital-twins/dataplane/query/query-twins) to query for all the digital twins in an instance.
1. Get the request URL and type from the reference documentation. For the Query API, this is currently *POST* `https://digitaltwins-host-name/query?api-version=2020-10-31`.
1. In Postman, set the type for the request and enter the request URL, filling in placeholders in the URL as required. Use your instance's host name from the [Prerequisites section](#prerequisites).
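Outside Postman, the same Query API call can be sketched in PowerShell to show the expected request shape. This is an illustration only; the host name is a placeholder, the Azure CLI is assumed for the token, and the query selects all twins:

```powershell
# Minimal sketch: query for all digital twins in an instance via the Query API.
$hostName = "<your-instance>.api.<region>.digitaltwins.azure.net"   # placeholder
$token    = az account get-access-token --resource "https://digitaltwins.azure.net" --query accessToken -o tsv

$body = @{ query = "SELECT * FROM DIGITALTWINS" } | ConvertTo-Json

Invoke-RestMethod -Method Post `
    -Uri "https://$hostName/query?api-version=2020-10-31" `
    -Headers @{ Authorization = "Bearer $token" } `
    -ContentType "application/json" -Body $body
```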
dms Create Dms Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/create-dms-bicep.md
- Title: Create instance of DMS (Bicep)
-description: Learn how to create Database Migration Service by using Bicep.
-- Previously updated : 03/21/2022---
- - subject-armqs
- - mode-arm
- - devx-track-bicep
- - sql-migration-content
--
-# Quickstart: Create instance of Azure Database Migration Service using Bicep
-
-Use Bicep to deploy an instance of the Azure Database Migration Service.
--
-## Prerequisites
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-
-## Review the Bicep file
-
-The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/azure-database-migration-simple-deploy/).
--
-Three Azure resources are defined in the Bicep file:
-- [Microsoft.Network/virtualNetworks](/azure/templates/microsoft.network/virtualnetworks): Creates the virtual network.
-- [Microsoft.Network/virtualNetworks/subnets](/azure/templates/microsoft.network/virtualnetworks/subnets): Creates the subnet.
-- [Microsoft.DataMigration/services](/azure/templates/microsoft.datamigration/services): Deploys an instance of the Azure Database Migration Service.
-
-## Deploy the Bicep file
-
-1. Save the Bicep file as **main.bicep** to your local computer.
-1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
-
- # [CLI](#tab/CLI)
-
- ```azurecli
- az group create --name exampleRG --location eastus
- az deployment group create --resource-group exampleRG --template-file main.bicep --parameters serviceName=<service-name> vnetName=<vnet-name> subnetName=<subnet-name>
- ```
-
- # [PowerShell](#tab/PowerShell)
-
- ```azurepowershell
- New-AzResourceGroup -Name exampleRG -Location eastus
- New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -serviceName "<service-name>" -vnetName "<vnet-name>" -subnetName "<subnet-name>"
- ```
-
-
-
- > [!NOTE]
- > Replace **\<service-name\>** with the name of the new migration service. Replace **\<vnet-name\>** with the name of the new virtual network. Replace **\<subnet-name\>** with the name of the new subnet associated with the virtual network.
-
- When the deployment finishes, you should see a message indicating the deployment succeeded.
-
-## Review deployed resources
-
-Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
-
-# [CLI](#tab/CLI)
-
-```azurecli-interactive
-az resource list --resource-group exampleRG
-```
-
-# [PowerShell](#tab/PowerShell)
-
-```azurepowershell-interactive
-Get-AzResource -ResourceGroupName exampleRG
-```
---
-## Clean up resources
-
-When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and its resources.
-
-# [CLI](#tab/CLI)
-
-```azurecli-interactive
-az group delete --name exampleRG
-```
-
-# [PowerShell](#tab/PowerShell)
-
-```azurepowershell-interactive
-Remove-AzResourceGroup -Name exampleRG
-```
---
-## Next steps
-
-For other ways to deploy Azure Database Migration Service, see [Azure portal](quickstart-create-data-migration-service-portal.md).
-
-To learn more, see [an overview of Azure Database Migration Service](dms-overview.md).
dms Create Dms Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/create-dms-resource-manager-template.md
- Title: Create instance of DMS (Azure Resource Manager template)
-description: Learn how to create Database Migration Service by using Azure Resource Manager template (ARM template).
-- Previously updated : 06/29/2020---
- - subject-armqs
- - mode-arm
- - devx-track-arm-template
- - sql-migration-content
--
-# Quickstart: Create instance of Azure Database Migration Service using ARM template
-
-Use this Azure Resource Manager template (ARM template) to deploy an instance of the Azure Database Migration Service.
--
-If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
--
-## Prerequisites
-
-The Azure Database Migration Service ARM template requires the following:
-- The latest version of the [Azure CLI](/cli/azure/install-azure-cli) and/or [PowerShell](/powershell/scripting/install/installing-powershell).
-- An Azure subscription. If you don't have one, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-
-## Review the template
-
-The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/azure-database-migration-simple-deploy/).
--
-Three Azure resources are defined in the template:
-- [Microsoft.Network/virtualNetworks](/azure/templates/microsoft.network/virtualnetworks): Creates the virtual network.
-- [Microsoft.Network/virtualNetworks/subnets](/azure/templates/microsoft.network/virtualnetworks/subnets): Creates the subnet.
-- [Microsoft.DataMigration/services](/azure/templates/microsoft.datamigration/services): Deploys an instance of the Azure Database Migration Service.
-
-More Azure Database Migration Services templates can be found in the [quickstart template gallery](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Datamigration&pageNumber=1&sort=Popular).
--
-## Deploy the template
-
-1. Select the following image to sign in to Azure and open a template. The template creates an instance of the Azure Database Migration Service.
-
- :::image type="content" source="~/reusable-content/ce-skilling/azure/media/template-deployments/deploy-to-azure-button.svg" alt-text="Button to deploy the Resource Manager template to Azure." border="false" link="https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2fquickstarts%2fmicrosoft.datamigration%2fazure-database-migration-simple-deploy%2fazuredeploy.json":::
-
-2. Select or enter the following values.
-
- * **Subscription**: Select an Azure subscription.
- * **Resource group**: Select an existing resource group from the drop down, or select **Create new** to create a new resource group.
- * **Region**: Location where the resources will be deployed.
- * **Service Name**: Name of the new migration service.
- * **Location**: The location of the resource group, leave as the default of `[resourceGroup().location]`.
- * **Vnet Name**: Name of the new virtual network.
- * **Subnet Name**: Name of the new subnet associated with the virtual network.
---
-3. Select **Review + create**. After the instance of Azure Database Migration Service has been deployed successfully, you get a notification.
--
-The Azure portal is used to deploy the template. In addition to the Azure portal, you can also use the Azure PowerShell, Azure CLI, and REST API. To learn other deployment methods, see [Deploy templates](../azure-resource-manager/templates/deploy-powershell.md).
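For example, a PowerShell-based deployment of the same quickstart template might look like the following sketch. The parameter names are assumed to match the Bicep quickstart above (serviceName, vnetName, subnetName), and the placeholders need real values:

```powershell
# Minimal sketch: deploy the quickstart ARM template with Azure PowerShell instead
# of the portal button (parameter names assumed from the Bicep quickstart).
$templateUri = "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.datamigration/azure-database-migration-simple-deploy/azuredeploy.json"

New-AzResourceGroup -Name exampleRG -Location eastus
New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateUri $templateUri `
    -serviceName "<service-name>" -vnetName "<vnet-name>" -subnetName "<subnet-name>"
```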
-
-## Review deployed resources
-
-You can use the Azure CLI to check deployed resources.
--
-```azurecli-interactive
-echo "Enter the resource group where your SQL Server VM exists:" &&
-read resourcegroupName &&
-az resource list --resource-group $resourcegroupName
-```
--
-## Clean up resources
-
-When no longer needed, delete the resource group by using Azure CLI or Azure PowerShell:
-
-# [CLI](#tab/CLI)
-
-```azurecli-interactive
-echo "Enter the Resource Group name:" &&
-read resourceGroupName &&
-az group delete --name $resourceGroupName &&
-echo "Press [ENTER] to continue ..."
-```
-
-# [PowerShell](#tab/PowerShell)
-
-```azurepowershell-interactive
-$resourceGroupName = Read-Host -Prompt "Enter the Resource Group name"
-Remove-AzResourceGroup -Name $resourceGroupName
-Write-Host "Press [ENTER] to continue..."
-```
---
-## Next steps
-
-For a step-by-step tutorial that guides you through the process of creating a template, see:
-
-> [!div class="nextstepaction"]
-> [Tutorial: Create and deploy your first ARM template](../azure-resource-manager/templates/template-tutorial-create-first-template.md)
-
-For other ways to deploy Azure Database Migration Service, see:
-- [Azure portal](quickstart-create-data-migration-service-portal.md)-
-To learn more, see [an overview of Azure Database Migration Service](dms-overview.md)
dms How To Monitor Migration Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/how-to-monitor-migration-activity.md
- Title: Monitor migration activity - Azure Database Migration Service
-description: Learn to use the Azure Database Migration Service to monitor migration activity.
--- Previously updated : 02/20/2020---
- - sql-migration-content
--
-# Monitor migration activity using the Azure Database Migration Service
-In this article, you learn how to monitor the progress of a migration at both a database level and a table level.
-
-## Monitor at the database level
-To monitor activity at the database level, view the database-level blade:
-
-![Database-level blade](media/how-to-monitor-migration-activity/dms-database-level-blade.png)
-
-> [!NOTE]
-> Selecting the database hyperlink will show you the list of tables and their migration progress.
-
-The following table lists the fields on the database-level blade and describes the various status values associated with each.
-
-<table id='overview' class='overview'>
- <thead>
- <tr>
- <th class="x-hidden-focus"><strong>Field name</strong></th>
- <th><strong>Field substatus</strong></th>
- <th><strong>Description</strong></th>
- </tr>
- </thead>
- <tbody>
- <tr>
- <td rowspan="3" class="ActivityStatus"><strong>Activity status</strong></td>
- <td>Running</td>
- <td>Migration activity is running.</td>
- </tr>
- <tr>
- <td>Succeeded</td>
- <td>Migration activity succeeded without issues.</td>
- </tr>
- <tr>
- <td>Faulted</td>
- <td>Migration failed. Select the 'See error details' link under migration details for the complete error message.</td>
- </tr>
- <tr>
- <td rowspan="4" class="Status"><strong>Status</strong></td>
- <td>Initializing</td>
- <td>DMS is setting up the migration pipeline.</td>
- </tr>
- <tr>
- <td>Running</td>
- <td>DMS pipeline is running and performing migration.</td>
- </tr>
- <tr>
- <td>Complete</td>
- <td>Migration completed.</td>
- </tr>
- <tr>
- <td>Failed</td>
- <td>Migration failed. Click on migration details to see migration errors.</td>
- </tr>
- <tr>
- <td rowspan="5" class="migration-details"><strong>Migration details</strong></td>
- <td>Initiating the migration pipeline</td>
- <td>DMS is setting up the migration pipeline.</td>
- </tr>
- <tr>
- <td>Full data load in progress</td>
- <td>DMS is performing initial load.</td>
- </tr>
- <tr>
- <td>Ready for Cutover</td>
- <td>After initial load is completed, DMS will mark database as ready for cutover. User should check if data has caught up on continuous sync.</td>
- </tr>
- <tr>
- <td>All changes applied</td>
- <td>Initial load and continuous sync are complete. This status also occurs after the database is cutover successfully.</td>
- </tr>
- <tr>
- <td>See error details</td>
- <td>Click on the link to show error details.</td>
- </tr>
- <tr>
- <td rowspan="1" class="duration"><strong>Duration</strong></td>
- <td>N/A</td>
- <td>Total time from migration activity being initialized to migration completed or migration faulted.</td>
- </tr>
- </tbody>
-</table>
-
-## Monitor at table level – Quick Summary
-To monitor activity at the table level, view the table-level blade. The top portion of the blade shows the detailed number of rows migrated in full load and incremental updates.
-
-The bottom portion of the blade lists the tables and shows a quick summary of migration progress.
-
-![Table-level blade - quick summary](media/how-to-monitor-migration-activity/dms-table-level-blade-summary.png)
-
-The following table describes the fields shown in the table-level details.
-
-| Field name | Description |
-| - | - |
-| **Full load completed** | Number of tables completed full data load. |
-| **Full load queued** | Number of tables being queued for full load. |
-| **Full load loading** | Number of tables currently being loaded. |
-| **Incremental updates** | Number of change data capture (CDC) updates in rows applied to target. |
-| **Incremental inserts** | Number of CDC inserts in rows applied to target. |
-| **Incremental deletes** | Number of CDC deletes in rows applied to target. |
-| **Pending changes** | Number of CDC changes in rows that are still waiting to be applied to the target. |
-| **Applied changes** | Total of CDC updates, inserts, and deletes in rows applied to target. |
-| **Tables in error state** | Number of tables that are in an 'error' state during migration. For example, tables can go into an error state when duplicates are identified in the target or the data isn't compatible with the target table. |
-
-## Monitor at table level – Detailed Summary
-There are two tabs that show migration progress in Full load and Incremental data sync.
-
-![Full load tab](media/how-to-monitor-migration-activity/dms-full-load-tab.png)
-
-![Incremental data sync tab](media/how-to-monitor-migration-activity/dms-incremental-data-sync-tab.png)
-
-The following table describes the fields shown in table level migration progress.
-
-| Field name | Description |
-| - | - |
-| **Status - Syncing** | Continuous sync is running. |
-| **Insert** | Number of CDC inserts in rows applied to target. |
-| **Update** | Number of CDC updates in rows applied to target. |
-| **Delete** | Number of CDC deletes in rows applied to target. |
-| **Total Applied** | Total of CDC updates, inserts, and deletes in rows applied to target. |
-| **Data Errors** | Number of data errors that occurred in this table. Examples of these errors are *511: Cannot create a row of size %d which is greater than the allowable maximum row size of %d* and *8114: Error converting data type %ls to %ls.* Query the dms_apply_exceptions table in the Azure target to see the error details. |
-
-> [!NOTE]
-> The CDC values for Insert, Update, Delete, and Total Applied may decrease when the database is cut over or the migration is restarted.
-
-## Next steps
-- Review the migration guidance in the Microsoft [Database Migration Guide](/data-migration/).
dms Howto Sql Server To Azure Sql Managed Instance Powershell Offline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/howto-sql-server-to-azure-sql-managed-instance-powershell-offline.md
- Title: "PowerShell: Migrate SQL Server to SQL Managed Instance offline"-
-description: Learn to offline migrate from SQL Server to Azure SQL Managed Instance by using Azure PowerShell and the Azure Database Migration Service.
--- Previously updated : 12/16/2020---
- - fasttrack-edit
- - devx-track-azurepowershell
- - sql-migration-content
--
-# Migrate SQL Server to SQL Managed Instance offline with PowerShell & Azure Database Migration Service
-
-In this article, you offline migrate the **Adventureworks2016** database restored to an on-premises instance of SQL Server 2005 or above to an Azure SQL Managed Instance by using Microsoft Azure PowerShell. You can migrate databases from a SQL Server instance to a SQL Managed Instance by using the `Az.DataMigration` module in Microsoft Azure PowerShell.
-
-In this article, you learn how to:
-> [!div class="checklist"]
->
-> * Create a resource group.
-> * Create an instance of Azure Database Migration Service.
-> * Create a migration project in an instance of Azure Database Migration Service.
-> * Run the migration offline.
--
-This article provides steps for an offline migration, but it's also possible to migrate [online](howto-sql-server-to-azure-sql-managed-instance-powershell-online.md).
--
-## Prerequisites
-
-To complete these steps, you need:
-
-* [SQL Server 2016 or above](https://www.microsoft.com/sql-server/sql-server-downloads) (any edition).
-* A local copy of the **AdventureWorks2016** database, which is available for download [here](/sql/samples/adventureworks-install-configure).
-* To enable the TCP/IP protocol, which is disabled by default with SQL Server Express installation. Enable the TCP/IP protocol by following the article [Enable or Disable a Server Network Protocol](/sql/database-engine/configure-windows/enable-or-disable-a-server-network-protocol#SSMSProcedure).
-* To configure your [Windows Firewall for database engine access](/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access).
-* An Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/) before you begin.
-* A SQL Managed Instance. You can create a SQL Managed Instance by following the detail in the article [Create an Azure SQL Managed Instance](/azure/azure-sql/managed-instance/instance-create-quickstart).
-* To download and install [Data Migration Assistant](https://www.microsoft.com/download/details.aspx?id=53595) v3.3 or later.
-* A Microsoft Azure Virtual Network created using the Azure Resource Manager deployment model, which provides the Azure Database Migration Service with site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md).
-* A completed assessment of your on-premises database and schema migration using Data Migration Assistant, as described in the article [Performing a SQL Server migration assessment](/sql/dma/dma-assesssqlonprem).
-* To download and install the `Az.DataMigration` module (version 0.7.2 or later) from the PowerShell Gallery by using [Install-Module PowerShell cmdlet](/powershell/module/powershellget/Install-Module).
-* To ensure that the credentials used to connect to source SQL Server instance have the [CONTROL SERVER](/sql/t-sql/statements/grant-server-permissions-transact-sql) permission.
-* To ensure that the credentials used to connect to target SQL Managed Instance has the CONTROL DATABASE permission on the target SQL Managed Instance databases.
--
-## Sign in to your Microsoft Azure subscription
-
-Sign in to your Azure subscription by using PowerShell. For more information, see the article [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).
-
-## Create a resource group
-
-An Azure resource group is a logical container in which Azure resources are deployed and managed.
-
-Create a resource group by using the [`New-AzResourceGroup`](/powershell/module/az.resources/new-azresourcegroup) command.
-
-The following example creates a resource group named *myResourceGroup* in the *East US* region.
-
-```powershell
-New-AzResourceGroup -ResourceGroupName myResourceGroup -Location EastUS
-```
-
-## Create an instance of Azure Database Migration Service
-
-You can create a new instance of Azure Database Migration Service by using the `New-AzDataMigrationService` cmdlet.
-This cmdlet expects the following required parameters:
-
-* *Azure Resource Group name*. You can use [`New-AzResourceGroup`](/powershell/module/az.resources/new-azresourcegroup) command to create an Azure Resource group as previously shown and provide its name as a parameter.
-* *Service name*. String that corresponds to the desired unique service name for Azure Database Migration Service.
-* *Location*. Specifies the location of the service. Specify an Azure data center location, such as West US or Southeast Asia.
-* *Sku*. This parameter corresponds to DMS Sku name. Currently supported Sku names are *Basic_1vCore*, *Basic_2vCores*, *GeneralPurpose_4vCores*.
-* *Virtual Subnet Identifier*. You can use the cmdlet [`New-AzVirtualNetworkSubnetConfig`](/powershell/module/az.network/new-azvirtualnetworksubnetconfig) to create a subnet.
-
-The following example creates a service named *MyDMS* in the resource group *MyDMSResourceGroup* located in the *East US* region using a virtual network named *MyVNET* and a subnet named *MySubnet*.
-
-```powershell
-$vNet = Get-AzVirtualNetwork -ResourceGroupName MyDMSResourceGroup -Name MyVNET
-
-$vSubNet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vNet -Name MySubnet
-
-$service = New-AzDms -ResourceGroupName myResourceGroup `
- -ServiceName MyDMS `
- -Location EastUS `
- -Sku Basic_2vCores `
- -VirtualSubnetId $vSubNet.Id`
-```
-
-## Create a migration project
-
-After creating an Azure Database Migration Service instance, create a migration project. An Azure Database Migration Service project requires connection information for both the source and target instances, as well as a list of databases that you want to migrate as part of the project.
-
-### Create a Database Connection Info object for the source and target connections
-
-You can create a Database Connection Info object by using the `New-AzDmsConnInfo` cmdlet, which expects the following parameters:
-
-* *ServerType*. The type of database connection requested, for example, SQL, Oracle, or MySQL. Use SQL for SQL Server and Azure SQL.
-* *DataSource*. The name or IP of a SQL Server instance or Azure SQL Database instance.
-* *AuthType*. The authentication type for connection, which can be either SqlAuthentication or WindowsAuthentication.
-* *TrustServerCertificate*. This parameter sets a value that indicates whether the channel is encrypted while bypassing walking the certificate chain to validate trust. The value can be `$true` or `$false`.
-
-The following example creates a Connection Info object for a source SQL Server called *MySourceSQLServer* using sql authentication:
-
-```powershell
-$sourceConnInfo = New-AzDmsConnInfo -ServerType SQL `
- -DataSource MySourceSQLServer `
- -AuthType SqlAuthentication `
- -TrustServerCertificate:$true
-```
-
-The next example shows how to create a Connection Info object for an Azure SQL Managed Instance named 'targetmanagedinstance':
-
-```powershell
-$targetResourceId = (Get-AzSqlInstance -Name "targetmanagedinstance").Id
-$targetConnInfo = New-AzDmsConnInfo -ServerType SQLMI -MiResourceId $targetResourceId
-```
-
-### Provide databases for the migration project
-
-Create a list of `AzDataMigrationDatabaseInfo` objects that specifies databases as part of the Azure Database Migration Service project, which can be provided as parameter for creation of the project. You can use the cmdlet `New-AzDataMigrationDatabaseInfo` to create `AzDataMigrationDatabaseInfo`.
-
-The following example creates the `AzDataMigrationDatabaseInfo` project for the **AdventureWorks2016** database and adds it to the list to be provided as parameter for project creation.
-
-```powershell
-$dbInfo1 = New-AzDataMigrationDatabaseInfo -SourceDatabaseName AdventureWorks
-$dbList = @($dbInfo1)
-```
-
-### Create a project object
-
-Finally, you can create an Azure Database Migration Service project called *MyDMSProject* located in *East US* using `New-AzDataMigrationProject` and add the previously created source and target connections and the list of databases to migrate.
-
-```powershell
-$project = New-AzDataMigrationProject -ResourceGroupName myResourceGroup `
- -ServiceName $service.Name `
- -ProjectName MyDMSProject `
- -Location EastUS `
- -SourceType SQL `
- -TargetType SQLMI `
- -SourceConnection $sourceConnInfo `
- -TargetConnection $targetConnInfo `
- -DatabaseInfo $dbList
-```
-
-## Create and start a migration task
-
-Next, create and start an Azure Database Migration Service task. This task requires connection credential information for both the source and target, as well as the list of database tables to be migrated and the information already provided with the project created as a prerequisite.
-
-### Create credential parameters for source and target
-
-Create connection security credentials as a [PSCredential](/dotnet/api/system.management.automation.pscredential) object.
-
-The following example shows the creation of *PSCredential* objects for both the source and target connections, providing passwords as string variables *$sourcePassword* and *$targetPassword*.
-
-```powershell
-$secpasswd = ConvertTo-SecureString -String $sourcePassword -AsPlainText -Force
-$sourceCred = New-Object System.Management.Automation.PSCredential ($sourceUserName, $secpasswd)
-$secpasswd = ConvertTo-SecureString -String $targetPassword -AsPlainText -Force
-$targetCred = New-Object System.Management.Automation.PSCredential ($targetUserName, $secpasswd)
-```
-
-### Create a backup FileShare object
-
-Now create a FileShare object representing the local SMB network share to which Azure Database Migration Service can take the source database backups using the `New-AzDmsFileShare` cmdlet.
-
-```powershell
-$backupPassword = ConvertTo-SecureString -String $password -AsPlainText -Force
-$backupCred = New-Object System.Management.Automation.PSCredential ($backupUserName, $backupPassword)
-
-$backupFileSharePath="\\10.0.0.76\SharedBackup"
-$backupFileShare = New-AzDmsFileShare -Path $backupFileSharePath -Credential $backupCred
-```
-
-### Create selected database object
-
-The next step is to select the source and target databases by using the `New-AzDmsSelectedDB` cmdlet.
-
-The following example is for migrating a single database from SQL Server to an Azure SQL Managed Instance:
-
-```powershell
-$selectedDbs = @()
-$selectedDbs += New-AzDmsSelectedDB -MigrateSqlServerSqlDbMi `
- -Name AdventureWorks2016 `
- -TargetDatabaseName AdventureWorks2016 `
- -BackupFileShare $backupFileShare `
-```
-
-If an entire SQL Server instance needs a lift-and-shift into an Azure SQL Managed Instance, then a loop to take all databases from the source is provided below. In the following example, for $Server, $SourceUserName, and $SourcePassword, provide your source SQL Server details.
-
-```powershell
-$Query = "(select name as Database_Name from master.sys.databases where Database_id>4)";
-$Databases = (Invoke-Sqlcmd -ServerInstance "$Server" -Username $SourceUserName `
--Password $SourcePassword -Database master -Query $Query)
-$selectedDbs=@()
-foreach($DataBase in $Databases.Database_Name)
- {
- $SourceDB=$DataBase
- $TargetDB=$DataBase
-
-$selectedDbs += New-AzDmsSelectedDB -MigrateSqlServerSqlDbMi `
- -Name $SourceDB `
- -TargetDatabaseName $TargetDB `
- -BackupFileShare $backupFileShare
- }
-```
-
-### SAS URI for Azure Storage Container
-
-Create variable containing the SAS URI that provides the Azure Database Migration Service with access to the storage account container to which the service uploads the backup files.
-
-```powershell
-$blobSasUri="https://mystorage.blob.core.windows.net/test?st=2018-07-13T18%3A10%3A33Z&se=2019-07-14T18%3A10%3A00Z&sp=rwdl&sv=2018-03-28&sr=c&sig=qKlSA512EVtest3xYjvUg139tYSDrasbftY%3D"
-```
-
-> [!NOTE]
-> Azure Database Migration Service does not support using an account level SAS token. You must use a SAS URI for the storage account container. [Learn how to get the SAS URI for blob container](../vs-azure-tools-storage-explorer-blobs.md#get-the-sas-for-a-blob-container).
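As an alternative to Storage Explorer, a container-level SAS URI can also be generated with the Az.Storage module. This is a minimal sketch under assumed key-based access with placeholder names, not the procedure linked above:

```powershell
# Minimal sketch: generate a container-level SAS URI (account name, key, and
# container name are placeholders) for use as $blobSasUri.
$ctx = New-AzStorageContext -StorageAccountName "<storage-account>" -StorageAccountKey "<account-key>"
$blobSasUri = New-AzStorageContainerSASToken -Context $ctx -Name "<container>" `
    -Permission rwdl -ExpiryTime (Get-Date).AddDays(7) -FullUri
```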
-
-### Additional configuration requirements
-
-There are a few additional requirements you need to address:
--
-* **Select logins**. Create a list of logins to be migrated as shown in the following example:
-
- ```powershell
- $selectedLogins = @("user1", "user2")
- ```
-
- > [!IMPORTANT]
- > Currently, Azure Database Migration Service only supports migrating SQL logins.
-
-* **Select agent jobs**. Create list of agent jobs to be migrated as shown in the following example:
-
- ```powershell
- $selectedAgentJobs = @("agentJob1", "agentJob2")
- ```
-
- > [!IMPORTANT]
- > Currently, Azure Database Migration Service only supports jobs with T-SQL subsystem job steps.
---
-### Create and start the migration task
-
-Use the `New-AzDataMigrationTask` cmdlet to create and start a migration task.
-
-#### Specify parameters
-
-The `New-AzDataMigrationTask` cmdlet expects the following parameters:
-
-* *TaskType*. Type of migration task to create for SQL Server to Azure SQL Managed Instance migration type *MigrateSqlServerSqlDbMi* is expected.
-* *Resource Group Name*. Name of Azure resource group in which to create the task.
-* *ServiceName*. Azure Database Migration Service instance in which to create the task.
-* *ProjectName*. Name of Azure Database Migration Service project in which to create the task.
-* *TaskName*. Name of task to be created.
-* *SourceConnection*. AzDmsConnInfo object representing source SQL Server connection.
-* *TargetConnection*. AzDmsConnInfo object representing target Azure SQL Managed Instance connection.
-* *SourceCred*. [PSCredential](/dotnet/api/system.management.automation.pscredential) object for connecting to source server.
-* *TargetCred*. [PSCredential](/dotnet/api/system.management.automation.pscredential) object for connecting to target server.
-* *SelectedDatabase*. AzDataMigrationSelectedDB object representing the source and target database mapping.
-* *BackupFileShare*. FileShare object representing the local network share that the Azure Database Migration Service can take the source database backups to.
-* *BackupBlobSasUri*. The SAS URI that provides the Azure Database Migration Service with access to the storage account container to which the service uploads the backup files. Learn how to get the SAS URI for blob container.
-* *SelectedLogins*. List of selected logins to migrate.
-* *SelectedAgentJobs*. List of selected agent jobs to migrate.
---
-#### Create and start a migration task
-
-The following example creates and starts an offline migration task named **myDMSTask**:
-
-```powershell
-$migTask = New-AzDataMigrationTask -TaskType MigrateSqlServerSqlDbMi `
- -ResourceGroupName myResourceGroup `
- -ServiceName $service.Name `
- -ProjectName $project.Name `
- -TaskName myDMSTask `
- -SourceConnection $sourceConnInfo `
- -SourceCred $sourceCred `
- -TargetConnection $targetConnInfo `
- -TargetCred $targetCred `
- -SelectedDatabase $selectedDbs `
- -BackupFileShare $backupFileShare `
- -BackupBlobSasUri $blobSasUri `
- -SelectedLogins $selectedLogins `
- -SelectedAgentJobs $selectedAgentJobs
-```
--
-## Monitor the migration
-
-To monitor the migration, perform the following tasks.
-
-1. Consolidate all the migration details into a variable called $CheckTask.
-
- To combine migration details such as properties, state, and database information associated with the migration, use the following code snippet:
-
- ```powershell
- $CheckTask = Get-AzDataMigrationTask -ResourceGroupName myResourceGroup `
- -ServiceName $service.Name `
- -ProjectName $project.Name `
- -Name myDMSTask `
- -ResultType DatabaseLevelOutput `
- -Expand
- Write-Host "$($CheckTask.ProjectTask.Properties.Output)"
- ```
-
-2. Use the `$CheckTask` variable to get the current state of the migration task.
-
- To use the `$CheckTask` variable to get the current state of the migration task, you can monitor the migration task running by querying the state property of the task, as shown in the following example:
-
- ```powershell
- if (($CheckTask.ProjectTask.Properties.State -eq "Running") -or ($CheckTask.ProjectTask.Properties.State -eq "Queued"))
- {
- Write-Host "migration task running"
- }
- elseif ($CheckTask.ProjectTask.Properties.State -eq "Succeeded")
- {
- Write-Host "Migration task is completed Successfully"
- }
- elseif ($CheckTask.ProjectTask.Properties.State -eq "Failed" -or $CheckTask.ProjectTask.Properties.State -eq "FailedInputValidation" -or $CheckTask.ProjectTask.Properties.State -eq "Faulted")
- {
- Write-Host "Migration Task Failed"
- }
- ```
--
-## Delete the instance of Azure Database Migration Service
-
-After the migration is complete, you can delete the Azure Database Migration Service instance:
-
-```powershell
-Remove-AzDms -ResourceGroupName myResourceGroup -ServiceName MyDMS
-```
--
-## Next steps
-
-Find out more about Azure Database Migration Service in the article [What is the Azure Database Migration Service?](./dms-overview.md).
-
-For information about additional migrating scenarios (source/target pairs), see the Microsoft [Database Migration Guide](/data-migration/).
dms Howto Sql Server To Azure Sql Managed Instance Powershell Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/howto-sql-server-to-azure-sql-managed-instance-powershell-online.md
- Title: "PowerShell: Migrate SQL Server to SQL Managed Instance online"-
-description: Learn to online migrate from SQL Server to Azure SQL Managed Instance by using Azure PowerShell and the Azure Database Migration Service.
--- Previously updated : 12/16/2020---
- - devx-track-azurepowershell
- - sql-migration-content
--
-# Migrate SQL Server to SQL Managed Instance online with PowerShell & Azure Database Migration Service
-
-In this article, you online migrate the **Adventureworks2016** database restored to an on-premises instance of SQL Server 2005 or above to an Azure SQL Managed Instance by using Microsoft Azure PowerShell. You can migrate databases from a SQL Server instance to a SQL Managed Instance by using the `Az.DataMigration` module in Microsoft Azure PowerShell.
-
-In this article, you learn how to:
-> [!div class="checklist"]
->
-> * Create a resource group.
-> * Create an instance of Azure Database Migration Service.
-> * Create a migration project in an instance of Azure Database Migration Service.
-> * Run the migration online.
--
-This article provides steps for an online migration, but it's also possible to migrate [offline](howto-sql-server-to-azure-sql-managed-instance-powershell-offline.md).
--
-## Prerequisites
-
-To complete these steps, you need:
-
-* [SQL Server 2016 or above](https://www.microsoft.com/sql-server/sql-server-downloads) (any edition).
-* A local copy of the **AdventureWorks2016** database, which is available for download [here](/sql/samples/adventureworks-install-configure).
-* To enable the TCP/IP protocol, which is disabled by default with SQL Server Express installation. Enable the TCP/IP protocol by following the article [Enable or Disable a Server Network Protocol](/sql/database-engine/configure-windows/enable-or-disable-a-server-network-protocol#SSMSProcedure).
-* To configure your [Windows Firewall for database engine access](/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access).
-* An Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/) before you begin.
-* A SQL Managed Instance. You can create a SQL Managed Instance by following the details in the article [Create an Azure SQL Managed Instance](/azure/azure-sql/managed-instance/instance-create-quickstart).
-* To download and install [Data Migration Assistant](https://www.microsoft.com/download/details.aspx?id=53595) v3.3 or later.
-* A Microsoft Azure Virtual Network created using the Azure Resource Manager deployment model, which provides the Azure Database Migration Service with site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md).
-* A completed assessment of your on-premises database and schema migration using Data Migration Assistant, as described in the article [Performing a SQL Server migration assessment](/sql/dma/dma-assesssqlonprem).
-* To download and install the `Az.DataMigration` module (version 0.7.2 or later) from the PowerShell Gallery by using [Install-Module PowerShell cmdlet](/powershell/module/powershellget/Install-Module).
-* To ensure that the credentials used to connect to source SQL Server instance have the [CONTROL SERVER](/sql/t-sql/statements/grant-server-permissions-transact-sql) permission.
-* To ensure that the credentials used to connect to target SQL Managed Instance has the CONTROL DATABASE permission on the target SQL Managed Instance databases.
-
- > [!IMPORTANT]
- > For online migrations, you must already have set up your Microsoft Entra credentials. For more information, see the article [Use the portal to create a Microsoft Entra application and service principal that can access resources](/entra/identity-platform/howto-create-service-principal-portal).
-
-## Create a resource group
-
-An Azure resource group is a logical container in which Azure resources are deployed and managed.
-
-Create a resource group by using the [`New-AzResourceGroup`](/powershell/module/az.resources/new-azresourcegroup) command.
-
-The following example creates a resource group named *myResourceGroup* in the *East US* region.
-
-```powershell
-New-AzResourceGroup -ResourceGroupName myResourceGroup -Location EastUS
-```
-
-## Create an instance of DMS
-
-You can create a new instance of Azure Database Migration Service by using the `New-AzDataMigrationService` cmdlet.
-This cmdlet expects the following required parameters:
-
-* *Azure Resource Group name*. You can use [`New-AzResourceGroup`](/powershell/module/az.resources/new-azresourcegroup) command to create an Azure Resource group as previously shown and provide its name as a parameter.
-* *Service name*. String that corresponds to the desired unique service name for Azure Database Migration Service.
-* *Location*. Specifies the location of the service. Specify an Azure data center location, such as West US or Southeast Asia.
-* *Sku*. This parameter corresponds to DMS Sku name. Currently supported Sku names are *Basic_1vCore*, *Basic_2vCores*, *GeneralPurpose_4vCores*.
-* *Virtual Subnet Identifier*. You can use the cmdlet [`New-AzVirtualNetworkSubnetConfig`](/powershell/module/az.network/new-azvirtualnetworksubnetconfig) to create a subnet.
-
-The following example creates a service named *MyDMS* in the resource group *MyDMSResourceGroup* located in the *East US* region using a virtual network named *MyVNET* and a subnet named *MySubnet*.
--
-```powershell
-$vNet = Get-AzVirtualNetwork -ResourceGroupName MyDMSResourceGroup -Name MyVNET
-
-$vSubNet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vNet -Name MySubnet
-
-$service = New-AzDms -ResourceGroupName myResourceGroup `
- -ServiceName MyDMS `
- -Location EastUS `
- -Sku Basic_2vCores `
- -VirtualSubnetId $vSubNet.Id`
-```
-
-## Create a migration project
-
-After creating an Azure Database Migration Service instance, create a migration project. An Azure Database Migration Service project requires connection information for both the source and target instances, as well as a list of databases that you want to migrate as part of the project.
-Define the source and target connection details.
-
-The following script defines source SQL Server connection details:
-
-```powershell
-# Source connection properties
-$sourceDataSource = "<mysqlserver.domain.com/privateIP of source SQL>"
-$sourceUserName = "domain\user"
-$sourcePassword = "mypassword"
-```
-
-The following script defines the target SQL Managed Instance connection details:
-
-```powershell
-# Target MI connection properties
-$targetMIResourceId = "/subscriptions/<subid>/resourceGroups/<rg>/providers/Microsoft.Sql/managedInstances/<myMI>"
-$targetUserName = "<user>"
-$targetPassword = "<password>"
-```
---
-### Define source and target database mapping
-
-Provide the databases to be migrated in this migration project.
-
-The following script maps source database to the respective new database on the target SQL Managed Instance with the provided name.
-
-```powershell
-# Selected databases (Source database name to target database name mapping)
-$selectedDatabasesMap = New-Object System.Collections.Generic.Dictionary"[String,String]"
-$selectedDatabasesMap.Add("<source database name>", "<target database name> ")
-```
-
-For multiple databases, add the list of databases to the above script using the following format:
-
-```powershell
-$selectedDatabasesMap = New-Object System.Collections.Generic.Dictionary"[String,String]"
-$selectedDatabasesMap.Add("<source database name1>", "<target database name1> ")
-$selectedDatabasesMap.Add("<source database name2>", "<target database name2> ")
-```
-
-### Create DMS Project
-
-You can create an Azure Database Migration Service project within the DMS instance.
-
-```powershell
-# Create DMS project
-$project = New-AzDataMigrationProject `
- -ResourceGroupName $dmsResourceGroupName `
- -ServiceName $dmsServiceName `
- -ProjectName $dmsProjectName `
- -Location $dmsLocation `
- -SourceType SQL `
- -TargetType SQLMI `
-
-# Create selected databases object
-$selectedDatabases = @();
-foreach ($sourceDbName in $selectedDatabasesMap.Keys){
- $targetDbName = $($selectedDatabasesMap[$sourceDbName])
- $selectedDatabases += New-AzDmsSelectedDB -MigrateSqlServerSqlDbMi `
- -Name $sourceDbName `
- -TargetDatabaseName $targetDbName `
- -BackupFileShare $backupFileShare `
-}
-```
---
-### Create a backup FileShare object
-
-Now create a FileShare object representing the local SMB network share to which Azure Database Migration Service can take the source database backups using the New-AzDmsFileShare cmdlet.
-
-```powershell
-# SMB Backup share properties
-$smbBackupSharePath = "\\shareserver.domain.com\mybackup"
-$smbBackupShareUserName = "domain\user"
-$smbBackupSharePassword = "<password>"
-
-# Create backup file share object
-$smbBackupSharePasswordSecure = ConvertTo-SecureString -String $smbBackupSharePassword -AsPlainText -Force
-$smbBackupShareCredentials = New-Object System.Management.Automation.PSCredential ($smbBackupShareUserName, $smbBackupSharePasswordSecure)
-$backupFileShare = New-AzDmsFileShare -Path $smbBackupSharePath -Credential $smbBackupShareCredentials
-```
-
-### Define the Azure Storage
-
-Select Azure Storage Container to be used for migration:
-
-```powershell
-# Storage resource id
-$storageAccountResourceId = "/subscriptions/<subscriptionname>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<mystorage>"
-```
--
-<a name='configure-azure-active-directory-app'></a>
-
-### Configure Microsoft Entra App
-
-Provide the required details for Microsoft Entra ID for an online SQL Managed Instance migration:
-
-```powershell
-# AAD properties
-$AADAppId = "<appid-guid>"
-$AADAppKey = "<app-key>"
-
-# Create AAD object
-$AADAppKeySecure = ConvertTo-SecureString $AADAppKey -AsPlainText -Force
-$AADApp = New-AzDmsAadApp -ApplicationId $AADAppId -AppKey $AADAppKeySecure
-```
--
-## Create and start a migration task
-
-Next, create and start an Azure Database Migration Service task. Call the source and target using variables, and list the database tables to be migrated:
--
-```powershell
-# Managed Instance online migration properties
-$dmsTaskName = "testmigration1"
-
-# Create source connection info
-$sourceConnInfo = New-AzDmsConnInfo -ServerType SQL `
- -DataSource $sourceDataSource `
- -AuthType WindowsAuthentication `
- -TrustServerCertificate:$true
-$sourcePasswordSecure = ConvertTo-SecureString -String $sourcePassword -AsPlainText -Force
-$sourceCredentials = New-Object System.Management.Automation.PSCredential ($sourceUserName, $sourcePasswordSecure)
-
-# Create target connection info
-$targetConnInfo = New-AzDmsConnInfo -ServerType SQLMI `
- -MiResourceId $targetMIResourceId
-$targetPasswordSecure = ConvertTo-SecureString -String $targetPassword -AsPlainText -Force
-$targetCredentials = New-Object System.Management.Automation.PSCredential ($targetUserName, $targetPasswordSecure)
-```
-
-The following example creates and starts an online migration task:
-
-```powershell
-# Create DMS migration task
-$migTask = New-AzDataMigrationTask -TaskType MigrateSqlServerSqlDbMiSync `
- -ResourceGroupName $dmsResourceGroupName `
- -ServiceName $dmsServiceName `
- -ProjectName $dmsProjectName `
- -TaskName $dmsTaskName `
- -SourceConnection $sourceConnInfo `
- -SourceCred $sourceCredentials `
- -TargetConnection $targetConnInfo `
- -TargetCred $targetCredentials `
- -SelectedDatabase $selectedDatabases `
- -BackupFileShare $backupFileShare `
- -AzureActiveDirectoryApp $AADApp `
- -StorageResourceId $storageAccountResourceId
-```
-
-For more information, see [New-AzDataMigrationTask](/powershell/module/az.datamigration/new-azdatamigrationtask).
-
-## Monitor the migration
-
-To monitor the migration, perform the following tasks.
-
-### Check the status of task
-
-```powershell
-# Get migration task status details
-$migTask = Get-AzDataMigrationTask `
- -ResourceGroupName $dmsResourceGroupName `
- -ServiceName $dmsServiceName `
- -ProjectName $dmsProjectName `
- -Name $dmsTaskName `
- -ResultType DatabaseLevelOutput `
- -Expand
-
-# Task state will be either of 'Queued', 'Running', 'Succeeded', 'Failed', 'FailedInputValidation' or 'Faulted'
-$taskState = $migTask.ProjectTask.Properties.State
-
-# Display task state
-$taskState | Format-List
-```
-
-Use the following to get a list of errors:
-
-```powershell
-# Get task errors
-$taskErrors = $migTask.ProjectTask.Properties.Errors
-
-# Display task errors
-foreach($taskError in $taskErrors){
- $taskError | Format-List
-}
--
-# Get database level details
-$databaseLevelOutputs = $migTask.ProjectTask.Properties.Output
-
-# Display database level details
-foreach($databaseLevelOutput in $databaseLevelOutputs){
-
- # This is the source database name.
- $databaseName = $databaseLevelOutput.SourceDatabaseName;
-
- Write-Host "=========="
- Write-Host "Start migration details for database " $databaseName
- # This is the status for that database - It will be either of:
- # INITIAL, FULL_BACKUP_UPLOADING, FULL_BACKUP_UPLOADED, LOG_FILES_UPLOADING,
- # CUTOVER_IN_PROGRESS, CUTOVER_INITIATED, CUTOVER_COMPLETED, COMPLETED, CANCELLED, FAILED
- $databaseMigrationState = $databaseLevelOutput.MigrationState;
-
- # Details about last restored backup. This contains file names, LSN, backup date, etc
- $databaseLastRestoredBackup = $databaseLevelOutput.LastRestoredBackupSetInfo
-
- # Details about the currently active/most recent backup sets. This contains file names, LSN, backup date, etc
- $databaseActiveBackupSets = $databaseLevelOutput.ActiveBackupSets
-
- # Display info
- $databaseLevelOutput | Format-List
-
- Write-Host "Currently active/most recent backupset details:"
- $databaseActiveBackupSets | Select-Object BackupStartDate, BackupFinishedDate, FirstLsn, LastLsn -ExpandProperty ListOfBackupFiles | Format-List
-
- Write-Host "Last restored backupset details:"
- $databaseLastRestoredBackupFiles | Format-List
-
- Write-Host "End migration details for database " $databaseName
- Write-Host "=========="
-}
-```
-
-## Performing the cutover
-
-With an online migration, a full backup and restore of the databases is performed first, and then work proceeds on restoring the transaction logs stored in the backup file share.
-
-When the database in the Azure SQL Managed Instance is updated with the latest data and is in sync with the source database, you can perform a cutover.
-
-The following example completes the cutover/migration. You invoke this command when you're ready to complete the migration.
-
-```powershell
-$command = Invoke-AzDmsCommand -CommandType CompleteSqlMiSync `
- -ResourceGroupName myResourceGroup `
- -ServiceName $service.Name `
- -ProjectName $project.Name `
- -TaskName myDMSTask `
- -DatabaseName "Source DB Name"
-```
-
-## Deleting the instance of Azure Database Migration Service
-
-After the migration is complete, you can delete the Azure Database Migration Service instance:
-
-```powershell
-Remove-AzDms -ResourceGroupName myResourceGroup -ServiceName MyDMS
-```
-
-## Additional resources
-
-For information about additional migration scenarios (source/target pairs), see the Microsoft [Database Migration Guide](/data-migration/).
-
-## Next steps
-
-Find out more about Azure Database Migration Service in the article [What is the Azure Database Migration Service?](./dms-overview.md).
dms Howto Sql Server To Azure Sql Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/howto-sql-server-to-azure-sql-powershell.md
- Title: "PowerShell: Migrate SQL Server to SQL Database"-
-description: Learn to migrate a database from SQL Server to Azure SQL Database by using Azure PowerShell with the Azure Database Migration Service.
--- Previously updated : 02/20/2020---
- - devx-track-azurepowershell
- - sql-migration-content
--
-# Migrate a SQL Server database to Azure SQL Database using Azure PowerShell
-
-In this article, you migrate the **AdventureWorks2016** database restored to an on-premises instance of SQL Server 2016 or later to Azure SQL Database by using Microsoft Azure PowerShell. You can migrate databases from a SQL Server instance to Azure SQL Database by using the `Az.DataMigration` module in Microsoft Azure PowerShell.
-
-In this article, you learn how to:
-> [!div class="checklist"]
->
-> * Create a resource group.
-> * Create an instance of the Azure Database Migration Service.
-> * Create a migration project in an Azure Database Migration Service instance.
-> * Run the migration.
-
-## Prerequisites
-
-To complete these steps, you need:
-
-* [SQL Server 2016 or above](https://www.microsoft.com/sql-server/sql-server-downloads) (any edition)
-* To enable the TCP/IP protocol, which is disabled by default with SQL Server Express installation, by following the instructions in the article [Enable or Disable a Server Network Protocol](/sql/database-engine/configure-windows/enable-or-disable-a-server-network-protocol#SSMSProcedure).
-* To configure your [Windows Firewall for database engine access](/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access).
-* An Azure SQL Database instance. You can create an Azure SQL Database instance by following the details in the article [Create a database in Azure SQL Database in the Azure portal](/azure/azure-sql/database/single-database-create-quickstart).
-* [Data Migration Assistant](https://www.microsoft.com/download/details.aspx?id=53595) v3.3 or later.
-* To have created a Microsoft Azure Virtual Network by using the Azure Resource Manager deployment model, which provides the Azure Database Migration Service with site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md).
-* To have completed assessment of your on-premises database and schema migration using Data Migration Assistant, as described in the article [Performing a SQL Server migration assessment](/sql/dma/dma-assesssqlonprem).
-* To download and install the Az.DataMigration module from the PowerShell Gallery by using the [Install-Module PowerShell cmdlet](/powershell/module/powershellget/Install-Module) (see the example after this list); be sure to open the PowerShell command window by running it as an administrator.
-* To ensure that the credentials used to connect to the source SQL Server instance have the [CONTROL SERVER](/sql/t-sql/statements/grant-server-permissions-transact-sql) permission.
-* To ensure that the credentials used to connect to the target Azure SQL DB instance have the CONTROL DATABASE permission on the target Azure SQL Database databases.
-* An Azure subscription. If you don't have one, create a [free](https://azure.microsoft.com/free/) account before you begin.
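-
-For example, the following commands show one way to install the Az.DataMigration module mentioned in the prerequisites from an elevated PowerShell session (a minimal sketch; it pulls from the public PowerShell Gallery):
-
-```powershell
-# Install the Az.DataMigration module from the PowerShell Gallery
-Install-Module -Name Az.DataMigration
-
-# Verify that the module is available
-Get-Module -Name Az.DataMigration -ListAvailable
-```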
-
-## Log in to your Microsoft Azure subscription
-
-Use the directions in the article [Log in with Azure PowerShell](/powershell/azure/authenticate-azureps) to sign in to your Azure subscription by using PowerShell.
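-
-For example, a minimal sketch (the subscription ID below is a placeholder that you replace with your own):
-
-```powershell
-# Sign in interactively
-Connect-AzAccount
-
-# Select the subscription to use for the migration resources
-Set-AzContext -SubscriptionId "00000000-0000-0000-0000-000000000000"
-```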
-
-## Create a resource group
-
-An Azure resource group is a logical container into which Azure resources are deployed and managed. You must create a resource group before you can create an instance of the Azure Database Migration Service.
-
-Create a resource group by using the [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) command.
-
-The following example creates a resource group named *myResourceGroup* in the *EastUS* region.
-
-```powershell
-New-AzResourceGroup -ResourceGroupName myResourceGroup -Location EastUS
-```
-
-## Create an instance of Azure Database Migration Service
-
-You can create a new instance of Azure Database Migration Service by using the `New-AzDataMigrationService` cmdlet.
-This cmdlet expects the following required parameters:
-
-* *Azure Resource Group name*. You can use [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) command to create Azure Resource group as previously shown and provide its name as a parameter.
-* *Service name*. String that corresponds to the desired unique service name for Azure Database Migration Service.
-* *Location*. Specifies the location of the service. Specify an Azure data center location, such as West US or Southeast Asia.
-* *Sku*. This parameter corresponds to the DMS SKU name. The currently supported SKU name is *GeneralPurpose_4vCores*.
-* *Virtual Subnet Identifier*. You can use cmdlet [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/new-azvirtualnetworksubnetconfig) to create a subnet.
-
-The following example creates a service named *MyDMS* in the resource group *MyDMSResourceGroup* located in the *East US* region using a virtual network named *MyVNET* and subnet called *MySubnet*.
-
-```powershell
-$vNet = Get-AzVirtualNetwork -ResourceGroupName MyDMSResourceGroup -Name MyVNET
-
-$vSubNet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vNet -Name MySubnet
-
-$service = New-AzDms -ResourceGroupName MyDMSResourceGroup `
- -ServiceName MyDMS `
- -Location EastUS `
- -Sku GeneralPurpose_4vCores `
- -VirtualSubnetId $vSubNet.Id
-```
-
-## Create a migration project
-
-After creating an Azure Database Migration Service instance, create a migration project. An Azure Database Migration Service project requires connection information for both the source and target instances, as well as a list of databases that you want to migrate as part of the project.
-
-### Create a Database Connection Info object for the source and target connections
-
-You can create a Database Connection Info object by using the `New-AzDmsConnInfo` cmdlet. This cmdlet expects the following parameters:
-
-* *ServerType*. The type of database connection requested, for example, SQL, Oracle, or MySQL. Use SQL for SQL Server and Azure SQL.
-* *DataSource*. The name or IP of a SQL Server instance or Azure SQL Database.
-* *AuthType*. The authentication type for connection, which can be either SqlAuthentication or WindowsAuthentication.
-* *TrustServerCertificate*. Sets a value that indicates whether the channel is encrypted while bypassing walking the certificate chain to validate trust. The value can be true or false.
-
-The following example creates a connection info object for a source SQL Server named MySourceSQLServer by using SQL authentication:
-
-```powershell
-$sourceConnInfo = New-AzDmsConnInfo -ServerType SQL `
- -DataSource MySourceSQLServer `
- -AuthType SqlAuthentication `
- -TrustServerCertificate:$true
-```
-
-> [!NOTE]
-> If the migration ends with an error when providing source DataSource as public IP address or the DNS of SQL Server, then use the name of the Azure VM running the SQL Server.
-
-The next example shows the creation of connection info for a target server called SQLAzureTarget by using SQL authentication:
-
-```powershell
-$targetConnInfo = New-AzDmsConnInfo -ServerType SQL `
- -DataSource "sqlazuretarget.database.windows.net" `
- -AuthType SqlAuthentication `
- -TrustServerCertificate:$false
-```
-
-### Provide databases for the migration project
-
-Create a list of `AzDataMigrationDatabaseInfo` objects that specifies the databases to migrate as part of the Azure Database Migration project; the list is provided as a parameter when you create the project. Use the `New-AzDataMigrationDatabaseInfo` cmdlet to create each `AzDataMigrationDatabaseInfo` object.
-
-The following example creates the `AzDataMigrationDatabaseInfo` object for the **AdventureWorks2016** database and adds it to the list to be provided as a parameter for project creation.
-
-```powershell
-$dbInfo1 = New-AzDataMigrationDatabaseInfo -SourceDatabaseName AdventureWorks2016
-$dbList = @($dbInfo1)
-```
-
-### Create a project object
-
-Finally, you can create an Azure Database Migration project called *MyDMSProject* located in *East US* by using `New-AzDataMigrationProject`, adding the previously created source and target connections and the list of databases to migrate.
-
-```powershell
-$project = New-AzDataMigrationProject -ResourceGroupName myResourceGroup `
- -ServiceName $service.Name `
- -ProjectName MyDMSProject `
- -Location EastUS `
- -SourceType SQL `
- -TargetType SQLDB `
- -SourceConnection $sourceConnInfo `
- -TargetConnection $targetConnInfo `
- -DatabaseInfo $dbList
-```
-
-## Create and start a migration task
-
-Finally, create and start the Azure Database Migration task. The task requires connection credential information for both the source and target, and the list of database tables to be migrated, in addition to the information already provided with the project created as a prerequisite.
-
-### Create credential parameters for source and target
-
-Connection security credentials can be created as a [PSCredential](/dotnet/api/system.management.automation.pscredential) object.
-
-The following example shows the creation of *PSCredential* objects for both source and target connections providing passwords as string variables *$sourcePassword* and *$targetPassword*.
-
-```powershell
-$secpasswd = ConvertTo-SecureString -String $sourcePassword -AsPlainText -Force
-$sourceCred = New-Object System.Management.Automation.PSCredential ($sourceUserName, $secpasswd)
-$secpasswd = ConvertTo-SecureString -String $targetPassword -AsPlainText -Force
-$targetCred = New-Object System.Management.Automation.PSCredential ($targetUserName, $secpasswd)
-```
-
-### Create a table map and select source and target parameters for migration
-
-Another parameter needed for migration is the mapping of tables from source to target. Create a dictionary of tables that provides a mapping between source and target tables for migration. The following example illustrates the mapping between source and target tables in the HumanResources schema for the AdventureWorks2016 database.
-
-```powershell
-$tableMap = New-Object 'system.collections.generic.dictionary[string,string]'
-$tableMap.Add("HumanResources.Department", "HumanResources.Department")
-$tableMap.Add("HumanResources.Employee","HumanResources.Employee")
-$tableMap.Add("HumanResources.EmployeeDepartmentHistory","HumanResources.EmployeeDepartmentHistory")
-$tableMap.Add("HumanResources.EmployeePayHistory","HumanResources.EmployeePayHistory")
-$tableMap.Add("HumanResources.JobCandidate","HumanResources.JobCandidate")
-$tableMap.Add("HumanResources.Shift","HumanResources.Shift")
-```
-
-The next step is to select the source and target databases and provide table mapping to migrate as a parameter by using the `New-AzDmsSelectedDB` cmdlet, as shown in the following example:
-
-```powershell
-$selectedDbs = New-AzDmsSelectedDB -MigrateSqlServerSqlDb -Name AdventureWorks2016 `
- -TargetDatabaseName AdventureWorks2016 `
- -TableMap $tableMap
-```
-
-### Create the migration task and start it
-
-Use the `New-AzDataMigrationTask` cmdlet to create and start a migration task. This cmdlet expects the following parameters:
-
-* *TaskType*. Type of migration task to create. For a SQL Server to Azure SQL Database migration, use *MigrateSqlServerSqlDb*.
-* *Resource Group Name*. Name of Azure resource group in which to create the task.
-* *ServiceName*. Azure Database Migration Service instance in which to create the task.
-* *ProjectName*. Name of Azure Database Migration Service project in which to create the task.
-* *TaskName*. Name of task to be created.
-* *SourceConnection*. AzDmsConnInfo object representing source SQL Server connection.
-* *TargetConnection*. AzDmsConnInfo object representing target Azure SQL Database connection.
-* *SourceCred*. [PSCredential](/dotnet/api/system.management.automation.pscredential) object for connecting to source server.
-* *TargetCred*. [PSCredential](/dotnet/api/system.management.automation.pscredential) object for connecting to target server.
-* *SelectedDatabase*. AzDataMigrationSelectedDB object representing the source and target database mapping.
-* *SchemaValidation*. (optional, switch parameter) Following the migration, performs a comparison of the schema information between source and target.
-* *DataIntegrityValidation*. (optional, switch parameter) Following the migration, performs a checksum-based data integrity validation between source and target.
-* *QueryAnalysisValidation*. (optional, switch parameter) Following the migration, performs a quick and intelligent query analysis by retrieving queries from the source database and executing them in the target.
-
-The following example creates and starts a migration task named myDMSTask:
-
-```powershell
-$migTask = New-AzDataMigrationTask -TaskType MigrateSqlServerSqlDb `
- -ResourceGroupName myResourceGroup `
- -ServiceName $service.Name `
- -ProjectName $project.Name `
- -TaskName myDMSTask `
- -SourceConnection $sourceConnInfo `
- -SourceCred $sourceCred `
- -TargetConnection $targetConnInfo `
- -TargetCred $targetCred `
- -SelectedDatabase $selectedDbs `
-```
-
-The following example creates and starts the same migration task as above but also performs all three validations:
-
-```powershell
-$migTask = New-AzDataMigrationTask -TaskType MigrateSqlServerSqlDb `
- -ResourceGroupName myResourceGroup `
- -ServiceName $service.Name `
- -ProjectName $project.Name `
- -TaskName myDMSTask `
- -SourceConnection $sourceConnInfo `
- -SourceCred $sourceCred `
- -TargetConnection $targetConnInfo `
- -TargetCred $targetCred `
- -SelectedDatabase $selectedDbs `
- -SchemaValidation `
- -DataIntegrityValidation `
- -QueryAnalysisValidation
-```
-
-## Monitor the migration
-
-You can monitor the running migration task by querying the state property of the task, as shown in the following example:
-
-```powershell
-if (($migTask.ProjectTask.Properties.State -eq "Running") -or ($migTask.ProjectTask.Properties.State -eq "Queued"))
-{
- write-host "migration task running"
-}
-```
-
-## Deleting the DMS instance
-
-After the migration is complete, you can delete the Azure DMS instance:
-
-```powershell
-Remove-AzDms -ResourceGroupName myResourceGroup -ServiceName MyDMS
-```
-
-## Next step
-
-* Review the migration guidance in the Microsoft [Database Migration Guide](/data-migration/).
dms Pre Reqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/pre-reqs.md
- Title: Prerequisites for Azure Database Migration Service
-description: Learn about an overview of the prerequisites for using the Azure Database Migration Service to perform database migrations.
--- Previously updated : 02/25/2020---
- - sql-migration-content
--
-# Overview of prerequisites for using the Azure Database Migration Service
-
-There are several prerequisites required to ensure Azure Database Migration Service runs smoothly when performing database migrations. Some of the prerequisites apply across all scenarios (source-target pairs) supported by the service, while other prerequisites are unique to a specific scenario.
-
-Prerequisites associated with using the Azure Database Migration Service are listed in the following sections.
-
-## Prerequisites common across migration scenarios
-
-Azure Database Migration Service prerequisites that are common across all supported migration scenarios include the need to:
-
-* Create a Microsoft Azure Virtual Network for Azure Database Migration Service by using the Azure Resource Manager deployment model, which provides site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md).
-* Ensure that your virtual network Network Security Group (NSG) rules don't block outbound port 443 for the ServiceBus, Storage, and AzureMonitor service tags (see the example at the end of this section). For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
-* When using a firewall appliance in front of your source database(s), you may need to add firewall rules to allow Azure Database Migration Service to access the source database(s) for migration.
-* Configure your [Windows Firewall for database engine access](/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access).
-* Enable the TCP/IP protocol, which is disabled by default during SQL Server Express installation, by following the instructions in the article [Enable or Disable a Server Network Protocol](/sql/database-engine/configure-windows/enable-or-disable-a-server-network-protocol#SSMSProcedure).
-
- > [!IMPORTANT]
- > Creating an instance of Azure Database Migration Service requires access to virtual network settings that are normally not within the same resource group. As a result, the user creating an instance of DMS requires permission at subscription level. To create the required roles, which you can assign as needed, run the following script:
- >
- > ```
- >
- > $readerActions = `
- > "Microsoft.Network/networkInterfaces/ipConfigurations/read", `
- > "Microsoft.DataMigration/*/read", `
- > "Microsoft.Resources/subscriptions/resourceGroups/read"
- >
- > $writerActions = `
- > "Microsoft.DataMigration/*/write", `
- > "Microsoft.DataMigration/*/delete", `
- > "Microsoft.DataMigration/*/action", `
- > "Microsoft.Network/virtualNetworks/subnets/join/action", `
- > "Microsoft.Network/virtualNetworks/write", `
- > "Microsoft.Network/virtualNetworks/read", `
- > "Microsoft.Resources/deployments/validate/action", `
- > "Microsoft.Resources/deployments/*/read", `
- > "Microsoft.Resources/deployments/*/write"
- >
- > $writerActions += $readerActions
- >
- > # TODO: replace with actual subscription IDs
- > $subScopes = ,"/subscriptions/00000000-0000-0000-0000-000000000000/","/subscriptions/11111111-1111-1111-1111-111111111111/"
- >
- > function New-DmsReaderRole() {
- > $aRole = [Microsoft.Azure.Commands.Resources.Models.Authorization.PSRoleDefinition]::new()
- > $aRole.Name = "Azure Database Migration Reader"
- > $aRole.Description = "Lets you perform read only actions on DMS service/project/tasks."
- > $aRole.IsCustom = $true
- > $aRole.Actions = $readerActions
- > $aRole.NotActions = @()
- >
- > $aRole.AssignableScopes = $subScopes
- > #Create the role
- > New-AzRoleDefinition -Role $aRole
- > }
- >
- > function New-DmsContributorRole() {
- > $aRole = [Microsoft.Azure.Commands.Resources.Models.Authorization.PSRoleDefinition]::new()
- > $aRole.Name = "Azure Database Migration Contributor"
- > $aRole.Description = "Lets you perform CRUD actions on DMS service/project/tasks."
- > $aRole.IsCustom = $true
- > $aRole.Actions = $writerActions
- > $aRole.NotActions = @()
- >
- > $aRole.AssignableScopes = $subScopes
- > #Create the role
- > New-AzRoleDefinition -Role $aRole
- > }
- >
- > function Update-DmsReaderRole() {
- > $aRole = Get-AzRoleDefinition "Azure Database Migration Reader"
- > $aRole.Actions = $readerActions
- > $aRole.NotActions = @()
- > Set-AzRoleDefinition -Role $aRole
- > }
- >
- > function Update-DmsContributorRole() {
- > $aRole = Get-AzRoleDefinition "Azure Database Migration Contributor"
- > $aRole.Actions = $writerActions
- > $aRole.NotActions = @()
- > Set-AzRoleDefinition -Role $aRole
- > }
- >
- > # Invoke above functions
- > New-DmsReaderRole
- > New-DmsContributorRole
- > Update-DmsReaderRole
- > Update-DmsContributorRole
- > ```
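-
-For example, the following sketch shows one way to add outbound NSG rules that allow port 443 to those service tags (the resource group, NSG name, and rule priorities are placeholders for your environment):
-
-```powershell
-$nsg = Get-AzNetworkSecurityGroup -ResourceGroupName MyDMSResourceGroup -Name MyDmsNsg
-
-# Allow outbound 443 to the Service Bus, Storage, and Azure Monitor service tags
-$priority = 200
-foreach ($tag in "ServiceBus", "Storage", "AzureMonitor") {
-    $nsg | Add-AzNetworkSecurityRuleConfig -Name "Allow-$tag-443" `
-        -Direction Outbound -Access Allow -Protocol Tcp `
-        -SourceAddressPrefix "*" -SourcePortRange "*" `
-        -DestinationAddressPrefix $tag -DestinationPortRange 443 `
-        -Priority $priority | Out-Null
-    $priority += 10
-}
-
-$nsg | Set-AzNetworkSecurityGroup
-```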
-
-## Prerequisites for migrating SQL Server to Azure SQL Database
-
-In addition to Azure Database Migration Service prerequisites that are common to all migration scenarios, there are also prerequisites that apply specifically to one scenario or another.
-
-When using the Azure Database Migration Service to perform SQL Server to Azure SQL Database migrations, in addition to the prerequisites that are common to all migration scenarios, be sure to address the following additional prerequisites:
-
-* Create an instance of Azure SQL Database, which you do by following the details in the article [Create a database in Azure SQL Database in the Azure portal](/azure/azure-sql/database/single-database-create-quickstart).
-* Download and install the [Data Migration Assistant](https://www.microsoft.com/download/details.aspx?id=53595) v3.3 or later.
-* Open your Windows Firewall to allow the Azure Database Migration Service to access the source SQL Server, which by default is TCP port 1433.
-* If you are running multiple named SQL Server instances using dynamic ports, you may wish to enable the SQL Browser Service and allow access to UDP port 1434 through your firewalls so that the Azure Database Migration Service can connect to a named instance on your source server.
-* Create a server-level [firewall rule](/azure/azure-sql/database/firewall-configure) for SQL Database to allow the Azure Database Migration Service access to the target databases (see the example at the end of this section). Provide the subnet range of the virtual network used for the Azure Database Migration Service.
-* Ensure that the credentials used to connect to the source SQL Server instance have [CONTROL SERVER](/sql/t-sql/statements/grant-server-permissions-transact-sql) permissions.
-* Ensure that the credentials used to connect to the target database have the CONTROL DATABASE permission on the target database.
-
- > [!NOTE]
- > For a complete listing of the prerequisites required to use the Azure Database Migration Service to perform migrations from SQL Server to Azure SQL Database, see the tutorial [Migrate SQL Server to Azure SQL Database](./tutorial-sql-server-to-azure-sql.md).
- >
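-
-For example, a minimal sketch that creates a server-level firewall rule covering the subnet range used by the Azure Database Migration Service (the server name, rule name, and IP range below are placeholders):
-
-```powershell
-New-AzSqlServerFirewallRule -ResourceGroupName myResourceGroup `
-    -ServerName mysqldbserver `
-    -FirewallRuleName "AllowDmsSubnet" `
-    -StartIpAddress 10.0.1.0 `
-    -EndIpAddress 10.0.1.255
-```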
-
-## Prerequisites for migrating SQL Server to Azure SQL Managed Instance
-
-* Create a SQL Managed Instance by following the details in the article [Create an Azure SQL Managed Instance in the Azure portal](/azure/azure-sql/managed-instance/instance-create-quickstart).
-* Open your firewalls to allow SMB traffic on port 445 for the Azure Database Migration Service IP address or subnet range.
-* Open your Windows Firewall to allow the Azure Database Migration Service to access the source SQL Server, which by default is TCP port 1433.
-* If you are running multiple named SQL Server instances using dynamic ports, you may wish to enable the SQL Browser Service and allow access to UDP port 1434 through your firewalls so that the Azure Database Migration Service can connect to a named instance on your source server.
-* Ensure that the logins used to connect to the source SQL Server and the target Managed Instance are members of the sysadmin server role.
-* Create a network share that the Azure Database Migration Service can use to back up the source database (see the example at the end of this section).
-* Ensure that the service account running the source SQL Server instance has write privileges on the network share that you created and that the computer account for the source server has read/write access to the same share.
-* Make a note of a Windows user (and password) that has full control privilege on the network share that you previously created. The Azure Database Migration Service impersonates the user credential to upload the backup files to the Azure Storage container for the restore operation.
-* Create a blob container and retrieve its SAS URI by using the steps in the article [Manage Azure Blob Storage resources with Storage Explorer](../vs-azure-tools-storage-explorer-blobs.md#get-the-sas-for-a-blob-container). Be sure to select all permissions (Read, Write, Delete, List) on the policy window while creating the SAS URI.
-* Ensure that both the Azure Database Migration Service IP address and the Azure SQL Managed Instance subnet can communicate with the blob container.
-
- > [!NOTE]
- > For a complete listing of the prerequisites required to use the Azure Database Migration Service to perform migrations from SQL Server to SQL Managed Instance, see the tutorial [Migrate SQL Server to SQL Managed Instance](./tutorial-sql-server-to-managed-instance.md).
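-
-For example, a minimal sketch that creates the SMB network share on the Windows server hosting the backups and grants write access to the SQL Server service account (the share name, path, and accounts are placeholders):
-
-```powershell
-# Run on the Windows server that will host the backup share
-New-SmbShare -Name "SQLBackups" -Path "D:\SQLBackups" `
-    -FullAccess "CONTOSO\sqlserviceaccount", "CONTOSO\migrationuser"
-```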
-
-## Next steps
-
-For an overview of the Azure Database Migration Service and regional availability, see the article [What is the Azure Database Migration Service](dms-overview.md).
dms Quickstart Create Data Migration Service Hybrid Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/quickstart-create-data-migration-service-hybrid-portal.md
- Title: "Quickstart: Create a hybrid mode instance with Azure portal"-
-description: Use the Azure portal to create an instance of Azure Database Migration Service in hybrid mode.
--- Previously updated : 03/13/2020---
- - mode-ui
- - subject-rbac-steps
- - sql-migration-content
--
-# Quickstart: Create a hybrid mode instance with Azure portal & Azure Database Migration Service
-
-Azure Database Migration Service hybrid mode manages database migrations by using a migration worker that's hosted on-premises together with an instance of Azure Database Migration Service running in the cloud. Hybrid mode is especially useful for scenarios in which there's a lack of site-to-site connectivity between the on-premises network and Azure or if there's limited site-to-site connectivity bandwidth.
-
->[!NOTE]
->Currently, Azure Database Migration Service running in hybrid mode supports SQL Server migrations to:
->
->- Azure SQL Managed Instance with near zero downtime (online).
->- Azure SQL Database single database with some downtime (offline).
->- MongoDB to Azure Cosmos DB with near zero downtime (online).
->- MongoDB to Azure Cosmos DB with some downtime (offline).
-
-In this Quickstart, you use the Azure portal to create an instance of Azure Database Migration Service in hybrid mode. Afterwards, you download, install, and set up the hybrid worker in your on-premises network. During preview, you can use Azure Database Migration Service hybrid mode to migrate data from an on-premises instance of SQL Server to Azure SQL Database.
-
-> [!NOTE]
-> The Azure Database Migration Service hybrid installer runs on Microsoft Windows Server 2012 R2, Windows Server 2016, Windows Server 2019, and Windows 10.
-
-> [!IMPORTANT]
-> The Azure Database Migration Service hybrid installer requires .NET 4.7.2 or later. To find the latest versions of .NET, see the [Download .NET Framework](https://dotnet.microsoft.com/download/dotnet-framework) page.
-
-If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
-
-## Sign in to the Azure portal
-
-From a browser, sign in to the [Azure portal](https://portal.azure.com).
-
-The default view is your service dashboard.
-
-## Register the resource provider
-
-Register the Microsoft.DataMigration resource provider before you create your first instance of Azure Database Migration Service.
-
-1. In the Azure portal, select **Subscriptions**, select the subscription in which you want to create the instance of Azure Database Migration Service, and then select **Resource providers**.
-
- ![Search resource provider](media/quickstart-create-data-migration-service-hybrid-portal/dms-portal-search-resource-provider.png)
-
-2. Search for migration, and then to the right of **Microsoft.DataMigration**, select **Register**.
-
- ![Register resource provider](media/quickstart-create-data-migration-service-hybrid-portal/dms-portal-register-resource-provider.png)
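-
-If you prefer to script this step, the resource provider can also be registered with Azure PowerShell, for example:
-
-```powershell
-Register-AzResourceProvider -ProviderNamespace Microsoft.DataMigration
-```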
-
-## Create an instance of the service
-
-1. Select +**Create a resource** to create an instance of Azure Database Migration Service.
-
-2. Search the Marketplace for "migration", select **Azure Database Migration Service**, and then on the **Azure Database Migration Service** screen, select **Create**.
-
-3. On the **Create Migration Service** screen:
-
- - Choose a **Service Name** that is memorable and unique to identify your instance of Azure Database Migration Service.
- - Select the Azure **Subscription** in which you want to create the instance.
- - Select an existing **Resource Group** or create a new one.
- - Choose the **Location** that is closest to your source or target server.
- - For **Service mode**, select **Hybrid (Preview)**.
-
- ![Create migration service - basics](media/quickstart-create-data-migration-service-hybrid-portal/dms-create-service-basics.png)
-
-4. Select **Review + create**.
-
-5. On the **Review + create** tab, review the Terms, verify the other information provided, and then select **Create**.
-
- ![Create migration service - Review + create](media/quickstart-create-data-migration-service-hybrid-portal/dms-create-service-review-and-create.png)
-
- After a few moments, your instance of Azure Database Migration Service in hybrid mode is created and ready to set up. The Azure Database Migration Service instance displays as shown in the following image:
-
- ![Azure Database Migration Service hybrid mode instance](media/quickstart-create-data-migration-service-hybrid-portal/dms-instance-hybrid-mode.png)
-
-6. After the service is created, select **Properties**, and then copy the value displayed in the **Resource Id** box, which you'll use to install the Azure Database Migration Service hybrid worker.
-
- ![Azure Database Migration Service hybrid mode properties](media/quickstart-create-data-migration-service-hybrid-portal/dms-copy-resource-id.png)
-
-## Create Azure App registration ID
-
-You need to create an Azure App registration ID that the on-premises hybrid worker can use to communicate with Azure Database Migration Service in the cloud.
-
-1. In the Azure portal, select **Microsoft Entra ID**, select **App registrations**, and then select **New registration**.
-2. Specify a name for the application, and then, under **Supported account types**, select the type of accounts to support to specify who can use the application.
-
- ![Azure Database Migration Service hybrid mode register application](media/quickstart-create-data-migration-service-hybrid-portal/dms-register-application.png)
-
-3. Use the default values for the **Redirect URI (optional)** fields, and then select **Register**.
-
-4. After App ID registration is completed, make a note of the **Application (client) ID**, which you'll use while installing the hybrid worker.
-
-5. In the Azure portal, navigate to Azure Database Migration Service.
-
-6. In the navigation menu, select **Access control (IAM)**.
-
-7. Select **Add** > **Add role assignment**.
-
- :::image type="content" source="~/reusable-content/ce-skilling/azure/media/role-based-access-control/add-role-assignment-menu-generic.png" alt-text="Screenshot showing Access control (IAM) page with Add role assignment menu open.":::
-
-8. On the **Role** tab, select the **Contributor** role.
-
- :::image type="content" source="~/reusable-content/ce-skilling/azure/media/role-based-access-control/add-role-assignment-role-generic.png" alt-text="Screenshot showing Add role assignment page with Role tab selected.":::
-
-9. On the **Members** tab, select **User, group, or service principal**, and then select the App ID name.
-
-10. On the **Review + assign** tab, select **Review + assign** to assign the role.
-
- For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.yml).
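-
-If you prefer to script the app registration and role assignment, the following is a minimal sketch using Az PowerShell (the display name and scope are placeholders, and output property names can vary slightly between Az module versions):
-
-```powershell
-# Register an application and create its service principal
-$app = New-AzADApplication -DisplayName "DmsHybridWorkerApp"
-New-AzADServicePrincipal -ApplicationId $app.AppId
-
-# Grant the application Contributor on the DMS instance
-# (use the resource ID you copied from the Properties page)
-New-AzRoleAssignment -ApplicationId $app.AppId `
-    -RoleDefinitionName "Contributor" `
-    -Scope "<DMS resource ID>"
-```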
-
-## Download and install the hybrid worker
-
-1. In the Azure portal, navigate to your instance of Azure Database Migration Service.
-
-2. Under **Settings**, select **Hybrid**, and then select **Installer download** to download the hybrid worker.
-
- ![Azure Database Migration Service hybrid worker download](media/quickstart-create-data-migration-service-hybrid-portal/dms-installer-download.png)
-
-3. Extract the ZIP file on the server that will be hosting the Azure Database Migration Service hybrid worker.
-
- > [!IMPORTANT]
- > The Azure Database Migration Service hybrid installer requires .NET 4.7.2 or later. To find the latest versions of .NET, see the [Download .NET Framework](https://dotnet.microsoft.com/download/dotnet-framework) page.
-
-4. In the install folder, locate and open the **dmsSettings.json** file, specify the **ApplicationId** and **resourceId**, and then save the file.
-
- ![Azure Database Migration Service hybrid worker settings](media/quickstart-create-data-migration-service-hybrid-portal/dms-settings.png)
-
-5. Generate a certificate that Azure Database Migration Service can use to authenticate the communication from the hybrid worker by using the following command.
-
- ```
- <drive>:\<folder>\Install>DMSWorkerBootstrap.exe -a GenerateCert
- ```
-
- A certificate is generated in the Install folder.
-
- ![Azure Database Migration Service hybrid worker certificate](media/quickstart-create-data-migration-service-hybrid-portal/dms-certificate.png)
-
-6. In the Azure portal, navigate to the App ID, under **Manage**, select **Certificates & secrets**, and then select **Upload certificate** to select the public certificate you generated.
-
- ![Azure Database Migration Service hybrid worker certificate upload](media/quickstart-create-data-migration-service-hybrid-portal/dms-app-upload-certificate.png)
-
-7. Install the Azure Database Migration Service hybrid worker on your on-premises server by running the following command:
-
- ```
- <drive>:\<folder>\Install>DMSWorkerBootstrap.exe -a Install -IAcceptDMSLicenseTerms -d
- ```
-
- > [!NOTE]
- > When running the install command, you can also use the following parameters:
- >
- > - **-TelemetryOptOut** - Stops the worker from sending telemetry but continues to log locally minimally. The installer still sends telemetry.
- > - **-p {InstallLocation}**. Enables changing the installation path, which by default is "C:\Program Files\DatabaseMigrationServiceHybrid".
-
-8. If the installer runs without error, then the service will show an online status in Azure Database Migration Service and you're ready to migrate your databases.
-
- ![Azure Database Migration Service online](media/quickstart-create-data-migration-service-hybrid-portal/dms-instance-hybrid-mode-online.png)
-
-## Uninstall Azure Database Migration Service hybrid mode
-
-Currently, uninstalling Azure Database Migration Service hybrid mode is supported only via the Azure Database Migration Service hybrid worker installer on your on-premises server, by using the following command:
-
-```
-<drive>:\<folder>\Install>DMSWorkerBootstrap.exe -a uninstall
-```
-
-> [!NOTE]
-> When running the uninstall command, you can also use the "-ReuseCert" parameter, which keeps the AdApp cert generated by the generateCert workflow. This enables using the same cert that was previously generated and uploaded.
-
-## Set up the Azure Database Migration Service hybrid worker using PowerShell
-
-In addition to installing the Azure Database Migration Service hybrid worker via the Azure portal, we provide a [PowerShell script](https://techcommunity.microsoft.com/gxcuf89792/attachments/gxcuf89792/MicrosoftDataMigration/119/1/DMS_Hybrid_Script.zip) that you can use to automate the worker installation steps after you create a new instance of Azure Database Migration Service in hybrid mode. The script:
-
-1. Creates a new AdApp.
-2. Downloads the installer.
-3. Runs the generateCert workflow.
-4. Uploads the certificate.
-5. Adds the AdApp as contributor to your Azure Database Migration Service instance.
-6. Runs the install workflow.
-
-This script is intended for quick prototyping when the user already has all the necessary permissions in the environment. Note that in your production environment, the AdApp and Cert may have different requirements, so the script could fail.
-
-> [!IMPORTANT]
-> This script assumes that there is an existing instance of Azure Database Migration Service in hybrid mode and that the Azure account used has permissions to create AdApps in the tenant and to modify Azure RBAC on the subscription.
-
-Fill in the parameters at the top of the script, and then run the script from an Administrator PowerShell instance.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Migrate SQL Server to an Azure SQL Managed Instance online](tutorial-sql-server-managed-instance-online.md)
-> [Migrate SQL Server to Azure SQL Database offline](tutorial-sql-server-to-azure-sql.md)
dms Quickstart Create Data Migration Service Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/quickstart-create-data-migration-service-portal.md
- Title: "Quickstart: Create an instance using the Azure portal"-
-description: Use the Azure portal to create an instance of Azure Database Migration Service.
--- Previously updated : 01/29/2021---
- - mode-ui
- - sql-migration-content
--
-# Quickstart: Create an instance of the Azure Database Migration Service by using the Azure portal
-
-In this quickstart, you use the Azure portal to create an instance of Azure Database Migration Service. After you create the instance, you can use it to migrate data from multiple database sources to Azure data platforms, such as from SQL Server to Azure SQL Database or from SQL Server to an Azure SQL Managed Instance.
-
-If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
-
-## Sign in to the Azure portal
-
-From a web browser, sign in to the [Azure portal](https://portal.azure.com). The default view is your service dashboard.
-
-> [!NOTE]
-> You can create up to 10 instances of DMS per subscription per region. If you require a greater number of instances, please create a support ticket.
-
-<!-- Register the resource provider -->
-
-<!-- Create an instance of the service -->
-
-## Clean up resources
-
-You can clean up the resources created in this quickstart by deleting the [Azure resource group](../azure-resource-manager/management/overview.md). To delete the resource group, navigate to the instance of the Azure Database Migration Service that you created. Select the **Resource group** name, and then select **Delete resource group**. This action deletes all assets in the resource group as well as the group itself.
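-
-Alternatively, assuming the resource group is named *myResourceGroup*, you can delete it with Azure PowerShell:
-
-```powershell
-Remove-AzResourceGroup -Name myResourceGroup
-```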
-
-## Next steps
-
-* [Migrate SQL Server to Azure SQL Database](tutorial-sql-server-to-azure-sql.md)
-* [Migrate SQL Server to an Azure SQL Managed Instance offline](tutorial-sql-server-to-managed-instance.md)
-* [Migrate SQL Server to an Azure SQL Managed Instance online](tutorial-sql-server-managed-instance-online.md)
dms Resource Custom Roles Sql Db Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-custom-roles-sql-db-managed-instance.md
- Title: "Custom roles: Online SQL Server to SQL Managed Instance migrations"-
-description: Learn to use the custom roles for SQL Server to Azure SQL Managed Instance online migrations.
--- Previously updated : 02/08/2021---
- - sql-migration-content
--
-# Custom roles for SQL Server to Azure SQL Managed Instance online migrations
-
-Azure Database Migration Service uses an APP ID to interact with Azure Services. The APP ID requires either the Contributor role at the Subscription level (which many Corporate security departments won't allow) or creation of custom roles that grant the specific permissions that Azure Database Migration Service requires. Since there's a limit of 2,000 custom roles in Microsoft Entra ID, you may want to combine all permissions required specifically by the APP ID into one or two custom roles, and then grant the APP ID the custom role on specific objects or resource groups (vs. at the subscription level). If the number of custom roles isn't a concern, you can split the custom roles by resource type, to create three custom roles in total as described below.
-
-The AssignableScopes section of the role definition json string allows you to control where the permissions appear in the **Add Role Assignment** UI in the portal. You'll likely want to define the role at the resource group or even resource level to avoid cluttering the UI with extra roles. Note that this doesn't perform the actual role assignment.
-
-## Minimum number of roles
-
-We currently recommend creating a minimum of two custom roles for the APP ID, one at the resource level and the other at the subscription level.
-
-> [!NOTE]
-> The last custom role requirement may eventually be removed, as new SQL Managed Instance code is deployed to Azure.
-
-**Custom Role for the APP ID**. This role is required for Azure Database Migration Service migration at the *resource* or *resource group* level that hosts the Azure Database Migration Service (for more information about the APP ID, see the article [Use the portal to create a Microsoft Entra application and service principal that can access resources](/entra/identity-platform/howto-create-service-principal-portal)).
-
-```json
-{
- "Name": "DMS Role - App ID",
- "IsCustom": true,
- "Description": "DMS App ID access to complete MI migrations",
- "Actions": [
- "Microsoft.Storage/storageAccounts/read",
- "Microsoft.Storage/storageAccounts/listKeys/action",
- "Microsoft.Storage/storageaccounts/blobservices/read",
- "Microsoft.Storage/storageaccounts/blobservices/write",
- "Microsoft.Sql/managedInstances/read",
- "Microsoft.Sql/managedInstances/write",
- "Microsoft.Sql/managedInstances/databases/read",
- "Microsoft.Sql/managedInstances/databases/write",
- "Microsoft.Sql/managedInstances/databases/delete",
- "Microsoft.Sql/managedInstances/metrics/read",
- "Microsoft.DataMigration/locations/*",
- "Microsoft.DataMigration/services/*"
- ],
- "NotActions": [
- ],
- "AssignableScopes": [
- "/subscriptions/<subscription_id>/ResourceGroups/<StorageAccount_rg_name>",
- "/subscriptions/<subscription_id>/ResourceGroups/<ManagedInstance_rg_name>",
- "/subscriptions/<subscription_id>/ResourceGroups/<DMS_rg_name>",
- ]
-}
-```
-
-**Custom role for the APP ID - subscription**. This role is required for Azure Database Migration Service migration at *subscription* level that hosts the SQL Managed Instance.
-
-```json
-{
- "Name": "DMS Role - App ID - Sub",
- "IsCustom": true,
- "Description": "DMS App ID access at subscription level to complete MI migrations",
- "Actions": [
- "Microsoft.Sql/locations/managedDatabaseRestoreAzureAsyncOperation/*"
- ],
- "NotActions": [
- ],
- "AssignableScopes": [
- "/subscriptions/<subscription_id>"
- ]
-}
-```
-
-Store the json above in two text files. You can then use either the AzureRM or Az PowerShell cmdlets, or the Azure CLI, to create the roles by using **New-AzureRmRoleDefinition (AzureRM)** or **New-AzRoleDefinition (Az)**, as shown in the following example.
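-
-For example, a minimal sketch with the Az module (the file names are placeholders for wherever you saved the role definitions):
-
-```powershell
-New-AzRoleDefinition -InputFile ".\dms-role-app-id.json"
-New-AzRoleDefinition -InputFile ".\dms-role-app-id-sub.json"
-```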
-
-For more information, see the article [Azure custom roles](../role-based-access-control/custom-roles.md).
-
-After you create these custom roles, you must add role assignments to users and APP ID(s) to the appropriate resources or resource groups:
-
-* The "DMS Role - App ID" role must be granted to the APP ID that will be used for the migrations, and also at the Storage Account, Azure Database Migration Service instance, and SQL Managed Instance resource levels. It is granted at the resource or resource group level that hosts the Azure Database Migration Service.
-* The "DMS Role - App ID - Sub" role must be granted to the APP ID at the subscription level that hosts the SQL Managed Instance (granting at the resource or resource group will fail). This requirement is temporary until a code update is deployed.
-
-## Expanded number of roles
-
-If the number of custom roles in your Microsoft Entra ID isn't a concern, we recommend you create a total of three roles. You'll still need the "DMS Role - App ID - Sub" role, but the "DMS Role - App ID" role above is split by resource type into two different roles.
-
-**Custom role for the APP ID for SQL Managed Instance**
-
-```json
-{
- "Name": "DMS Role - App ID - SQL MI",
- "IsCustom": true,
- "Description": "DMS App ID access to complete MI migrations",
- "Actions": [
- "Microsoft.Sql/managedInstances/read",
- "Microsoft.Sql/managedInstances/write",
- "Microsoft.Sql/managedInstances/databases/read",
- "Microsoft.Sql/managedInstances/databases/write",
- "Microsoft.Sql/managedInstances/databases/delete",
- "Microsoft.Sql/managedInstances/metrics/read"
- ],
- "NotActions": [
- ],
- "AssignableScopes": [
- "/subscriptions/<subscription_id>/resourceGroups/<ManagedInstance_rg_name>"
- ]
-}
-```
-
-**Custom role for the APP ID for Storage**
-
-```json
-{
- "Name": "DMS Role - App ID - Storage",
- "IsCustom": true,
- "Description": "DMS App ID storage access to complete MI migrations",
- "Actions": [
-"Microsoft.Storage/storageAccounts/read",
- "Microsoft.Storage/storageAccounts/listKeys/action",
- "Microsoft.Storage/storageaccounts/blobservices/read",
- "Microsoft.Storage/storageaccounts/blobservices/write"
- ],
- "NotActions": [
- ],
- "AssignableScopes": [
- "/subscriptions/<subscription_id>/resourceGroups/<StorageAccount_rg_name>"
- ]
-}
-```
-
-## Role assignment
-
-To assign a role to users/APP ID, open the Azure portal and perform the following steps:
-
-1. Navigate to the resource group or resource (except for the role that needs to be granted on the subscription), go to **Access Control**, and then scroll to find the custom roles you just created.
-
-2. Select the appropriate role, select the APP ID, and then save the changes.
-
- Your APP ID(s) now appears listed on the **Role assignments** tab.
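-
-If you prefer to assign the roles with PowerShell instead of the portal, a minimal sketch (the application ID and scopes are placeholders):
-
-```powershell
-New-AzRoleAssignment -ApplicationId "<app id guid>" `
-    -RoleDefinitionName "DMS Role - App ID" `
-    -Scope "/subscriptions/<subscription_id>/resourceGroups/<DMS_rg_name>"
-
-New-AzRoleAssignment -ApplicationId "<app id guid>" `
-    -RoleDefinitionName "DMS Role - App ID - Sub" `
-    -Scope "/subscriptions/<subscription_id>"
-```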
-
-## Next steps
-
-* Review the migration guidance for your scenario in the Microsoft [Database Migration Guide](/data-migration/).
dms Resource Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-network-topologies.md
- Title: Network topologies for SQL Managed Instance migrations-
-description: Learn the source and target configurations for Azure SQL Managed Instance migrations using the Azure Database Migration Service.
--- Previously updated : 01/08/2020---
- - sql-migration-content
--
-# Network topologies for Azure SQL Managed Instance migrations using Azure Database Migration Service
-
-This article discusses various network topologies that Azure Database Migration Service can work with to provide a comprehensive migration experience from SQL Servers to Azure SQL Managed Instance.
-
-## Azure SQL Managed Instance configured for Hybrid workloads
-
-Use this topology if your Azure SQL Managed Instance is connected to your on-premises network. This approach provides the simplest network routing and yields maximum data throughput during the migration.
-
-![Network Topology for Hybrid Workloads](media/resource-network-topologies/hybrid-workloads.png)
-
-**Requirements**
-
-- In this scenario, the SQL Managed Instance and the Azure Database Migration Service instance are created in the same Microsoft Azure Virtual Network, but they use different subnets.
-- The virtual network used in this scenario is also connected to the on-premises network by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md).
-
-## SQL Managed Instance isolated from the on-premises network
-
-Use this network topology if your environment requires one or more of the following scenarios:
-
-- The SQL Managed Instance is isolated from on-premises connectivity, but your Azure Database Migration Service instance is connected to the on-premises network.
-- If Azure role-based access control (Azure RBAC) policies are in place and you need to limit the users to accessing the same subscription that is hosting the SQL Managed Instance.
-- The virtual networks used for the SQL Managed Instance and Azure Database Migration Service are in different subscriptions.
-
-![Network Topology for Managed Instance isolated from the on-premises network](media/resource-network-topologies/mi-isolated-workload.png)
-
-**Requirements**
-
-- The virtual network that Azure Database Migration Service uses for this scenario must also be connected to the on-premises network by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md).
-- Set up [VNet network peering](../virtual-network/virtual-network-peering-overview.md) between the virtual network used for SQL Managed Instance and Azure Database Migration Service.
-
-## Cloud-to-cloud migrations: Shared virtual network
-
-Use this topology if the source SQL Server is hosted in an Azure VM and shares the same virtual network with SQL Managed Instance and Azure Database Migration Service.
-
-![Network Topology for Cloud-to-Cloud migrations with a shared VNet](media/resource-network-topologies/cloud-to-cloud.png)
-
-**Requirements**
-
-- No additional requirements.
-
-## Cloud to cloud migrations: Isolated virtual network
-
-Use this network topology if your environment requires one or more of the following scenarios:
-
-- The SQL Managed Instance is provisioned in an isolated virtual network.
-- If Azure role-based access control (Azure RBAC) policies are in place and you need to limit the users to accessing the same subscription that is hosting SQL Managed Instance.
-- The virtual networks used for SQL Managed Instance and Azure Database Migration Service are in different subscriptions.
-
-![Network Topology for Cloud-to-Cloud migrations with an isolated VNet](media/resource-network-topologies/cloud-to-cloud-isolated.png)
-
-**Requirements**
-
-- Set up [VNet network peering](../virtual-network/virtual-network-peering-overview.md) between the virtual network used for SQL Managed Instance and Azure Database Migration Service (see the example below).
-
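-A minimal sketch of setting up the peering in both directions with Az PowerShell (the virtual network and resource group names are placeholders):
-
-```powershell
-$dmsVnet = Get-AzVirtualNetwork -ResourceGroupName MyDmsResourceGroup -Name MyDmsVNet
-$miVnet  = Get-AzVirtualNetwork -ResourceGroupName MyMiResourceGroup -Name MyMiVNet
-
-# Peering must be created from each virtual network to the other
-Add-AzVirtualNetworkPeering -Name "dms-to-mi" -VirtualNetwork $dmsVnet -RemoteVirtualNetworkId $miVnet.Id
-Add-AzVirtualNetworkPeering -Name "mi-to-dms" -VirtualNetwork $miVnet -RemoteVirtualNetworkId $dmsVnet.Id
-```
-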
-## Inbound security rules
-
-| **NAME** | **PORT** | **PROTOCOL** | **SOURCE** | **DESTINATION** | **ACTION** |
-|--|--|--|--|--|--|
-| DMS_subnet | Any | Any | DMS SUBNET | Any | Allow |
-
-## Outbound security rules
-
-| **NAME** | **PORT** | **PROTOCOL** | **SOURCE** | **DESTINATION** | **ACTION** | **Reason for rule** |
-|--|--|--|--|--|--|--|
-| ServiceBus | 443, ServiceTag: ServiceBus | TCP | Any | Any | Allow | Management plane communication through Service Bus. <br/>(If Microsoft peering is enabled, you may not need this rule.) |
-| Storage | 443, ServiceTag: Storage | TCP | Any | Any | Allow | Management plane using Azure blob storage. <br/>(If Microsoft peering is enabled, you may not need this rule.) |
-| Diagnostics | 443, ServiceTag: AzureMonitor | TCP | Any | Any | Allow | DMS uses this rule to collect diagnostic information for troubleshooting purposes. <br/>(If Microsoft peering is enabled, you may not need this rule.) |
-| SQL Source server | 1433 (or TCP IP port that SQL Server is listening to) | TCP | Any | On-premises address space | Allow | SQL Server source connectivity from DMS <br/>(If you have site-to-site connectivity, you may not need this rule.) |
-| SQL Server named instance | 1434 | UDP | Any | On-premises address space | Allow | SQL Server named instance source connectivity from DMS <br/>(If you have site-to-site connectivity, you may not need this rule.) |
-| SMB share | 445 (if the scenario requires it) | TCP | Any | On-premises address space | Allow | SMB network share for DMS to store database backup files for migrations to Azure SQL Managed Instance and SQL Server on Azure VM <br/>(If you have site-to-site connectivity, you may not need this rule). |
-| DMS_subnet | Any | Any | Any | DMS_Subnet | Allow | |
-
-## See also
-
-- [Migrate SQL Server to SQL Managed Instance](./tutorial-sql-server-to-managed-instance.md)
-- [Overview of prerequisites for using Azure Database Migration Service](./pre-reqs.md)
-- [Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md)
-
-## Next steps
-
-- For an overview of Azure Database Migration Service, see the article [What is Azure Database Migration Service?](dms-overview.md).
-- For current information about regional availability of Azure Database Migration Service, see the [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=database-migration) page.
dms Tutorial Sql Server Managed Instance Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-online.md
- Title: "Tutorial: Migrate SQL Server online to SQL Managed Instance"-
-description: Learn to perform an online migration from SQL Server to an Azure SQL Managed Instance by using Azure Database Migration Service (classic)
--- Previously updated : 06/07/2023---
- - sql-migration-content
--
-# Tutorial: Migrate SQL Server to an Azure SQL Managed Instance online using DMS (classic)
--
-> [!NOTE]
-> This tutorial uses an older version of the Azure Database Migration Service. For improved functionality and supportability, consider migrating to Azure SQL Managed Instance by using the [Azure SQL migration extension for Azure Data Studio](/data-migration/sql-server/managed-instance/database-migration-service).
->
-> To compare features between versions, review [compare versions](dms-overview.md#compare-versions).
-
-You can use Azure Database Migration Service to migrate the databases from a SQL Server instance to an [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview) with minimal downtime. For extra methods that may require some manual effort, see the article [SQL Server instance migration to Azure SQL Managed Instance](/azure/azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-guide).
-
-In this tutorial, you migrate the [AdventureWorks2016](/sql/samples/adventureworks-install-configure#download-backup-files) database from an on-premises instance of SQL Server to a SQL Managed Instance with minimal downtime by using Azure Database Migration Service.
-
-You learn how to:
-> [!div class="checklist"]
->
-> * Register the Azure DataMigration resource provider.
-> * Create an instance of Azure Database Migration Service.
-> * Create a migration project and start online migration by using Azure Database Migration Service.
-> * Monitor the migration.
-> * Perform the migration cutover when you are ready.
-
-> [!IMPORTANT]
-> For online migrations from SQL Server to SQL Managed Instance using Azure Database Migration Service, you must provide the full database backup and subsequent log backups in the SMB network share that the service can use to migrate your databases. Azure Database Migration Service does not initiate any backups, and instead uses existing backups, which you may already have as part of your disaster recovery plan, for the migration.
-> Each backup can be written to either a separate backup file or multiple backup files. However, appending multiple backups (that is, full and t-log) into a single backup media isn't supported.
-> Use compressed backups to reduce the likelihood of experiencing potential issues associated with migrating large backups.
-
-> [!NOTE]
-> Using Azure Database Migration Service to perform an online migration requires creating an instance based on the Premium pricing tier.
-
-> [!IMPORTANT]
-> For an optimal migration experience, Microsoft recommends creating an instance of Azure Database Migration Service in the same Azure region as the target database. Moving data across regions or geographies can slow down the migration process and introduce errors.
-
-> [!IMPORTANT]
-> Reduce the duration of the online migration process as much as possible to minimize the risk of interruption caused by instance reconfiguration or planned maintenance. If such an event occurs, the migration process starts over from the beginning. For planned maintenance, there's a grace period of 36 hours before the migration process is restarted.
--
-This article describes an online migration from SQL Server to a SQL Managed Instance. For an offline migration, see [Migrate SQL Server to a SQL Managed Instance offline using DMS](tutorial-sql-server-to-managed-instance.md).
-
-## Prerequisites
-
-To complete this tutorial, you need to:
-
-* Download and install [SQL Server 2016 or later](https://www.microsoft.com/sql-server/sql-server-downloads).
-* Enable the TCP/IP protocol, which is disabled by default during SQL Server Express installation, by following the instructions in the article [Enable or Disable a Server Network Protocol](/sql/database-engine/configure-windows/enable-or-disable-a-server-network-protocol#SSMSProcedure).
-* [Restore the AdventureWorks2016 database to the SQL Server instance.](/sql/samples/adventureworks-install-configure#restore-to-sql-server)
-* Create a Microsoft Azure Virtual Network for Azure Database Migration Service by using the Azure Resource Manager deployment model, which provides site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md). [Learn network topologies for SQL Managed Instance migrations using Azure Database Migration Service](./resource-network-topologies.md). For more information about creating a virtual network, see the [Virtual Network Documentation](../virtual-network/index.yml), and especially the quickstart articles with step-by-step details.
-
- > [!NOTE]
- > During virtual network setup, if you use ExpressRoute with network peering to Microsoft, add the following service [endpoints](../virtual-network/virtual-network-service-endpoints-overview.md) to the subnet in which the service will be provisioned:
- >
- > * Target database endpoint (for example, SQL endpoint, Azure Cosmos DB endpoint, and so on)
- > * Storage endpoint
- > * Service bus endpoint
- >
- > This configuration is necessary because Azure Database Migration Service lacks internet connectivity.
- >
 >If you don't have site-to-site connectivity between the on-premises network and Azure or if there is limited site-to-site connectivity bandwidth, consider using Azure Database Migration Service in hybrid mode (Preview). Hybrid mode leverages an on-premises migration worker together with an instance of Azure Database Migration Service running in the cloud. To create an instance of Azure Database Migration Service in hybrid mode, see the article [Create an instance of Azure Database Migration Service in hybrid mode using the Azure portal](./quickstart-create-data-migration-service-portal.md).
-
- > [!IMPORTANT]
- > Regarding the storage account used as part of the migration, you must either:
 - > * Choose to allow all networks to access the storage account.
 - > * Turn on [subnet delegation](../virtual-network/manage-subnet-delegation.md) on the SQL Managed Instance subnet and update the storage account firewall rules to allow this subnet.
- > * You can't use an Azure Storage account that has a private endpoint with Azure Database Migration Service.
-
-* Ensure that your virtual network Network Security Group rules don't block the outbound port 443 of ServiceTag for ServiceBus, Storage and AzureMonitor. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
-* Configure your [Windows Firewall for source database engine access](/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access).
-* Open your Windows Firewall to allow Azure Database Migration Service to access the source SQL Server, which by default is TCP port 1433. If your default instance is listening on some other port, add that to the firewall.
-* If you're running multiple named SQL Server instances using dynamic ports, you may wish to enable the SQL Browser Service and allow access to UDP port 1434 through your firewalls so that Azure Database Migration Service can connect to a named instance on your source server.
-* If you're using a firewall appliance in front of your source databases, you may need to add firewall rules to allow Azure Database Migration Service to access the source database(s) for migration, and files via SMB port 445.
-* Create a SQL Managed Instance by following the detail in the article [Create a SQL Managed Instance in the Azure portal](/azure/azure-sql/managed-instance/instance-create-quickstart).
-* Ensure that the logins used to connect the source SQL Server and the target SQL Managed Instance are members of the sysadmin server role.
-* Provide an SMB network share that contains your full database backup files and subsequent transaction log backup files, which Azure Database Migration Service can use for database migration.
-* Ensure that the service account running the source SQL Server instance has write privileges on the network share that you created and that the computer account for the source server has read/write access to the same share.
-* Make a note of a Windows user (and password) that has full control privilege on the network share that you previously created. Azure Database Migration Service impersonates the user credential to upload the backup files to Azure Storage container for restore operation.
-* Create a Microsoft Entra Application ID that generates the Application ID key that Azure Database Migration Service can use to connect to target Azure SQL Managed Instance and Azure Storage Container. For more information, see the article [Use portal to create a Microsoft Entra application and service principal that can access resources](/entra/identity-platform/howto-create-service-principal-portal).
-
- > [!NOTE]
- > The Application ID used by the Azure Database Migration Service supports secret (password-based) authentication for service principals. It does not support certificate-based authentication.
-
- > [!NOTE]
- > Azure Database Migration Service requires the Contributor permission on the subscription for the specified Application ID. Alternatively, you can create custom roles that grant the specific permissions that Azure Database Migration Service requires. For step-by-step guidance about using custom roles, see the article [Custom roles for SQL Server to SQL Managed Instance online migrations](./resource-custom-roles-sql-db-managed-instance.md).
-
-* Create or make a note of a **Standard Performance tier** Azure Storage account that the DMS service can upload the database backup files to and use for migrating databases. Make sure to create the storage account in the same region where the Azure Database Migration Service instance is created. A scripted sketch of the service principal and storage account setup follows this prerequisites list.
-
- > [!NOTE]
- > When you migrate a database that's protected by [Transparent Data Encryption](/azure/azure-sql/database/transparent-data-encryption-tde-overview) to a managed instance by using online migration, the corresponding certificate from the on-premises or Azure VM SQL Server instance must be migrated before the database restore. For detailed steps, see [Migrate a TDE cert to a managed instance](/azure/azure-sql/database/transparent-data-encryption-tde-overview).
---
-> [!NOTE]
-> For additional detail, see the article [Network topologies for Azure SQL Managed Instance migrations using Azure Database Migration Service](./resource-network-topologies.md).
-
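The following PowerShell sketch illustrates one way to set up the Microsoft Entra application (service principal), the Contributor role assignment, and the Standard-tier storage account called out in the preceding prerequisites. It assumes the Az modules and uses placeholder names, region, and subscription ID; property names such as `AppId` can differ slightly across Az versions.

```powershell
# Assumes the Az PowerShell modules; all names, the region, and the subscription ID are placeholders.
Connect-AzAccount

$subscriptionId = "00000000-0000-0000-0000-000000000000"
$resourceGroup  = "myResourceGroup"
$location       = "eastus"    # use the same region as the DMS instance

# Service principal whose Application (client) ID and secret DMS uses to reach the target.
$sp = New-AzADServicePrincipal -DisplayName "dms-migration-sp"

# DMS needs Contributor (or an equivalent custom role) on the subscription for this application.
New-AzRoleAssignment -ApplicationId $sp.AppId `
    -RoleDefinitionName "Contributor" `
    -Scope "/subscriptions/$subscriptionId"

# Standard performance tier storage account in the same region as the DMS instance.
New-AzStorageAccount -ResourceGroupName $resourceGroup `
    -Name "dmsmigrationstorage001" `
    -Location $location `
    -SkuName Standard_LRS -Kind StorageV2
```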
-## Create a migration project
-
-After an instance of the service is created, locate it within the Azure portal, open it, and then create a new migration project.
-
-1. In the Azure portal menu, select **All services**. Search for and select **Azure Database Migration Services**.
-
- ![Locate all instances of Azure Database Migration Service](media/tutorial-sql-server-to-managed-instance-online/dms-search.png)
-
-2. On the **Azure Database Migration Services** screen, select the Azure Database Migration Service instance that you created.
-
-3. Select **New Migration Project**.
-
- ![Locate your instance of Azure Database Migration Service](media/tutorial-sql-server-to-managed-instance-online/dms-create-project-1.png)
-
-4. On the **New migration project** screen, specify a name for the project. In the **Source server type** text box, select **SQL Server**. In the **Target server type** text box, select **Azure SQL Database Managed Instance**. Then, for **Choose type of activity**, select **Online data migration**.
-
- ![Create Database Migration Service Project](media/tutorial-sql-server-to-managed-instance-online/dms-create-project-2.png)
-
-5. Select **Create and run activity** to create the project and run the migration activity.
-
-## Specify source details
-
-1. On the **Select source** screen, specify the connection details for the source SQL Server instance.
-
- Make sure to use a Fully Qualified Domain Name (FQDN) for the source SQL Server instance name. You can also use the IP Address for situations in which DNS name resolution isn't possible.
-
-2. If you haven't installed a trusted certificate on your server, select the **Trust server certificate** check box.
-
- When a trusted certificate isn't installed, SQL Server generates a self-signed certificate when the instance is started. This certificate is used to encrypt the credentials for client connections.
-
- > [!CAUTION]
 - > TLS connections that are encrypted using a self-signed certificate do not provide strong security. They are susceptible to man-in-the-middle attacks. You should not rely on TLS using self-signed certificates in a production environment or on servers that are connected to the internet.
-
- ![Source Details](media/tutorial-sql-server-to-managed-instance-online/dms-source-details.png)
-
-3. Select **Next: Select target**.
-
-## Specify target details
-
-1. On the **Select target** screen, specify the **Application ID** and **Key** that the DMS instance can use to connect to the target instance of SQL Managed Instance and the Azure Storage Account.
-
- For more information, see the article [Use portal to create a Microsoft Entra application and service principal that can access resources](/entra/identity-platform/howto-create-service-principal-portal).
-
-2. Select the **Subscription** containing the target instance of SQL Managed Instance, and then choose the target SQL Managed instance.
-
- If you haven't already provisioned the SQL Managed Instance, select the [link](/azure/azure-sql/managed-instance/instance-create-quickstart) to help you provision the instance. When the SQL Managed Instance is ready, return to this specific project to execute the migration.
-
-3. Provide **SQL User** and **Password** to connect to the SQL Managed Instance.
-
- ![Select Target](media/tutorial-sql-server-to-managed-instance-online/dms-target-details.png)
-
-4. Select **Next: Select databases**.
-
-## Specify source databases
-
-1. On the **Select databases** screen, select the source databases that you want to migrate.
-
- ![Select Source Databases](media/tutorial-sql-server-to-managed-instance-online/dms-source-database.png)
-
- > [!IMPORTANT]
- > If you use SQL Server Integration Services (SSIS), DMS does not currently support migrating the catalog database for your SSIS projects/packages (SSISDB) from SQL Server to SQL Managed Instance. However, you can provision SSIS in Azure Data Factory (ADF) and redeploy your SSIS projects/packages to the destination SSISDB hosted by SQL Managed Instance. For more information about migrating SSIS packages, see the article [Migrate SQL Server Integration Services packages to Azure](./how-to-migrate-ssis-packages.md).
-
-2. Select **Next: Configure migration settings**.
-
-## Configure migration settings
-
-1. On the **Configure migration settings** screen, provide the following details:
-
- | Parameter | Description |
- |--||
 |**SMB Network location share** | The local SMB network share or Azure file share that contains the full database backup files and transaction log backup files that Azure Database Migration Service can use for migration. The service account running the source SQL Server instance must have read\write privileges on this network share. Provide the FQDN or IP address of the server in the network share, for example, '\\\servername.domainname.com\backupfolder' or '\\\IP address\backupfolder'. For improved performance, it's recommended to use a separate folder for each database to be migrated. You can provide the database-level file share path by using the **Advanced Settings** option. If you're running into issues connecting to the SMB share, see [SMB share](known-issues-azure-sql-db-managed-instance-online.md#smb-file-share-connectivity). A share-permissions sketch follows these steps. |
- |**User name** | Make sure that the Windows user has full control privilege on the network share that you provided above. Azure Database Migration Service impersonates the user credential to upload the backup files to Azure Storage container for restore operation. If using Azure File share, use the storage account name prepended with AZURE\ as the username. |
- |**Password** | Password for the user. If using Azure file share, use a storage account key as the password. |
- |**Subscription of the Azure Storage Account** | Select the subscription that contains the Azure Storage Account. |
- |**Azure Storage Account** | Select the Azure Storage Account that DMS can upload the backup files from the SMB network share to and use for database migration. We recommend selecting the Storage Account in the same region as the DMS service for optimal file upload performance. |
-
- ![Configure Migration Settings](media/tutorial-sql-server-to-managed-instance-online/dms-configure-migration-settings.png)
-
- > [!NOTE]
 - > If Azure Database Migration Service shows the error 'System Error 53' or 'System Error 57', the cause might be that Azure Database Migration Service can't access the Azure file share. If you encounter one of these errors, grant access to the storage account from the virtual network by using the instructions [here](../storage/common/storage-network-security.md?toc=%2fazure%2fvirtual-network%2ftoc.json#grant-access-from-a-virtual-network).
-
- > [!IMPORTANT]
 - > If loopback check functionality is enabled and the source SQL Server and file share are on the same computer, the source won't be able to access the file share by using its FQDN. To fix this issue, disable loopback check functionality by using the instructions [here](https://support.microsoft.com/help/926642/error-message-when-you-try-to-access-a-server-locally-by-using-its-fqd).
-
-2. Select **Next: Summary**.
-
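As a rough illustration of the share and permission requirements in the preceding table, the following sketch, run on the machine that hosts the backup folder, creates the SMB share and grants the SQL Server service account read/write access and the impersonated Windows user full control. The account names and path are hypothetical.

```powershell
# Run on the server that hosts the backup folder; account names and the path are placeholders.
$backupPath = "D:\MigrationBackups"
New-Item -ItemType Directory -Path $backupPath -Force | Out-Null

# Share-level permissions: full control for the user DMS impersonates,
# change (read/write) for the SQL Server service account.
New-SmbShare -Name "backupfolder" -Path $backupPath `
    -FullAccess "CONTOSO\dms-share-admin" `
    -ChangeAccess "CONTOSO\sqlserviceaccount"

# Matching NTFS permissions; share-level access alone isn't enough.
icacls $backupPath /grant "CONTOSO\dms-share-admin:(OI)(CI)F" "CONTOSO\sqlserviceaccount:(OI)(CI)M"
```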
-## Review the migration summary
-
-1. On the **Summary** screen, in the **Activity name** text box, specify a name for the migration activity.
-
-2. Review and verify the details associated with the migration project.
-
- ![Migration project summary](media/tutorial-sql-server-to-managed-instance-online/dms-project-summary.png)
-
-## Run and monitor the migration
-
-1. Select **Start migration**.
-
-2. The migration activity window appears and displays the current migration status of the databases. Select **Refresh** to update the display.
-
- ![Migration activity in progress](media/tutorial-sql-server-to-managed-instance-online/dms-monitor-migration.png)
-
- You can further expand the databases and logins categories to monitor the migration status of the respective server objects.
-
- ![Migration activity status](media/tutorial-sql-server-to-managed-instance-online/dms-monitor-migration-extend.png)
-
-## Performing migration cutover
-
-After the full database backup is restored on the target instance of SQL Managed Instance, the database is available for performing a migration cutover.
-
-1. When you're ready to complete the online database migration, select **Start Cutover**.
-
-2. Stop all the incoming traffic to source databases.
-
-3. Take the tail-log backup, make the backup file available in the SMB network share, and then wait until this final transaction log backup is restored. A scripted example follows these steps.
-
- At that point, you see **Pending changes** set to 0.
-
-4. Select **Confirm**, and then select **Apply**.
-
- ![Preparing to complete cutover](media/tutorial-sql-server-to-managed-instance-online/dms-complete-cutover.png)
-
- > [!IMPORTANT]
 - > After the cutover, a SQL Managed Instance in the Business Critical service tier can take significantly longer to become available than one in the General Purpose tier, because three secondary replicas have to be seeded for the Always On availability group. The duration of this operation depends on the size of the data. For more information, see [Management operations duration](/azure/azure-sql/managed-instance/management-operations-overview#duration).
-
-5. When the database migration status shows **Completed**, connect your applications to the new target instance of SQL Managed Instance.
-
- ![Cutover complete](media/tutorial-sql-server-to-managed-instance-online/dms-cutover-complete.png)
-
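One way to take the tail-log backup in step 3 is shown below. This is a hedged sketch that assumes the SqlServer PowerShell module and hypothetical instance, database, and share names; `WITH NORECOVERY` leaves the source database in a restoring state so that no further changes can be written to it.

```powershell
# Hypothetical names; assumes the SqlServer PowerShell module (Invoke-Sqlcmd).
$sourceInstance = "sqlsource.contoso.com"
$database       = "AdventureWorks2016"
$shareFolder    = "\\sqlsource.contoso.com\backupfolder"

# Tail-log backup; WITH NORECOVERY puts the source database into RESTORING, so writes stop.
Invoke-Sqlcmd -ServerInstance $sourceInstance -Query @"
BACKUP LOG [$database]
TO DISK = N'$shareFolder\$database-tail.trn'
WITH NORECOVERY, COMPRESSION;
"@

# Once the file is in the SMB share and DMS shows 0 pending changes, complete the cutover.
```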
-## Additional resources
-
-* For a tutorial showing you how to migrate a database to SQL Managed Instance using the T-SQL RESTORE command, see [Restore a backup to SQL Managed Instance using the restore command](/azure/azure-sql/managed-instance/restore-sample-database-quickstart).
-* For information about SQL Managed Instance, see [What is SQL Managed Instance](/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview).
-* For information about connecting apps to SQL Managed Instance, see [Connect applications](/azure/azure-sql/managed-instance/connect-application-instance).
dms Tutorial Sql Server To Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-azure-sql.md
- Title: "Tutorial: Migrate SQL Server offline to Azure SQL Database"-
-description: Learn to migrate from SQL Server to Azure SQL Database offline by using Azure Database Migration Service (classic).
--- Previously updated : 10/10/2023---
- - sql-migration-content
--
-# Tutorial: Migrate SQL Server to Azure SQL Database using DMS (classic)
--
-> [!NOTE]
-> This tutorial uses an older version of the Azure Database Migration Service. For improved functionality and supportability, consider migrating to Azure SQL Database by using the [Azure SQL migration extension for Azure Data Studio](/data-migration/sql-server/database/database-migration-service).
->
-> To compare features between versions, review [compare versions](dms-overview.md#compare-versions).
-
-You can use Azure Database Migration Service to migrate the databases from a SQL Server instance to [Azure SQL Database](/azure/sql-database/). In this tutorial, you migrate the [AdventureWorks2016](/sql/samples/adventureworks-install-configure#download-backup-files) database restored to an on-premises instance of SQL Server 2016 (or later) to a single database or pooled database in Azure SQL Database by using Azure Database Migration Service.
-
-You will learn how to:
-> [!div class="checklist"]
->
-> - Assess and evaluate your on-premises database for any blocking issues by using the Data Migration Assistant.
-> - Use the Data Migration Assistant to migrate the database sample schema.
-> - Register the Azure DataMigration resource provider.
-> - Create an instance of Azure Database Migration Service.
-> - Create a migration project by using Azure Database Migration Service.
-> - Run the migration.
-> - Monitor the migration.
--
-## Prerequisites
-
-To complete this tutorial, you need to:
--- Download and install [SQL Server 2016 or later](https://www.microsoft.com/sql-server/sql-server-downloads).-- Enable the TCP/IP protocol, which is disabled by default during SQL Server Express installation, by following the instructions in the article [Enable or Disable a Server Network Protocol](/sql/database-engine/configure-windows/enable-or-disable-a-server-network-protocol#SSMSProcedure).-- [Restore the AdventureWorks2016 database to the SQL Server instance.](/sql/samples/adventureworks-install-configure#restore-to-sql-server)-- Create a database in Azure SQL Database, which you do by following the details in the article [Create a database in Azure SQL Database using the Azure portal](/azure/azure-sql/database/single-database-create-quickstart). For purposes of this tutorial, the name of the Azure SQL Database is assumed to be **AdventureWorksAzure**, but you can provide whatever name you wish.-
- > [!NOTE]
- > If you use SQL Server Integration Services (SSIS) and want to migrate the catalog database for your SSIS projects/packages (SSISDB) from SQL Server to Azure SQL Database, the destination SSISDB will be created and managed automatically on your behalf when you provision SSIS in Azure Data Factory (ADF). For more information about migrating SSIS packages, see the article [Migrate SQL Server Integration Services packages to Azure](./how-to-migrate-ssis-packages.md).
-
-- Download and install the latest version of the [Data Migration Assistant](https://www.microsoft.com/download/details.aspx?id=53595).-- Create a Microsoft Azure Virtual Network for Azure Database Migration Service by using the Azure Resource Manager deployment model, which provides site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md). For more information about creating a virtual network, see the [Virtual Network Documentation](../virtual-network/index.yml), and especially the quickstart articles with step-by-step details.-
- > [!NOTE]
- > During virtual network setup, if you use ExpressRoute with network peering to Microsoft, add the following service [endpoints](../virtual-network/virtual-network-service-endpoints-overview.md) to the subnet in which the service will be provisioned:
- >
- > - Target database endpoint (for example, SQL endpoint, Azure Cosmos DB endpoint, and so on)
- > - Storage endpoint
- > - Service bus endpoint
- >
- > This configuration is necessary because Azure Database Migration Service lacks internet connectivity.
- >
- >If you don't have site-to-site connectivity between the on-premises network and Azure or if there is limited site-to-site connectivity bandwidth, consider using Azure Database Migration Service in hybrid mode (Preview). Hybrid mode leverages an on-premises migration worker together with an instance of Azure Database Migration Service running in the cloud. To create an instance of Azure Database Migration Service in hybrid mode, see the article [Create an instance of Azure Database Migration Service in hybrid mode using the Azure portal](./quickstart-create-data-migration-service-hybrid-portal.md).
-- Ensure that your virtual network Network Security Group outbound security rules don't block the outbound port 443 of ServiceTag for ServiceBus, Storage, and AzureMonitor. For more detail on Azure virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).-- Configure your [Windows Firewall for database engine access](/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access).-- Open your firewall on Windows to allow Azure Database Migration Service to access the source SQL Server, which by default is TCP port 1433. If your default instance is listening on some other port, add that to the firewall.-- If you're running multiple named SQL Server instances using dynamic ports, you might wish to enable the SQL Browser Service and allow access to UDP port 1434 through your firewalls so that Azure Database Migration Service can connect to a named instance on your source server.-- When using a firewall appliance in front of your source database(s), you might need to add firewall rules to allow Azure Database Migration Service to access the source database(s) for migration.-- Create a server-level IP [firewall rule](/azure/azure-sql/database/firewall-configure) for Azure SQL Database to allow Azure Database Migration Service access to the target databases. Provide the subnet range of the virtual network used for Azure Database Migration Service.-- Ensure that the credentials used to connect to source SQL Server instance have [CONTROL SERVER](/sql/t-sql/statements/grant-server-permissions-transact-sql) permissions.-- Ensure that the credentials used to connect to target Azure SQL Database instance have [CONTROL DATABASE](/sql/t-sql/statements/grant-database-permissions-transact-sql) permission on the target databases. A scripted sketch of the firewall rule and these permission grants follows the prerequisites list.-
- > [!IMPORTANT]
- > Creating an instance of Azure Database Migration Service requires access to virtual network settings that are normally not within the same resource group. As a result, the user creating an instance of DMS requires permission at subscription level. To create the required roles, which you can assign as needed, run the following script:
- >
 - > ```powershell
- >
- > $readerActions = `
- > "Microsoft.Network/networkInterfaces/ipConfigurations/read", `
- > "Microsoft.DataMigration/*/read", `
- > "Microsoft.Resources/subscriptions/resourceGroups/read"
- >
- > $writerActions = `
- > "Microsoft.DataMigration/services/*/write", `
- > "Microsoft.DataMigration/services/*/delete", `
- > "Microsoft.DataMigration/services/*/action", `
- > "Microsoft.Network/virtualNetworks/subnets/join/action", `
- > "Microsoft.Network/virtualNetworks/write", `
- > "Microsoft.Network/virtualNetworks/read", `
- > "Microsoft.Resources/deployments/validate/action", `
- > "Microsoft.Resources/deployments/*/read", `
- > "Microsoft.Resources/deployments/*/write"
- >
- > $writerActions += $readerActions
- >
- > # TODO: replace with actual subscription IDs
- > $subScopes = ,"/subscriptions/00000000-0000-0000-0000-000000000000/","/subscriptions/11111111-1111-1111-1111-111111111111/"
- >
- > function New-DmsReaderRole() {
- > $aRole = [Microsoft.Azure.Commands.Resources.Models.Authorization.PSRoleDefinition]::new()
- > $aRole.Name = "Azure Database Migration Reader"
- > $aRole.Description = "Lets you perform read only actions on DMS service/project/tasks."
- > $aRole.IsCustom = $true
- > $aRole.Actions = $readerActions
- > $aRole.NotActions = @()
- >
- > $aRole.AssignableScopes = $subScopes
- > #Create the role
- > New-AzRoleDefinition -Role $aRole
- > }
- >
- > function New-DmsContributorRole() {
- > $aRole = [Microsoft.Azure.Commands.Resources.Models.Authorization.PSRoleDefinition]::new()
- > $aRole.Name = "Azure Database Migration Contributor"
- > $aRole.Description = "Lets you perform CRUD actions on DMS service/project/tasks."
- > $aRole.IsCustom = $true
- > $aRole.Actions = $writerActions
- > $aRole.NotActions = @()
- >
- > $aRole.AssignableScopes = $subScopes
- > #Create the role
- > New-AzRoleDefinition -Role $aRole
- > }
- >
- > function Update-DmsReaderRole() {
- > $aRole = Get-AzRoleDefinition "Azure Database Migration Reader"
- > $aRole.Actions = $readerActions
- > $aRole.NotActions = @()
- > Set-AzRoleDefinition -Role $aRole
- > }
- >
 - > function Update-DmsContributorRole() {
- > $aRole = Get-AzRoleDefinition "Azure Database Migration Contributor"
- > $aRole.Actions = $writerActions
- > $aRole.NotActions = @()
- > Set-AzRoleDefinition -Role $aRole
- > }
- >
- > # Invoke above functions
- > New-DmsReaderRole
- > New-DmsContributorRole
- > Update-DmsReaderRole
 - > Update-DmsContributorRole
- > ```
-
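To illustrate the server-level IP firewall rule and the CONTROL SERVER / CONTROL DATABASE permissions listed in the prerequisites, here's a minimal sketch. It assumes the Az and SqlServer PowerShell modules; the server names, logins, address range, and credentials are placeholders, and your DMS subnet range will differ.

```powershell
# Placeholders throughout; assumes the Az and SqlServer PowerShell modules.
$resourceGroup = "myResourceGroup"
$sqlServerName = "my-azure-sql-server"    # logical server that hosts AdventureWorksAzure

# Server-level IP firewall rule covering the subnet range used by the DMS virtual network.
New-AzSqlServerFirewallRule -ResourceGroupName $resourceGroup -ServerName $sqlServerName `
    -FirewallRuleName "AllowDmsSubnet" `
    -StartIpAddress "10.1.1.0" -EndIpAddress "10.1.1.255"

# Source SQL Server: the login used for the migration needs CONTROL SERVER.
Invoke-Sqlcmd -ServerInstance "sqlsource.contoso.com" `
    -Query "GRANT CONTROL SERVER TO [dms_migration_login];"

# Target Azure SQL Database: the database user used for the migration needs CONTROL on the database.
Invoke-Sqlcmd -ServerInstance "$sqlServerName.database.windows.net" -Database "AdventureWorksAzure" `
    -Username "sqladmin" -Password "<password>" `
    -Query "GRANT CONTROL TO [dms_migration_user];"
```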
-## Assess your on-premises database
-
-Before you can migrate data from a SQL Server instance to a single database or pooled database in Azure SQL Database, you need to assess the SQL Server database for any blocking issues that might prevent migration. Using the Data Migration Assistant, follow the steps described in the article [Performing a SQL Server migration assessment](/sql/dma/dma-assesssqlonprem) to complete the on-premises database assessment. A summary of the required steps follows:
-
-1. In the Data Migration Assistant, select the New (+) icon, and then select the **Assessment** project type.
-2. Specify a project name. From the **Assessment type** drop-down list, select **Database Engine**. In the **Source server type** text box, select **SQL Server**. In the **Target server type** text box, select **Azure SQL Database**. Then select **Create** to create the project.
-
- When you're assessing the source SQL Server database migrating to a single database or pooled database in Azure SQL Database, you can choose one or both of the following assessment report types:
-
- - Check database compatibility
- - Check feature parity
-
- Both report types are selected by default.
-
-3. In the Data Migration Assistant, on the **Options** screen, select **Next**.
-4. On the **Select sources** screen, in the **Connect to a server** dialog box, provide the connection details to your SQL Server, and then select **Connect**.
-5. In the **Add sources** dialog box, select **AdventureWorks2016**, select **Add**, and then select **Start Assessment**.
-
- > [!NOTE]
- > If you use SSIS, DMA does not currently support the assessment of the source SSISDB. However, SSIS projects/packages will be assessed/validated as they are redeployed to the destination SSISDB hosted by Azure SQL Database. For more information about migrating SSIS packages, see the article [Migrate SQL Server Integration Services packages to Azure](./how-to-migrate-ssis-packages.md).
-
- When the assessment is complete, the results display as shown in the following graphic:
-
- ![Assess data migration](media/tutorial-sql-server-to-azure-sql/dma-assessments.png)
-
- For databases in Azure SQL Database, the assessments identify feature parity issues and migration blocking issues for deploying to a single database or pooled database.
-
- - The **SQL Server feature parity** category provides a comprehensive set of recommendations, alternative approaches available in Azure, and mitigating steps to help you plan the effort into your migration projects.
- - The **Compatibility issues** category identifies partially supported or unsupported features that reflect compatibility issues that might block migrating SQL Server database(s) to Azure SQL Database. Recommendations are also provided to help you address those issues.
-
-6. Review the assessment results for migration blocking issues and feature parity issues by selecting the specific options.
-
-## Migrate the sample schema
-
-After you're comfortable with the assessment and satisfied that the selected database is a viable candidate for migration to a single database or pooled database in Azure SQL Database, use DMA to migrate the schema to Azure SQL Database.
-
-> [!NOTE]
-> Before you create a migration project in Data Migration Assistant, be sure that you have already provisioned a database in Azure as mentioned in the prerequisites.
-
-> [!IMPORTANT]
-> If you use SSIS, DMA does not currently support the migration of source SSISDB, but you can redeploy your SSIS projects/packages to the destination SSISDB hosted by Azure SQL Database. For more information about migrating SSIS packages, see the article [Migrate SQL Server Integration Services packages to Azure](./how-to-migrate-ssis-packages.md).
-
-To migrate the **AdventureWorks2016** schema to a single database or pooled database in Azure SQL Database, perform the following steps:
-
-1. In the Data Migration Assistant, select the New (+) icon, and then under **Project type**, select **Migration**.
-2. Specify a project name. In the **Source server type** text box, select **SQL Server**, and then in the **Target server type** text box, select **Azure SQL Database**.
-3. Under **Migration Scope**, select **Schema only**.
-
- After performing the previous steps, the Data Migration Assistant interface should appear as shown in the following graphic:
-
- ![Create Data Migration Assistant Project](media/tutorial-sql-server-to-azure-sql/dma-create-project.png)
-
-4. Select **Create** to create the project.
-5. In the Data Migration Assistant, specify the source connection details for your SQL Server, select **Connect**, and then select the **AdventureWorks2016** database.
-
- ![Data Migration Assistant Source Connection Details](media/tutorial-sql-server-to-azure-sql/dma-source-connect.png)
-
-6. Select **Next**. Under **Connect to target server**, specify the target connection details for Azure SQL Database, select **Connect**, and then select the **AdventureWorksAzure** database that you pre-provisioned in Azure SQL Database.
-
- ![Data Migration Assistant Target Connection Details](media/tutorial-sql-server-to-azure-sql/dma-target-connect.png)
-
-7. Select **Next** to advance to the **Select objects** screen, on which you can specify the schema objects in the **AdventureWorks2016** database that need to be deployed to Azure SQL Database.
-
- By default, all objects are selected.
-
- ![Generate SQL Scripts](media/tutorial-sql-server-to-azure-sql/dma-assessment-source.png)
-
-8. Select **Generate SQL script** to create the SQL scripts, and then review the scripts for any errors.
-
- ![Schema Script](media/tutorial-sql-server-to-azure-sql/dma-schema-script.png)
-
-9. Select **Deploy schema** to deploy the schema to Azure SQL Database, and then after the schema is deployed, check the target server for any anomalies.
-
- ![Deploy Schema](media/tutorial-sql-server-to-azure-sql/dma-schema-deploy.png)
---
-## Create a migration project
-
-After the service is created, locate it within the Azure portal, open it, and then create a new migration project.
-
-1. In the Azure portal menu, select **All services**. Search for and select **Azure Database Migration Services**.
-
- ![Locate all instances of Azure Database Migration Service](media/tutorial-sql-server-to-azure-sql/dms-search.png)
-
-2. On the **Azure Database Migration Services** screen, select the Azure Database Migration Service instance that you created.
-
-3. Select **New Migration Project**.
-
- ![Locate your instance of Azure Database Migration Service](media/tutorial-sql-server-to-azure-sql/dms-instance-search.png)
-
-4. On the **New migration project** screen, specify a name for the project. In the **Source server type** text box, select **SQL Server**. In the **Target server type** text box, select **Azure SQL Database**. Then, for **Choose Migration activity type**, select **Data migration**.
-
- ![Create Database Migration Service Project](media/tutorial-sql-server-to-azure-sql/dms-create-project-2.png)
-
-5. Select **Create and run activity** to create the project and run the migration activity.
-
-## Specify source details
-
-1. On the **Select source** screen, specify the connection details for the source SQL Server instance.
-
- Make sure to use a Fully Qualified Domain Name (FQDN) for the source SQL Server instance name. You can also use the IP Address for situations in which DNS name resolution isn't possible.
-
-2. If you have not installed a trusted certificate on your source server, select the **Trust server certificate** check box.
-
- When a trusted certificate is not installed, SQL Server generates a self-signed certificate when the instance is started. This certificate is used to encrypt the credentials for client connections.
-
- > [!CAUTION]
- > TLS connections that are encrypted using a self-signed certificate do not provide strong security. They are susceptible to man-in-the-middle attacks. You should not rely on TLS using self-signed certificates in a production environment or on servers that are connected to the internet.
-
- > [!IMPORTANT]
- > If you use SSIS, DMS does not currently support the migration of source SSISDB, but you can redeploy your SSIS projects/packages to the destination SSISDB hosted by Azure SQL Database. For more information about migrating SSIS packages, see the article [Migrate SQL Server Integration Services packages to Azure](./how-to-migrate-ssis-packages.md).
-
- ![Source Details](media/tutorial-sql-server-to-azure-sql/dms-source-details-2.png)
-
-3. Select **Next: Select databases**.
-
-## Select databases for migration
-
-Select either all databases or specific databases that you want to migrate to Azure SQL Database. DMS provides you with the expected migration time for the selected databases. If the migration downtime is acceptable, continue with the migration. If it isn't acceptable, consider migrating to [SQL Managed Instance with near-zero downtime](tutorial-sql-server-managed-instance-online.md), or submit ideas, suggestions for improvement, and other feedback in the [Azure Community forum - Azure Database Migration Service](https://feedback.azure.com/d365community/forum/2dd7eb75-ef24-ec11-b6e6-000d3a4f0da0).
-
-1. Choose the database(s) you want to migrate from the list of available databases.
-1. Review the expected downtime. If it's acceptable, select **Next: Select target >>**
-
- ![Source databases](media/tutorial-sql-server-to-azure-sql/select-database.png)
---
-## Specify target details
-
-1. On the **Select target** screen, provide authentication settings to your Azure SQL Database.
-
- ![Select target](media/tutorial-sql-server-to-azure-sql/select-target.png)
-
- > [!NOTE]
- > Currently, SQL authentication is the only supported authentication type.
-
-1. Select **Next: Map to target databases**, and then map the source and target databases for migration.
-
- If the target database contains the same database name as the source database, Azure Database Migration Service selects the target database by default.
-
- ![Map to target databases](media/tutorial-sql-server-to-azure-sql/dms-map-targets-activity-2.png)
-
-1. Select **Next: Configure migration settings**, expand the table listing, and then review the list of affected fields.
-
- Azure Database Migration Service automatically selects all the empty source tables that exist on the target Azure SQL Database instance. If you want to remigrate tables that already include data, you need to explicitly select the tables on this pane.
-
- ![Select tables](media/tutorial-sql-server-to-azure-sql/dms-configure-setting-activity-2.png)
-
-1. Select **Next: Summary**, review the migration configuration and in the **Activity name** text box, specify a name for the migration activity.
-
- ![Choose validation option](media/tutorial-sql-server-to-azure-sql/dms-configuration-2.png)
-
-## Run the migration
--- Select **Start migration**.-
- The migration activity window appears, and the **Status** of the activity is **Pending**.
-
- ![Activity Status](media/tutorial-sql-server-to-azure-sql/dms-activity-status-1.png)
-
-## Monitor the migration
-
-1. On the migration activity screen, select **Refresh** to update the display until the **Status** of the migration shows as **Completed**.
-
- ![Activity Status Completed](media/tutorial-sql-server-to-azure-sql/dms-completed-activity-1.png)
-
-2. Verify the target database(s) on the target **Azure SQL Database**.
-
-## Additional resources
--- For information about Azure Database Migration Service, see the article [What is Azure Database Migration Service?](./dms-overview.md).-- For information about Azure SQL Database, see the article [What is the Azure SQL Database service?](/azure/azure-sql/database/sql-database-paas-overview).
dms Tutorial Sql Server To Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-managed-instance.md
- Title: "Tutorial: Migrate SQL Server to SQL Managed Instance"-
-description: Learn to migrate from SQL Server to an Azure SQL Managed Instance by using Azure Database Migration Service (classic).
--- Previously updated : 02/08/2023---
- - fasttrack-edit
- - sql-migration-content
--
-# Tutorial: Migrate SQL Server to an Azure SQL Managed Instance offline using DMS (classic)
--
-> [!NOTE]
-> This tutorial uses an older version of the Azure Database Migration Service. For improved functionality and supportability, consider migrating to Azure SQL Managed Instance by using the [Azure SQL migration extension for Azure Data Studio](/data-migration/sql-server/managed-instance/database-migration-service).
->
-> To compare features between versions, review [compare versions](dms-overview.md#compare-versions).
-
-You can use Azure Database Migration Service to migrate the databases from a SQL Server instance to an [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview). For additional methods that may require some manual effort, see the article [SQL Server to Azure SQL Managed Instance](/azure/azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-guide).
-
-In this tutorial, you migrate the [AdventureWorks2016](/sql/samples/adventureworks-install-configure#download-backup-files) database from an on-premises instance of SQL Server to a SQL Managed Instance by using Azure Database Migration Service.
-
-You will learn how to:
-> [!div class="checklist"]
->
-> - Register the Azure DataMigration resource provider.
-> - Create an instance of Azure Database Migration Service.
-> - Create a migration project by using Azure Database Migration Service.
-> - Run the migration.
-> - Monitor the migration.
-
-> [!IMPORTANT]
-> For offline migrations from SQL Server to SQL Managed Instance, Azure Database Migration Service can create the backup files for you. Alternatively, you can provide the latest full database backup in the SMB network share that the service will use to migrate your databases. Each backup can be written to either a separate backup file or multiple backup files. However, appending multiple backups into a single backup media isn't supported. You can also use compressed backups to reduce the likelihood of issues when migrating large backups.
--
-This article describes an offline migration from SQL Server to a SQL Managed Instance. For an online migration, see [Migrate SQL Server to an SQL Managed Instance online using DMS](tutorial-sql-server-managed-instance-online.md).
-
-## Prerequisites
-
-To complete this tutorial, you need to:
--- Download and install [SQL Server 2016 or later](https://www.microsoft.com/sql-server/sql-server-downloads).-- Enable the TCP/IP protocol, which is disabled by default during SQL Server Express installation, by following the instructions in the article [Enable or Disable a Server Network Protocol](/sql/database-engine/configure-windows/enable-or-disable-a-server-network-protocol#SSMSProcedure).-- [Restore the AdventureWorks2016 database to the SQL Server instance.](/sql/samples/adventureworks-install-configure#restore-to-sql-server)-- Create a Microsoft Azure Virtual Network for Azure Database Migration Service by using the Azure Resource Manager deployment model, which provides site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md). [Learn network topologies for SQL Managed Instance migrations using Azure Database Migration Service](./resource-network-topologies.md). For more information about creating a virtual network, see the [Virtual Network Documentation](../virtual-network/index.yml), and especially the quickstart articles with step-by-step details.-
- > [!NOTE]
- > During virtual network setup, if you use ExpressRoute with network peering to Microsoft, add the following service [endpoints](../virtual-network/virtual-network-service-endpoints-overview.md) to the subnet in which the service will be provisioned:
- > - Target database endpoint (for example, SQL endpoint, Azure Cosmos DB endpoint, and so on)
- > - Storage endpoint
- > - Service bus endpoint
- >
- > This configuration is necessary because Azure Database Migration Service lacks internet connectivity.
--- Ensure that your virtual network Network Security Group rules don't block the outbound port 443 of ServiceTag for ServiceBus, Storage, and AzureMonitor. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).-- Configure your [Windows Firewall for source database engine access](/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access).-- Open your Windows Firewall to allow Azure Database Migration Service to access the source SQL Server, which by default is TCP port 1433. If your default instance is listening on some other port, add that to the firewall.-- If you're running multiple named SQL Server instances using dynamic ports, you may wish to enable the SQL Browser Service and allow access to UDP port 1434 through your firewalls so that Azure Database Migration Service can connect to a named instance on your source server.-- If you're using a firewall appliance in front of your source databases, you may need to add firewall rules to allow Azure Database Migration Service to access the source database(s) for migration, as well as files via SMB port 445.-- Create a SQL Managed Instance by following the detail in the article [Create a SQL Managed Instance in the Azure portal](/azure/azure-sql/managed-instance/instance-create-quickstart).-- Ensure that the logins used to connect the source SQL Server and target SQL Managed Instance are members of the sysadmin server role.-
- >[!NOTE]
- >By default, Azure Database Migration Service only supports migrating SQL logins. However, you can enable the ability to migrate Windows logins by:
- >
 - >- Ensuring that the target SQL Managed Instance has read access to Microsoft Entra ID (formerly Azure Active Directory), which can be configured via the Azure portal by a user with the **Global Administrator** role.
- >- Configuring your Azure Database Migration Service instance to enable Windows user/group login migrations, which is set up via the Azure portal, on the Configuration page. After enabling this setting, restart the service for the changes to take effect.
- >
 - > After restarting the service, Windows user/group logins appear in the list of logins available for migration. For any Windows user/group logins you migrate, you are prompted to provide the associated domain name. Service user accounts (accounts with the domain name NT AUTHORITY) and virtual user accounts (accounts with the domain name NT SERVICE) aren't supported.
-- Create a network share that Azure Database Migration Service can use to back up the source database.-- Ensure that the service account running the source SQL Server instance has write privileges on the network share that you created and that the computer account for the source server has read/write access to the same share.-- Make a note of a Windows user (and password) that has full control privilege on the network share that you previously created. Azure Database Migration Service impersonates the user credential to upload the backup files to Azure Storage container for restore operation.-- Create a blob container and retrieve its SAS URI by using the steps in the article [Manage Azure Blob Storage resources with Storage Explorer](../vs-azure-tools-storage-explorer-blobs.md#get-the-sas-for-a-blob-container); be sure to select all permissions (Read, Write, Delete, List) on the policy window while creating the SAS URI. This detail provides Azure Database Migration Service with access to your storage account container for uploading the backup files used for migrating databases to SQL Managed Instance. A scripted sketch of generating this SAS URI follows the prerequisites list.-
- > [!NOTE]
- > - Azure Database Migration Service does not support using an account level SAS token when configuring the Storage Account settings during the [Configure Migration Settings](#configure-migration-settings) step.
-
-- Ensure both the Azure Database Migration Service IP address and the Azure SQL Managed Instance subnet can communicate with the blob container.--
-
-> [!NOTE]
-> For additional detail, see the article [Network topologies for Azure SQL Managed Instance migrations using Azure Database Migration Service](./resource-network-topologies.md).
-
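Instead of Storage Explorer, the container and its SAS URI from the preceding prerequisites can also be generated with a sketch like the following (assuming the Az PowerShell modules; the account, container, and expiry values are placeholders). The SAS must be scoped to the blob container, not the account, and must include Read, Write, Delete, and List permissions.

```powershell
# Assumes the Az PowerShell modules; names and the expiry window are placeholders.
$resourceGroup  = "myResourceGroup"
$storageAccount = "dmsmigrationstorage001"
$containerName  = "migration-backups"

$ctx = (Get-AzStorageAccount -ResourceGroupName $resourceGroup -Name $storageAccount).Context

# Blob container that DMS uploads the backup files to.
New-AzStorageContainer -Name $containerName -Context $ctx | Out-Null

# Container-scoped SAS URI with Read, Write, Delete, and List permissions.
New-AzStorageContainerSASToken -Name $containerName -Context $ctx `
    -Permission rwdl -ExpiryTime (Get-Date).AddDays(14) -FullUri
# Paste the resulting URI into the DMS storage account settings.
```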
-## Create a migration project
-
-After an instance of the service is created, locate it within the Azure portal, open it, and then create a new migration project.
-
-1. In the Azure portal menu, select **All services**. Search for and select **Azure Database Migration Services**.
-
- ![Locate all instances of Azure Database Migration Service](media/tutorial-sql-server-to-managed-instance/dms-search.png)
-
-2. On the **Azure Database Migration Services** screen, select the Azure Database Migration Service instance that you created.
-
-3. Select **New Migration Project**.
-
- ![Locate your instance of Azure Database Migration Service](media/tutorial-sql-server-to-managed-instance/dms-create-project-1.png)
-
-4. On the **New migration project** screen, specify a name for the project. In the **Source server type** text box, select **SQL Server**. In the **Target server type** text box, select **Azure SQL Database Managed Instance**. Then, for **Choose type of activity**, select **Offline data migration**.
-
- ![Create Database Migration Service Project](media/tutorial-sql-server-to-managed-instance/dms-create-project-2.png)
-
-5. Select **Create and run activity** to create the project and run the migration activity.
-
-## Specify source details
-
-1. On the **Select source** screen, specify the connection details for the source SQL Server instance.
-
- Make sure to use a Fully Qualified Domain Name (FQDN) for the source SQL Server instance name. You can also use the IP Address for situations in which DNS name resolution isn't possible.
-
-2. If you haven't installed a trusted certificate on your server, select the **Trust server certificate** check box.
-
- When a trusted certificate isn't installed, SQL Server generates a self-signed certificate when the instance is started. This certificate is used to encrypt the credentials for client connections.
-
- > [!CAUTION]
 - > TLS connections that are encrypted using a self-signed certificate do not provide strong security. They are susceptible to man-in-the-middle attacks. You should not rely on TLS using self-signed certificates in a production environment or on servers that are connected to the internet.
-
- ![Source Details](media/tutorial-sql-server-to-managed-instance/dms-source-details.png)
-
-3. Select **Next: Select target**.
-
-## Specify target details
-
-1. On the **Select target** screen, specify the connection details for the target, which is the pre-provisioned SQL Managed Instance to which you're migrating the **AdventureWorks2016** database.
-
- If you haven't already provisioned the SQL Managed Instance, select the [link](/azure/azure-sql/managed-instance/instance-create-quickstart) to help you provision the instance. You can still continue with project creation and then, when the SQL Managed Instance is ready, return to this specific project to execute the migration.
-
- ![Select Target](media/tutorial-sql-server-to-managed-instance/dms-target-details.png)
-
-2. Select **Next: Select databases**. On the **Select databases** screen, select the **AdventureWorks2016** database for migration.
-
- ![Select Source Databases](media/tutorial-sql-server-to-managed-instance/dms-source-database.png)
-
- > [!IMPORTANT]
- > If you use SQL Server Integration Services (SSIS), DMS does not currently support migrating the catalog database for your SSIS projects/packages (SSISDB) from SQL Server to SQL Managed Instance. However, you can provision SSIS in Azure Data Factory (ADF) and redeploy your SSIS projects/packages to the destination SSISDB hosted by SQL Managed Instance. For more information about migrating SSIS packages, see the article [Migrate SQL Server Integration Services packages to Azure](./how-to-migrate-ssis-packages.md).
-
-3. Select **Next: Select logins**.
-
-## Select logins
-
-1. On the **Select logins** screen, select the logins that you want to migrate.
-
- >[!NOTE]
- >By default, Azure Database Migration Service only supports migrating SQL logins. To enable support for migrating Windows logins, see the **Prerequisites** section of this tutorial.
-
- ![Select logins](media/tutorial-sql-server-to-managed-instance/dms-select-logins.png)
-
-2. Select **Next: Configure migration settings**.
-
-## Configure migration settings
-
-1. On the **Configure migration settings** screen, provide the following details:
-
- | Parameter | Description |
- |--||
- |**Choose source backup option** | Choose the option **I will provide latest backup files** when you already have full backup files available for DMS to use for database migration. Choose the option **I will let Azure Database Migration Service create backup files** when you want DMS to take the source database full backup at first and use it for migration. |
- |**Network location share** | The local SMB network share that Azure Database Migration Service can take the source database backups to. The service account running source SQL Server instance must have write privileges on this network share. Provide an FQDN or IP addresses of the server in the network share, for example, '\\\servername.domainname.com\backupfolder' or '\\\IP address\backupfolder'.|
 |**User name** | Make sure that the Windows user has full control privilege on the network share that you provided above. Azure Database Migration Service impersonates the user credential to upload the backup files to the Azure Storage container for the restore operation. If TDE-enabled databases are selected for migration, this Windows user must be the built-in administrator account, and [User Account Control](/windows/security/identity-protection/user-account-control/user-account-control-overview) must be disabled for Azure Database Migration Service to upload and delete the certificate files. |
- |**Password** | Password for the user. |
- |**Storage account settings** | The SAS URI that provides Azure Database Migration Service with access to your storage account container to which the service uploads the backup files and that is used for migrating databases to SQL Managed Instance. [Learn how to get the SAS URI for blob container](../vs-azure-tools-storage-explorer-blobs.md#get-the-sas-for-a-blob-container). This SAS URI must be for the blob container, not for the storage account.|
 |**TDE Settings** | If you're migrating source databases with Transparent Data Encryption (TDE) enabled, you need write privileges on the target SQL Managed Instance. From the drop-down menu, select the subscription in which the SQL Managed Instance is provisioned. Then select the target **Azure SQL Database Managed Instance** in the drop-down menu. |
-
- ![Configure Migration Settings](media/tutorial-sql-server-to-managed-instance/dms-configure-migration-settings.png)
-
-2. Select **Next: Summary**.
-
-## Review the migration summary
-
-1. On the **Summary** screen, in the **Activity name** text box, specify a name for the migration activity.
-
-2. Review and verify the details associated with the migration project.
-
- ![Migration project summary](media/tutorial-sql-server-to-managed-instance/dms-project-summary.png)
-
-## Run the migration
--- Select **Start migration**.-
 - The migration activity window appears and displays the current migration status of the databases and logins.
-
-## Monitor the migration
-
-1. In the migration activity screen, select **Refresh** to update the display.
-
- ![Screenshot that shows the migration activity screen and the Refresh button.](media/tutorial-sql-server-to-managed-instance/dms-monitor-migration.png)
-
-2. You can further expand the databases and logins categories to monitor the migration status of the respective server objects.
-
- ![Migration activity in progress](media/tutorial-sql-server-to-managed-instance/dms-monitor-migration-extend.png)
-
-3. After the migration completes, verify the target database on the SQL Managed Instance environment.
-
-## Additional resources
--- For a tutorial showing you how to migrate a database to SQL Managed Instance using the T-SQL RESTORE command, see [Restore a backup to SQL Managed Instance using the restore command](/azure/azure-sql/managed-instance/restore-sample-database-quickstart).-- For information about SQL Managed Instance, see [What is SQL Managed Instance](/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview).-- For information about connecting apps to SQL Managed Instance, see [Connect applications](/azure/azure-sql/managed-instance/connect-application-instance).
event-grid Event Handlers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-handlers.md
Title: Azure Event Grid event handlers description: Describes supported event handlers for Azure Event Grid. Azure Automation, Functions, Event Hubs, Hybrid Connections, Logic Apps, Service Bus, Queue Storage, Webhooks. Previously updated : 06/16/2023 Last updated : 07/31/2024 # Event handlers in Azure Event Grid
event-hubs Event Hubs Kafka Connect Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-kafka-connect-tutorial.md
Title: Integrate with Apache Kafka Connect- Azure Event Hubs | Microsoft Docs
-description: This article provides information on how to use Kafka Connect with Azure Event Hubs for Kafka.
+ Title: Integrate with Apache Kafka Connect
+description: This article provides a walkthrough that shows you how to use Kafka Connect with Azure Event Hubs for Kafka.
Previously updated : 05/18/2023 Last updated : 07/31/2024
+# customer intent: As a developer, I want to know how to use Apache Kafka Connect with Azure Event Hubs for Kafka.
# Integrate Apache Kafka Connect support on Azure Event Hubs
-[Apache Kafka Connect](https://kafka.apache.org/documentation/#connect) is a framework to connect and import/export data from/to any external system such as MySQL, HDFS, and file system through a Kafka cluster. This tutorial walks you through using Kafka Connect framework with Event Hubs.
+[Apache Kafka Connect](https://kafka.apache.org/documentation/#connect) is a framework to connect and import/export data from/to any external system such as MySQL, HDFS, and file systems through a Kafka cluster. This article walks you through using the Kafka Connect framework with Event Hubs.
-
-This tutorial walks you through integrating Kafka Connect with an event hub and deploying basic FileStreamSource and FileStreamSink connectors. While these connectors aren't meant for production use, they demonstrate an end-to-end Kafka Connect scenario where Azure Event Hubs acts as a Kafka broker.
+This article walks you through integrating Kafka Connect with an event hub and deploying basic `FileStreamSource` and `FileStreamSink` connectors. While these connectors aren't meant for production use, they demonstrate an end-to-end Kafka Connect scenario where Azure Event Hubs acts as a Kafka broker.
> [!NOTE] > This sample is available on [GitHub](https://github.com/Azure/azure-event-hubs-for-kafka/tree/master/tutorials/connect).
-In this tutorial, you take the following steps:
-
-> [!div class="checklist"]
-> * Create an Event Hubs namespace
-> * Clone the example project
-> * Configure Kafka Connect for Event Hubs
-> * Run Kafka Connect
-> * Create connectors
- ## Prerequisites To complete this walkthrough, make sure you have the following prerequisites:
An Event Hubs namespace is required to send and receive from any Event Hubs serv
## Clone the example project Clone the Azure Event Hubs repository and navigate to the tutorials/connect subfolder:
-```
+```bash
git clone https://github.com/Azure/azure-event-hubs-for-kafka.git cd azure-event-hubs-for-kafka/tutorials/connect ``` ## Configure Kafka Connect for Event Hubs
-Minimal reconfiguration is necessary when redirecting Kafka Connect throughput from Kafka to Event Hubs. The following `connect-distributed.properties` sample illustrates how to configure Connect to authenticate and communicate with the Kafka endpoint on Event Hubs:
+Minimal reconfiguration is necessary when redirecting Kafka Connect throughput from Kafka to Event Hubs. The following `connect-distributed.properties` sample illustrates how to configure Connect to authenticate and communicate with the Kafka endpoint on Event Hubs:
```properties # e.g. namespace.servicebus.windows.net:9093
plugin.path={KAFKA.DIRECTORY}/libs # path to the libs directory within the Kafka
In this step, a Kafka Connect worker is started locally in distributed mode, using Event Hubs to maintain cluster state.
-1. Save the above `connect-distributed.properties` file locally. Be sure to replace all values in braces.
+1. Save the `connect-distributed.properties` file locally. Be sure to replace all values in braces.
2. Navigate to the location of the Kafka release on your machine.
-4. Run `./bin/connect-distributed.sh /PATH/TO/connect-distributed.properties`. The Connect worker REST API is ready for interaction when you see `'INFO Finished starting connectors and tasks'`.
+4. Run `./bin/connect-distributed.sh /PATH/TO/connect-distributed.properties`. The Connect worker REST API is ready for interaction when you see `'INFO Finished starting connectors and tasks'`.
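   Optionally, you can confirm that the worker's REST API responds before creating any connectors. A quick sketch with `curl` (the worker listens on port 8083 by default):

   ```bash
   # Returns the Connect worker version and cluster ID if the worker is up
   curl -s http://localhost:8083/

   # Lists the connector plugins available to this worker
   curl -s http://localhost:8083/connector-plugins
   ```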
> [!NOTE] > Kafka Connect uses the Kafka AdminClient API to automatically create topics with recommended configurations, including compaction. A quick check of the namespace in the Azure portal reveals that the Connect worker's internal topics have been created automatically.
In this step, a Kafka Connect worker is started locally in distributed mode, usi
>Kafka Connect internal topics **must use compaction**. The Event Hubs team is not responsible for fixing improper configurations if internal Connect topics are incorrectly configured. ### Create connectors
-This section walks you through spinning up FileStreamSource and FileStreamSink connectors.
+This section walks you through spinning up `FileStreamSource` and `FileStreamSink` connectors.
1. Create a directory for input and output data files. ```bash mkdir ~/connect-quickstart ```
-2. Create two files: one file with seed data from which the FileStreamSource connector reads, and another to which our FileStreamSink connector writes.
+2. Create two files: one file with seed data from which the `FileStreamSource` connector reads, and another to which our `FileStreamSink` connector writes.
```bash seq 1000 > ~/connect-quickstart/input.txt touch ~/connect-quickstart/output.txt ```
-3. Create a FileStreamSource connector. Be sure to replace the curly braces with your home directory path.
+3. Create a `FileStreamSource` connector. Be sure to replace the curly braces with your home directory path.
```bash curl -s -X POST -H "Content-Type: application/json" --data '{"name": "file-source","config": {"connector.class":"org.apache.kafka.connect.file.FileStreamSourceConnector","tasks.max":"1","topic":"connect-quickstart","file": "{YOUR/HOME/PATH}/connect-quickstart/input.txt"}}' http://localhost:8083/connectors ```
- You should see the event hub `connect-quickstart` on your Event Hubs instance after running the above command.
+ You should see the event hub `connect-quickstart` on your Event Hubs instance after running the command.
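   If you have the Azure CLI installed, one way to double-check is to list the event hubs in the namespace. This is only a sketch; `myresourcegroup` and `mynamespace` are placeholders for your own resource group and namespace names:

   ```bash
   # 'connect-quickstart' should appear in the list once the source connector has started
   az eventhubs eventhub list --resource-group myresourcegroup --namespace-name mynamespace --output table
   ```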
4. Check status of source connector. ```bash curl -s http://localhost:8083/connectors/file-source/status ```
- Optionally, you can use [Service Bus Explorer](https://github.com/paolosalvatori/ServiceBusExplorer/releases) to verify that events have arrived in the `connect-quickstart` topic.
+ Optionally, you can use [Service Bus Explorer](https://github.com/paolosalvatori/ServiceBusExplorer/releases) to verify that events arrived in the `connect-quickstart` topic.
-5. Create a FileStreamSink Connector. Again, make sure you replace the curly braces with your home directory path.
+5. Create a `FileStreamSink` connector. Again, make sure you replace the curly braces with your home directory path.
```bash curl -X POST -H "Content-Type: application/json" --data '{"name": "file-sink", "config": {"connector.class":"org.apache.kafka.connect.file.FileStreamSinkConnector", "tasks.max":"1", "topics":"connect-quickstart", "file": "{YOUR/HOME/PATH}/connect-quickstart/output.txt"}}' http://localhost:8083/connectors ```
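Similar to the source connector, you can check the sink connector's status and, once it catches up on the topic, compare the output file with the seed data. A minimal sketch, assuming the `~/connect-quickstart` files created earlier:

```bash
# Check the sink connector status
curl -s http://localhost:8083/connectors/file-sink/status

# When the sink has processed the topic, the two files should contain the same records
diff ~/connect-quickstart/input.txt ~/connect-quickstart/output.txt && echo "Input and output files match"
```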
This section walks you through spinning up FileStreamSource and FileStreamSink c
``` ### Cleanup
-Kafka Connect creates Event Hubs topics to store configurations, offsets, and status that persist even after the Connect cluster has been taken down. Unless this persistence is desired, it's recommended that these topics are deleted. You may also want to delete the `connect-quickstart` Event Hubs that were created during this walkthrough.
+Kafka Connect creates Event Hubs topics to store configurations, offsets, and status that persist even after the Connect cluster has been taken down. Unless this persistence is desired, we recommend that you delete these topics. You might also want to delete the `connect-quickstart` Event Hubs that were created during this walkthrough.
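If you want to remove the connectors themselves before deleting the topics, you can do that through the Connect worker's REST API. A short sketch, assuming the worker is still running locally on port 8083:

```bash
# Delete the connectors created during this walkthrough
curl -s -X DELETE http://localhost:8083/connectors/file-source
curl -s -X DELETE http://localhost:8083/connectors/file-sink
```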
-## Next steps
+## Related content
To learn more about Event Hubs for Kafka, see the following articles:

-- [Mirror a Kafka broker in an event hub](event-hubs-kafka-mirror-maker-tutorial.md)
-- [Connect Apache Spark to an event hub](event-hubs-kafka-spark-tutorial.md)
-- [Connect Apache Flink to an event hub](event-hubs-kafka-flink-tutorial.md)
-- [Explore samples on our GitHub](https://github.com/Azure/azure-event-hubs-for-kafka)
-- [Connect Akka Streams to an event hub](event-hubs-kafka-akka-streams-tutorial.md)
- [Apache Kafka developer guide for Azure Event Hubs](apache-kafka-developer-guide.md)
+- [Explore samples on our GitHub](https://github.com/Azure/azure-event-hubs-for-kafka)
++
event-hubs Event Processor Balance Partition Load https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-processor-balance-partition-load.md
Title: Balance partition load across multiple instances - Azure Event Hubs | Microsoft Docs
+ Title: Balance partition load across multiple instances
description: Describes how to balance partition load across multiple instances of your application using an event processor and the Azure Event Hubs SDK. - Previously updated : 11/14/2022+ Last updated : 07/31/2024
+#customer intent: As a developer, I want to know how to run multiple instances of my processing client to read data from an event hub.
# Balance partition load across multiple instances of your application
When the checkpoint is performed to mark an event as processed, an entry in chec
By default, the function that processes events is called sequentially for a given partition. Subsequent events and calls to this function from the same partition queue up behind the scenes as the event pump continues to run in the background on other threads. Events from different partitions can be processed concurrently, and any shared state that is accessed across partitions has to be synchronized.
-## Next steps
+## Related content
See the following quick starts: - [.NET Core](event-hubs-dotnet-standard-getstarted-send.md)
event-hubs Private Link Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/private-link-service.md
Title: Integrate Azure Event Hubs with Azure Private Link Service
-description: Learn how to integrate Azure Event Hubs with Azure Private Link Service
Previously updated : 02/15/2023--
+description: This article describes how to allow access to your Event Hubs namespace only via private endpoints by using the Azure Private Link Service.
Last updated : 07/31/2024+
+# customer intent: As an IT admin, I want to restrict access to an Event Hubs namespace to a private endpoint in a virtual network.
# Allow access to Azure Event Hubs namespaces via private endpoints Azure Private Link Service enables you to access Azure Services (for example, Azure Event Hubs, Azure Storage, and Azure Cosmos DB) and Azure hosted customer/partner services over a **private endpoint** in your virtual network.
-A private endpoint is a network interface that connects you privately and securely to a service powered by Azure Private Link. The private endpoint uses a private IP address from your virtual network, effectively bringing the service into your virtual network. All traffic to the service can be routed through the private endpoint, so no gateways, NAT devices, ExpressRoute or VPN connections, or public IP addresses are needed. Traffic between your virtual network and the service traverses over the Microsoft backbone network, eliminating exposure from the public Internet. You can connect to an instance of an Azure resource, giving you the highest level of granularity in access control.
+A private endpoint is a network interface that connects you privately and securely to a service powered by Azure Private Link. The private endpoint uses a private IP address from your virtual network, effectively bringing the service into your virtual network. All traffic to the service is routed through the private endpoint, so no gateways, NAT devices, ExpressRoute or VPN connections, or public IP addresses are needed. Traffic between your virtual network and the service traverses over the Microsoft backbone network, eliminating exposure from the public Internet. You can connect to an instance of an Azure resource, giving you the highest level of granularity in access control.
For more information, see [What is Azure Private Link?](../private-link/private-link-overview.md) ## Important points - This feature isn't supported in the **basic** tier.-- Enabling private endpoints can prevent other Azure services from interacting with Event Hubs. Requests that are blocked include those from other Azure services, from the Azure portal, from logging and metrics services, and so on. As an exception, you can allow access to Event Hubs resources from certain **trusted services** even when private endpoints are enabled. For a list of trusted services, see [Trusted services](#trusted-microsoft-services).
+- Enabling private endpoints can prevent other Azure services from interacting with Event Hubs. Requests that are blocked include those from other Azure services, from the Azure portal, from logging and metrics services, and so on. As an exception, you can allow access to Event Hubs resources from certain **trusted services** even when private endpoints are enabled. For a list of trusted services, see [Trusted services](#trusted-microsoft-services).
- Specify **at least one IP rule or virtual network rule** for the namespace to allow traffic only from the specified IP addresses or subnet of a virtual network. If there are no IP and virtual network rules, the namespace can be accessed over the public internet (using the access key). ## Add a private endpoint using Azure portal
To integrate an Event Hubs namespace with Azure Private Link, you need the follo
- A subnet in the virtual network. You can use the **default** subnet. - Owner or contributor permissions for both the namespace and the virtual network.
-Your private endpoint and virtual network must be in the same region. When you select a region for the private endpoint using the portal, it will automatically filter only virtual networks that are in that region. Your namespace can be in a different region.
+Your private endpoint and virtual network must be in the same region. When you select a region for the private endpoint in the portal, the list is automatically filtered to virtual networks in that region. Your namespace can be in a different region.
Your private endpoint uses a private IP address in your virtual network.
If you already have an Event Hubs namespace, you can create a private link conne
1. On the **Networking** page, for **Public network access**, select **Disabled** if you want the namespace to be accessed only via private endpoints. 1. For **Allow trusted Microsoft services to bypass this firewall**, select **Yes** if you want to allow [trusted Microsoft services](#trusted-microsoft-services) to bypass this firewall.
- :::image type="content" source="./media/private-link-service/public-access-disabled.png" alt-text="Screenshot of the Networking page with public network access as Disabled.":::
+ :::image type="content" source="./media/private-link-service/public-access-disabled.png" alt-text="Screenshot of the Networking page with public network access as Disabled." lightbox="./media/private-link-service/public-access-disabled.png":::
1. Switch to the **Private endpoint connections** tab. 1. Select the **+ Private Endpoint** button at the top of the page.
If you already have an Event Hubs namespace, you can create a private link conne
1. On the **Tags** page, create any tags (names and values) that you want to associate with the private endpoint resource. Then, select **Review + create** button at the bottom of the page. 1. On the **Review + create**, review all the settings, and select **Create** to create the private endpoint.
- ![Create Private Endpoint - Review and Create page](./media/private-link-service/create-private-endpoint-review-create-page.png)
-12. Confirm that you see the private endpoint connection you created shows up in the list of endpoints. In this example, the private endpoint is auto-approved because you connected to an Azure resource in your directory and you have sufficient permissions.
+ ![Screenshot that shows the Review + create page.](./media/private-link-service/create-private-endpoint-review-create-page.png)
+12. Confirm that the private endpoint connection you created shows up in the list of endpoints. Refresh the page and switch to the **Private endpoint connections** tab. In this example, the private endpoint is auto-approved because you connected to an Azure resource in your directory and you have sufficient permissions.
- ![Private endpoint created](./media/private-link-service/private-endpoint-created.png)
+ ![Screenshot that shows the Private endpoint connections page with the newly created private endpoint.](./media/private-link-service/private-endpoint-created.png)
[!INCLUDE [event-hubs-trusted-services](./includes/event-hubs-trusted-services.md)]
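If you prefer scripting the portal steps above, the following Azure CLI sketch creates a private endpoint for an existing namespace. All resource names are placeholders, and depending on your CLI version the group flag may be `--group-id` or `--group-ids`:

```bash
# Look up the resource ID of the existing Event Hubs namespace
namespaceId=$(az eventhubs namespace show \
    --resource-group myResourceGroup \
    --name myNamespace \
    --query id --output tsv)

# Create a private endpoint for the namespace in an existing virtual network and subnet
az network private-endpoint create \
    --resource-group myResourceGroup \
    --name myPrivateEndpoint \
    --vnet-name myVirtualNetwork \
    --subnet default \
    --private-connection-resource-id "$namespaceId" \
    --group-id namespace \
    --connection-name myNamespaceConnection
```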
There are four provisioning states:
| None | Pending | Connection is created manually and is pending approval from the Private Link resource owner. | | Approve | Approved | Connection was automatically or manually approved and is ready to be used. | | Reject | Rejected | Connection was rejected by the private link resource owner. |
-| Remove | Disconnected | Connection was removed by the private link resource owner, the private endpoint becomes informative and should be deleted for cleanup. |
+| Remove | Disconnected | Connection was removed by the private link resource owner. The private endpoint becomes informative and should be deleted for cleanup. |
### Approve, reject, or remove a private endpoint connection
There are four provisioning states:
2. Select the **private endpoint** you wish to approve 3. Select the **Approve** button.
- ![Approve private endpoint](./media/private-link-service/approve-private-endpoint.png)
+ :::image type="content" source="./media/private-link-service/approve-private-endpoint.png" alt-text="Screenshot that shows the Private endpoint connections tab with the Approve button highlighted.":::
4. On the **Approve connection** page, add a comment (optional), and select **Yes**. If you select **No**, nothing happens. 5. You should see the status of the private endpoint connection in the list changed to **Approved**.
There are four provisioning states:
1. If there are any private endpoint connections you want to reject, whether it's a pending request or existing connection, select the connection and select the **Reject** button.
- ![Reject private endpoint](./media/private-link-service/private-endpoint-reject-button.png)
+ :::image type="content" source="./media/private-link-service/private-endpoint-reject-button.png" alt-text="Screenshot that shows the Private endpoint connections tab with the Reject button highlighted.":::
2. On the **Reject connection** page, enter a comment (optional), and select **Yes**. If you select **No**, nothing happens. 3. You should see the status of the private endpoint connection in the list changed to **Rejected**.
Aliases: <event-hubs-namespace-name>.servicebus.windows.net
For more, see [Azure Private Link service: Limitations](../private-link/private-link-service-overview.md#limitations)
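From a virtual machine inside the virtual network, you can confirm that the namespace FQDN now resolves through the `privatelink` alias to a private IP address from your subnet. A quick check, with the namespace name as a placeholder:

```bash
# Run from inside the virtual network; the answer should be a private IP, not a public one
nslookup mynamespace.servicebus.windows.net
```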
-## Next steps
+## Related content
- Learn more about [Azure Private Link](../private-link/private-link-service-overview.md) - Learn more about [Azure Event Hubs](event-hubs-about.md)
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md
Title: Connectivity providers and locations for Azure ExpressRoute
description: This article provides a detailed overview of peering locations served by each ExpressRoute connectivity provider to connect to Azure. -+ Last updated 04/21/2024
The following table shows locations by service provider. If you want to view ava
| **[Retelit](https://www.retelit.it/EN/Home.aspx)** | Supported | Supported | Milan | | **RISQ** |Supported | Supported | Quebec City<br/>Montreal | | **SCSK** |Supported | Supported | Tokyo3 |
-| **[Sejong Telecom](https://www.sejongtelecom.net/en/pages/service/cloud_ms)** | Supported | Supported | Seoul |
+| **[Sejong Telecom](https://www.sejongtelecom.net/)** | Supported | Supported | Seoul |
| **[SES](https://www.ses.com/networks/signature-solutions/signature-cloud/ses-and-azure-expressroute)** | Supported | Supported | London2<br/>Washington DC | | **[SIFY](https://sifytechnologies.com/)** | Supported | Supported | Chennai<br/>Mumbai2 | | **[SingTel](https://www.singtel.com/about-us/news-releases/singtel-provide-secure-private-access-microsoft-azure-public-cloud)** |Supported |Supported | Hong Kong2<br/>Singapore<br/>Singapore2 |
Enabling private connectivity to fit your needs can be challenging, based on the
| **Orange Networks** | Europe | | **[Perficient](https://www.perficient.com/Partners/Microsoft/Cloud/Azure-ExpressRoute)** | North America | | **[Presidio](https://www.presidio.com/subpage/1107/microsoft-azure)** | North America |
-| **[sol-tec](https://www.sol-tec.com/what-we-do/)** | Europe |
+| **[sol-tec](https://www.advania.co.uk/our-services/azure-and-cloud/)** | Europe |
| **[Venha Pra Nuvem](https://venhapranuvem.com.br/)** | South America | | **[Vigilant.IT](https://vigilant.it/networking-services/microsoft-azure-networking/)** | Australia |
expressroute Expressroute Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-routing.md
ExpressRoute can't be configured as transit routers. You have to rely on your co
Default routes are permitted only on Azure private peering sessions. In such a case, ExpressRoute routes all traffic from the associated virtual networks to your network. Advertising default routes into private peering results in the internet path from Azure being blocked. You must rely on your corporate edge to route traffic from and to the internet for services hosted in Azure.
-To enable connectivity to other Azure services and infrastructure services, you must make sure one of the following items is in place:
-
-* You use user-defined routing to allow internet connectivity for every subnet requiring Internet connectivity.
+Some services can't be accessed from your corporate edge. To enable connectivity to other Azure services and infrastructure services, you must use user-defined routing to allow internet connectivity for every subnet that requires it for these services.
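As an illustration only, the following Azure CLI sketch creates a route table with a default route to the internet and associates it with a subnet; resource names are placeholders, and your own routing design may differ:

```bash
# Create a route table with a default route that sends 0.0.0.0/0 to the internet
az network route-table create --resource-group myResourceGroup --name myRouteTable

az network route-table route create \
    --resource-group myResourceGroup \
    --route-table-name myRouteTable \
    --name DefaultToInternet \
    --address-prefix 0.0.0.0/0 \
    --next-hop-type Internet

# Associate the route table with the subnet that needs internet connectivity
az network vnet subnet update \
    --resource-group myResourceGroup \
    --vnet-name myVirtualNetwork \
    --name mySubnet \
    --route-table myRouteTable
```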
> [!NOTE] > Advertising default routes will break Windows and other VM license activation. For information about a work around, see [use user defined routes to enable KMS activation](/archive/blogs/mast/use-azure-custom-routes-to-enable-kms-activation-with-forced-tunneling).
hdinsight Hdinsight Apps Install Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-apps-install-applications.md
Title: Install third-party applications on Azure HDInsight description: Learn how to install third-party Apache Hadoop applications on Azure HDInsight.-+ Last updated 12/08/2023
The following list shows the published applications:
|[AtScale Intelligence Platform](https://aws.amazon.com/marketplace/pp/AtScale-AtScale-Intelligence-Platform/B07BWWHH18) |Hadoop |AtScale turns your HDInsight cluster into a scale-out OLAP server, allowing you to query billions of rows of data interactively using the BI tools you already know, own, and love ΓÇô from Microsoft Excel, Power BI, Tableau Software to QlikView. | |[Datameer](https://azuremarketplace.microsoft.com/marketplace/apps/datameer.datameer) |Hadoop |Datameer's self-service scalable platform for preparing, exploring, and governing your data for analytics accelerates turning complex multisource data into valuable business-ready information, delivering faster, smarter insights at an enterprise-scale. | |[Dataiku DSS on HDInsight](https://azuremarketplace.microsoft.com/marketplace/apps/dataiku.dataiku-data-science-studio) |Hadoop, Spark |Dataiku DSS in an enterprise data science platform that lets data scientists and data analysts collaborate to design and run new data products and services more efficiently, turning raw data into impactful predictions. |
-|[WANdisco Fusion HDI App](https://community.wandisco.com/s/article/Use-WANdisco-Fusion-for-parallel-operation-of-ADLS-Gen1-and-Gen2) |Hadoop, Spark, HBase, Kafka |Keeping data consistent in a distributed environment is a massive data operations challenge. WANdisco Fusion, an enterprise-class software platform, solves this problem by enabling unstructured data consistency across any environment. |
+|[WANdisco Fusion HDI App](https://docs.wandisco.com/bigdata/wdfusion/adls/) |Hadoop, Spark, HBase, Kafka |Keeping data consistent in a distributed environment is a massive data operations challenge. WANdisco Fusion, an enterprise-class software platform, solves this problem by enabling unstructured data consistency across any environment. |
|H2O SparklingWater for HDInsight |Spark |H2O Sparkling Water supports the following distributed algorithms: GLM, Naïve Bayes, Distributed Random Forest, Gradient Boosting Machine, Deep Neural Networks, Deep learning, K-means, PCA, Generalized Low Rank Models, Anomaly Detection, Autoencoders. | |[Striim for Real-Time Data Integration to HDInsight](https://azuremarketplace.microsoft.com/marketplace/apps/striim.striimbyol) |Hadoop, HBase, Spark, Kafka |Striim (pronounced "stream") is an end-to-end streaming data integration + intelligence platform, enabling continuous ingestion, processing, and analytics of disparate data streams. | |[Jumbune Enterprise-Accelerating BigData Analytics](https://azuremarketplace.microsoft.com/marketplace/apps/impetus-infotech-india-pvt-ltd.impetus_jumbune) |Hadoop, Spark |At a high level, Jumbune assists enterprises by, 1. Accelerating Tez, MapReduce & Spark engine based Hive, Java, Scala workload performance. 2. Proactive Hadoop Cluster Monitoring, 3. Establishing Data Quality management on distributed file system. |
healthcare-apis Api Versioning Dicom Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/api-versioning-dicom-service.md
Title: API versioning for DICOM service - Azure Health Data Services
description: This guide gives an overview of the API version policies for the DICOM service. --++ Last updated 10/13/2023
healthcare-apis Change Feed Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/change-feed-overview.md
Title: Change feed overview for the DICOM service in Azure Health Data Services description: Learn how to use the change feed in the DICOM service to access the logs of all the changes that occur in your organization's medical imaging data. The change feed allows you to query, process, and act upon the change events in a scalable and efficient way. --++ Last updated 1/18/2024
healthcare-apis Configure Cross Origin Resource Sharing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/configure-cross-origin-resource-sharing.md
Last updated 10/09/2023 --++ # Configure cross-origin resource sharing
healthcare-apis Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/configure-customer-managed-keys.md
Title: Configure customer-managed keys (CMK) for the DICOM service in Azure Health Data Services description: Use customer-managed keys (CMK) to encrypt data in the DICOM service. Create and manage CMK in Azure Key Vault and update the encryption key with a managed identity. -+ Last updated 11/20/2023
healthcare-apis Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/customer-managed-keys.md
Title: Best practices for customer-managed keys for the DICOM service in Azure Health Data Services description: Encrypt your data with customer-managed keys (CMK) in the DICOM service in Azure Health Data Services. Get tips on requirements, best practices, limitations, and troubleshooting. -+ Last updated 11/20/2023
healthcare-apis Data Partitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/data-partitions.md
Title: Enable data partitioning for the DICOM service in Azure Health Data Services description: Learn how to enable data partitioning for efficient storage and management of medical images for the DICOM service in Azure Health Data Services. -+ Last updated 03/26/2024
healthcare-apis Deploy Dicom Services In Azure Data Lake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/deploy-dicom-services-in-azure-data-lake.md
Title: Deploy the DICOM service with Azure Data Lake Storage description: Learn how to deploy the DICOM service and store all your DICOM data in its native format with a data lake in Azure Health Data Services. --++ Last updated 11/21/2023
healthcare-apis Dicom Data Lake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-data-lake.md
Title: Manage medical imaging data with the DICOM service and Azure Data Lake Storage description: Learn how to use the DICOM service in Azure Health Data Services to store, access, and analyze medical imaging data in the cloud. Explore the benefits, architecture, and data contracts of the integration of the DICOM service with Azure Data Lake Storage. --++ Last updated 03/11/2024
healthcare-apis Dicom Extended Query Tags Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-extended-query-tags-overview.md
Title: DICOM extended query tags overview - Azure Health Data Services description: In this article, you'll learn the concepts of Extended Query Tags. --++ Last updated 10/9/2023
healthcare-apis Dicom Register Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-register-application.md
Title: Register a client application for the DICOM service in Microsoft Entra ID description: Learn how to register a client application for the DICOM service in Microsoft Entra ID. --++ Last updated 09/02/2022
healthcare-apis Dicom Service V2 Api Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-service-v2-api-changes.md
Title: DICOM Service API v2 Changes - Azure Health Data Services
description: This guide gives an overview of the changes in the v2 API for the DICOM service. --++ Last updated 10/13/2023
healthcare-apis Dicom Services Conformance Statement V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-services-conformance-statement-v2.md
Title: DICOM Conformance Statement version 2 for Azure Health Data Services
description: Read about the features and specifications of the DICOM service v2 API, which supports a subset of the DICOMweb Standard for medical imaging data. A DICOM Conformance Statement is a technical document that describes how a device or software implements the DICOM standard. --++ Last updated 1/18/2024
healthcare-apis Dicom Services Conformance Statement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-services-conformance-statement.md
Title: DICOM Conformance Statement version 1 for Azure Health Data Services
description: Read about the features and specifications of the DICOM service v1 API, which supports a subset of the DICOMweb Standard for medical imaging data. A DICOM Conformance Statement is a technical document that describes how a device or software implements the DICOM standard. --++ Last updated 10/13/2023
healthcare-apis Dicomweb Standard Apis C Sharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicomweb-standard-apis-c-sharp.md
Title: Use C# and DICOMweb Standard APIs in Azure Health Data Services description: Learn how to use C# and DICOMweb Standard APIs to store, retrieve, search, and delete DICOM files in the DICOM service. --++ Last updated 10/18/2023
healthcare-apis Dicomweb Standard Apis Curl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicomweb-standard-apis-curl.md
Title: Use cURL and DICOMweb Standard APIs in Azure Health Data Services description: Use cURL and DICOMweb Standard APIs to store, retrieve, search, and delete DICOM files in the DICOM service. --++ Last updated 10/18/2023
healthcare-apis Dicomweb Standard Apis Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicomweb-standard-apis-python.md
Title: Use Python and DICOMweb Standard APIs in Azure Health Data Services description: Use Python and DICOMweb Standard APIs to store, retrieve, search, and delete DICOM files in the DICOM service. --++ Last updated 02/15/2022
healthcare-apis Dicomweb Standard Apis With Dicom Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicomweb-standard-apis-with-dicom-services.md
Title: Access DICOMweb APIs to manage DICOM data in Azure Health Data Services description: Learn how to use DICOMweb APIs to store, review, search, and delete DICOM objects. Learn how to use custom APIs to track changes and assign unique tags to DICOM data. --++ Last updated 05/29/2024
healthcare-apis Enable Diagnostic Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/enable-diagnostic-logging.md
Title: Enable diagnostic logging in the DICOM service - Azure Health Data Services description: This article explains how to enable diagnostic logging in the DICOM service. --++ Last updated 10/13/2023
healthcare-apis Export Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/export-files.md
Title: Export DICOM files by using the export API of the DICOM service description: This how-to guide explains how to export DICOM files to an Azure Blob Storage account. --++ Last updated 10/30/2023
healthcare-apis Get Access Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/get-access-token.md
Title: Get an access token for the DICOM service in Azure Health Data Services description: Find out how to secure your access to the DICOM service with a token. Use the Azure command-line tool and unique identifiers to manage your medical images. --++ Last updated 10/13/2023
healthcare-apis Get Started With Analytics Dicom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/get-started-with-analytics-dicom.md
Title: Get started using DICOM data in analytics workloads - Azure Health Data S
description: Learn how to use Azure Data Factory and Microsoft Fabric to perform analytics on DICOM data. --++ Last updated 10/13/2023
healthcare-apis Import Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/import-files.md
Title: Import DICOM files into the DICOM service description: Learn how to import DICOM files by using bulk import in Azure Health Data Services. --++ Last updated 10/05/2023
healthcare-apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/overview.md
Title: Overview of the DICOM service in Azure Health Data Services description: The DICOM service is a cloud-based solution for storing, managing, and exchanging medical imaging data securely and efficiently with any DICOMwebΓäó-enabled systems or applications. Learn more about its benefits and use cases. --++ Last updated 10/13/2023
healthcare-apis Pull Dicom Changes From Change Feed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/pull-dicom-changes-from-change-feed.md
Title: Access DICOM Change Feed logs by using C# and the DICOM client package in Azure Health Data Services description: Learn how to use C# code to consume Change Feed, a feature of the DICOM service that provides logs of all the changes in your organization's medical imaging data. The code example uses the DICOM client package to access and process the Change Feed. --++ Last updated 1/18/2024
healthcare-apis References For Dicom Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/references-for-dicom-service.md
Title: References for DICOM service - Azure Health Data Services description: This reference provides related resources for the DICOM service. --++ Last updated 06/03/2022
healthcare-apis Update Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/update-files.md
Title: Update files in the DICOM service in Azure Health Data Services description: Learn how to use the bulk update API in Azure Health Data Services to modify DICOM attributes for multiple files in the DICOM service. This article explains the benefits, requirements, and steps of the bulk update operation. --++ Last updated 1/18/2024
healthcare-apis Healthcare Apis Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/healthcare-apis-quickstart.md
Title: Azure Health Data Services quickstart description: Learn how to create a workspace for Azure Health Data Services by using the Azure portal. The workspace is a centralized logical container for instances of the FHIR service, DICOM service, and MedTech service. -+ Last updated 06/07/2024
healthcare-apis Release Notes 2021 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes-2021.md
Title: Release notes for 2021 Azure Health Data Services monthly releases
description: 2021 - Explore the new capabilities and benefits of Azure Health Data Services in 2021. Learn about the features and enhancements introduced in the FHIR, DICOM, and MedTech services that help you manage and analyze health data. -+ Last updated 03/13/2024
healthcare-apis Release Notes 2022 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes-2022.md
Title: Release notes for 2022 Azure Health Data Services monthly releases
description: 2022 - Explore the Azure Health Data Services release notes for 2022. Learn about the features and enhancements introduced in the FHIR, DICOM, and MedTech services that help you manage and analyze health data. -+ Last updated 03/13/2024
healthcare-apis Release Notes 2023 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes-2023.md
Title: Release notes for 2023 Azure Health Data Services monthly releases
description: 2023 - Find out about features and improvements introduced in 2023 for the FHIR, DICOM, and MedTech services in Azure Health Data Services. Review the monthly release notes and learn how to get the most out of healthcare data. -+ Last updated 03/13/2024
healthcare-apis Release Notes 2024 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes-2024.md
Title: Release notes for 2024 Azure Health Data Services monthly releases
description: 2024 - Stay updated with the latest features and improvements for the FHIR, DICOM, and MedTech services in Azure Health Data Services in 2024. Read the monthly release notes and learn how to get the most out of healthcare data. -+ Last updated 07/29/2024
key-vault Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/quick-create-bicep.md
description: Quickstart showing how to create Azure key vaults, and add key to t
-+
key-vault Hsm Protected Keys Byok https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/hsm-protected-keys-byok.md
description: Use this article to help you plan for, generate, and transfer your
-+ Last updated 01/30/2024
For more information on login options via the CLI, take a look at [sign in with
||||| |Cryptomathic|ISV (Enterprise Key Management System)|Multiple HSM brands and models including<ul><li>nCipher</li><li>Thales</li><li>Utimaco</li></ul>See [Cryptomathic site for details](https://www.cryptomathic.com/)|| |Entrust|Manufacturer,<br/>HSM as a service|<ul><li>nShield family of HSMs</li><li>nShield as a service</ul>|[nCipher new BYOK tool and documentation](https://www.ncipher.com/products/key-management/cloud-microsoft-azure)|
-|Fortanix|Manufacturer,<br/>HSM as a service|<ul><li>Self-Defending Key Management Service (SDKMS)</li><li>Equinix SmartKey</li></ul>|[Exporting SDKMS keys to Cloud Providers for BYOK - Azure Key Vault](https://support.fortanix.com/hc/en-us/articles/360040071192-Exporting-SDKMS-keys-to-Cloud-Providers-for-BYOK-Azure-Key-Vault)|
+|Fortanix|Manufacturer,<br/>HSM as a service|<ul><li>Self-Defending Key Management Service (SDKMS)</li><li>Equinix SmartKey</li></ul>|[Exporting SDKMS keys to Cloud Providers for BYOK - Azure Key Vault](https://support.fortanix.com/hc/articles/11620525047828-Fortanix-DSM-Azure-Key-Vault-BYOK-Bring-Your-Own-Key)|
|IBM|Manufacturer|IBM 476x, CryptoExpress|[IBM Enterprise Key Management Foundation](https://www.ibm.com/security/key-management/ekmf-bring-your-own-key-azure)| |Marvell|Manufacturer|All LiquidSecurity HSMs with<ul><li>Firmware version 2.0.4 or later</li><li>Firmware version 3.2 or newer</li></ul>|[Marvell BYOK tool and documentation](https://www.marvell.com/products/security-solutions/nitrox-hs-adapters/exporting-marvell-hsm-keys-to-cloud-azure-key-vault.html)| |Securosys SA|Manufacturer, HSM as a service|Primus HSM family, Securosys Clouds HSM|[Primus BYOK tool and documentation](https://www.securosys.com/primus-azure-byok)|
key-vault About Managed Storage Account Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/about-managed-storage-account-keys.md
description: Overview of Azure Key Vault managed storage account keys.
-+ Last updated 01/30/2024
key-vault About Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/about-secrets.md
description: Overview of Azure Key Vault secrets.
-+ Last updated 01/30/2024
key-vault Javascript Developer Guide Backup Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/javascript-developer-guide-backup-secrets.md
Title: Back up Azure Key Vault secret with JavaScript
description: Back up and restore Key Vault secret using JavaScript. -+
key-vault Javascript Developer Guide Delete Secret https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/javascript-developer-guide-delete-secret.md
Title: Delete Azure Key Vault secret with JavaScript
description: Delete, restore, or purge a Key Vault secret using JavaScript. -+
key-vault Javascript Developer Guide Enable Disable Secret https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/javascript-developer-guide-enable-disable-secret.md
Title: Enable an Azure Key Vault secret with JavaScript
description: Enable or disable a Key Vault secret using JavaScript. -+
key-vault Javascript Developer Guide Find Secret https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/javascript-developer-guide-find-secret.md
Title: Find or list Azure Key Vault secrets with JavaScript
description: Find a set of secrets or list secrets or secret version in a Key Vault JavaScript. -+
key-vault Javascript Developer Guide Get Secret https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/javascript-developer-guide-get-secret.md
Title: Get Azure Key Vault secret with JavaScript
description: Get the current secret or a specific version of a secret in Azure Key Vault with JavaScript. -+
key-vault Javascript Developer Guide Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/javascript-developer-guide-get-started.md
Title: Getting started with Azure Key Vault secret in JavaScript
description: Set up your environment, install npm packages, and authenticate to Azure to get started using Key Vault secrets in JavaScript -+
key-vault Javascript Developer Guide Set Update Rotate Secret https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/javascript-developer-guide-set-update-rotate-secret.md
Title: Create, update, or rotate Azure Key Vault secrets with JavaScript
description: Create or update with the set method, or rotate secrets with JavaScript. -+
key-vault Multiline Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/multiline-secrets.md
Title: Store a multiline secret in Azure Key Vault
description: Tutorial showing how to set multiline secrets from Azure Key Vault using Azure CLI and PowerShell -+
key-vault Overview Storage Keys Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/overview-storage-keys-powershell.md
Title: Azure Key Vault managed storage account - PowerShell version description: The managed storage account feature provides a seamless integration, between Azure Key Vault and an Azure storage account. -+
key-vault Overview Storage Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/overview-storage-keys.md
Title: Manage storage account keys with Azure Key Vault and the Azure CLI
description: Storage account keys provide seamless integration between Azure Key Vault and key-based access to an Azure storage account. -+
key-vault Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-bicep.md
Title: Azure Quickstart - Create an Azure key vault and a secret using Bicep | M
description: Quickstart showing how to create Azure key vaults, and add secrets to the vaults using Bicep. -+
key-vault Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-cli.md
Title: Quickstart - Set and retrieve a secret from Azure Key Vault description: Quickstart showing how to set and retrieve a secret from Azure Key Vault using Azure CLI -+
key-vault Quick Create Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-go.md
description: Learn how to create, retrieve, and delete secrets from an Azure key
Last updated 01/10/2024-+ ms.devlang: golang
key-vault Quick Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-java.md
Last updated 01/11/2023-+ ms.devlang: java
key-vault Quick Create Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-net.md
description: Learn how to create, retrieve, and delete secrets from an Azure key
Last updated 01/20/2023-+ ms.devlang: csharp
key-vault Quick Create Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-node.md
description: Learn how to create, retrieve, and delete secrets from an Azure key
Last updated 02/02/2023-+ ms.devlang: javascript
key-vault Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-portal.md
Title: Azure Quickstart - Set and retrieve a secret from Key Vault using Azure p
description: Quickstart showing how to set and retrieve a secret from Azure Key Vault using the Azure portal -+ Last updated 04/04/2024
key-vault Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-powershell.md
Title: Quickstart - Set & retrieve a secret from Key Vault using PowerShell
description: In this quickstart, learn how to create, retrieve, and delete secrets from an Azure Key Vault using Azure PowerShell. -+
key-vault Quick Create Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-python.md
description: Learn how to create, retrieve, and delete secrets from an Azure key
Last updated 02/03/2023-+ ms.devlang: python
key-vault Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-template.md
tags: azure-resource-manager-+
key-vault Secrets Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/secrets-best-practices.md
Title: Best practices for secrets management - Azure Key Vault | Microsoft Docs
description: Learn about best practices for Azure Key Vault secrets management. tags: azure-key-vault-+ Last updated 09/21/2021
key-vault Storage Keys Sas Tokens Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/storage-keys-sas-tokens-code.md
Title: Fetch shared access signature tokens in code | Azure Key Vault description: The managed storage account feature provides a seamless integration between Azure Key Vault and an Azure storage account. This sample uses the Azure SDK for .NET to manage SAS tokens. -+
key-vault Tutorial Rotation Dual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/tutorial-rotation-dual.md
description: Use this tutorial to learn how to automate the rotation of a secret
tags: 'rotation'-+ Last updated 01/30/2024
key-vault Tutorial Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/tutorial-rotation.md
tags: 'rotation' -+ Last updated 01/20/2023
load-balancer Load Balancer Custom Probe Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-custom-probe-overview.md
For HTTP/S probes, if the configured interval is longer than the above timeout p
## Probe source IP address
-For Load Balancer's health probe to mark up your instance, you **must** allow 168.63.129.16 IP address in any Azure [network security groups](../virtual-network/network-security-groups-overview.md) and local firewall policies. The **AzureLoadBalancer** service tag identifies this source IP address in your [network security groups](../virtual-network/network-security-groups-overview.md) and permits health probe traffic by default. You can learn more about this IP [here](../virtual-network/what-is-ip-address-168-63-129-16.md).
+For Azure Load Balancer's health probe to mark your instance as up, you **must** allow the 168.63.129.16 IP address in any Azure [network security groups](../virtual-network/network-security-groups-overview.md) and local firewall policies. The **AzureLoadBalancer** service tag identifies this source IP address in your [network security groups](../virtual-network/network-security-groups-overview.md) and permits health probe traffic by default. You can learn more about this IP [here](../virtual-network/what-is-ip-address-168-63-129-16.md).
-If you don't allow the [source IP](#probe-source-ip-address) of the probe in your firewall policies, the health probe fails as it is unable to reach your instance. In turn, Azure Load Balancer marks your instance as *down* due to the health probe failure. This misconfiguration can cause your load balanced application scenario to fail. All IPv4 Load Balancer health probes originate from the IP address 168.63.129.16 as their source. IPv6 probes use a link-local address as their source.
+If you don't allow the [source IP](#probe-source-ip-address) of the probe in your firewall policies, the health probe fails as it is unable to reach your instance. In turn, Azure Load Balancer marks your instance as *down* due to the health probe failure. This misconfiguration can cause your load balanced application scenario to fail. All IPv4 Load Balancer health probes originate from the IP address 168.63.129.16 as their source. IPv6 probes use a link-local address (fe80::1234:5678:9abc) as their source. For a dual-stack Azure Load Balancer, you must [configure a Network Security Group](./virtual-network-ipv4-ipv6-dual-stack-standard-load-balancer-cli.md#create-a-network-security-group-rule-for-inbound-and-outbound-connections) for the IPv6 health probe to function.
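As a sketch of what an explicit allow rule can look like, the following Azure CLI command creates an inbound NSG rule using the **AzureLoadBalancer** service tag; the resource names and priority are placeholders, and the default NSG rules already include an equivalent allow unless you've overridden them:

```bash
# Allow inbound health probe traffic from the Azure Load Balancer probe source addresses
az network nsg rule create \
    --resource-group myResourceGroup \
    --nsg-name myNSG \
    --name AllowAzureLoadBalancerProbes \
    --priority 200 \
    --direction Inbound \
    --access Allow \
    --protocol '*' \
    --source-address-prefixes AzureLoadBalancer \
    --source-port-ranges '*' \
    --destination-address-prefixes '*' \
    --destination-port-ranges '*'
```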
## Limitations
load-balancer Load Balancer Tcp Reset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-tcp-reset.md
Title: Load Balancer TCP Reset and idle timeout in Azure
-description: With this article, learn about Azure Load Balancer with bidirectional TCP RST packets on idle timeout.
+description: With this article, learn about Azure Load Balancer with bidirectional TCP Reset packets on idle timeout.
Previously updated : 01/19/2024 Last updated : 07/31/2024 # Load Balancer TCP Reset and Idle Timeout
-You can use [Standard Load Balancer](./load-balancer-overview.md) to create a more predictable application behavior for your scenarios by enabling TCP Reset on Idle for a given rule. Load Balancer's default behavior is to silently drop flows when the idle timeout of a flow is reached. Enabling TCP reset causes Load Balancer to send bidirectional TCP Resets (TCP RST packets) on idle timeout to inform your application endpoints that the connection timed out and is no longer usable. Endpoints can immediately establish a new connection if needed.
+You can use [Standard Load Balancer](./load-balancer-overview.md) to create a more predictable application behavior for your scenarios by enabling TCP Reset on Idle for a given rule. Load Balancer's default behavior is to silently drop flows when the idle timeout of a flow is reached. Enabling TCP reset causes Load Balancer to send bidirectional TCP Resets (TCP reset packets) on idle timeout to inform your application endpoints that the connection timed out and is no longer usable. Endpoints can immediately establish a new connection if needed.
:::image type="content" source="media/load-balancer-tcp-reset/load-balancer-tcp-reset.png" alt-text="Diagram shows default TCP reset behavior of network nodes."::: ## TCP reset
-You change this default behavior and enable sending TCP Resets on idle timeout on inbound NAT rules, load balancing rules, and [outbound rules](./load-balancer-outbound-connections.md#outboundrules). When enabled per rule, Load Balancer sends bidirectional TCP Resets (TCP RST packets) to both client and server endpoints at the time of idle timeout for all matching flows.
+You change this default behavior and enable sending TCP Resets on idle timeout on inbound NAT rules, load balancing rules, and [outbound rules](./load-balancer-outbound-connections.md#outboundrules). When enabled per rule, Load Balancer sends bidirectional TCP Resets (TCP RST packets) to both client and server endpoints at the time of idle timeout for all matching flows.
-Endpoints receiving TCP RST packets close the corresponding socket immediately. This provides an immediate notification to the endpoint's connection release and any future communication on the same TCP connection will fail. Applications can purge connections when the socket closes and reestablish connections as needed without waiting for the TCP connection to eventually time-out.
+Endpoints receiving TCP reset packets close the corresponding socket immediately. This provides an immediate notification that the connection is released, and any future communication on the same TCP connection will fail. Applications can purge connections when the socket closes and reestablish connections as needed without waiting for the TCP connection to eventually time out.
For many scenarios, TCP reset can reduce the need to send TCP (or application layer) keepalives to refresh the idle timeout of a flow.
-If your idle durations exceed configuration limits or your application shows an undesirable behavior with TCP Resets enabled, you can still need to use TCP keepalives, or application layer keepalives, to monitor the liveness of the TCP connections. Further, keepalives can also remain useful for when the connection is proxied somewhere in the path, particularly application layer keepalives.
+If your idle durations exceed configuration limits or your application shows an undesirable behavior with TCP Resets enabled, you might still need to use TCP keepalives, or application layer keepalives, to monitor the liveness of the TCP connections. Further, keepalives can also remain useful when the connection is proxied somewhere in the path, particularly application layer keepalives.
By carefully examining the entire end-to-end scenario, you can determine the benefits from enabling TCP Resets and adjusting the idle timeout. Then you can decide whether more steps are required to ensure the desired application behavior.
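One way to configure this is in a Resource Manager template. The following fragment is a minimal sketch of a load balancing rule with TCP reset enabled and a custom idle timeout; the rule name, ports, and resource IDs are placeholders, and the property names follow the `Microsoft.Network/loadBalancers` schema:

```json
{
  "name": "myHttpsRule",
  "properties": {
    "protocol": "Tcp",
    "frontendPort": 443,
    "backendPort": 443,
    "idleTimeoutInMinutes": 15,
    "enableTcpReset": true,
    "frontendIPConfiguration": { "id": "<frontend-ip-configuration-resource-id>" },
    "backendAddressPool": { "id": "<backend-address-pool-resource-id>" }
  }
}
```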
Azure Load Balancer has a 4 minutes to 100-minutes timeout range for Load Balanc
When the connection is closed, your client application can receive the following error message: "The underlying connection was closed: A connection that was expected to be kept alive was closed by the server."
-If TCP RSTs are enabled, and it's missed for any reason, RSTs will be sent for any subsequent packets. If the TCP RST option isn't enabled, then packets will be silently dropped.
+If TCP resets are enabled and a reset is missed for any reason, resets are sent for any subsequent packets. If the TCP reset option isn't enabled, packets are silently dropped.
A common practice is to use a TCP keep-alive. This practice keeps the connection active for a longer period. For more information, see these [.NET examples](/dotnet/api/system.net.servicepoint.settcpkeepalive). With keep-alive enabled, packets are sent during periods of inactivity on the connection. Keep-alive packets ensure the idle timeout value isn't reached and the connection is maintained for a long period.
It's important to take into account how the idle timeout values set for differen
### Outbound - If there's an outbound rule with an idle timeout value different than 4 minutes (which is what public IP outbound idle timeout is locked at), the outbound rule idle timeout takes precedence.-- Because a NAT gateway will always take precedence over load balancer outbound rules (and over public IP addresses assigned directly to VMs), the idle timeout value assigned to the NAT gateway will be used. (Along the same lines, the locked public IP outbound idle timeouts of 4 minutes of any IPs assigned to the NAT GW aren't considered.)
+- Because a NAT gateway always takes precedence over load balancer outbound rules (and over public IP addresses assigned directly to VMs), the idle timeout value assigned to the NAT gateway is used. (Along the same lines, the locked 4-minute public IP outbound idle timeout of any IPs assigned to the NAT gateway isn't considered.)
## Limitations - TCP reset only sent during TCP connection in ESTABLISHED state. - TCP idle timeout doesn't affect load balancing rules on UDP protocol.-- TCP reset isn't supported for ILB HA ports when a network virtual appliance is in the path. A workaround could be to use outbound rule with TCP reset from NVA.
+- TCP reset isn't supported for internal load balancer HA ports when a network virtual appliance is in the path. A workaround is to use an outbound rule with TCP reset from the network virtual appliance.
## Next steps - Learn about [Standard Load Balancer](./load-balancer-overview.md). - Learn about [outbound rules](./load-balancer-outbound-connections.md#outboundrules).-- [Configure TCP RST on Idle Timeout](load-balancer-tcp-idle-timeout.md)
+- [Configure TCP RST on Idle Timeout](load-balancer-tcp-idle-timeout.md)
load-balancer Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/whats-new.md
The product group is actively working on resolutions for the following known iss
|Issue |Description |Mitigation | | - |||
-| IP based LB outbound IP | IP based LB uses Azure's Default Outbound Access IP for outbound | In order to prevent outbound access from this IP, use NAT Gateway for a predictable IP address and to prevent SNAT port exhaustion |
-| numberOfProbes, "Unhealthy threshold" | Health probe configuration property numberOfProbes, otherwise known as "Unhealthy threshold" in Portal, isn't respected. Load Balancer health probes will probe up/down immediately after one probe regardless of the property's configured value | To control the number of successful or failed consecutive probes necessary to mark backend instances as healthy or unhealthy, please leverage the property ["probeThreshold"](/azure/templates/microsoft.network/loadbalancers?pivots=deployment-language-arm-template#probepropertiesformat-1) instead |
+| IP-based Load Balancer outbound IP | IP-based Load Balancers currently aren't secure by default and use the backend instances' default outbound access IPs for outbound connections. If the Load Balancer is a public Load Balancer, either the default outbound access IPs or the Load Balancer's frontend IP might be used. | To prevent backend instances behind an IP-based Load Balancer from using default outbound access, use NAT Gateway for a predictable IP address and to prevent SNAT port exhaustion, or use the private subnet feature to secure your Load Balancer. |
+| numberOfProbes, "Unhealthy threshold" | Health probe configuration property numberOfProbes, otherwise known as "Unhealthy threshold" in the portal, isn't respected. Load Balancer health probes probe up/down immediately after one probe, regardless of the property's configured value. | To control the number of successful or failed consecutive probes necessary to mark backend instances as healthy or unhealthy, use the property ["probeThreshold"](/azure/templates/microsoft.network/loadbalancers?pivots=deployment-language-arm-template#probepropertiesformat-1) instead, as shown in the sketch after this table. |
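For reference, a minimal health probe sketch that relies on `probeThreshold` rather than `numberOfProbes` might look like the following (assuming a recent `Microsoft.Network/loadBalancers` API version; the probe name and values are placeholders):

```json
{
  "name": "myHealthProbe",
  "properties": {
    "protocol": "Tcp",
    "port": 80,
    "intervalInSeconds": 5,
    "probeThreshold": 2
  }
}
```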
logic-apps Logic Apps Data Operations Code Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-data-operations-code-samples.md
ms.suite: integration Previously updated : 12/13/2023 Last updated : 07/31/2024 # Data operation code samples for Azure Logic Apps [!INCLUDE [logic-apps-sku-consumption-standard](../../includes/logic-apps-sku-consumption-standard.md)]
-Here are the code samples for the data operation action definitions in the article, [Perform data operations](../logic-apps/logic-apps-perform-data-operations.md). You can use these samples for when you want to try the examples with your own logic app's underlying workflow definition, Azure subscription, and API connections. Just copy and paste these action definitions into the code view editor for your logic app's workflow definition, and then modify the definitions for your specific workflow.
+Here are the code samples for the data operation action definitions in the article, [Perform data operations](logic-apps-perform-data-operations.md). You can use these samples when you want to try the examples with your own logic app's underlying workflow definition, Azure subscription, and API connections. Just copy and paste these action definitions into the code view editor for your logic app's workflow definition, and then modify the definitions for your specific workflow.
Based on JavaScript Object Notation (JSON) standards, these action definitions appear in alphabetical order. However, in the Logic App Designer, each definition appears in the correct sequence within your workflow because each action definition's `runAfter` property specifies the run order.
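The following hedged fragment illustrates this point: the actions appear alphabetically in JSON, but the `runAfter` property determines which one actually runs first (action names and inputs here are placeholders):

```json
"actions": {
    "A_second_action": {
        "type": "Compose",
        "inputs": "runs second",
        "runAfter": {
            "Z_first_action": [ "Succeeded" ]
        }
    },
    "Z_first_action": {
        "type": "Compose",
        "inputs": "runs first",
        "runAfter": {}
    }
}
```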
Based on JavaScript Object Notation (JSON) standards, these action definitions a
## Compose
-To try the [**Compose** action example](../logic-apps/logic-apps-perform-data-operations.md#compose-action),
+To try the [**Compose** action example](logic-apps-perform-data-operations.md#compose-action),
here are the action definitions you can use: ```json
here are the action definitions you can use:
{ "name": "firstNameVar", "type": "String",
- "value": "Sophie "
+ "value": "Sophia "
} ] },
here are the action definitions you can use:
{ "name": "lastNameVar", "type": "String",
- "value": "Owen"
+ "value": "Owens"
} ] },
here are the action definitions you can use:
## Create CSV table
-To try the [**Create CSV table** action example](../logic-apps/logic-apps-perform-data-operations.md#create-csv-table-action), here are the action definitions you can use:
+To try the [**Create CSV table** action example](logic-apps-perform-data-operations.md#create-csv-table-action), here are the action definitions you can use:
```json "actions": {
To try the [**Create CSV table** action example](../logic-apps/logic-apps-perfor
## Create HTML table
-To try the [**Create HTML table** action example](../logic-apps/logic-apps-perform-data-operations.md#create-html-table-action),
+To try the [**Create HTML table** action example](logic-apps-perform-data-operations.md#create-html-table-action),
here are the action definitions you can use: ```json
here are the action definitions you can use:
## Filter array
-To try the [**Filter array** action example](../logic-apps/logic-apps-perform-data-operations.md#filter-array-action), here are the action definitions you can use:
+To try the [**Filter array** action example](logic-apps-perform-data-operations.md#filter-array-action), here are the action definitions you can use:
```json "actions": {
To try the [**Filter array** action example](../logic-apps/logic-apps-perform-da
## Join
-To try the [**Join** action example](../logic-apps/logic-apps-perform-data-operations.md#join-action), here are the action definitions you can use:
+To try the [**Join** action example](logic-apps-perform-data-operations.md#join-action), here are the action definitions you can use:
```json "actions": {
To try the [**Join** action example](../logic-apps/logic-apps-perform-data-opera
## Parse JSON
-To try the [**Parse JSON** action example](../logic-apps/logic-apps-perform-data-operations.md#parse-json-action), here are the action definitions you can use:
+To try the [**Parse JSON** action example](logic-apps-perform-data-operations.md#parse-json-action), here are the action definitions you can use:
```json "actions": {
To try the [**Parse JSON** action example](../logic-apps/logic-apps-perform-data
"type": "Object", "value": { "Member": {
- "Email": "Sophie.Owen@contoso.com",
- "FirstName": "Sophie",
- "LastName": "Owen"
+ "Email": "Sophia.Owens@fabrikam.com",
+ "FirstName": "Sophia",
+ "LastName": "Owens"
} } }
To try the [**Parse JSON** action example](../logic-apps/logic-apps-perform-data
## Select
-To try the [**Select** action example](../logic-apps/logic-apps-perform-data-operations.md#select-action), the following action definitions create a JSON object array from an integer array:
+To try the [**Select** action example](logic-apps-perform-data-operations.md#select-action), the following action definitions create a JSON object array from an integer array:
```json "actions": {
To try the [**Select** action example](../logic-apps/logic-apps-perform-data-ope
}, ```
-The following example shows action definitions that create a string array from a JSON object array, but for this task, next to the **Map** box, switch to text mode (![Icon for text mode.](media/logic-apps-perform-data-operations/text-mode.png)) in the designer, or use the code view editor instead:
+The following example shows action definitions that create a string array from a JSON object array, but for this task, next to the **Map** box, switch to text mode (**T** icon) in the designer, or use the code view editor instead:
```json "actions": {
The following example shows action definitions that create a string array from a
## Next steps
-* [Perform data operations](../logic-apps/logic-apps-perform-data-operations.md)
+* [Perform data operations](logic-apps-perform-data-operations.md)
logic-apps Logic Apps Perform Data Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-perform-data-operations.md
ms.suite: integration Previously updated : 12/13/2023 Last updated : 07/31/2024 # Customer intent: As a developer using Azure Logic Apps, I want to perform various data operations on various data types for my workflow in Azure Logic Apps.
This how-to guide shows how you can work with data in your logic app workflow in
* Create an array based on the specified properties for all the items in another array. * Create a string from all the items in an array and separate those items using a specified character.
-For other ways to work with data, review the [data manipulation functions](workflow-definition-language-functions-reference.md) that Azure Logic Apps provides.
- ## Prerequisites * An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* The logic app workflow where you want to perform the data operation. This workflow must already have a [trigger](logic-apps-overview.md#logic-app-concepts) as the first step in your workflow. Both Consumption and Standard logic app workflows support the data operations described in this guide.
+* The logic app workflow where you want to perform the data operation. Both Consumption and Standard logic app workflows support the data operations described in this guide.
- All data operations are available only as actions. So, before you can use these actions, your workflow must already start with a trigger and include any other actions required to create the outputs that you want to use in the data operation.
+ All data operations are available only as actions. So, before you can use these actions, your workflow must already start with a [trigger](logic-apps-overview.md#logic-app-concepts) as the first step and include any other actions required to create the outputs that you want to use in the data operation.
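   For example, a minimal Consumption workflow definition that starts with a **Recurrence** trigger and leaves room for data operation actions might look like the following sketch (the recurrence values are placeholders):

   ```json
   {
       "definition": {
           "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
           "contentVersion": "1.0.0.0",
           "triggers": {
               "Recurrence": {
                   "type": "Recurrence",
                   "recurrence": {
                       "frequency": "Day",
                       "interval": 1
                   }
               }
           },
           "actions": {},
           "outputs": {}
       }
   }
   ```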
## Data operation actions
The following actions help you work with data in JavaScript Object Notation (JSO
| [**Compose**](#compose-action) | Create a message, or string, from multiple inputs that can have various data types. You can then use this string as a single input, rather than repeatedly entering the same inputs. For example, you can create a single JSON message from various inputs. | | [**Parse JSON**](#parse-json-action) | Create user-friendly data tokens for properties in JSON content so that you can more easily use the properties in your logic apps. |
-To create more complex JSON transformations, see [Perform advanced JSON transformations with Liquid templates](../logic-apps/logic-apps-enterprise-integration-liquid-transform.md).
+To create more complex JSON transformations, see [Perform advanced JSON transformations with Liquid templates](logic-apps-enterprise-integration-liquid-transform.md).
### Array actions
For example, you can construct a JSON message from multiple variables, such as s
`{ "age": <ageVar>, "fullName": "<lastNameVar>, <firstNameVar>" }`
-and creates the following output:
+The action then creates the following output:
`{"age":35,"fullName":"Owens,Sophia"}`
-To try the **Compose** action, follow these steps by using the workflow designer. Or, if you prefer working in the code view editor, you can copy the example **Compose** and **Initialize variable** action definitions from this guide into your own logic app's underlying workflow definition: [Data operation code examples - Compose](../logic-apps/logic-apps-data-operations-code-samples.md#compose-action-example). For more information about the **Compose** action in the underlying JSON workflow definition, see the [Compose action](logic-apps-workflow-actions-triggers.md#compose-action).
+To try the **Compose** action, follow these steps by using the workflow designer. Or, if you prefer working in the code view editor, you can copy the example **Compose** and **Initialize variable** action definitions from this guide into your own logic app's underlying workflow definition: [Data operation code examples - Compose](logic-apps-data-operations-code-samples.md#compose-action-example). For more information about the **Compose** action in the underlying JSON workflow definition, see the [Compose action](logic-apps-workflow-actions-triggers.md#compose-action).
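As a hedged sketch, the finished **Compose** action in this example resolves to an underlying definition similar to the following; the variable names match the steps below, and the `runAfter` entry is omitted for brevity because it depends on your workflow:

```json
"Compose": {
    "type": "Compose",
    "inputs": {
        "age": "@variables('ageVar')",
        "fullName": "@{variables('lastNameVar')}, @{variables('firstNameVar')}"
    }
}
```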
### [Consumption](#tab/consumption) 1. In the [Azure portal](https://portal.azure.com), Visual Studio, or Visual Studio Code, open your logic app workflow in the designer.
- This example uses the Azure portal and a sample workflow with the **Recurrence** trigger followed by several **Initialize variable** actions. These actions are set up to create two string variables and an integer variable.
+ This example uses the Azure portal and a sample workflow with the **Recurrence** trigger followed by several **Variables** actions named **Initialize variable**. These actions are set up to create two string variables and an integer variable.
+
+ | Operation | Properties and values |
+ |--|--|
+ | **Initialize variable** | - **Name**: firstNameVar <br>- **Type**: String <br>- **Value**: Sophia |
+ | **Initialize variable** | - **Name**: lastNameVar <br>- **Type**: String <br>- **Value**: Owens |
+ | **Initialize variable** | - **Name**: ageVar <br>- **Type**: Integer <br>- **Value**: 35 |
- ![Screenshot showing the Azure portal and the designer with a sample Consumption workflow for the Compose action.](./media/logic-apps-perform-data-operations/sample-start-compose-action-consumption.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/sample-start-compose-action-consumption.png" alt-text="Screenshot shows Azure portal, Consumption workflow designer, and example workflow for Compose action." lightbox="media/logic-apps-perform-data-operations/sample-start-compose-action-consumption.png":::
-1. In your workflow where you want to create the output, follow one of these steps:
+1. [Follow these general steps to add the **Data Operations** action named **Compose**](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
- * To add an action under the last step, select **New step**.
+1. On the designer, select the **Compose** action, if not already selected. In the **Inputs** box, enter the inputs to use for creating the output.
- * To add an action between steps, move your mouse over the connecting arrow so the plus sign (**+**) appears. Select the plus sign, and then select **Add an action**.
+ For this example, follow these steps:
-1. Under the **Choose an operation** search box, select **Built-in**. In the search box, enter **compose**.
+ 1. In the **Inputs** box, enter the following sample JSON object, including the spacing as shown:
-1. From the actions list, select the action named **Compose**.
+ ```json
+ {
+ "age": ,
+ "fullName": " , "
+ }
+ ```
- ![Screenshot showing the designer for a Consumption workflow, the "Choose an operation" search box with "compose" entered, and the "Compose" action selected.](./media/logic-apps-perform-data-operations/select-compose-action-consumption.png)
+ 1. In the JSON object, put your cursor in the corresponding locations, select the dynamic content list (lightning icon), and then select the corresponding variable from the list:
-1. In the **Inputs** box, enter the inputs to use for creating the output.
+ | JSON property | Variable |
+ ||-|
+ | **`age`** | **ageVar** |
+ | **`fullName`** | "**lastNameVar**, **firstNameVar**" |
- For this example, select inside the **Inputs** box, which opens the dynamic content list. From that list, select the previously created variables:
+ The following example shows both added and not yet added variables:
- ![Screenshot showing the designer for a Consumption workflow, the "Compose" action, and the selected inputs to use.](./media/logic-apps-perform-data-operations/configure-compose-action-consumption.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/configure-compose-action.png" alt-text="Screenshot shows Consumption workflow, Compose action, dynamic content list, and selected inputs to use." lightbox="media/logic-apps-perform-data-operations/configure-compose-action.png":::
- The following screenshot shows the finished example **Compose** action:
+ The following example shows the finished sample **Compose** action:
- ![Screenshot showing the designer for a Consumption workflow and the finished example for the "Compose" action.](./media/logic-apps-perform-data-operations/finished-compose-action-consumption.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/finished-compose-action.png" alt-text="Screenshot shows Consumption workflow and finished example Compose action." lightbox="media/logic-apps-perform-data-operations/finished-compose-action.png":::
1. Save your workflow. On the designer toolbar, select **Save**.
To try the **Compose** action, follow these steps by using the workflow designer
1. In the [Azure portal](https://portal.azure.com) or Visual Studio Code, open your logic app workflow in the designer.
- This example uses the Azure portal and a sample workflow with the **Recurrence** trigger followed by several **Initialize variable** actions. These actions are set up to create two string variables and an integer variable.
+ This example uses the Azure portal and a sample workflow with the **Recurrence** trigger followed by several **Variables** actions named **Initialize variable**. These actions are set up to create two string variables and an integer variable.
- ![Screenshot showing the Azure portal and the designer for a sample Standard workflow for the Compose action.](./media/logic-apps-perform-data-operations/sample-start-compose-action-standard.png)
+ | Operation | Properties and values |
+ |--|--|
+ | **Initialize variable** | - **Name**: firstNameVar <br>- **Type**: String <br>- **Value**: Sophia |
+ | **Initialize variable** | - **Name**: lastNameVar <br>- **Type**: String <br>- **Value**: Owens |
+ | **Initialize variable** | - **Name**: ageVar <br>- **Type**: Integer <br>- **Value**: 35 |
-1. In your workflow where you want to create the output, follow one of these steps:
+ :::image type="content" source="media/logic-apps-perform-data-operations/sample-start-compose-action-standard.png" alt-text="Screenshot shows Azure portal, Standard workflow designer, and example workflow for Compose action." lightbox="media/logic-apps-perform-data-operations/sample-start-compose-action-standard.png":::
- * To add an action under the last step, select the plus sign (**+**), and then select **Add an action**.
+1. [Follow these general steps to add the **Data Operations** action named **Compose**](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
- * To add an action between steps, select the plus sign (**+**) between those steps, and then select **Add an action**.
+1. On the designer, select the **Compose** action, if not already selected. In the **Inputs** box, enter the inputs to use for creating the output.
-1. After the connector gallery opens, [follow these general steps to find the **Data Operations** action named **Compose**](create-workflow-with-trigger-or-action.md?tabs=standard#add-an-action-to-run-a-task).
+ For this example, follow these steps:
- > [!NOTE]
- >
- > If the connector results box shows the message that **We couldn't find any results for compose**,
- > you get this result because the connector name is actually **Data Operations**, not **Compose**,
- > which is the action name.
+ 1. In the **Inputs** box, enter the following sample JSON object, including the spacing as shown:
-1. After the action information box opens, in the **Inputs** box, enter the inputs to use for creating the output.
+ ```json
+ {
+ "age": ,
+ "fullName": " , "
+ }
+ ```
- For this example, select inside the **Inputs** box, and then select the lightning icon, which opens the dynamic content list. From that list, select the previously created variables:
+ 1. In the JSON object, put your cursor in the corresponding locations, select the dynamic content list (lightning icon), and then select the corresponding variable from the list:
- ![Screenshot showing the designer for a Standard workflow, the "Compose" action, and the selected inputs to use.](./media/logic-apps-perform-data-operations/configure-compose-action-standard.png)
+ | JSON property | Variable |
+ ||-|
+ | **`age`** | **ageVar** |
+ | **`fullName`** | "**lastNameVar**, **firstNameVar**" |
- The following screenshot shows the finished example **Compose** action:
+ The following example shows both added and not yet added variables:
- ![Screenshot showing the designer for a Standard workflow and the finished example for the "Compose" action.](./media/logic-apps-perform-data-operations/finished-compose-action-standard.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/configure-compose-action.png" alt-text="Screenshot shows Standard workflow, Compose action, dynamic content list, and selected inputs to use." lightbox="media/logic-apps-perform-data-operations/configure-compose-action.png":::
+
+ The following example shows the finished sample **Compose** action:
+
+ :::image type="content" source="media/logic-apps-perform-data-operations/finished-compose-action.png" alt-text="Screenshot shows Standard workflow and finished example Compose action." lightbox="media/logic-apps-perform-data-operations/finished-compose-action.png":::
1. Save your workflow. On the designer toolbar, select **Save**.
To try the **Compose** action, follow these steps by using the workflow designer
To confirm whether the **Compose** action creates the expected results, send yourself a notification that includes output from the **Compose** action.
-#### [Consumption](#tab/consumption)
- 1. In your workflow, add an action that can send you the results from the **Compose** action. This example continues by using the Office 365 Outlook action named **Send an email**.
-1. In this action, for each box where you want the results to appear, select inside each box, which opens the dynamic content list. From that list, under the **Compose** action, select **Outputs**.
+1. In this action, for each box where you want the results to appear, select inside each box, and then select the dynamic content list. From that list, under the **Compose** action, select **Outputs**.
For this example, the result appears in the email's body, so add the **Outputs** field to the **Body** box.
- ![Screenshot showing the Azure portal, designer for an example Consumption workflow, and the "Send an email" action with the output from the preceding "Compose" action.](./media/logic-apps-perform-data-operations/send-email-compose-action-consumption.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/send-email-compose-action.png" alt-text="Screenshot shows workflow designer, the action named Send an email, and output from the preceding Compose action." lightbox="media/logic-apps-perform-data-operations/send-email-compose-action.png":::
-1. Save your workflow, and then manually run your workflow. On the designer toolbar, select **Run Trigger** > **Run**.
+1. Save your workflow, and then manually run your workflow.
-#### [Standard](#tab/standard)
+ - Consumption workflow: On the designer toolbar, select **Run** > **Run**.
+ - Standard workflow: On the workflow navigation menu, select **Overview**. On the **Overview** page toolbar, select **Run** > **Run**.
-1. In your workflow, add an action that can send you the results from the **Compose** action.
+If you used the Office 365 Outlook action, the following example shows the result:
- This example continues by using the Office 365 Outlook action named **Send an email**.
-
-1. In this action, for each box where you want the results to appear, select inside each box, and then select the lightning icon, which opens the dynamic content list. From that list, under the **Compose** action, select **Outputs**.
-
- > [!NOTE]
- >
- > If the dynamic content list shows the message that **We can't find any outputs to match this input format**,
- > select **See more** next to the **Compose** label in the list.
- >
- > ![Screenshot showing a Standard workflow and the dynamic content list with "See more" selected for the "Compose" action.](./media/logic-apps-perform-data-operations/send-email-compose-action-see-more.png)
-
- For this example, the result appears in the email's body, so add the **Outputs** field to the **Body** box.
-
- ![Screenshot showing the Azure portal, designer for an example Standard workflow, and the "Send an email" action with the output from the preceding "Compose" action.](./media/logic-apps-perform-data-operations/send-email-compose-action-standard.png)
-
-1. Save your workflow, and then manually run your workflow. On the workflow navigation menu, select **Overview** > **Run Trigger** > **Run**.
---
-If you used the Office 365 Outlook action, you get a result similar to the following screenshot:
-
-![Screenshot showing an email with the "Compose" action results.](./media/logic-apps-perform-data-operations/compose-email-results.png)
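In the underlying workflow definition, the email body references the **Compose** output through an `outputs()` expression. The following fragment is a hedged sketch; the exact parameter names depend on the Office 365 Outlook connector version you use:

```json
"body": {
    "To": "Sophia.Owens@fabrikam.com",
    "Subject": "Compose action results",
    "Body": "<p>@{outputs('Compose')}</p>"
}
```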
<a name="create-csv-table-action"></a>
To try the **Create CSV table** action, follow these steps by using the workflo
This example uses the Azure portal and a sample workflow with the **Recurrence** trigger followed by an **Initialize variable** action. The action is set up to create a variable where the initial value is an array that has some properties and values in JSON format.
- ![Screenshot showing the Azure portal and the designer with a sample Consumption workflow for the "Create CSV table" action.](./media/logic-apps-perform-data-operations/sample-start-create-table-action-consumption.png)
-
-1. In your workflow where you want to create the CSV table, follow one of these steps:
+ | Operation | Properties and values |
+ |--|--|
+ | **Initialize variable** | - **Name**: myJSONArray <br>- **Type**: Array <br>- **Value**: `[ { "Description": "Apples", "Product_ID": 1 }, { "Description": "Oranges", "Product_ID": 2 }]` |
- * To add an action under the last step, select **New step**.
+ :::image type="content" source="media/logic-apps-perform-data-operations/sample-start-create-table-action-consumption.png" alt-text="Screenshot shows Consumption workflow designer, and example workflow for action named Create CSV table." lightbox="media/logic-apps-perform-data-operations/sample-start-create-table-action-consumption.png":::
- * To add an action between steps, move your mouse over the connecting arrow so the plus sign (**+**) appears. Select the plus sign, and then select **Add an action**.
+1. [Follow these general steps to add the **Data Operations** action named **Create CSV table**](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
-1. Under the **Choose an operation** search box, select **Built-in**. In the search box, enter **create csv table**.
+1. On the designer, select the **Create CSV table** action, if not already selected. In the **From** box, enter the array or expression to use for creating the table.
-1. From the actions list, select the action named **Create CSV table**.
+ For this example, select inside the **From** box, and select the dynamic content list (lightning icon). From that list, select the **myJSONArray** variable:
- ![Screenshot showing the designer for a Consumption workflow, the "Choose an operation" search box with "create csv table" entered, and the "Create CSV table" action selected.](./media/logic-apps-perform-data-operations/select-create-csv-table-action-consumption.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/configure-create-csv-table-action.png" alt-text="Screenshot shows Consumption workflow, action named Create CSV table, and the selected input to use." lightbox="media/logic-apps-perform-data-operations/configure-create-csv-table-action.png":::
-1. In the **From** box, enter the array or expression to use for creating the table.
-
- For this example, select inside the **From** box, which opens the dynamic content list. From that list, select the previously created variable:
-
- ![Screenshot showing the designer for a Consumption workflow, the "Create CSV table" action, and the selected input to use.](./media/logic-apps-perform-data-operations/configure-create-csv-table-action-consumption.png)
-
- > [!NOTE]
+ > [!TIP]
> > To create user-friendly tokens for the properties in JSON objects so that you can select
- > those properties as inputs, use the action named [Parse JSON](#parse-json-action)
+ > those properties as inputs, use the action named [**Parse JSON**](#parse-json-action)
> before you use the **Create CSV table** action. The following screenshot shows the finished example **Create CSV table** action:
- ![Screenshot showing the designer for a Consumption workflow and the finished example for the "Create CSV table" action.](./media/logic-apps-perform-data-operations/finished-create-csv-table-action-consumption.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/finished-create-csv-table-action.png" alt-text="Screenshot shows Consumption workflow and finished example action named Create CSV table." lightbox="media/logic-apps-perform-data-operations/finished-create-csv-table-action.png":::
1. Save your workflow. On the designer toolbar, select **Save**.
To try the **Create CSV table** action, follow these steps by using the workflo
This example uses the Azure portal and a sample workflow with the **Recurrence** trigger followed by an **Initialize variable** action. The action is set up to create a variable where the initial value is an array that has some properties and values in JSON format.
- ![Screenshot showing the Azure portal and the designer with a sample Standard workflow for the "Create CSV table" action.](./media/logic-apps-perform-data-operations/sample-start-create-table-action-standard.png)
+ | Operation | Properties and values |
+ |--|--|
+ | **Initialize variable** | - **Name**: myJSONArray <br>- **Type**: Array <br>- **Value**: `[ { "Description": "Apples", "Product_ID": 1 }, { "Description": "Oranges", "Product_ID": 2 }]` |
-1. In your workflow where you want to create the output, follow one of these steps:
+ :::image type="content" source="media/logic-apps-perform-data-operations/sample-start-create-table-action-standard.png" alt-text="Screenshot shows Azure portal, Standard workflow designer, and example workflow for action named Create CSV table." lightbox="media/logic-apps-perform-data-operations/sample-start-create-table-action-standard.png":::
- * To add an action under the last step, select the plus sign (**+**), and then select **Add an action**.
+1. [Follow these general steps to add the **Data Operations** action named **Create CSV table**](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
- * To add an action between steps, select the plus sign (**+**) between those steps, and then select **Add an action**.
+1. On the designer, select the **Create CSV table** action, if not already selected. In the **From** box, enter the array or expression to use for creating the table.
-1. After the connector gallery opens, [follow these general steps to find the **Data Operations** action named **Create CSV table**](create-workflow-with-trigger-or-action.md?tabs=standard#add-an-action-to-run-a-task).
+ For this example, select inside the **From** box, and select the dynamic content list (lightning icon). From that list, select the **myJSONArray** variable:
-1. After the action information box appears, in the **From** box, enter the array or expression to use for creating the table.
+ :::image type="content" source="media/logic-apps-perform-data-operations/configure-create-csv-table-action.png" alt-text="Screenshot shows Standard workflow, action named Create CSV table, and the selected input to use." lightbox="media/logic-apps-perform-data-operations/configure-create-csv-table-action.png":::
- For this example, select inside the **From** box, and then select the lightning icon, which opens the dynamic content list. From that list, select the previously created variable:
-
- ![Screenshot showing the designer for a Standard workflow, the "Create CSV table" action, and the selected input to use.](./media/logic-apps-perform-data-operations/configure-create-csv-table-action-standard.png)
-
- > [!NOTE]
+ > [!TIP]
> > To create user-friendly tokens for the properties in JSON objects so that you can select
- > those properties as inputs, use the action named [Parse JSON](#parse-json-action)
+ > those properties as inputs, use the action named [**Parse JSON**](#parse-json-action)
> before you use the **Create CSV table** action. The following screenshot shows the finished example **Create CSV table** action:
- ![Screenshot showing the designer for a Standard workflow and the finished example for the "Create CSV table" action.](./media/logic-apps-perform-data-operations/finished-create-csv-table-action-standard.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/finished-create-csv-table-action.png" alt-text="Screenshot shows Standard workflow and finished example action named Create CSV table." lightbox="media/logic-apps-perform-data-operations/finished-create-csv-table-action.png":::
1. Save your workflow. On the designer toolbar, select **Save**.
To try the **Create CSV table** action, follow these steps by using the workflo
By default, the **Columns** property is set to automatically create the table columns based on the array items. To specify custom headers and values, follow these steps:
-1. If the **Columns** property doesn't appear in the action information box, from the **Add new parameters** list, select **Columns**.
+1. If the **Columns** property doesn't appear in the action information box, from the **Advanced parameters** list, select **Columns**.
1. Open the **Columns** list, and select **Custom**.
By default, the **Columns** property is set to automatically create the table co
1. In the **Value** property, specify the custom value to use instead.
-To return values from the array, you can use the [`item()` function](workflow-definition-language-functions-reference.md#item) with the **Create CSV table** action. In a `For_each` loop, you can use the [`items()` function](workflow-definition-language-functions-reference.md#items).
+To return values from the array, you can use the [**`item()`** function](workflow-definition-language-functions-reference.md#item) with the **Create CSV table** action. In a **`For_each`** loop, you can use the [**`items()`** function](workflow-definition-language-functions-reference.md#items).
For example, suppose you want table columns that have only the property values and not the property names from an array. To return only these values, follow these steps for working in designer view or in code view.
Oranges,2
In the **Create CSV table** action, keep the **Header** column empty. On each row in the **Value** column, dereference each array property that you want. Each row under **Value** returns all the values for the specified array property and becomes a column in your table.
-##### [Consumption](#tab/consumption)
-
-1. For each array property that you want, in the **Value** column, select inside the edit box, which opens the dynamic content list.
-
-1. From that list, select **Expression** to open the expression editor instead.
-
-1. In the expression editor, enter the following expression but replace `<array-property-name>` with the array property name for the value that you want.
-
- Syntax: `item()?['<array-property-name>']`
-
- Examples:
-
- * `item()?['Description']`
- * `item()?['Product_ID']`
-
- ![Screenshot showing the "Create CSV table" action in a Consumption workflow and how to dereference the "Description" array property.](./media/logic-apps-perform-data-operations/csv-table-expression-consumption.png)
-
-1. Repeat the preceding steps for each array property. When you're done, your action looks similar to the following example:
-
- ![Screenshot showing the "Create CSV table" action in a Consumption workflow and the "item()" function.](./media/logic-apps-perform-data-operations/finished-csv-expression-consumption.png)
-
-1. To resolve expressions into more descriptive versions, switch to code view and back to designer view, and then reopen the collapsed action:
-
- The **Create CSV table** action now appears similar to the following example:
-
- ![Screenshot showing the "Create CSV table" action in a Consumption workflow and resolved expressions without headers.](./media/logic-apps-perform-data-operations/resolved-csv-expression-consumption.png)
-
-##### [Standard](#tab/standard)
- 1. For each array property that you want, in the **Value** column, select inside the edit box, and then select the function icon, which opens the expression editor. Make sure that the **Function** list appears selected.
-1. In the expression editor, enter the following expression but replace `<array-property-name>` with the array property name for the value that you want. When you're done with each expression, select **Add**.
+1. In the expression editor, enter the following expression, but replace `<array-property-name>` with the array property name for the value that you want. When you're done with each expression, select **Add**.
Syntax: `item()?['<array-property-name>']`
In the **Create CSV table** action, keep the **Header** column empty. On each ro
* `item()?['Description']` * `item()?['Product_ID']`
- ![Screenshot showing the "Create CSV table" action in a Standard workflow and how to dereference the "Description" array property.](./media/logic-apps-perform-data-operations/csv-table-expression-standard.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/csv-table-expression.png" alt-text="Screenshot shows workflow designer, action named Create CSV table, and how to dereference array property named Description." lightbox="media/logic-apps-perform-data-operations/csv-table-expression.png":::
-1. Repeat the preceding steps for each array property. When you're done, your action looks similar to the following example:
+ For more information, see [**item()** function](workflow-definition-language-functions-reference.md#item).
- ![Screenshot showing the "Create CSV table" action in a Standard workflow and the "item()" function.](./media/logic-apps-perform-data-operations/finished-csv-expression-standard.png)
+1. Repeat the preceding steps for each array property. When you're done, your action looks similar to the following example:
-
+ :::image type="content" source="media/logic-apps-perform-data-operations/finished-csv-expression.png" alt-text="Screenshot shows action named Create CSV table and function named item()." lightbox="media/logic-apps-perform-data-operations/finished-csv-expression.png":::
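After you finish these steps, the underlying **Create CSV table** action definition resembles the following sketch. This is a hedged example: in the workflow definition language, the table actions use the `Table` type, the `from` value assumes the `myJSONArray` variable from earlier, and the `runAfter` entry is a placeholder:

```json
"Create_CSV_table": {
    "type": "Table",
    "inputs": {
        "from": "@variables('myJSONArray')",
        "format": "CSV",
        "columns": [
            {
                "header": "",
                "value": "@item()?['Description']"
            },
            {
                "header": "",
                "value": "@item()?['Product_ID']"
            }
        ]
    },
    "runAfter": {
        "Initialize_variable": [ "Succeeded" ]
    }
}
```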
#### Work in code view
In the action's JSON definition, within the `columns` array, set the `header` pr
To confirm whether the **Create CSV table** action creates the expected results, send yourself a notification that includes output from the **Create CSV table** action.
-#### [Consumption](#tab/consumption)
-
-1. In your workflow, add an action that can send you the results from the **Create CSV table** action.
-
- This example continues by using the Office 365 Outlook action named **Send an email**.
-
-1. In this action, for each box where you want the results to appear, select inside the box, which opens the dynamic content list. Under the **Create CSV table** action, select **Output**.
-
- ![Screenshot showing a Consumption workflow with the "Send an email" action and the "Output" field from the preceding "Create CSV table" action entered in the email body.](./media/logic-apps-perform-data-operations/send-email-create-csv-table-action-consumption.png)
-
- > [!NOTE]
- >
- > If the dynamic content list shows the message that **We can't find any outputs to match this input format**,
- > select **See more** next to the **Create CSV table** label in the list.
- >
- > ![Screenshot showing a Consumption workflow and the dynamic content list with "See more" selected for the "Create CSV table" action.](./media/logic-apps-perform-data-operations/send-email-create-csv-table-action-see-more.png)
-
-1. Save your workflow, and then manually run your workflow. On the designer toolbar, select **Run Trigger** > **Run**.
-
-#### [Standard](#tab/standard)
- 1. In your workflow, add an action that can send you the results from the **Create CSV table** action. This example continues by using the Office 365 Outlook action named **Send an email**. 1. In this action, for each box where you want the results to appear, select inside each box, which opens the dynamic content list. From that list, under the **Create CSV table** action, select **Output**.
- ![Screenshot showing a Standard workflow with the "Send an email" action and the "Output" field from the preceding "Create CSV table" action entered in the email body.](./media/logic-apps-perform-data-operations/send-email-create-csv-table-action-standard.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/send-email-create-csv-table-action.png" alt-text="Screenshot shows workflow with action named Send an email. The Body property contains the field named Output from preceding action named Create CSV table." lightbox="media/logic-apps-perform-data-operations/send-email-create-csv-table-action.png":::
- > [!NOTE]
- >
- > If the dynamic content list shows the message that **We can't find any outputs to match this input format**,
- > select **See more** next to the **Create CSV table** label in the list.
- >
- > ![Screenshot showing a Standard workflow and the dynamic content list with "See more" selected for the "Create CSV table" action.](./media/logic-apps-perform-data-operations/send-email-create-csv-table-action-see-more.png)
+1. Save your workflow, and then manually run your workflow.
-1. Save your workflow, and then manually run your workflow. On the workflow navigation menu, select **Overview** > **Run Trigger** > **Run**.
+ - Consumption workflow: On the designer toolbar, select **Run** > **Run**.
+ - Standard workflow: On the workflow navigation menu, select **Overview**. On the **Overview** page toolbar, select **Run** > **Run**.
-
+If you used the Office 365 Outlook action, the following example shows the result:
-If you used the Office 365 Outlook action, you get a result similar to the following screenshot:
-
-![Screenshot showing an email with the "Create CSV table" action results.](./media/logic-apps-perform-data-operations/create-csv-table-email-results.png)
> [!NOTE] >
To try the **Create HTML table** action, follow these steps by using the workflo
This example uses the Azure portal and a sample workflow with the **Recurrence** trigger followed by an **Initialize variable** action. The action is set up to create a variable where the initial value is an array that has some properties and values in JSON format.
- ![Screenshot showing the Azure portal and the designer with a sample Consumption workflow for the "Create HTML table" action.](./media/logic-apps-perform-data-operations/sample-start-create-table-action-consumption.png)
-
-1. In your workflow where you want to create an HTML table, follow one of these steps:
-
- * To add an action under the last step, select **New step**.
-
- * To add an action between steps, move your mouse over the connecting arrow so the plus sign (**+**) appears. Select the plus sign, and then select **Add an action**.
+ | Operation | Properties and values |
+ |--|--|
+ | **Initialize variable** | - **Name**: myJSONArray <br>- **Type**: Array <br>- **Value**: `[ { "Description": "Apples", "Product_ID": 1 }, { "Description": "Oranges", "Product_ID": 2 }]` |
-1. Under the **Choose an operation** search box, select **Built-in**. In the search box, enter **create html table**.
+ :::image type="content" source="media/logic-apps-perform-data-operations/sample-start-create-table-action-consumption.png" alt-text="Screenshot shows Azure portal, Consumption workflow designer, and sample workflow for action named Create HTML table." lightbox="media/logic-apps-perform-data-operations/sample-start-create-table-action-consumption.png":::
-1. From the actions list, select the action named **Create HTML table**.
+1. [Follow these general steps to add the **Data Operations** action named **Create HTML table**](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
- ![Screenshot showing the designer for a Consumption workflow, the "Choose an operation" search box with "create html table" entered, and the "Create HTML table" action selected.](./media/logic-apps-perform-data-operations/select-create-html-table-action-consumption.png)
+1. On the designer, select the **Create HTML table** action, if not already selected. In the **From** box, enter the array or expression to use for creating the table.
-1. In the **From** box, enter the array or expression to use for creating the table.
+ For this example, select inside the **From** box, and select the dynamic content list (lightning icon). From that list, select the **myJSONArray** variable:
- For this example, select inside the **From** box, which opens the dynamic content list. From that list, select the previously created variable:
+ :::image type="content" source="media/logic-apps-perform-data-operations/configure-create-html-table-action.png" alt-text="Screenshot shows Consumption workflow, action named Create HTML table, and the selected input to use." lightbox="media/logic-apps-perform-data-operations/configure-create-html-table-action.png":::
- ![Screenshot showing the designer for a Consumption workflow, the "Create HTML table" action, and the selected input to use.](./media/logic-apps-perform-data-operations/configure-create-html-table-action-consumption.png)
-
- > [!NOTE]
+ > [!TIP]
> > To create user-friendly tokens for the properties in JSON objects so that you can select
- > those properties as inputs, use the action named [Parse JSON](#parse-json-action)
+ > those properties as inputs, use the action named [**Parse JSON**](#parse-json-action)
> before you use the **Create HTML table** action. The following screenshot shows the finished example **Create HTML table** action:
- ![Screenshot showing the designer for a Consumption workflow and the finished example for the "Create HTML table" action.](./media/logic-apps-perform-data-operations/finished-create-html-table-action-consumption.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/finished-create-html-table-action.png" alt-text="Screenshot shows Consumption workflow and finished example action named Create HTML table." lightbox="media/logic-apps-perform-data-operations/finished-create-html-table-action.png":::
1. Save your workflow. On the designer toolbar, select **Save**.
To try the **Create HTML table** action, follow these steps by using the workflo
This example uses the Azure portal and a sample workflow with the **Recurrence** trigger followed by an **Initialize variable** action. The action is set up to create a variable where the initial value is an array that has some properties and values in JSON format.
- ![Screenshot showing the Azure portal and the designer with a sample Standard workflow for the "Create HTML table" action.](./media/logic-apps-perform-data-operations/sample-start-create-table-action-standard.png)
+ | Operation | Properties and values |
+ |--|--|
+ | **Initialize variable** | - **Name**: myJSONArray <br>- **Type**: Array <br>- **Value**: `[ { "Description": "Apples", "Product_ID": 1 }, { "Description": "Oranges", "Product_ID": 2 }]` |
-1. In your workflow where you want to create the output, follow one of these steps:
+ :::image type="content" source="media/logic-apps-perform-data-operations/sample-start-create-table-action-standard.png" alt-text="Screenshot shows Azure portal, Standard workflow designer, and sample workflow for action named Create HTML table." lightbox="media/logic-apps-perform-data-operations/sample-start-create-table-action-standard.png":::
- * To add an action under the last step, select the plus sign (**+**), and then select **Add an action**.
+1. [Follow these general steps to add the **Data Operations** action named **Create HTML table**](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
- * To add an action between steps, select the plus sign (**+**) between those steps, and then select **Add an action**.
+1. On the designer, select the **Create HTML table** action, if not already selected. In the **From** box, enter the array or expression to use for creating the table.
-1. After the connector gallery opens, [follow these general steps to find the **Data Operations** action named **Create HTML table**](create-workflow-with-trigger-or-action.md?tabs=standard#add-an-action-to-run-a-task).
+ For this example, select inside the **From** box, and select the dynamic content list (lightning icon). From that list, select the **myJSONArray** variable:
-1. After the action information box appears, in the **From** box, enter the array or expression to use for creating the table.
-
- For this example, select inside the **From** box, and then select the lightning icon, which opens the dynamic content list. From that list, select the previously created variable:
+ :::image type="content" source="media/logic-apps-perform-data-operations/configure-create-html-table-action.png" alt-text="Screenshot shows Standard workflow, action named Create HTML table, and the selected input to use." lightbox="media/logic-apps-perform-data-operations/configure-create-html-table-action.png":::
- ![Screenshot showing the designer for a Standard workflow, the "Create HTML table" action, and the selected input to use.](./media/logic-apps-perform-data-operations/configure-create-html-table-action-standard.png)
-
- > [!NOTE]
+ > [!TIP]
> > To create user-friendly tokens for the properties in JSON objects so that you can select
- > those properties as inputs, use the action named [Parse JSON](#parse-json-action)
- > before you use the **Create CSV table** action.
+ > those properties as inputs, use the action named [**Parse JSON**](#parse-json-action)
+ > before you use the **Create HTML table** action.
The following screenshot shows the finished example **Create HTML table** action:
- ![Screenshot showing the designer for a Standard workflow and the finished example for the "Create HTML table" action.](./media/logic-apps-perform-data-operations/finished-create-html-table-action-standard.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/finished-create-html-table-action.png" alt-text="Screenshot shows Standard workflow and finished example action named Create HTML table." lightbox="media/logic-apps-perform-data-operations/finished-create-html-table-action.png":::
1. Save your workflow. On the designer toolbar, select **Save**.
To try the **Create HTML table** action, follow these steps by using the workflo
By default, the **Columns** property is set to automatically create the table columns based on the array items. To specify custom headers and values, follow these steps:
+1. If the **Columns** property doesn't appear in the action information box, from the **Advanced parameters** list, select **Columns**.
+ 1. Open the **Columns** list, and select **Custom**. 1. In the **Header** property, specify the custom header text to use instead.
Oranges,2
In the **Create HTML table** action, keep the **Header** column empty. On each row in the **Value** column, dereference each array property that you want. Each row under **Value** returns all the values for the specified array property and becomes a column in your table.
-##### [Consumption](#tab/consumption)
-
-1. For each array property that you want, in the **Value** column, select inside the edit box, which opens the dynamic content list.
-
-1. From that list, select **Expression** to open the expression editor instead.
+1. For each array property that you want, in the **Value** column, select inside the edit box, and then select the function icon, which opens the expression editor. Make sure that the **Function** list appears selected.
-1. In the expression editor, enter the following expression, but replace `<array-property-name>` with the array property name for the value that you want, and then select **OK**. For more information, see [**item()** function](workflow-definition-language-functions-reference.md#item).
+1. In the expression editor, enter the following expression, but replace `<array-property-name>` with the array property name for the value that you want. When you're done with each expression, select **Add**.
Syntax: `item()?['<array-property-name>']`
In the **Create HTML table** action, keep the **Header** column empty. On each r
* `item()?['Description']` * `item()?['Product_ID']`
- ![Screenshot showing the "Create HTML table" action in a Consumption workflow and how to dereference the "Description" array property.](./media/logic-apps-perform-data-operations/html-table-expression-consumption.png)
-
-1. Repeat the preceding steps for each array property. When you're done, your action looks similar to the following example:
-
- ![Screenshot showing the "Create HTML table" action in a Consumption workflow and the "item()" function.](./media/logic-apps-perform-data-operations/finished-html-expression-consumption.png)
-
-1. To resolve expressions into more descriptive versions, switch to code view and back to designer view, and then reopen the collapsed action:
-
- The **Create HTML table** action now appears similar to the following example:
-
- ![Screenshot showing the "Create HTML table" action in a Consumption workflow and resolved expressions without headers.](./media/logic-apps-perform-data-operations/resolved-html-expression-consumption.png)
-
-##### [Standard](#tab/standard)
-
-1. For each array property that you want, in the **Value** column, select inside the edit box, and then select the function icon, which opens the expression editor.
-
-1. In the expression editor, enter the following expression, but replace `<array-property-name>` with the array property name for the value that you want, and then select **Add**. For more information, see [**item()** function](workflow-definition-language-functions-reference.md#item).
-
- Syntax: `item()?['<array-property-name>']`
-
- Examples:
-
- * `item()?['Description']`
- * `item()?['Product_ID']`
+ :::image type="content" source="media/logic-apps-perform-data-operations/html-table-expression.png" alt-text="Screenshot shows workflow designer, action named Create HTML table, and how to dereference array property named Description." lightbox="media/logic-apps-perform-data-operations/html-table-expression.png":::
- ![Screenshot showing the "Create HTML table" action in a Standard workflow and how to dereference the "Description" array property.](./media/logic-apps-perform-data-operations/html-table-expression-standard.png)
+ For more information, see [**item()** function](workflow-definition-language-functions-reference.md#item).
1. Repeat the preceding steps for each array property. When you're done, your action looks similar to the following example:
- ![Screenshot showing the "Create HTML table" action in a Standard workflow and the "item()" function.](./media/logic-apps-perform-data-operations/finished-html-expression-standard.png)
--
+ :::image type="content" source="media/logic-apps-perform-data-operations/finished-html-expression.png" alt-text="Screenshot shows action named Create HTML table and function named item()." lightbox="media/logic-apps-perform-data-operations/finished-html-expression.png":::
#### Work in code view
In the action's JSON definition, within the `columns` array, set the `header` pr
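As a rough sketch only, the `columns` array might look like the following example, which reuses the **Description** and **Product_ID** properties from the earlier steps. The header text values here are placeholders, not values from this article:

```json
"columns": [
    {
        "header": "Product",
        "value": "@item()?['Description']"
    },
    {
        "header": "ID",
        "value": "@item()?['Product_ID']"
    }
]
```

To build the table without custom headers, leave each `header` property as an empty string.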
To confirm whether the **Create HTML table** action creates the expected results, send yourself a notification that includes output from the **Create HTML table** action.
-#### [Consumption](#tab/consumption)
-
-1. In your workflow, add an action that can send you the results from the **Create HTML table** action.
-
- This example continues by using the Office 365 Outlook action named **Send an email**.
-
-1. In this action, for each box where you want the results to appear, select inside each box, which opens the dynamic content list. From that list, under the **Create HTML table** action, select **Output**.
-
- ![Screenshot showing a Consumption workflow with the "Send an email" action and the "Output" field from the preceding "Create HTML table" action entered in the email body.](./media/logic-apps-perform-data-operations/send-email-create-html-table-action-consumption.png)
-
- > [!NOTE]
- >
- > * If the dynamic content list shows the message that **We can't find any outputs to match this input format**,
- > select **See more** next to the **Create HTML table** label in the list.
- >
- > ![Screenshot showing a Consumption workflow and the dynamic content list with "See more" selected for the "Create HTML table" action.](./media/logic-apps-perform-data-operations/send-email-create-html-table-action-see-more.png)
- >
- > * When you include the HTML table output in an email action, make sure that you set the **Is HTML** property
- > to **Yes** in the email action's advanced options. That way, the email action correctly formats the HTML table.
- > However, if your table is returned with incorrect formatting, see [how to check your table data formatting](#format-table-data).
-
-1. Save your workflow, and then manually run your workflow. On the designer toolbar, select **Run Trigger** > **Run**.
-
-#### [Standard](#tab/standard)
1. In your workflow, add an action that can send you the results from the **Create HTML table** action.

   This example continues by using the Office 365 Outlook action named **Send an email**.

1. In this action, for each box where you want the results to appear, select inside each box, and then select the lightning icon, which opens the dynamic content list. From that list, under the **Create HTML table** action, select **Output**.
- ![Screenshot showing a Standard workflow with the "Send an email" action and the "Output" field from the preceding "Create HTML table" action entered in the email body.](./media/logic-apps-perform-data-operations/send-email-create-html-table-action-standard.png)
-
- > [!NOTE]
- >
- > If the dynamic content list shows the message that **We can't find any outputs to match this input format**,
- > select **See more** next to the **Create HTML table** label in the list.
- >
- > ![Screenshot showing a Standard workflow and the dynamic content list with "See more" selected for the "Create HTML table" action.](./media/logic-apps-perform-data-operations/send-email-create-html-table-action-see-more.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/send-email-create-html-table-action.png" alt-text="Screenshot shows workflow with action named Send an email. The Body property contains the Output field from preceding action named Create HTML table." lightbox="media/logic-apps-perform-data-operations/send-email-create-html-table-action.png":::
-1. Save your workflow, and then manually run your workflow. On the workflow navigation menu, select **Overview** > **Run Trigger** > **Run**.
+1. Save your workflow, and then manually run your workflow.
-
+ - Consumption workflow: On the designer toolbar, select **Run** > **Run**.
+ - Standard workflow: On the workflow navigation menu, select **Overview**. On the **Overview** page toolbar, select **Run** > **Run**.
-If you used the Office 365 Outlook action, you get a result similar to the following screenshot:
+If you used the Office 365 Outlook action, the following example shows the result:
-![Screenshot showing an email with the "Create HTML table" results.](./media/logic-apps-perform-data-operations/create-html-table-email-results.png)
<a name="filter-array-action"></a>
To try the **Filter array** action, follow these steps by using the workflow des
This example uses the Azure portal and a sample workflow with the **Recurrence** trigger followed by an **Initialize variable** action. The action is set up to create a variable where the initial value is an array that has some sample integer values.
+ | Operation | Properties and values |
+ |--|--|
+ | **Initialize variable** | - **Name**: myIntegerArray <br>- **Type**: Array <br>- **Value**: `[1,2,3,4]` |
+ > [!NOTE]
  >
  > Although this example uses a simple integer array, this action is especially useful for JSON object arrays where you can filter based on the objects' properties and values.
- ![Screenshot showing the Azure portal and the designer with a sample Consumption workflow for the "Filter array" action.](./media/logic-apps-perform-data-operations/sample-start-filter-array-action-consumption.png)
-
-1. In your workflow where you want to create the filtered array, follow one of these steps:
+ :::image type="content" source="media/logic-apps-perform-data-operations/sample-start-filter-array-action-consumption.png" alt-text="Screenshot shows Azure portal, Consumption workflow designer, and example workflow for action named Filter array." lightbox="media/logic-apps-perform-data-operations/sample-start-filter-array-action-consumption.png":::
- * To add an action under the last step, select **New step**.
+1. [Follow these general steps to find the **Data Operations** action named **Filter array**](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
- * To add an action between steps, move your mouse over the connecting arrow so the plus sign (**+**) appears. Select the plus sign, and then select **Add an action**.
+1. On the designer, select the **Filter array** action, if not already selected. In the **From** box, enter the array or expression to use as the filter.
-1. Under the **Choose an operation** search box, select **Built-in**. In the search box, enter **filter array**.
-
-1. From the actions list, select the action named **Filter array**.
-
- ![Screenshot showing the designer for a Consumption workflow, the "Choose an operation" search box with "filter array" entered, and the "Filter array" action selected.](./media/logic-apps-perform-data-operations/select-filter-array-action-consumption.png)
-
-1. In the **From** box, enter the array or expression to use as the filter.
-
- For this example, select the **From** box, which opens the dynamic content list. From that list, select the previously created variable:
+ For this example, select inside the **From** box, and then select the lightning icon, which opens the dynamic content list. From that list, select the previously created variable:
- ![Screenshot showing the designer for a Consumption workflow, the "Filter array" action, and the selected input to use.](./media/logic-apps-perform-data-operations/configure-filter-array-action-consumption.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/configure-filter-array-action.png" alt-text="Screenshot shows Consumption workflow, action named Filter array, and selected input to use." lightbox="media/logic-apps-perform-data-operations/configure-filter-array-action.png":::
1. For the condition, specify the array items to compare, select the comparison operator, and specify the comparison value. This example uses the [**item()** function](workflow-definition-language-functions-reference.md#item) to access each item in the array, while the **Filter array** action searches for array items where the value is greater than one. The following screenshot shows the finished example **Filter array** action:
- ![Screenshot showing the designer for a Consumption workflow and the finished example for the "Filter array" action.](./media/logic-apps-perform-data-operations/finished-filter-array-action-consumption.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/finished-filter-array-action.png" alt-text="Screenshot shows Consumption workflow and finished example action named Filter array." lightbox="media/logic-apps-perform-data-operations/finished-filter-array-action.png":::
1. Save your workflow. On the designer toolbar, select **Save**.
To try the **Filter array** action, follow these steps by using the workflow des
This example uses the Azure portal and a sample workflow with the **Recurrence** trigger followed by an **Initialize variable** action. The action is set up to create a variable where the initial value is an array that has some sample integer values.
+ | Operation | Properties and values |
+ |--|--|
+ | **Initialize variable** | - **Name**: myIntegerArray <br>- **Type**: Array <br>- **Value**: `[1,2,3,4]` |
+ > [!NOTE]
  >
  > Although this example uses a simple integer array, this action is especially useful for JSON object arrays where you can filter based on the objects' properties and values.
- ![Screenshot showing the Azure portal and the designer with a sample Standard workflow for the "Filter array" action.](./media/logic-apps-perform-data-operations/sample-start-filter-array-action-standard.png)
-
-1. In your workflow where you want to create the filtered array, follow one of these steps:
-
- * To add an action under the last step, select the plus sign (**+**), and then select **Add an action**.
+ :::image type="content" source="media/logic-apps-perform-data-operations/sample-start-filter-array-action-standard.png" alt-text="Screenshot shows Azure portal, Standard workflow designer, and example workflow for action named Filter array." lightbox="media/logic-apps-perform-data-operations/sample-start-filter-array-action-standard.png":::
- * To add an action between steps, select the plus sign (**+**) between those steps, and then select **Add an action**.
+1. [Follow these general steps to find the **Data Operations** action named **Filter array**](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
-1. After the connector gallery opens, [follow these general steps to find the **Data Operations** action named **Filter array**](create-workflow-with-trigger-or-action.md?tabs=standard#add-an-action-to-run-a-task).
-
-1. After the action information box appears, in the **From** box, enter the array or expression to use as the filter.
+1. On the designer, select the **Filter array** action, if not already selected. In the **From** box, enter the array or expression to use as the filter.
For this example, select inside the **From** box, and then select the lightning icon, which opens the dynamic content list. From that list, select the previously created variable:
- ![Screenshot showing the designer for a Standard workflow, the "Filter array" action, and the selected input to use.](./media/logic-apps-perform-data-operations/configure-filter-array-action-standard.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/configure-filter-array-action.png" alt-text="Screenshot shows Standard workflow, action named Filter array, and selected input to use." lightbox="media/logic-apps-perform-data-operations/configure-filter-array-action.png":::
1. For the condition, specify the array items to compare, select the comparison operator, and specify the comparison value. This example uses the [**item()** function](workflow-definition-language-functions-reference.md#item) to access each item in the array, while the **Filter array** action searches for array items where the value is greater than one. The following screenshot shows the finished example **Filter array** action:
- ![Screenshot showing the designer for a Standard workflow and the finished example for the "Filter array" action.](./media/logic-apps-perform-data-operations/finished-filter-array-action-standard.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/finished-filter-array-action.png" alt-text="Screenshot shows Standard workflow and finished example action named Filter array." lightbox="media/logic-apps-perform-data-operations/finished-filter-array-action.png":::
1. Save your workflow. On the designer toolbar, select **Save**.
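In code view, the **Filter array** action uses the **Query** action type. The following snippet is a minimal sketch that assumes the sample `myIntegerArray` variable and the greater-than-one condition from this example:

```json
"Filter_array": {
    "type": "Query",
    "inputs": {
        "from": "@variables('myIntegerArray')",
        "where": "@greater(item(), 1)"
    },
    "runAfter": {
        "Initialize_variable": [ "Succeeded" ]
    }
}
```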
To try the **Filter array** action, follow these steps by using the workflow des
To confirm whether the **Filter array** action creates the expected results, send yourself a notification that includes output from the **Filter array** action.
-#### [Consumption](#tab/consumption)
-
1. In your workflow, add an action that can send you the results from the **Filter array** action.

   This example continues by using the Office 365 Outlook action named **Send an email**.

1. In this action, complete the following steps:
- 1. For each box where you want the results to appear, select inside each box, which opens the dynamic content list.
-
- 1. From that list, select **Expression** to open the expression editor instead.
-
- 1. To get the array output from the **Filter array** action, enter the following expression, which uses the [**actionBody()** function](workflow-definition-language-functions-reference.md#actionBody) with the **Filter array** action name, and then select **OK**.
-
- `actionBody('Filter_array')`
-
- ![Screenshot showing a Consumption workflow with the "Send an email" action and the action outputs from the "Filter array" action.](./media/logic-apps-perform-data-operations/send-email-filter-array-action-consumption.png)
-
- The resolved expression specifies to show the outputs from the **Filter_array** action in the email body when sent:
-
- ![Screenshot showing a Consumption workflow with the finished "Send an email" action for the "Filter array" action.](./media/logic-apps-perform-data-operations/send-email-filter-array-action-complete-consumption.png)
-
-1. Save your workflow, and then manually run your workflow. On the designer toolbar, select **Run Trigger** > **Run**.
-
-#### [Standard](#tab/standard)
-
-1. In your workflow, add an action that can send you the results from the **Filter array** action.
-
-1. In this action, complete the following steps:
-
- 1. For each box where you want the results to appear, select inside box, and then select the function icon, which opens the expression editor. Make sure that the **Function** list appears selected.
+ 1. For each box where you want the results to appear, select inside each box, and then select the function icon, which opens the expression editor. Make sure that the **Function** list appears selected.
- 1. To get the array output from the **Filter array** action, enter the following expression, which uses the [**actionBody()** function](workflow-definition-language-functions-reference.md#actionBody) with the **Filter array** action name, and then select **OK**.
+ 1. To get the array output from the **Filter array** action, enter the following expression, which uses the [**body()** function](workflow-definition-language-functions-reference.md#body) with the **Filter array** action name, and then select **Add**.
- `actionBody('Filter_array')`
+ `body('Filter_array')`
- ![Screenshot showing a Standard workflow with the "Send an email" action and the action outputs from the "Filter array" action.](./media/logic-apps-perform-data-operations/send-email-filter-array-action-standard.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/send-email-filter-array-action.png" alt-text="Screenshot shows workflow with action named Send an email. The Body property contains the body() function, which gets the body content from the preceding action named Filter array." lightbox="media/logic-apps-perform-data-operations/send-email-filter-array-action.png":::
The resolved expression specifies to show the outputs from the **Filter_array** action in the email body when sent:
- ![Screenshot showing a Standard workflow with the finished "Send an email" action for the "Filter array" action.](./media/logic-apps-perform-data-operations/send-email-filter-array-action-complete-standard.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/send-email-filter-array-action-complete.png" alt-text="Screenshot shows Standard workflow and finished example action for Send an email." lightbox="media/logic-apps-perform-data-operations/send-email-filter-array-action-complete.png":::
-1. Save your workflow, and then manually run your workflow. On the workflow navigation menu, select **Overview** > **Run Trigger** > **Run**.
+1. Save your workflow, and then manually run your workflow.
-
+ - Consumption workflow: On the designer toolbar, select **Run** > **Run**.
+ - Standard workflow: On the workflow navigation menu, select **Overview**. On the **Overview** page toolbar, select **Run** > **Run**.
-If you used the Office 365 Outlook action, you get a result similar to the following screenshot:
+If you used the Office 365 Outlook action, the following example shows the result:
-![Screenshot showing an email with the "Filter array" action results.](./media/logic-apps-perform-data-operations/filter-array-email-results.png)
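With the sample array `[1,2,3,4]` and the greater-than-one condition, the filtered output that appears in the email body looks similar to the following array:

```json
[2,3,4]
```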
<a name="join-action"></a>
To try the **Join** action, follow these steps by using the workflow designer. O
This example uses the Azure portal and a sample workflow with the **Recurrence** trigger followed by an **Initialize variable** action. This action is set up to create a variable where the initial value is an array that has some sample integer values.
- ![Screenshot showing the Azure portal and the designer with a sample Consumption workflow for the "Join" action.](./media/logic-apps-perform-data-operations/sample-start-join-action-consumption.png)
-
-1. In your workflow where you want to create the string from an array, follow one of these steps:
+ | Operation | Properties and values |
+ |--|--|
+ | **Initialize variable** | - **Name**: myIntegerArray <br>- **Type**: Array <br>- **Value**: `[1,2,3,4]` |
- * To add an action under the last step, select **New step**.
+ :::image type="content" source="media/logic-apps-perform-data-operations/sample-start-join-action-consumption.png" alt-text="Screenshot shows Azure portal, Consumption workflow designer, and example workflow for the action named Join." lightbox="media/logic-apps-perform-data-operations/sample-start-join-action-consumption.png":::
- * To add an action between steps, move your mouse over the connecting arrow so the plus sign (**+**) appears. Select the plus sign, and then select **Add an action**.
+1. [Follow these general steps to find the **Data Operations** action named **Join**](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
-1. Under the **Choose an operation** search box, select **Built-in**. In the search box, enter **join**.
-
-1. From the actions list, select the action named **Join**.
-
- ![Screenshot showing the designer for a Consumption workflow, the "Choose an operation" search box, and the "Join" action selected.](./media/logic-apps-perform-data-operations/select-join-action-consumption.png)
+1. On the designer, select the **Join** action, if not already selected. In the **From** box, enter the array that has the items that you want to join as a string.
- For this example, select inside the **From** box, which opens the dynamic content list. From that list, select the previously created variable:
+ For this example, select inside the **From** box, and then select the lightning icon, which opens the dynamic content list. From that list, select the previously created variable:
- ![Screenshot showing the designer for a Consumption workflow, the "Join" action, and the selected array output to use join as a string.](./media/logic-apps-perform-data-operations/configure-join-action-consumption.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/configure-join-action.png" alt-text="Screenshot shows Consumption workflow, action named Join, and selected array output to join as a string." lightbox="media/logic-apps-perform-data-operations/configure-join-action.png":::
-1. In the **Join with** box, enter the character to use for separating each array item.
+1. In the **Join With** box, enter the character to use for separating each array item.
- This example uses a colon (**:**) as the separator.
+ This example uses a colon (**:**) as the separator for the **Join With** property.
- ![Screenshot showing where to provide the separator character.](./media/logic-apps-perform-data-operations/finished-join-action-consumption.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/finished-join-action.png" alt-text="Screenshot shows Consumption workflow and the finished example for the action named Join." lightbox="media/logic-apps-perform-data-operations/finished-join-action.png":::
1. Save your workflow. On the designer toolbar, select **Save**.
To try the **Join** action, follow these steps by using the workflow designer. O
This example uses the Azure portal and a sample workflow with the **Recurrence** trigger followed by an **Initialize variable** action. The action is set up to create a variable where the initial value is an array that has some sample integer values.
- ![Screenshot showing the Azure portal and the designer with a sample Standard workflow for the "Join" action.](./media/logic-apps-perform-data-operations/sample-start-join-action-standard.png)
-
-1. In your workflow where you want to create the filtered array, follow one of these steps:
+ | Operation | Properties and values |
+ |--|--|
+ | **Initialize variable** | - **Name**: myIntegerArray <br>- **Type**: Array <br>- **Value**: `[1,2,3,4]` |
- * To add an action under the last step, select the plus sign (**+**), and then select **Add an action**.
+ :::image type="content" source="media/logic-apps-perform-data-operations/sample-start-join-action-standard.png" alt-text="Screenshot shows Azure portal, Standard workflow designer, and example workflow for the action named Join." lightbox="media/logic-apps-perform-data-operations/sample-start-join-action-standard.png":::
- * To add an action between steps, select the plus sign (**+**) between those steps, and then select **Add an action**.
+1. [Follow these general steps to find the **Data Operations** action named **Join**](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
-1. After the connector gallery opens, [follow these general steps to find the **Data Operations** action named **Join**](create-workflow-with-trigger-or-action.md?tabs=standard#add-an-action-to-run-a-task).
-
-1. After the action information box appears, in the **From** box, enter the array that has the items you want to join as a string.
+1. On the designer, select the **Join** action, if not already selected. In the **From** box, enter the array that has the items that you want to join as a string.
For this example, select inside the **From** box, and then select the lightning icon, which opens the dynamic content list. From that list, select the previously created variable:
- ![Screenshot showing the designer for a Standard workflow, the "Join" action, and the selected input to use.](./media/logic-apps-perform-data-operations/configure-join-action-standard.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/configure-join-action.png" alt-text="Screenshot shows Standard workflow, action named Join, and selected array output to join as a string." lightbox="media/logic-apps-perform-data-operations/configure-join-action.png":::
-1. In the **Join with** box, enter the character to use for separating each array item.
+1. In the **Join With** box, enter the character to use for separating each array item.
- This example uses a colon (**:**) as the separator.
+ This example uses a colon (**:**) as the separator for the **Join With** property.
- ![Screenshot showing the designer for a Standard workflow and the finished example for the "Join" action.](./media/logic-apps-perform-data-operations/finished-join-action-standard.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/finished-join-action.png" alt-text="Screenshot shows Standard workflow and the finished example for the action named Join." lightbox="media/logic-apps-perform-data-operations/finished-join-action.png":::
1. Save your workflow. On the designer toolbar, select **Save**.
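In code view, the **Join** action definition looks similar to the following minimal sketch, which assumes the sample `myIntegerArray` variable and the colon separator:

```json
"Join": {
    "type": "Join",
    "inputs": {
        "from": "@variables('myIntegerArray')",
        "joinWith": ":"
    },
    "runAfter": {
        "Initialize_variable": [ "Succeeded" ]
    }
}
```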
To try the **Join** action, follow these steps by using the workflow designer. O
To confirm whether the **Join** action creates the expected results, send yourself a notification that includes output from the **Join** action.
-#### [Consumption](#tab/consumption)
-
1. In your workflow, add an action that can send you the results from the **Join** action.

   This example continues by using the Office 365 Outlook action named **Send an email**.
-1. In this action, for each box where you want the results to appear, select inside each box, which opens the dynamic content list. From that list, under the **Join** action, select **Output**.
+1. In this action, for each box where you want the results to appear, select inside each box, and then select the lightning icon, which opens the dynamic content list. From that list, under the **Join** action, select **Output**.
- ![Screenshot showing a Consumption workflow with the finished "Send an email" action for the "Join" action.](./media/logic-apps-perform-data-operations/send-email-join-action-complete-consumption.png)
-
- > [!NOTE]
- >
- > If the dynamic content list shows the message that **We can't find any outputs to match this input format**,
- > select **See more** next to the **Join** label in the list.
- >
- > ![Screenshot showing a Consumption workflow and the dynamic content list with "See more" selected for the "Join" action.](./media/logic-apps-perform-data-operations/send-email-join-action-see-more.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/send-email-join-action-complete.png" alt-text="Screenshot shows a workflow with the finished action named Send an email for the Join action." lightbox="media/logic-apps-perform-data-operations/send-email-join-action-complete.png":::
-1. Save your workflow, and then manually run your workflow. On the designer toolbar, select **Run Trigger** > **Run**.
+1. Save your workflow, and then manually run your workflow.
-#### [Standard](#tab/standard)
+ - Consumption workflow: On the designer toolbar, select **Run** > **Run**.
+ - Standard workflow: On the workflow navigation menu, select **Overview**. On the **Overview** page toolbar, select **Run** > **Run**.
-1. In your workflow, add an action that can send you the results from the **Join** action.
+If you used the Office 365 Outlook action, the following example shows the result:
- This example continues by using the Office 365 Outlook action named **Send an email**.
-
-1. In this action, for each box where you want the results to appear, select inside each box, which opens the dynamic content list. From that list, under the **Join** action, select **Output**.
-
- ![Screenshot showing a Standard workflow with the finished "Send an email" action for the "Join" action.](./media/logic-apps-perform-data-operations/send-email-join-action-complete-standard.png)
-
- > [!NOTE]
- >
- > If the dynamic content list shows the message that **We can't find any outputs to match this input format**,
- > select **See more** next to the **Join** label in the list.
- >
- > ![Screenshot showing a Standard workflow and the dynamic content list with "See more" selected for the "Join" action.](./media/logic-apps-perform-data-operations/send-email-join-action-see-more.png)
-
-1. Save your workflow, and then manually run your workflow. On the workflow navigation menu, select **Overview** > **Run Trigger** > **Run**.
---
-If you used the Office 365 Outlook action, you get a result similar to the following screenshot:
-
-![Screenshot showing an email with the "Join" action results.](./media/logic-apps-perform-data-operations/join-email-results.png)
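With the sample array `[1,2,3,4]` and the colon separator, the **Join** action returns a single string, shown here as a JSON string literal:

```json
"1:2:3:4"
```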
<a name="parse-json-action"></a>
For more information about this action in your underlying workflow definition, s
}
```
- ![Screenshot showing the Azure portal and the designer with a sample Consumption workflow for the "Parse JSON" action.](./media/logic-apps-perform-data-operations/sample-start-parse-json-action-consumption.png)
-
-1. In your workflow where you want to parse the JSON object, follow one of these steps:
-
- * To add an action under the last step, select **New step**.
-
- * To add an action between steps, move your mouse over the connecting arrow so the plus sign (**+**) appears. Select the plus sign, and then select **Add an action**.
-
-1. Under the **Choose an operation** search box, select **Built-in**. In the search box, enter **parse json**.
-
-1. From the actions list, select the action named **Parse JSON**.
+ :::image type="content" source="media/logic-apps-perform-data-operations/sample-start-parse-json-action-consumption.png" alt-text="Screenshot shows Azure portal, Consumption workflow designer, and example workflow for action named Parse JSON." lightbox="media/logic-apps-perform-data-operations/sample-start-parse-json-action-consumption.png":::
- ![Screenshot showing the designer for a Consumption workflow, the "Choose an operation" search box, and the "Parse JSON" action selected.](./media/logic-apps-perform-data-operations/select-parse-json-action-consumption.png)
+1. [Follow these general steps to find the **Data Operations** action named **Parse JSON**](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
-1. In the **Content** box, enter the JSON object that you want to parse.
+1. On the designer, select the **Parse JSON** action, if not already selected. In the **Content** box, enter the JSON object that you want to parse.
- For this example, select inside the **Content** box, which opens the dynamic content list. From that list, select the previously created variable:
+ For this example, select inside the **Content** box, and then select the lightning icon, which opens the dynamic content list. From that list, select the previously created variable:
- ![Screenshot showing the designer for a Consumption workflow, the "Parse JSON" action, and the selected JSON object variable to use in the "Parse JSON" action.](./media/logic-apps-perform-data-operations/configure-parse-json-action-consumption.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/configure-parse-json-action.png" alt-text="Screenshot shows Consumption workflow, action named Parse JSON, and the selected JSON object variable to parse." lightbox="media/logic-apps-perform-data-operations/configure-parse-json-action.png":::
1. In the **Schema** box, enter the JSON schema that describes the JSON object, or *payload*, that you want to parse.
For more information about this action in your underlying workflow definition, s
}
```
- ![Screenshot showing the designer for a Consumption workflow, the "Parse JSON" action, and the JSON schema for the JSON object that you want to parse.](./media/logic-apps-perform-data-operations/provide-schema-parse-json-action-consumption.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/provide-schema-parse-json-action.png" alt-text="Screenshot shows Consumption workflow, action named Parse JSON, and JSON schema for the JSON object that you want to parse." lightbox="media/logic-apps-perform-data-operations/provide-schema-parse-json-action.png":::
If you don't have a schema, you can generate the schema from the JSON object:
For more information about this action in your underlying workflow definition, s
```json
{
  "Member": {
- "Email": "Sophia.Owen@fabrikam.com",
+ "Email": "Sophia.Owens@fabrikam.com",
"FirstName": "Sophia",
- "LastName": "Owen"
+ "LastName": "Owens"
  }
}
```
- ![Screenshot showing the designer for a Consumption workflow, the "Parse JSON" action, and the "Enter or paste a sample JSON payload" box with the JSON entered to generate the schema.](./media/logic-apps-perform-data-operations/generate-schema-parse-json-action-consumption.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/generate-schema-parse-json-action.png" alt-text="Screenshot shows Consumption workflow, action named Parse JSON, and box named Enter or paste a sample JSON payload, which contains JSON sample to generate the schema." lightbox="media/logic-apps-perform-data-operations/generate-schema-parse-json-action.png":::
1. Save your workflow. On the designer toolbar, select **Save**.
For more information about this action in your underlying workflow definition, s
```json
{
  "Member": {
- "Email": "Sophia.Owen@fabrikam.com",
+ "Email": "Sophia.Owens@fabrikam.com",
"FirstName": "Sophia",
- "LastName": "Owen"
+ "LastName": "Owens"
  }
}
```
- ![Screenshot showing the Azure portal and the designer with a sample Standard workflow for the "Parse JSON" action.](./media/logic-apps-perform-data-operations/sample-start-parse-json-action-standard.png)
-
-1. In your workflow where you want to parse the JSON object, follow one of these steps:
+ :::image type="content" source="media/logic-apps-perform-data-operations/sample-start-parse-json-action-standard.png" alt-text="Screenshot shows Azure portal, Standard workflow designer, and example workflow for action named Parse JSON." lightbox="media/logic-apps-perform-data-operations/sample-start-parse-json-action-standard.png":::
- * To add an action under the last step, select the plus sign (**+**), and then select **Add an action**.
-
- * To add an action between steps, select the plus sign (**+**) between those steps, and then select **Add an action**.
+1. [Follow these general steps to find the **Data Operations** action named **Parse JSON**](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
-1. After the connector gallery opens, [follow these general steps to find the **Data Operations** action named **Parse JSON**](create-workflow-with-trigger-or-action.md?tabs=standard#add-an-action-to-run-a-task).
-
-1. After the action information box appears, in the **Content** box, enter the JSON object that you want to parse.
+1. On the designer, select the **Parse JSON** action, if not already selected. In the **Content** box, enter the JSON object that you want to parse.
For this example, select inside the **Content** box, and then select the lightning icon, which opens the dynamic content list. From that list, select the previously created variable:
- ![Screenshot showing the designer for a Standard workflow, the "Parse JSON" action, and the selected JSON object variable to use in the "Parse JSON" action.](./media/logic-apps-perform-data-operations/configure-parse-json-action-standard.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/configure-parse-json-action.png" alt-text="Screenshot shows Standard workflow, action named Parse JSON, and the selected JSON object variable to parse." lightbox="media/logic-apps-perform-data-operations/configure-parse-json-action.png":::
1. In the **Schema** box, enter the JSON schema that describes the JSON object, or *payload*, that you want to parse.
For more information about this action in your underlying workflow definition, s
}
```
- ![Screenshot showing the designer for a Standard workflow, the "Parse JSON" action, and the JSON schema for the JSON object that you want to parse.](./media/logic-apps-perform-data-operations/provide-schema-parse-json-action-standard.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/provide-schema-parse-json-action.png" alt-text="Screenshot shows Standard workflow, action named Parse JSON, and JSON schema for the JSON object that you want to parse." lightbox="media/logic-apps-perform-data-operations/provide-schema-parse-json-action.png":::
If you don't have a schema, you can generate the schema from the JSON object:
For more information about this action in your underlying workflow definition, s
```json
{
  "Member": {
- "Email": "Sophia.Owen@fabrikam.com",
+ "Email": "Sophia.Owens@fabrikam.com",
"FirstName": "Sophia",
- "LastName": "Owen"
+ "LastName": "Owens"
  }
}
```
- ![Screenshot showing the designer for a Standard workflow, the "Parse JSON" action, and the "Enter or paste a sample JSON payload" box with the JSON entered to generate the schema.](./media/logic-apps-perform-data-operations/generate-schema-parse-json-action-standard.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/generate-schema-parse-json-action.png" alt-text="Screenshot shows Standard workflow, action named Parse JSON, and box named Enter or paste a sample JSON payload, which contains JSON sample to generate the schema." lightbox="media/logic-apps-perform-data-operations/generate-schema-parse-json-action.png":::
1. Save your workflow. On the designer toolbar, select **Save**.
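In code view, the **Parse JSON** action uses the **ParseJson** action type. The following snippet is only a sketch: the source variable name `myJSONObject` is a placeholder, and the schema shown is roughly what generation from the sample **Member** payload would produce:

```json
"Parse_JSON": {
    "type": "ParseJson",
    "inputs": {
        "content": "@variables('myJSONObject')",
        "schema": {
            "type": "object",
            "properties": {
                "Member": {
                    "type": "object",
                    "properties": {
                        "Email": { "type": "string" },
                        "FirstName": { "type": "string" },
                        "LastName": { "type": "string" }
                    }
                }
            }
        }
    },
    "runAfter": {
        "Initialize_variable": [ "Succeeded" ]
    }
}
```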
For more information about this action in your underlying workflow definition, s
To confirm whether the **Parse JSON** action creates the expected results, send yourself a notification that includes output from the **Parse JSON** action.
-#### [Consumption](#tab/consumption)
-
1. In your workflow, add an action that can send you the results from the **Parse JSON** action.

   This example continues by using the Office 365 Outlook action named **Send an email**.
-1. In this action, for each edit box where you want the results to appear, select inside each box, which opens the dynamic content list. From that list, under the **Parse JSON** action, you can now select the properties from the parsed JSON object.
+1. In this action, for each box where you want the results to appear, select inside each edit box, and then select the lightning icon, which opens the dynamic content list. From that list, under the **Parse JSON** action, select the properties from the parsed JSON object.
- This example selects the following properties: **FirstName**, **LastName**, and **Email**
+ This example selects the following properties: **Body FirstName**, **Body LastName**, and **Body Email**
- ![Screenshot showing a Consumption workflow with JSON properties in the "Send an email" action.](./media/logic-apps-perform-data-operations/send-email-parse-json-action-consumption.png)
-
- > [!NOTE]
- >
- > If the dynamic content list shows the message that **We can't find any outputs to match this input format**,
- > select **See more** next to the **Parse JSON** label in the list.
- >
- > ![Screenshot showing a Standard workflow and the dynamic content list with "See more" selected for the "Parse JSON" action.](./media/logic-apps-perform-data-operations/send-email-parse-json-action-see-more.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/send-email-parse-json-action.png" alt-text="Screenshot shows Standard workflow with JSON properties in the action named Send an email." lightbox="media/logic-apps-perform-data-operations/send-email-parse-json-action.png":::
When you're done, the **Send an email** action looks similar to the following example:
- ![Screenshot showing a Consumption workflow with the finished "Send an email" action for the "Parse JSON" action.](./media/logic-apps-perform-data-operations/send-email-parse-json-action-2-consumption.png)
-
-1. Save your workflow, and then manually run your workflow. On the designer toolbar, select **Run Trigger** > **Run**.
-
-#### [Standard](#tab/standard)
+ :::image type="content" source="media/logic-apps-perform-data-operations/send-email-parse-json-action-complete.png" alt-text="Screenshot shows workflow with finished action named Send an email for action named Parse JSON." lightbox="media/logic-apps-perform-data-operations/send-email-parse-json-action-complete.png":::
-1. In your workflow, add an action that can send you the results from the **Parse JSON** action.
-
- This example continues by using the Office 365 Outlook action named **Send an email**.
+1. Save your workflow, and then manually run your workflow.
-1. In this action, for each box where you want the results to appear, select inside each edit box, which opens the dynamic content list. From that list, under the **Parse JSON** action, you can now select the properties from the parsed JSON object.
+ - Consumption workflow: On the designer toolbar, select **Run** > **Run**.
+ - Standard workflow: On the workflow navigation menu, select **Overview**. On the **Overview** page toolbar, select **Run** > **Run**.
- This example selects the following properties: **FirstName**, **LastName**, and **Email**
+If you used the Office 365 Outlook action, the following example shows the result:
- ![Screenshot showing a Standard workflow with JSON properties in the "Send an email" action.](./media/logic-apps-perform-data-operations/send-email-parse-json-action-standard.png)
-
- > [!NOTE]
- >
- > If the dynamic content list shows the message that **We can't find any outputs to match this input format**,
- > select **See more** next to the **Parse JSON** label in the list.
- >
- > ![Screenshot showing a Standard workflow and the dynamic content list with "See more" selected for the "Parse JSON" action.](./media/logic-apps-perform-data-operations/send-email-parse-json-action-see-more.png)
-
- When you're done, the **Send an email** action looks similar to the following example:
-
- ![Screenshot showing a Standard workflow with the finished "Send an email" action for the "Parse JSON" action.](./media/logic-apps-perform-data-operations/send-email-parse-json-action-complete-standard.png)
-
-1. Save your workflow, and then manually run your workflow. On the workflow navigation menu, select **Overview** > **Run Trigger** > **Run**.
---
-If you used the Office 365 Outlook action, you get a result similar to the following screenshot:
-
-![Screenshot showing an email with the "Parse JSON" action results.](./media/logic-apps-perform-data-operations/parse-json-email-results.png)
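In code view, the dynamic content tokens selected from the **Parse JSON** outputs resolve to expressions similar to the following sketch. The property paths assume the sample **Member** payload and an action named **Parse_JSON**; the field labels are placeholders for wherever you place the tokens in the email:

```json
{
    "First name": "@{body('Parse_JSON')?['Member']?['FirstName']}",
    "Last name": "@{body('Parse_JSON')?['Member']?['LastName']}",
    "Email": "@{body('Parse_JSON')?['Member']?['Email']}"
}
```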
<a name="select-action"></a>
If you used the Office 365 Outlook action, you get a result similar to the follo
By default, the **Select** action creates an array that contains JSON objects built from the values in an existing array. For example, you can create a JSON object for each value in an integer array by specifying the properties that each JSON object must have and mapping the values from the source array to those properties. Although you can change the component JSON objects, the output array always has the same number of items as the source array. To use the output array from the **Select** action, subsequent actions must either accept arrays as input, or you might have to transform the output array into another compatible format.
-To try the **Select** action, follow these steps by using the workflow designer. Or, if you prefer working in the code view editor, you can copy the example **Select** and **Initialize variable** action definitions from this guide into your own logic app's underlying workflow definition: [Data operation code examples - Select](../logic-apps/logic-apps-data-operations-code-samples.md#select-action-example). For more information about this action in your underlying workflow definition, see [Select action](logic-apps-workflow-actions-triggers.md#select-action).
+To try the **Select** action, follow these steps by using the workflow designer. Or, if you prefer working in the code view editor, you can copy the example **Select** and **Initialize variable** action definitions from this guide into your own logic app's underlying workflow definition: [Data operation code examples - Select](logic-apps-data-operations-code-samples.md#select-action-example). For more information about this action in your underlying workflow definition, see [Select action](logic-apps-workflow-actions-triggers.md#select-action).
> [!TIP]
>
> For an example that creates an array with strings or integers built from the values in a JSON object array,
> see the **Select** and **Initialize variable** action definitions in
-> [Data operation code examples - Select](../logic-apps/logic-apps-data-operations-code-samples.md#select-action-example).
+> [Data operation code examples - Select](logic-apps-data-operations-code-samples.md#select-action-example).
### [Consumption](#tab/consumption)
To try the **Select** action, follow these steps by using the workflow designer.
This example uses the Azure portal and a sample workflow with the **Recurrence** trigger followed by an **Initialize variable** action. The action is set up to create a variable where the initial value is an array that has some sample integers.
- ![Screenshot showing the Azure portal and the designer with a sample Consumption workflow for the "Select" action.](./media/logic-apps-perform-data-operations/sample-start-select-action-consumption.png)
-
-1. In your workflow where you want to create the JSON object array, follow one of these steps:
-
- * To add an action under the last step, select **New step**.
-
- * To add an action between steps, move your mouse over the connecting arrow so the plus sign (**+**) appears. Select the plus sign, and then select **Add an action**.
-
-1. Under the **Choose an operation** search box, select **Built-in**. In the search box, enter **select**.
+ | Operation | Properties and values |
+ |--|--|
+ | **Initialize variable** | - **Name**: myIntegerArray <br>- **Type**: Array <br>- **Value**: `[1,2,3,4]` |
-1. From the actions list, select the action named **Select**.
+ :::image type="content" source="media/logic-apps-perform-data-operations/sample-start-select-action-consumption.png" alt-text="Screenshot shows Azure portal, Consumption workflow designer, and example workflow for the action named Select." lightbox="media/logic-apps-perform-data-operations/sample-start-select-action-consumption.png":::
- ![Screenshot showing the designer for a Consumption workflow, the "Choose an operation" search box, and the "Select" action selected.](./media/logic-apps-perform-data-operations/select-select-action-consumption.png)
+1. [Follow these general steps to find the **Data Operations** action named **Select**](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
-1. In the **From** box, enter the source array that you want to use.
+1. On the designer, select the **Select** action, if not already selected. In the **From** box, enter the source array that you want to use.
- For this example, select inside the **From** box, which opens the dynamic content list. From that list, select the previously created variable:
+ For this example, select inside the **From** box, and then select the lightning icon, which opens the dynamic content list. From that list, select the previously created variable:
- ![Screenshot showing the designer for a Consumption workflow, the "Select" action, and the selected source array variable to use in the "Select" action.](./media/logic-apps-perform-data-operations/configure-select-action-consumption.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/configure-select-action.png" alt-text="Screenshot shows Consumption workflow, action named Select, and the selected source array variable to use." lightbox="media/logic-apps-perform-data-operations/configure-select-action.png":::
1. For the **Map** property, in the left column, enter a property name to describe all the values in the source array.
To try the **Select** action, follow these steps by using the workflow designer.
This example uses the [**item()** function](workflow-definition-language-functions-reference.md#item) to iterate through and access each item in the array.
- 1. Select inside the right column, which opens the dynamic content list.
-
- 1. From that list, select **Expression** to open the expression editor instead.
+ 1. Select inside the right column, and then select the function icon, which opens the expression editor. Make sure that the **Function** list appears selected.
- 1. In the expression editor, enter the function named **item()**, and then select **OK**.
+ 1. In the expression editor, enter the function named **item()**, and then select **Add**.
- ![Screenshot showing the designer for a Consumption workflow, the "Select" action, and the JSON object property and values to create the JSON object array.](./media/logic-apps-perform-data-operations/configure-select-action-2-consumption.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/configure-select-action-expression.png" alt-text="Screenshot shows Consumption workflow, the action named Select, and the JSON object property and values to create the JSON object array." lightbox="media/logic-apps-perform-data-operations/configure-select-action-expression.png":::
The **Select** action now appears similar to the following example:
- ![Screenshot showing the "Select" action in a Consumption workflow and the finished example for the "Select" action.](./media/logic-apps-perform-data-operations/finished-select-action-consumption.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/finished-select-action.png" alt-text="Screenshot shows Consumption workflow and the finished example action named Select." lightbox="media/logic-apps-perform-data-operations/finished-select-action.png":::
1. Save your workflow. On the designer toolbar, select **Save**.
To try the **Select** action, follow these steps by using the workflow designer.
This example uses the Azure portal and a sample workflow with the **Recurrence** trigger followed by an **Initialize variable** action. The action is set up to create a variable where the initial value is an array that has some sample integers.
- ![Screenshot showing the Azure portal and the designer with a sample Standard workflow for the "Select" action.](./media/logic-apps-perform-data-operations/sample-start-select-action-standard.png)
+ | Operation | Properties and values |
+ |--|--|
+ | **Initialize variable** | - **Name**: myIntegerArray <br>- **Type**: Array <br>- **Value**: `[1,2,3,4]` |
-1. In your workflow where you want to create the JSON object array, follow one of these steps:
+ :::image type="content" source="media/logic-apps-perform-data-operations/sample-start-select-action-standard.png" alt-text="Screenshot shows Azure portal, Standard workflow designer, and example workflow for the action named Select." lightbox="media/logic-apps-perform-data-operations/sample-start-select-action-standard.png":::
- * To add an action under the last step, select the plus sign (**+**), and then select **Add an action**.
+1. [Follow these general steps to find the **Data Operations** action named **Select**](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
- * To add an action between steps, select the plus sign (**+**) between those steps, and then select **Add an action**.
-
-1. After the connector gallery opens, [follow these general steps to find the **Data Operations** action named **Select**](create-workflow-with-trigger-or-action.md?tabs=standard#add-an-action-to-run-a-task).
-
-1. After the action information box appears, in the **From** box, enter the source array that you want to use.
+1. On the designer, select the **Select** action, if not already selected. In the **From** box, enter the source array that you want to use.
For this example, select inside the **From** box, and then select the lightning icon, which opens the dynamic content list. From that list, select the previously created variable:
- ![Screenshot showing the designer for a Standard workflow, the "Select" action, and the selected source array variable to use in the "Select" action.](./media/logic-apps-perform-data-operations/configure-select-action-standard.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/configure-select-action.png" alt-text="Screenshot shows Standard workflow, action named Select, and the selected source array variable to use." lightbox="media/logic-apps-perform-data-operations/configure-select-action.png":::
1. For the **Map** property, in the left column, enter a property name to describe all the values in the source array.
To try the **Select** action, follow these steps by using the workflow designer.
1. In the expression editor, enter the function named **item()**, and then select **Add**.
- ![Screenshot showing the designer for a Standard workflow, the "Select" action, and the JSON object property and values to create the JSON object array.](./media/logic-apps-perform-data-operations/configure-select-action-2-standard.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/configure-select-action-expression.png" alt-text="Screenshot shows Standard workflow, the action named Select, and the JSON object property and values to create the JSON object array." lightbox="media/logic-apps-perform-data-operations/configure-select-action-expression.png":::
The **Select** action now appears similar to the following example:
- ![Screenshot showing the "Select" action in a Standard workflow and the finished example for the "Select" action.](./media/logic-apps-perform-data-operations/finished-select-action-standard.png)
-
-1. Save your workflow. On the designer toolbar, select **Save**.
+ :::image type="content" source="media/logic-apps-perform-data-operations/finished-select-action.png" alt-text="Screenshot shows Standard workflow and the finished example action named Select." lightbox="media/logic-apps-perform-data-operations/finished-select-action.png":::
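In code view, the **Select** action definition looks similar to the following minimal sketch. The `Product_ID` key is only an example name for the **Map** property's left column; substitute whatever property name you entered:

```json
"Select": {
    "type": "Select",
    "inputs": {
        "from": "@variables('myIntegerArray')",
        "select": {
            "Product_ID": "@item()"
        }
    },
    "runAfter": {
        "Initialize_variable": [ "Succeeded" ]
    }
}
```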
To try the **Select** action, follow these steps by using the workflow designer.
To confirm whether the **Select** action creates the expected results, send yourself a notification that includes output from the **Select** action.
-#### [Consumption](#tab/consumption)
-
-1. In your workflow, add an action that can send you the results from the **Select** action.
-
- This example continues by using the Office 365 Outlook action named **Send an email**.
-
-1. In this action, complete the following steps:
-
- 1. For each box where you want the results to appear, select inside each box, which opens the dynamic content list.
-
- 1. From that list, select **Expression** to open the expression editor instead.
-
- 1. To get the array output from the **Select** action, enter the following expression, which uses the [**actionBody()** function](workflow-definition-language-functions-reference.md#actionBody) with the **Select** action name, and select **OK**:
-
- `actionBody('Select')`
-
- ![Screenshot showing a Consumption workflow with the "Send an email" action and the action outputs from the "Select" action.](./media/logic-apps-perform-data-operations/send-email-select-action-consumption.png)
-
- The resolved expression specifies to show the outputs from the **Select** action in the email body when sent:
-
- ![Screenshot showing a Consumption workflow with the finished "Send an email" action for the "Select" action.](./media/logic-apps-perform-data-operations/send-email-select-action-complete-consumption.png)
-
- When you're done, the **Send an email** action looks similar to the following example:
-
-1. Save your workflow, and then manually run your workflow. On the designer toolbar, select **Run Trigger** > **Run**.
-
-#### [Standard](#tab/standard)
-
1. In your workflow, add an action that can send you the results from the **Select** action.
1. In this action, complete the following steps:
   1. For each box where you want the results to appear, select inside each box, and then select the function icon, which opens the expression editor. Make sure that the **Function** list appears selected.
- 1. To get the array output from the **Select** action, enter the following expression, which uses the [**actionBody()** function](workflow-definition-language-functions-reference.md#actionBody) with the **Select** action name, and select **OK**:
+ 1. To get the array output from the **Select** action, enter the following expression, which uses the [**body()** function](workflow-definition-language-functions-reference.md#body) with the **Select** action name, and select **Add**:
- `actionBody('Select')`
+ `body('Select')`
- ![Screenshot showing a Standard workflow with the "Send an email" action and the action outputs from the "Select" action.](./media/logic-apps-perform-data-operations/send-email-select-action-standard.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/send-email-select-action.png" alt-text="Screenshot shows workflow with action named Send an email, and action outputs from the Select action.":::
- The resolved expression specifies to show the outputs from the **Filter_array** action in the email body when sent:
+ The resolved expression specifies to show the outputs from the **Select** action in the email body when sent:
- ![Screenshot showing a Standard workflow with the finished "Send an email" action for the "Select" action.](./media/logic-apps-perform-data-operations/send-email-select-action-complete-standard.png)
+ :::image type="content" source="media/logic-apps-perform-data-operations/send-email-select-action-complete.png" alt-text="Screenshot shows workflow and finished action named Send an email for the Select action." lightbox="media/logic-apps-perform-data-operations/send-email-select-action-complete.png":::
-1. Save your workflow, and then manually run your workflow. On the workflow navigation menu, select **Overview** > **Run Trigger** > **Run**.
+1. Save your workflow, and then manually run your workflow.
-
+ - Consumption workflow: On the designer toolbar, select **Run** > **Run**.
+ - Standard workflow: On the workflow navigation menu, select **Overview**. On the **Overview** page toolbar, select **Run** > **Run**.
-If you used the Office 365 Outlook action, you get a result similar to the following screenshot:
+If you used the Office 365 Outlook action, the following example shows the result:
-![Screenshot showing an email with the "Select" action results.](./media/logic-apps-perform-data-operations/select-email-results.png)
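As a rough illustration of what that result contains, if the source array variable held a few fruit names and the **Map** property paired a property name with `item()`, the email body would show a JSON object array along these lines. The property name and values here are hypothetical placeholders, not the article's exact sample data:

```json
[
  {
    "Fruit_Name": "Apples"
  },
  {
    "Fruit_Name": "Oranges"
  }
]
```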
## Troubleshooting
For example:
} } ```
-## Next steps
+## Related content
* [Managed connectors for Azure Logic Apps](../connectors/managed.md)
* [Built-in connectors for Azure Logic Apps](../connectors/built-in.md)
logic-apps Workflow Definition Language Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/workflow-definition-language-functions-reference.md
For the full reference about each function, see the
| Workflow function | Task |
| -- | - |
| [action](../logic-apps/workflow-definition-language-functions-reference.md#action) | Return the current action's output at runtime, or values from other JSON name-and-value pairs. See also [actions](../logic-apps/workflow-definition-language-functions-reference.md#actions). |
-| [actionBody](../logic-apps/workflow-definition-language-functions-reference.md#actionBody) | Return an action's `body` output at runtime. See also [body](../logic-apps/workflow-definition-language-functions-reference.md#body). |
-| [actionOutputs](../logic-apps/workflow-definition-language-functions-reference.md#actionOutputs) | Return an action's output at runtime. See [outputs](../logic-apps/workflow-definition-language-functions-reference.md#outputs) and [actions](../logic-apps/workflow-definition-language-functions-reference.md#actions). |
| [actions](../logic-apps/workflow-definition-language-functions-reference.md#actions) | Return an action's output at runtime, or values from other JSON name-and-value pairs. See also [action](../logic-apps/workflow-definition-language-functions-reference.md#action). |
-| [body](#body) | Return an action's `body` output at runtime. See also [actionBody](../logic-apps/workflow-definition-language-functions-reference.md#actionBody). |
+| [body](#body) | Return an action's `body` output at runtime. |
| [formDataMultiValues](../logic-apps/workflow-definition-language-functions-reference.md#formDataMultiValues) | Create an array with the values that match a key name in *form-data* or *form-encoded* action outputs. |
| [formDataValue](../logic-apps/workflow-definition-language-functions-reference.md#formDataValue) | Return a single value that matches a key name in an action's *form-data* or *form-encoded output*. |
| [item](../logic-apps/workflow-definition-language-functions-reference.md#item) | If this function appears inside a repeating action over an array, return the current item in the array during the action's current iteration. |
action().outputs.body.<property>
| <*action-output*> | String | The output from the current action or property | ||||
-<a name="actionBody"></a>
-
-### actionBody
-
-Return an action's `body` output at runtime.
-Shorthand for `actions('<actionName>').outputs.body`.
-See [body()](#body) and [actions()](#actions).
-
-```
-actionBody('<actionName>')
-```
-
-| Parameter | Required | Type | Description |
-| | -- | - | -- |
-| <*actionName*> | Yes | String | The name for the action's `body` output that you want |
-|||||
-
-| Return value | Type | Description |
-| | --| -- |
-| <*action-body-output*> | String | The `body` output from the specified action |
-||||
-
-*Example*
-
-This example gets the `body` output from the Twitter action `Get user`:
-
-```
-actionBody('Get_user')
-```
-
-And returns this result:
-
-```json
-"body": {
- "FullName": "Contoso Corporation",
- "Location": "Generic Town, USA",
- "Id": 283541717,
- "UserName": "ContosoInc",
- "FollowersCount": 172,
- "Description": "Leading the way in transforming the digital workplace.",
- "StatusesCount": 93,
- "FriendsCount": 126,
- "FavouritesCount": 46,
- "ProfileImageUrl": "https://pbs.twimg.com/profile_images/908820389907722240/gG9zaHcd_400x400.jpg"
-}
-```
-
-<a name="actionOutputs"></a>
-
-### actionOutputs
-
-Return an action's output at runtime. and is shorthand for `actions('<actionName>').outputs`. See [actions()](#actions). The `actionOutputs()` function resolves to `outputs()` in the designer, so consider using [outputs()](#outputs), rather than `actionOutputs()`. Although both functions work the same way, `outputs()` is preferred.
-
-```
-actionOutputs('<actionName>')
-```
-
-| Parameter | Required | Type | Description |
-| | -- | - | -- |
-| <*actionName*> | Yes | String | The name for the action's output that you want |
-|||||
-
-| Return value | Type | Description |
-| | --| -- |
-| <*output*> | String | The output from the specified action |
-||||
-
-*Example*
-
-This example gets the output from the Twitter action `Get user`:
-
-```
-actionOutputs('Get_user')
-```
-
-And returns this result:
-
-```json
-{
- "statusCode": 200,
- "headers": {
- "Pragma": "no-cache",
- "Vary": "Accept-Encoding",
- "x-ms-request-id": "a916ec8f52211265d98159adde2efe0b",
- "X-Content-Type-Options": "nosniff",
- "Timing-Allow-Origin": "*",
- "Cache-Control": "no-cache",
- "Date": "Mon, 09 Apr 2018 18:47:12 GMT",
- "Set-Cookie": "ARRAffinity=b9400932367ab5e3b6802e3d6158afffb12fcde8666715f5a5fbd4142d0f0b7d;Path=/;HttpOnly;Domain=twitter-wus.azconn-wus.p.azurewebsites.net",
- "X-AspNet-Version": "4.0.30319",
- "X-Powered-By": "ASP.NET",
- "Content-Type": "application/json; charset=utf-8",
- "Expires": "-1",
- "Content-Length": "339"
- },
- "body": {
- "FullName": "Contoso Corporation",
- "Location": "Generic Town, USA",
- "Id": 283541717,
- "UserName": "ContosoInc",
- "FollowersCount": 172,
- "Description": "Leading the way in transforming the digital workplace.",
- "StatusesCount": 93,
- "FriendsCount": 126,
- "FavouritesCount": 46,
- "ProfileImageUrl": "https://pbs.twimg.com/profile_images/908820389907722240/gG9zaHcd_400x400.jpg"
- }
-}
-```
<a name="actions"></a>
### actions
or values from other JSON name-and-value pairs,
which you can assign to an expression. By default, the function references the entire action object, but you can optionally specify a property whose value you want.
-For shorthand versions, see [actionBody()](#actionBody),
-[actionOutputs()](#actionOutputs), and [body()](#body).
-For the current action, see [action()](#action).
+For shorthand versions, see [body()](#body). For the current action, see [action()](#action).
> [!TIP]
> The `actions()` function returns output as a string. If you need to work with a returned value as a JSON object, you first need to convert the string value. You can transform the string value into a JSON object using the [Parse JSON action](logic-apps-perform-data-operations.md#parse-json-action).
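As a quick sketch of how the shorthand forms relate, the following two expressions are equivalent; `Select` is just an example action name taken from the data operations article earlier in this digest:

```
body('Select')
actions('Select').outputs.body
```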
You can use this function expression to send the string bytes with the `applicat
### body
-Return an action's `body` output at runtime. Shorthand for `actions('<actionName>').outputs.body`. See [actionBody()](#actionBody) and [actions()](#actions).
+Return an action's `body` output at runtime. Shorthand for `actions('<actionName>').outputs.body`. See [actions()](#actions).
``` body('<actionName>')
And return these results:
### outputs
-Return an action's outputs at runtime. Use this function, rather than `actionOutputs()`, which resolves to `outputs()` in the designer. Although both functions work the same way, `outputs()` is preferred.
+Return an action's outputs at runtime.
``` outputs('<actionName>')
machine-learning Apache Spark Azure Ml Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/apache-spark-azure-ml-concepts.md
Title: "Apache Spark in Azure Machine Learning"
description: This article explains the options for accessing Apache Spark in Azure Machine Learning. -+
machine-learning Apache Spark Environment Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/apache-spark-environment-configuration.md
description: Learn how to configure your Apache Spark environment for interactiv
-+ Last updated 04/19/2024
machine-learning Concept Deep Learning Vs Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-deep-learning-vs-machine-learning.md
Title: 'Deep learning vs. machine learning'
description: Learn how deep learning relates to machine learning and AI. In Azure Machine Learning, use deep learning models for fraud detection, object detection, and more. -+
machine-learning Concept Model Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-model-catalog.md
Title: Model Catalog and Collections
description: Overview of models in the model catalog. -+
machine-learning Concept Top Level Entities In Managed Feature Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-top-level-entities-in-managed-feature-store.md
description: Learn about how Azure Machine Learning uses managed feature stores
-+ Last updated 10/31/2023
machine-learning Concept Train Machine Learning Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-train-machine-learning-model.md
Title: 'Build & train models'
description: Learn how to train models with Azure Machine Learning. Explore the different training methods and choose the right one for your project. -+
machine-learning Concept What Is Managed Feature Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-what-is-managed-feature-store.md
description: Learn about the managed feature store in Azure Machine Learning
-+
machine-learning Feature Retrieval Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/feature-retrieval-concepts.md
Title: Feature retrieval specification and usage in training and inference description: The feature retrieval specification, and how to use it for training and inference tasks.-+
machine-learning Feature Set Materialization Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/feature-set-materialization-concepts.md
Title: Feature set materialization concepts description: Build and use feature set materialization resources.-+
machine-learning Feature Set Specification Transformation Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/feature-set-specification-transformation-concepts.md
Title: Feature set specification transformation concepts description: The feature set specification, transformations, and best practices.-+
machine-learning How To Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-connection.md
Title: Create connections to external data sources (preview)
description: Learn how to use connections to connect to External data sources for training with Azure Machine Learning. -+
machine-learning How To Create Data Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-data-assets.md
Title: Create Data Assets description: Learn how to create Azure Machine Learning data assets-+
machine-learning How To Datastore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-datastore.md
Title: Use datastores
description: Learn how to use datastores to connect to Azure storage services during training with Azure Machine Learning. -+
machine-learning How To Deploy Models From Huggingface https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-models-from-huggingface.md
Title: Deploy models from HuggingFace hub to Azure Machine Learning online endpo
description: Deploy and score transformers based large language models from the Hugging Face hub. -+
machine-learning How To Export Delete Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-export-delete-data.md
Title: Export or delete workspace data
description: Learn how to export or delete your workspace with the Azure Machine Learning studio. -+
machine-learning How To Import Data Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-import-data-assets.md
Title: Import data (preview)
description: Learn how to import data from external sources to the Azure Machine Learning platform. -+
machine-learning How To Label Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-label-data.md
description: Use data labeling tools to rapidly label text or label images for a
-+ Last updated 08/16/2023
machine-learning How To Manage Imported Data Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-imported-data-assets.md
Title: Manage imported data assets (preview)
description: Learn how to manage imported data assets also known as edit autodeletion. -+ Previously updated : 06/19/2023 Last updated : 07/30/2024 # Manage imported data assets (preview)+ [!INCLUDE [dev v2](includes/machine-learning-dev-v2.md)]
-In this article, you'll learn how to manage imported data assets from a life-cycle perspective. We learn how to modify or update auto delete settings on the data assets imported into a managed datastore (`workspacemanagedstore`) that Microsoft manages for the customer.
+In this article, you learn how to manage imported data assets from a lifecycle perspective. You learn how to modify or update auto delete settings on data assets imported into a managed datastore (`workspacemanagedstore`) that Microsoft manages for the customer.
> [!NOTE] > Auto delete settings capability, or lifecycle management, is currently offered only through the imported data assets in managed datastore, also known as `workspacemanagedstore`.
These steps describe how to modify the auto delete settings of an imported data
1. Navigate to [Azure Machine Learning studio](https://ml.azure.com)
-1. As shown in the next screenshot, under **Assets** in the left navigation, select **Data**. At the **Data assets** tab, select an imported data asset located in the **workspacemanageddatastore**
+1. As shown in the next screenshot, under **Assets** in the left navigation, select **Data**. At the **Data assets** tab, select an imported data asset located in the **workspacemanageddatastore**.
:::image type="content" source="./media/how-to-manage-imported-data-assets/data-assets-list.png" lightbox="./media/how-to-manage-imported-data-assets/data-assets-list.png" alt-text="Screenshot highlighting the imported data asset name in workspace managed datastore in the Data assets tab.":::
These steps describe how to modify the auto delete settings of an imported data
:::image type="content" source="./media/how-to-manage-imported-data-assets/data-assets-details.png" lightbox="./media/how-to-manage-imported-data-assets/data-assets-details.png" alt-text="Screenshot showing the edit of the auto delete setting.":::
-1. To change the auto delete **Condition** setting, select **Created greater than**, and change **Value** to any numerical value. Then, select **Save** as shown in the next screenshot:
+1. To change the auto delete **Condition** setting, select **Created greater than**, and change **Value** to any numeric value. Then, select **Save** as shown in this screenshot:
:::image type="content" source="./media/how-to-manage-imported-data-assets/edit-managed-data-asset-details.png" lightbox="./media/how-to-manage-imported-data-assets/edit-managed-data-asset-details.png" alt-text="Screenshot that shows the managed data asset auto delete settings choices."::: > [!NOTE] > At this time, the supported values range from 1 day to 3 years.
-1. After a successful edit, you'll return to the data asset detail page. This page shows the updated values in **Auto delete settings** property box, as shown in the next screenshot:
+1. After a successful edit, you'll return to the data asset detail page. That page shows the updated values in the **Auto delete settings** property box, as shown in the next screenshot:
:::image type="content" source="./media/how-to-manage-imported-data-assets/new-managed-data-asset-details.png" lightbox="./media/how-to-manage-imported-data-assets/new-managed-data-asset-details.png" alt-text="Screenshot showing the managed data asset auto delete settings."::: > [!NOTE]
- > The auto delete setting is available only on imported data assets in a workspacemanaged datastore, as shown in the above screenshot.
+ > The auto delete setting is available only on imported data assets in a **workspacemanaged** datastore, as shown in the above screenshot.
These steps describe how to delete or clear the auto delete settings of an impor
:::image type="content" source="./media/how-to-manage-imported-data-assets/data-assets-list.png" lightbox="./media/how-to-manage-imported-data-assets/data-assets-list.png" alt-text="Screenshot highlighting the imported data asset name in workspace managed datastore in the Data assets tab.":::
-1. As shown in the next screenshot, the details page of the data asset has an **Auto delete setting** property. This property is currently active on the data asset. Verify that you have the correct **Version:** of the data asset selected in the drop-down, and select the pencil icon to edit the property.
+1. As shown in the next screenshot, the data asset details page has an **Auto delete setting** property. This property is currently active on the data asset. Verify that you have the correct **Version:** of the data asset selected in the drop-down, and select the pencil icon to edit the property.
:::image type="content" source="./media/how-to-manage-imported-data-assets/data-assets-details.png" lightbox="./media/how-to-manage-imported-data-assets/data-assets-details.png" alt-text="Screenshot showing the edit of the auto delete setting.":::
This Azure CLI code sample shows the data assets with certain conditions, or wit
## Next steps - [Access data in a job](how-to-read-write-data-v2.md#access-data-in-a-job)-- [Working with tables in Azure Machine Learning](how-to-mltable.md)-- [Access data from Azure cloud storage during interactive development](how-to-access-data-interactive.md)
+- [Access data from Azure cloud storage during interactive development](./how-to-access-data-interactive.md)
+- [Working with tables in Azure Machine Learning](./how-to-mltable.md)
machine-learning How To Manage Synapse Spark Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-synapse-spark-pool.md
description: Learn how to attach and manage Spark pools with Azure Synapse.
-+ Last updated 04/12/2024
machine-learning How To Mltable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-mltable.md
Title: Working with tables in Azure Machine Learning
description: Learn how to work with tables (meltable) in Azure Machine Learning. -+
machine-learning How To Network Isolation Model Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-network-isolation-model-catalog.md
Title: Use Model Catalog collections with workspace managed virtual network
description: Learn how to use the Model Catalog in an isolated network. -+
machine-learning How To Read Write Data V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-read-write-data-v2.md
Title: Access data in a job
description: Learn how to read and write data in Azure Machine Learning training jobs. -+
machine-learning How To Regulate Registry Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-regulate-registry-deployments.md
Title: Regulate deployments in Model Catalog using policies
description: Learn about using the Azure Machine Learning built-in policy to deny registry model deployments -+
machine-learning How To Schedule Data Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-schedule-data-import.md
Title: Schedule data import (preview)
description: Learn how to schedule an automated data import that brings in data from external sources. -+ Previously updated : 06/19/2023 Last updated : 07/28/2024
[!INCLUDE [dev v2](includes/machine-learning-dev-v2.md)]
-In this article, you'll learn how to programmatically schedule data imports and use the schedule UI to do the same. You can create a schedule based on elapsed time. Time-based schedules can be used to take care of routine tasks, such as importing the data regularly to keep them up-to-date. After learning how to create schedules, you'll learn how to retrieve, update and deactivate them via CLI, SDK, and studio UI.
+In this article, you learn how to schedule data imports programmatically and through the schedule UI. You can create a schedule based on elapsed time. Time-based schedules can handle routine tasks, such as regular data imports that keep your data up to date. After you learn how to create schedules, you'll learn how to retrieve, update, and deactivate them through the CLI, the SDK, and the studio UI.
## Prerequisites -- You must have an Azure subscription to use Azure Machine Learning. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
+- You need an Azure subscription to use Azure Machine Learning. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
# [Azure CLI](#tab/cli)
In this article, you'll learn how to programmatically schedule data imports and
## Schedule data import
-To import data on a recurring basis, you must create a schedule. A `Schedule` associates a data import action, and a trigger. The trigger can either be `cron` that use cron expression to describe the wait between runs or `recurrence` that specify using what frequency to trigger job. In each case, you must first define an import data definition. An existing data import, or a data import that is defined inline, works for this. Refer to [Create a data import in CLI, SDK and UI](how-to-import-data-assets.md).
+To import data on a recurring basis, you must create a schedule. A `Schedule` associates a data import action with a trigger. The trigger can either be `cron`, which uses a cron expression to describe when the schedule fires, or `recurrence`, which specifies how often to trigger a job. In each case, you must first build an import data definition. An existing data import, or a data import that is defined inline, works for this. For more information, visit [Create a data import in CLI, SDK and UI](how-to-import-data-assets.md).
## Create a schedule
import_data:
```
-`trigger` contains the following properties:
+A `trigger` contains these properties:
-- **(Required)** `type` specifies the schedule type, either `recurrence` or `cron`. See the following section for more details.
+- **(Required)** `type` specifies the schedule type, either `recurrence` or `cron`. The following section has more information.
Next, run this command in the CLI:
ml_client.schedules.begin_create_or_update(import_schedule).result()
```
`RecurrenceTrigger` contains the following properties:
-- **(Required)** To provide better coding experience, we use `RecurrenceTrigger` for recurrence schedule.
+- **(Required)** For a better coding experience, use `RecurrenceTrigger` for the recurrence schedule.
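As a minimal sketch of how that trigger might be constructed with the Python SDK, assuming the `RecurrenceTrigger` and `RecurrencePattern` classes from `azure.ai.ml.entities` and using illustrative values throughout:

```python
from azure.ai.ml.entities import RecurrenceTrigger, RecurrencePattern

# Illustrative only: fire once a day at 16:15 UTC, starting on the given date.
recurrence_trigger = RecurrenceTrigger(
    frequency="day",                                   # minute, hour, day, week, or month
    interval=1,                                        # fire every 1 <frequency>
    schedule=RecurrencePattern(hours=16, minutes=15),  # time of day to fire
    start_time="2024-08-01T00:00:00",
    time_zone="UTC",
)

# The trigger is then set on the import schedule object (for example, the
# `import_schedule` used elsewhere in this article) before calling
# ml_client.schedules.begin_create_or_update(import_schedule).result().
```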
# [Studio](#tab/azure-studio)
-When you have a data import with satisfactory performance and outputs, you can set up a schedule to automatically trigger this import.
+When your data import has satisfactory performance and outputs, you can set up a schedule to automatically trigger that import.
1. Navigate to [Azure Machine Learning studio](https://ml.azure.com)
- 1. Under **Assets** in the left navigation, select **Data**. On the **Data import** tab, select the imported data asset to which you want to attach a schedule. The **Import jobs history** page should appear, as shown in this screenshot:
+ 1. Under **Assets** in the left navigation, select **Data**. At the **Data import** tab, select the imported data asset to which you want to attach a schedule. The **Import jobs history** page should appear, as shown in this screenshot:
:::image type="content" source="./media/how-to-schedule-data-import/data-import-list.png" lightbox="./media/how-to-schedule-data-import/data-import-list.png" alt-text="Screenshot highlighting the imported data asset name in the Data imports tab.":::
- 1. At the **Import jobs history** page, select the latest **Import job name** link, to open the pipelines job details page as shown in this screenshot:
+ 1. At the **Import jobs history** page, select the latest **Import job name** hyperlink URL, to open the pipelines job details page as shown in this screenshot:
:::image type="content" source="./media/how-to-schedule-data-import/data-import-history.png" lightbox="./media/how-to-schedule-data-import/data-import-history.png" alt-text="Screenshot highlighting the imported data asset guid in the Import jobs history tab.":::
When you have a data import with satisfactory performance and outputs, you can s
- **Name**: the unique identifier of the schedule within the workspace. - **Description**: the schedule description.
- - **Trigger**: the recurrence pattern of the schedule, which includes the following properties.
+ - **Trigger**: the recurrence pattern of the schedule, which includes these properties:
- **Time zone**: the trigger time calculation is based on this time zone; (UTC) Coordinated Universal Time by default. - **Recurrence** or **Cron expression**: select recurrence to specify the recurring pattern. Under **Recurrence**, you can specify the recurrence frequency - by minutes, hours, days, weeks, or months. - **Start**: the schedule first becomes active on this date. By default, the creation date of this schedule.
When you have a data import with satisfactory performance and outputs, you can s
> [!NOTE] > These properties apply to CLI and SDK: -- **(Required)** `frequency` specifies the unit of time that describes how often the schedule fires. Can have values of `minute`, `hour`, `day`, `week`, or `month`.
+- **(Required)** `frequency` specifies the unit of time that describes how often the schedule fires. It can have these values:
+ - `minute`
+ - `hour`
+ - `day`
+ - `week`
+ - `month`
- **(Required)** `interval` specifies how often the schedule fires based on the frequency, which is the number of time units to wait until the schedule fires again.
When you have a data import with satisfactory performance and outputs, you can s
   - `hours` should be an integer or a list, ranging between 0 and 23.
   - `minutes` should be an integer or a list, ranging between 0 and 59.
   - `weekdays`: a string or a list, ranging from `monday` to `sunday`.
- - If `schedule` is omitted, the job(s) triggers according to the logic of `start_time`, `frequency` and `interval`.
+ - If `schedule` is omitted, jobs trigger according to the logic of `start_time`, `frequency`, and `interval`.
- (Optional) `start_time` describes the start date and time, with a timezone. If `start_time` is omitted, start_time equals the job creation time. For a start time in the past, the first job runs at the next calculated run time.
- (Optional) `end_time` describes the end date and time with a timezone. If `end_time` is omitted, the schedule continues to trigger jobs until the schedule is manually disabled.
-- (Optional) `time_zone` specifies the time zone of the recurrence. If omitted, the default timezone is UTC. To learn more about timezone values, see [appendix for timezone values](reference-yaml-schedule.md#appendix).
+- (Optional) `time_zone` specifies the time zone of the recurrence. If omitted, the default timezone is UTC. For more information about timezone values, visit [appendix for timezone values](reference-yaml-schedule.md#appendix).
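Taken together, a recurrence trigger in the schedule YAML might look similar to this sketch. The values are placeholders, and the rest of the schedule file, including the `import_data` definition, is omitted here:

```yaml
# Illustrative recurrence trigger only; values are placeholders.
trigger:
  type: recurrence
  frequency: day            # minute, hour, day, week, or month
  interval: 1               # fire every day
  schedule:
    hours: 10               # 0-23
    minutes: 15             # 0-59
  start_time: "2024-08-01T10:00:00"
  time_zone: "UTC"
```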
### Create a time-based schedule with cron expression
import_data:
connection: azureml:my_snowflake_connection ```
-The `trigger` section defines the schedule details and contains following properties:
+The `trigger` section defines the schedule details and contains these properties:
-- **(Required)** `type` specifies the schedule type is `cron`.
+- **(Required)** `type` specifies the `cron` schedule type.
```cli > az ml schedule create -f <file-name>.yml
ml_client.schedules.begin_create_or_update(import_schedule).result()
```
-The `CronTrigger` section defines the schedule details and contains following properties:
+The `CronTrigger` section defines the schedule details and contains these properties:
-- **(Required)** To provide better coding experience, we use `CronTrigger` for recurrence schedule.
+- **(Required)** For a better coding experience, use `CronTrigger` for the recurrence schedule.
The list continues here: # [Studio](#tab/azure-studio)
-When you have a data import with satisfactory performance and outputs, you can set up a schedule to automatically trigger this import.
+When your data import has satisfactory performance and outputs, you can set up a schedule to automatically trigger that import.
1. Navigate to [Azure Machine Learning studio](https://ml.azure.com)
When you have a data import with satisfactory performance and outputs, you can s
:::image type="content" source="./media/how-to-schedule-data-import/data-import-list.png" lightbox="./media/how-to-schedule-data-import/data-import-list.png" alt-text="Screenshot highlighting the imported data asset name in the Data imports tab.":::
- 1. At the **Import jobs history** page, select the latest **Import job name** link, to open the pipelines job details page as shown in this screenshot:
+ 1. At the **Import jobs history** page, select the latest **Import job name** link to open the pipeline job details page, as shown in this screenshot:
:::image type="content" source="./media/how-to-schedule-data-import/data-import-history.png" lightbox="./media/how-to-schedule-data-import/data-import-history.png" alt-text="Screenshot highlighting the imported data asset guid in the Import jobs history tab.":::
When you have a data import with satisfactory performance and outputs, you can s
- **Name**: the unique identifier of the schedule within the workspace. - **Description**: the schedule description.
- - **Trigger**: the recurrence pattern of the schedule, which includes the following properties.
+ - **Trigger**: the recurrence pattern of the schedule, which includes these properties:
- **Time zone**: the trigger time calculation is based on this time zone; (UTC) Coordinated Universal Time by default.
- - **Recurrence** or **Cron expression**: select recurrence to specify the recurring pattern. **Cron expression** allows you to specify more flexible and customized recurrence pattern.
+ - **Recurrence** or **Cron expression**: select recurrence to specify the recurring pattern. With **Cron expression**, you can specify a more flexible and customized recurrence pattern.
- **Start**: the schedule first becomes active on this date. By default, the creation date of this schedule. - **End**: the schedule will become inactive after this date. By default, it's NONE, which means that the schedule remains active until you manually disable it. - **Tags**: the selected schedule tags.
When you have a data import with satisfactory performance and outputs, you can s
   - A single wildcard (`*`), which covers all values for the field. A `*`, in days, means all days of a month (which varies with month and year).
   - The `expression: "15 16 * * 1"` in the sample above means 16:15 (4:15 PM) every Monday.
- - The next table lists the valid values for each field:
+ - This table lists the valid values for each field:
   | Field | Range | Comment |
   |-|-|--|
When you have a data import with satisfactory performance and outputs, you can s
   | `MONTHS` | - | Not supported. The value is ignored and treated as `*`. |
   | `DAYS-OF-WEEK` | 0-6 | Zero (0) means Sunday. Names of days also accepted. |
- - To learn more about crontab expressions, see [Crontab Expression wiki on GitHub](https://github.com/atifaziz/NCrontab/wiki/Crontab-Expression).
+ - For more information about crontab expressions, visit the [Crontab Expression wiki resource on GitHub](https://github.com/atifaziz/NCrontab/wiki/Crontab-Expression).
> [!IMPORTANT] > `DAYS` and `MONTH` are not supported. If you pass one of these values, it will be ignored and treated as `*`.
When you have a data import with satisfactory performance and outputs, you can s
- (Optional) `end_time` describes the end date and time, with a timezone. If `end_time` is omitted, the schedule continues to trigger jobs until the schedule is manually disabled.
-- (Optional) `time_zone`specifies the time zone of the expression. If omitted, the timezone is UTC by default. See [appendix for timezone values](reference-yaml-schedule.md#appendix).
+- (Optional) `time_zone`specifies the time zone of the expression. If `time_zone` is omitted, the timezone is UTC by default. For more information about timezone values, visit [appendix for timezone values](reference-yaml-schedule.md#appendix).
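As a rough sketch, a cron-based trigger that reuses the `"15 16 * * 1"` expression from earlier in this article (fire at 16:15 every Monday) might look like this in the schedule YAML; the other values are placeholders:

```yaml
# Illustrative cron trigger only; other values are placeholders.
trigger:
  type: cron
  expression: "15 16 * * 1"   # MINUTES HOURS * * DAYS-OF-WEEK
  start_time: "2024-08-01T10:00:00"
  time_zone: "UTC"
```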
Limitations:
You can select a schedule name to show the schedule details page. The schedule d
- **Overview**: basic information for the specified schedule.
- :::image type="content" source="./media/how-to-schedule-data-import/schedule-detail-overview.png" alt-text="Screenshot of the overview tab in the schedule details page." :::
+ :::image type="content" source="./media/how-to-schedule-data-import/schedule-detail-overview.png" alt-text="Screenshot of the overview tab in the schedule details page.":::
- **Job definition**: defines the job that the specified schedule triggers, as shown in this screenshot:
az ml schedule update -n simple_cron_data_import_schedule --set description="ne
``` > [!NOTE]
-> To update more than just tags/description, it is recommended to use `az ml schedule create --file update_schedule.yml`
+> To update more than just tags/description, we recommend that you use `az ml schedule create --file update_schedule.yml`.
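For instance, a sketch of that approach; the file name here is hypothetical and should point to your own updated schedule YAML definition:

```cli
az ml schedule create --file update_schedule.yml --resource-group <resource-group> --workspace-name <workspace-name>
```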
# [Python SDK](#tab/python)
To change the import frequency, or to create a new association for the data impo
1. Navigate to [Azure Machine Learning studio](https://ml.azure.com)
- 1. Under **Assets** in the left navigation, select **Data**. On the **Data import** tab, select the imported data asset to which you want to attach a schedule. Then, the **Import jobs history** page opens, as shown in this screenshot:
+ 1. Under **Assets** in the left navigation, select **Data**. On the **Data import** tab, select the imported data asset to which you want to attach a schedule. The **Import jobs history** page should appear, as shown in this screenshot:
:::image type="content" source="./media/how-to-schedule-data-import/data-import-list.png" alt-text="Screenshot highlighting the imported data asset name in the Data imports tab.":::
To change the import frequency, or to create a new association for the data impo
:::image type="content" source="./media/how-to-schedule-data-import/update-select-schedule.png" alt-text="Screenshot of update select schedule showing the select schedule tab." ::: > [!IMPORTANT]
- > Make sure to select the correct schedule to update. Once you finish the update, the schedule will trigger different data imports.
+ > Make sure you select the correct schedule to update. Once you finish the update, the schedule will trigger different data imports.
1. You can also modify the source, query and change the destination path, for future data imports that the schedule triggers.
print(job_schedule)
# [Studio](#tab/azure-studio)
-On the schedule details page, you can enable the current schedule. You can also enable schedules at the **All schedules** tab.
+At the schedule details page, you can enable the current schedule. You can also enable schedules at the **All schedules** tab.
## Delete a schedule > [!IMPORTANT]
-> A schedule must be disabled before deletion. Deletion is an unrecoverable action. After a schedule is deleted, you can never access or recover it.
+> A schedule must be disabled before deletion. Deletion is a permanent, unrecoverable action. After a schedule is deleted, you can never access or recover it.
# [Azure CLI](#tab/cli)
You can delete a schedule from the schedule details page or the all schedules ta
Schedules are generally used for production. To prevent problems, workspace admins may want to restrict schedule creation and management permissions within a workspace.
-There are currently three action rules related to schedules, and you can configure them in Azure portal. See [how to manage access to an Azure Machine Learning workspace.](how-to-assign-roles.md#create-custom-role) to learn more.
+There are currently three action rules related to schedules, and you can configure them in the Azure portal. For more information, visit [how to manage access to an Azure Machine Learning workspace](how-to-assign-roles.md#create-custom-role).
| Action | Description | Rule | |--|-||
machine-learning How To Setup Access Control Feature Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-access-control-feature-store.md
description: Learn how to access to an Azure Machine Learning managed feature st
-+
machine-learning How To Submit Spark Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-submit-spark-jobs.md
description: Learn how to submit standalone and pipeline Spark jobs in Azure Mac
-+ Last updated 10/05/2023
machine-learning How To Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-model.md
-+ Last updated 09/10/2023
machine-learning How To Train With Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-with-ui.md
Title: Create a Training Job with the job creation UI
description: Learn how to submit a training job in Azure Machine Learning studio -+
machine-learning How To Troubleshoot Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-data-access.md
-+ Last updated 02/23/2024
machine-learning How To Troubleshoot Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-environments.md
Title: Troubleshoot environment images
description: Learn how to troubleshoot issues with environment image builds and package installations. -+
This issue can happen when building wheels for mpi4py fails.
Ensure that you have a working MPI installation (preference for MPI-3 support and for MPI built with shared/dynamic libraries) * See [mpi4py installation](https://aka.ms/azureml/environment/install-mpi4py)
-* If needed, follow these [steps on building MPI](https://mpi4py.readthedocs.io/en/stable/appendix.html#building-mpi-from-sources)
+* If needed, follow these [steps on building MPI](https://mpi4py.readthedocs.io/en/stable/develop.html#building)
Ensure that you're using a compatible python version * Python 3.8+ is recommended due to older versions reaching end-of-life
machine-learning How To Tune Hyperparameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-tune-hyperparameters.md
-+ Last updated 06/25/2024
machine-learning How To Use Foundation Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-foundation-models.md
Title: How to use Open Source foundation models curated by Azure Machine Learnin
description: Learn how to discover, evaluate, fine-tune and deploy Open Source foundation models in Azure Machine Learning -+
machine-learning Interactive Data Wrangling With Apache Spark Azure Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/interactive-data-wrangling-with-apache-spark-azure-ml.md
description: Learn how to use Apache Spark to wrangle data with Azure Machine Le
-+ Last updated 10/05/2023
machine-learning Migrate To V2 Assets Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-assets-data.md
Title: 'Upgrade data management to SDK v2'
description: Upgrade data management from v1 to v2 of Azure Machine Learning SDK -+
machine-learning Migrate To V2 Resource Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-resource-compute.md
Title: 'Upgrade compute management to v2'
description: Upgrade compute management from v1 to v2 of Azure Machine Learning SDK -+
machine-learning Offline Retrieval Point In Time Join Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/offline-retrieval-point-in-time-join-concepts.md
Title: Offline feature retrieval using a point-in-time join description: Use a point-in-time join for offline feature retrieval.-+
machine-learning Concept Llmops Maturity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/concept-llmops-maturity.md
Title: Advance your maturity level for LLMOps
description: Learn about the different stages of Large Language Operations (LLMOps) and how to advance your organization's capabilities. -+
machine-learning How To End To End Llmops With Prompt Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-end-to-end-llmops-with-prompt-flow.md
git clone https://github.com/microsoft/llmops-promptflow-template.git
2. **Set up env file**: create .env file at top folder level and provide information for items mentioned. Add as many connection names as needed. All the flow examples in this repo use AzureOpenAI connection named `aoai`. Add a line `aoai={"api_key": "","api_base": "","api_type": "azure","api_version": "2024-02-01"}` with updated values for api_key and api_base. If extra connections with different names are used in your flows, they should be added accordingly. Currently, flow with AzureOpenAI as provider as supported. ```bash- experiment_name= connection_name_1={ "api_key": "","api_base": "","api_type": "azure","api_version": "2023-03-15-preview"} connection_name_2={ "api_key": "","api_base": "","api_type": "azure","api_version": "2023-03-15-preview"}
connection_name_2={ "api_key": "","api_base": "","api_type": "azure","api_versio
3. Prepare the local conda or virtual environment to install the dependencies. ```bash- python -m pip install promptflow promptflow-tools promptflow-sdk jinja2 promptflow[azure] openai promptflow-sdk[builtins] python-dotenv- ``` 4. Bring or write your flows into the template based on documentation [here](https://github.com/microsoft/llmops-promptflow-template/blob/main/docs/how_to_onboard_new_flows.md).
machine-learning How To Manage Compute Session https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-manage-compute-session.md
Title: Manage prompt flow compute session
description: Learn how to manage prompt flow compute session in Azure Machine Learning studio. -+ - build-2024
machine-learning How To Monitor Generative Ai Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-monitor-generative-ai-applications.md
description: Monitor the safety and quality of generative AI applications deploy
-+ reviewer: s-polly
machine-learning Quickstart Spark Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/quickstart-spark-jobs.md
description: Learn how to submit Apache Spark jobs with Azure Machine Learning.
-+
machine-learning Reference Yaml Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-data.md
Title: 'CLI (v2) data YAML schema'
description: Reference documentation for the CLI (v2) data YAML schema. -+
machine-learning Reference Yaml Datastore Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-datastore-blob.md
Title: 'CLI (v2) Azure Blob datastore YAML schema'
description: Reference documentation for the CLI (v2) Azure Blob datastore YAML schema. -+
machine-learning Reference Yaml Datastore Data Lake Gen1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-datastore-data-lake-gen1.md
Title: 'CLI (v2) Azure Data Lake Gen1 datastore YAML schema'
description: Reference documentation for the CLI (v2) Azure Data Lake Gen1 datastore YAML schema. -+
machine-learning Reference Yaml Datastore Data Lake Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-datastore-data-lake-gen2.md
Title: 'CLI (v2) Azure Data Lake Gen2 datastore YAML schema'
description: Reference documentation for the CLI (v2) Azure Data Lake Gen2 datastore YAML schema. -+
machine-learning Reference Yaml Feature Entity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-feature-entity.md
Title: 'CLI (v2) feature entity YAML schema'
description: Reference documentation for the CLI (v2) feature entity YAML schema. -+
machine-learning Reference Yaml Feature Retrieval Spec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-feature-retrieval-spec.md
Title: 'CLI (v2) feature retrieval specification YAML schema'
description: Reference documentation for the CLI (v2) feature retrieval specification YAML schema. -+
machine-learning Reference Yaml Feature Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-feature-set.md
Title: 'CLI (v2) feature set YAML schema'
description: Reference documentation for the CLI (v2) feature set YAML schema. -+
machine-learning Reference Yaml Feature Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-feature-store.md
Title: 'CLI (v2) feature store YAML schema'
description: Reference documentation for the CLI (v2) feature store YAML schema. -+
machine-learning Reference Yaml Featureset Spec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-featureset-spec.md
Title: 'CLI (v2) feature set specification YAML schema'
description: Reference documentation for the CLI (v2) feature set spec YAML schema. -+
machine-learning Reference Yaml Mltable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-mltable.md
Title: 'CLI (v2) MLtable YAML schema'
description: Reference documentation for the CLI (v2) MLTable YAML schema. -+
machine-learning Troubleshooting Managed Feature Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/troubleshooting-managed-feature-store.md
Title: Troubleshoot managed feature store errors description: Information required to troubleshoot common errors with the managed feature store in Azure Machine Learning. -+
machine-learning Algorithm Cheat Sheet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/algorithm-cheat-sheet.md
Title: Machine Learning Algorithm Cheat Sheet - designer
description: A printable Machine Learning Algorithm Cheat Sheet helps you choose the right algorithm for your predictive model in Azure Machine Learning designer. -+
machine-learning How To Data Prep Synapse Spark Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-data-prep-synapse-spark-pool.md
Title: Data wrangling with Apache Spark pools (deprecated)
description: Learn how to attach and launch Apache Spark pools for data wrangling with Azure Synapse Analytics and Azure Machine Learning. -+
machine-learning How To Designer Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-designer-import-data.md
Title: Import data into the designer
description: Learn how to import data into Azure Machine Learning designer using Azure Machine Learning datasets and the Import Data component. -+
machine-learning How To Designer Transform Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-designer-transform-data.md
Title: Transform data in the designer
description: Learn how to import and transform data in Azure Machine Learning designer to create your own datasets. -+
machine-learning How To Inference Onnx Automl Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-inference-onnx-automl-image-models.md
Title: Local inference using ONNX for AutoML image (v1) description: Use ONNX with Azure Machine Learning automated ML to make predictions on computer vision models for classification, object detection, and instance segmentation. (v1)--++
machine-learning How To Link Synapse Ml Workspaces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-link-synapse-ml-workspaces.md
Title: Create a linked service with Synapse and Azure Machine Learning workspace
description: Learn how to link Azure Synapse and Azure Machine Learning workspaces, and attach Apache Spark pools for a unified data wrangling experience. -+
machine-learning How To Monitor Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-monitor-datasets.md
Title: Detect data drift on datasets (preview)
description: Learn how to set up data drift detection in Azure Learning. Create datasets monitors (preview), monitor for data drift, and set up alerts. -+
machine-learning How To Move Data In Out Of Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-move-data-in-out-of-pipelines.md
Title: 'Moving data in ML pipelines'
description: Learn how Azure Machine Learning pipelines ingest data, and how to manage and move data between pipeline steps. -+
machine-learning How To Prepare Datasets For Automl Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-prepare-datasets-for-automl-images.md
Title: Prepare data for computer vision tasks v1 description: Image data preparation for Azure Machine Learning automated ML to train computer vision models on classification, object detection, and segmentation v1--++
machine-learning How To Select Algorithms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-select-algorithms.md
Title: How to select a machine learning algorithm
description: How to select Azure Machine Learning algorithms for supervised and unsupervised learning in clustering, classification, or regression experiments. -+
machine-learning How To Use Synapsesparkstep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-synapsesparkstep.md
Title: Use Apache Spark in a machine learning pipeline (deprecated)
description: Link your Azure Synapse Analytics workspace to your Azure Machine Learning pipeline, to use Apache Spark for data manipulation. -+
machine-learning Migrate Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/migrate-overview.md
Title: Migrate to Azure Machine Learning from Studio (classic) description: Learn how to migrate from Machine Learning Studio (classic) to Azure Machine Learning for a modernized data science platform. -+
machine-learning Migrate Rebuild Experiment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/migrate-rebuild-experiment.md
Title: 'ML Studio (classic): Migrate to Azure Machine Learning - Rebuild experiment' description: Rebuild Studio (classic) experiments in Azure Machine Learning designer. -+
machine-learning Migrate Rebuild Integrate With Client App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/migrate-rebuild-integrate-with-client-app.md
Title: 'Migrate to Azure Machine Learning - Consume pipeline endpoints' description: Learn how to integrate pipeline endpoints with client applications in Azure Machine Learning as part of migrating from Machine Learning Studio (classic). -+
machine-learning Migrate Rebuild Web Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/migrate-rebuild-web-service.md
Title: 'ML Studio (classic): Migrate to Azure Machine Learning - Rebuild web service' description: Rebuild Studio (classic) web services as pipeline endpoints in Azure Machine Learning -+
machine-learning Migrate Register Dataset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/migrate-register-dataset.md
Title: 'ML Studio (classic): Migrate to Azure Machine Learning - Rebuild dataset' description: Rebuild Studio (classic) datasets in Azure Machine Learning designer. -+
machine-learning Reference Automl Images Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/reference-automl-images-schema.md
--++ Last updated 10/13/2021
migrate Migrate Servers To Azure Using Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-servers-to-azure-using-private-link.md
The following diagram illustrates the agentless replication workflow with privat
Enable replication as follows: 1. In the Azure Migrate project > **Servers, databases and web apps** > **Migration and modernization** > **Migration tools**, select **Replicate**.
- ![Diagram that shows how to replicate servers.](./media/how-to-use-azure-migrate-with-private-endpoints/replicate-servers.png)
- 1. In **Replicate** > **Basics** > **Are your machines virtualized?**, select **Yes, with VMware vSphere**. 1. In **On-premises appliance**, select the name of the Azure Migrate appliance. Select **OK**.-
- :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/source-settings-vmware.png" alt-text="Diagram that shows how to complete source settings.":::
- 1. In **Virtual machines**, select the machines you want to replicate. To apply VM sizing and disk type from an assessment, in **Import migration settings from an Azure Migrate assessment?**, - Select **Yes**, and select the VM group and assessment name. - Select **No** if you aren't using assessment settings.
- :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/migrate-machines-vmware.png" alt-text="Diagram that shows how to select the VMs.":::
-
-1. In **Virtual machines**, select VMs you want to migrate. Then click **Next**.
-
- :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/select-vm-vmware.png" alt-text="Screenshot of selected VMs to be replicated.":::
+1. In **Virtual machines**, select VMs you want to migrate. Then select **Next**.
1. In **Target settings**, select the **target region** in which the Azure VMs will reside after migration.
- :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/target-settings.png" alt-text="Screenshot of the Target settings screen.":::
- 1. In **Replication storage account**, use the dropdown list to select a storage account to replicate over a private link. >[!NOTE] > Only the storage accounts in the selected target region and Azure Migrate project subscription are listed.
Enable replication as follows:
> To replicate VMs with CMK, you'll need to [create a disk encryption set](../virtual-machines/disks-enable-customer-managed-keys-portal.yml) under the target Resource Group. A disk encryption set object maps Managed Disks to a Key Vault that contains the CMK to use for SSE. 1. In **Azure Hybrid Benefit**:
- - Select **No** if you don't want to apply Azure Hybrid Benefit and click **Next**.
-
- - Select **Yes** if you have Windows Server machines that are covered with active Software Assurance or Windows Server subscriptions, and you want to apply the benefit to the machines you're migrating and click **Next**.
-
- :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/azure-hybrid-benefit.png" alt-text="Screenshot shows the options in Azure Hybrid Benefit.":::
+ - Select **No** if you don't want to apply Azure Hybrid Benefit and select **Next**.
+ - Select **Yes** if you have Windows Server machines that are covered with active Software Assurance or Windows Server subscriptions, and you want to apply the benefit to the machines you're migrating and select **Next**.
1. In **Compute**, review the VM name, size, OS disk type, and availability configuration (if selected in the previous step). VMs must conform with [Azure requirements](migrate-support-matrix-vmware-migration.md#azure-vm-requirements). - **VM size**: If you're using assessment recommendations, the VM size dropdown shows the recommended size. Otherwise, Azure Migrate picks a size based on the closest match in the Azure subscription. Alternatively, pick a manual size in **Azure VM size**.
Enable replication as follows:
- **Availability Set**: Specify the Availability Set to use. >[!Note] > If you want to select a different availability option for a set of virtual machines, go to step 1 and repeat the steps by selecting different availability options after starting replication for one set of virtual machines.
-1. In **Disks**, specify whether the VM disks should be replicated to Azure, and select the disk type (standard SSD/HDD or premium-managed disks) in Azure. Then click **Next**.
-
- :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/disks-agentless-vmware.png" alt-text="Screenshot shows the Disks tab of the Replicate dialog box.":::
+1. In **Disks**, specify whether the VM disks should be replicated to Azure, and select the disk type (standard SSD/HDD or premium-managed disks) in Azure. Then select **Next**.
1. In **Tags**, add tags to your migrated virtual machines, disks, and NICs.
-1. In **Review and start replication**, review the settings, and click **Replicate** to start the initial replication for the servers.
+1. In **Review and start replication**, review the settings, and select **Replicate** to start the initial replication for the servers.
Next, follow the instructions to [perform migrations](tutorial-migrate-vmware.md#run-a-test-migration). #### Provisioning for the first time
-Azure Migrate does not create any additional resources for replications using Azure Private Link (Service Bus, Key Vault, and storage accounts are not created). Azure Migrate will make use of the selected storage account for uploading replication data, state data, and orchestration messages.
+Azure Migrate doesn't create any additional resources for replications using Azure Private Link (Service Bus, Key Vault, and storage accounts aren't created). Azure Migrate will make use of the selected storage account for uploading replication data, state data, and orchestration messages.
## Create a private endpoint for the storage account
If the user who created the private endpoint is also the storage account owner,
Review the status of the private endpoint connection state before you continue. - Ensure that the on-premises appliance has network connectivity to the storage account via its private endpoint. To validate the private link connection, perform a DNS resolution of the storage account endpoint (private link resource FQDN) from the on-premises server hosting the Migrate appliance and ensure that it resolves to a private IP address. Learn how to verify [network connectivity.](./troubleshoot-network-connectivity.md#verify-dns-resolution) ## Next steps
For migrating Hyper-V VMs, the Migration and modernization tool installs softwar
- The registration key is needed to register the Hyper-V host with the Migration and modernization tool. - The key is valid for five days after you generate it. -
- ![Screenshot of discover machines screen.](./media/how-to-use-azure-migrate-with-private-endpoints/discover-machines-hyper-v.png)
1. Copy the provider setup file and registration key file to each Hyper-V host (or cluster node) running VMs you want to replicate. > [!Note] >Before you register the replication provider, ensure that the vault's private link FQDNs are reachable from the machine that hosts the replication provider. Additional DNS configuration may be required for the on-premises replication appliance to resolve the private link FQDNs to their private IP addresses. Learn more about [how to verify network connectivity](./troubleshoot-network-connectivity.md#verify-dns-resolution)
With discovery completed, you can begin replication of Hyper-V VMs to Azure.
> You can replicate up to 10 machines together. If you need to replicate more, then replicate them simultaneously in batches of 10. 1. In the Azure Migrate project > **Servers, databases and web apps** > **Migration and modernization** > **Migration tools**, select **Replicate**.
-1. In **Replicate** > **Basics** > **Are your machines virtualized?**, select **Yes, with Hyper-V**. Then click **Next: Virtual machines**.
+1. In **Replicate** > **Basics** > **Are your machines virtualized?**, select **Yes, with Hyper-V**. Then select **Next: Virtual machines**.
1. In **Virtual machines**, select the machines you want to replicate. - If you've run an assessment for the VMs, you can apply VM sizing and disk type (premium/standard) recommendations from the assessment results. To do this, in **Import migration settings from an Azure Migrate assessment?**, select the **Yes** option. - If you didn't run an assessment, or you don't want to use the assessment settings, select the **No** option. - If you selected to use the assessment, select the VM group, and assessment name.
- :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/migrate-machines-vmware.png" alt-text="Screenshot of migrate machines screen.":::
-
-1. In **Virtual machines**, search for VMs as needed, and select each VM you want to migrate. Then click **Next:Target settings**.
-
- :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/select-vm.png" alt-text="Screenshot of selected VMs.":::
+1. In **Virtual machines**, search for VMs as needed, and select each VM you want to migrate. Then select **Next: Target settings**.
1. In **Target settings**, select the target region to which you'll migrate, the subscription, and the resource group in which the Azure VMs will reside after migration.
- :::image type="content" source="./media/tutorial-migrate-hyper-v/target-settings.png" alt-text="Screenshot of target settings.":::
- 1. In **Replication storage account**, select the Azure storage account in which replicated data will be stored in Azure. 1. Next, [**create a private endpoint for the storage account**](/azure/migrate/migrate-servers-to-azure-using-private-link?pivots=agentlessvmware#create-a-private-endpoint-for-the-storage-account) and [**grant permissions to the Recovery Services vault managed identity**](/azure/migrate/migrate-servers-to-azure-using-private-link?pivots=agentbased#grant-access-permissions-to-the-recovery-services-vault-1) to access the storage account required by Azure Migrate. This is mandatory before you proceed.
With discovery completed, you can begin replication of Hyper-V VMs to Azure.
1. In **Azure Hybrid Benefit**:
- - Select **No** if you don't want to apply Azure Hybrid Benefit. Then, click **Next**.
+ - Select **No** if you don't want to apply Azure Hybrid Benefit. Then, select **Next**.
- - Select **Yes** if you have Windows Server machines that are covered with active Software Assurance or Windows Server subscriptions, and you want to apply the benefit to the machines you're migrating. Then click **Next**.
-
- :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/azure-hybrid-benefit.png" alt-text="Screenshot of Azure Hybrid benefit selection.":::
+ - Select **Yes** if you have Windows Server machines that are covered with active Software Assurance or Windows Server subscriptions, and you want to apply the benefit to the machines you're migrating. Then select **Next**.
1. In **Compute**, review the VM name, size, OS disk type, and availability configuration (if selected in the previous step). VMs must conform with [Azure requirements](migrate-support-matrix-hyper-v-migration.md#azure-vm-requirements).
With discovery completed, you can begin replication of Hyper-V VMs to Azure.
- **Availability Set**: If the VM should be in an Azure availability set after migration, specify the set. The set must be in the target resource group you specify for the migration.
-1. In **Disks**, specify the VM disks that need to be replicated to Azure. Then click **Next**.
+1. In **Disks**, specify the VM disks that need to be replicated to Azure. Then select **Next**.
- You can exclude disks from replication. - If you exclude disks, they won't be present on the Azure VM after migration.
- :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/disks.png" alt-text="Screenshot shows the Disks tab of the Replicate dialog box.":::
- 1. In **Tags**, add tags to your migrated virtual machines, disks, and NICs.
-1. In **Review and start replication**, review the settings, and click **Replicate** to start the initial replication for the servers.
+1. In **Review and start replication**, review the settings, and select **Replicate** to start the initial replication for the servers.
> [!Note] > You can update replication settings any time before replication starts, in **Manage** > **Replicating machines**. Settings can't be changed after replication starts.
You can find the details of the Recovery Services vault on the Migration and mod
1. Go to the **Azure Migrate** hub, and on the **Migration and modernization** tile, select **Overview**.
- ![Screenshot that shows the Overview page on the Azure Migrate hub.](./media/how-to-use-azure-migrate-with-private-endpoints/hub-overview.png)
- 1. In the left pane, select **Properties**. Make a note of the Recovery Services vault name and managed identity ID. The vault will have **Private endpoint** as the **Connectivity type** and **Other** as the **Replication type**. You'll need this information when you provide access to the vault.
- ![Screenshot that shows the Migration and modernization tool Properties page.](./media/how-to-use-azure-migrate-with-private-endpoints/vault-info.png)
- **Permissions to access the storage account** To the managed identity of the vault, you must grant the following role permissions on the storage account required for replication. In this case, you must create the storage account in advance.
The role permissions for the Azure Resource Manager vary depending on the type o
1. Select **+ Add**, and select **Add role assignment**.
- :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/storage-role-assignment.png" alt-text="Screenshot that shows Add role assignment.":::
- 1. On the **Add role assignment** page in the **Role** box, select the appropriate role from the permissions list previously mentioned. Enter the name of the vault noted previously and select **Save**.
- :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/storage-role-assignment-select-role.png" alt-text="Screenshot that shows the Add role assignment page.":::
- 1. In addition to these permissions, you must also allow access to Microsoft trusted services. If your network access is restricted to selected networks, on the **Networking** tab in the **Exceptions** section, select **Allow trusted Microsoft services to access this storage account**.
- :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/exceptions.png" alt-text="Screenshot that shows the Allow trusted Microsoft services to access this storage account option.":::
- ## Create a private endpoint for the storage account To replicate by using ExpressRoute with private peering, [create a private endpoint](../private-link/tutorial-private-endpoint-storage-portal.md#create-storage-account-with-a-private-endpoint) for the cache/replication storage accounts (target subresource: *blob*).
If the user who created the private endpoint is also the storage account owner,
Review the status of the private endpoint connection state before you continue.
-![Screenshot that shows the Private endpoint approval status.](./media/how-to-use-azure-migrate-with-private-endpoints/private-endpoint-connection-state.png)
- After you've created the private endpoint, use the dropdown list in **Replicate** > **Target settings** > **Cache storage account** to select the storage account for replicating over a private link. Ensure that the on-premises replication appliance has network connectivity to the storage account on its private endpoint. Learn more about how to verify [network connectivity](./troubleshoot-network-connectivity.md).
This article shows a proof-of-concept deployment path for agent-based replicatio
The following diagram illustrates the agent-based replication workflow with private endpoints by using the Migration and modernization tool.
-![Diagram that shows replication architecture.](./media/how-to-use-azure-migrate-with-private-endpoints/replication-architecture.png)
- The tool uses a replication appliance to replicate your servers to Azure. Follow these steps to create the required resources for migration. 1. In **Discover machines** > **Are your machines virtualized?**, select **Not virtualized/Other**.
Now, select machines for replication and migration.
> You can replicate up to 10 machines together. If you need to replicate more, then replicate them simultaneously in batches of 10. 1. In the Azure Migrate project > **Servers, databases and web apps** > **Migration and modernization** > **Migration tools**, select **Replicate**. -
- ![Diagram that shows how to replicate servers.](./media/how-to-use-azure-migrate-with-private-endpoints/replicate-servers.png)
- 1. In **Replicate** > **Basics** > **Are your machines virtualized?**, select **Not virtualized/Other**. 1. In **On-premises appliance**, select the name of the Azure Migrate appliance that you set up. 1. In **Process Server**, select the name of the replication appliance.
-1. In **Guest credentials**, please select the dummy account created previously during the [replication installer setup](tutorial-migrate-physical-virtual-machines.md#download-the-replication-appliance-installer) to install the Mobility service manually (push install is not supported). Then click **Next: Virtual machines.**
-
- :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/source-settings-vmware.png" alt-text="Diagram that shows how to complete source settings.":::
-
+1. In **Guest credentials**, select the dummy account created previously during the [replication installer setup](tutorial-migrate-physical-virtual-machines.md#download-the-replication-appliance-installer) to install the Mobility service manually (push install isn't supported). Then select **Next: Virtual machines**.
1. In **Virtual machines**, in **Import migration settings from an assessment?**, leave the default setting **No, I'll specify the migration settings manually**.
-1. Select each VM you want to migrate. Then click **Next:Target settings**.
-
- :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/select-vm.png" alt-text="Screenshot of selected VMs to be replicated.":::
-
-1. In **Target settings**, select the subscription,the target region to which you'll migrate, and the resource group in which the Azure VMs will reside after migration.
-
- :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/target-settings-agent-inline.png" alt-text="Screenshot displays the options in Overview." lightbox="./media/how-to-use-azure-migrate-with-private-endpoints/target-settings-agent-expanded.png":::
-
+1. Select each VM you want to migrate. Then select **Next: Target settings**.
+1. In **Target settings**, select the subscription, the target region to which you'll migrate, and the resource group in which the Azure VMs will reside after migration.
1. In **Virtual network**, select the Azure VNet/subnet for the migrated Azure VMs. 1. In **Cache storage account**, use the dropdown list to select a storage account to replicate over a private link.
Now, select machines for replication and migration.
> [!Note] > To replicate VMs with CMK, you'll need to [create a disk encryption set](../virtual-machines/disks-enable-customer-managed-keys-portal.yml) under the target Resource Group. A disk encryption set object maps Managed Disks to a Key Vault that contains the CMK to use for SSE. 1. In **Azure Hybrid Benefit**:
- - Select **No** if you don't want to apply Azure Hybrid Benefit. Then, click **Next**.
- - Select **Yes** if you have Windows Server machines that are covered with active Software Assurance or Windows Server subscriptions, and you want to apply the benefit to the machines you're migrating. Then click **Next**.
+ - Select **No** if you don't want to apply Azure Hybrid Benefit and select **Next**.
+ - Select **Yes** if you have Windows Server machines that are covered with active Software Assurance or Windows Server subscriptions, and you want to apply the benefit to the machines you're migrating. Then select **Next**.
1. In **Compute**, review the VM name, size, OS disk type, and availability configuration (if selected in the previous step). VMs must conform with [Azure requirements](migrate-support-matrix-physical-migration.md#azure-vm-requirements). - **VM size**: If you're using assessment recommendations, the VM size dropdown shows the recommended size. Otherwise, Azure Migrate picks a size based on the closest match in the Azure subscription. Alternatively, pick a manual size in **Azure VM size**.
Now, select machines for replication and migration.
- **Availability Set**: Specify the Availability Set to use.
-1. In **Disks**, specify whether the VM disks should be replicated to Azure, and select the disk type (standard SSD/HDD or premium managed disks) in Azure. Then click **Next**.
+1. In **Disks**, specify whether the VM disks should be replicated to Azure, and select the disk type (standard SSD/HDD or premium managed disks) in Azure. Then select **Next**.
- You can exclude disks from replication. - If you exclude disks, they won't be present on the Azure VM after migration. -
- :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/disks.png" alt-text="Screenshot shows the Disks tab of the Replicate dialog box.":::
-- 1. In **Tags**, add tags to your migrated virtual machines, disks, and NICs.
-1. In **Review and start replication**, review the settings, and click **Replicate** to start the initial replication for the servers.
+1. In **Review and start replication**, review the settings, and select **Replicate** to start the initial replication for the servers.
> [!Note] > You can update replication settings any time before replication starts, in **Manage** > **Replicating machines**. Settings can't be changed after replication starts.
To identify the Recovery Services vault created by Azure Migrate and grant the r
You can find the details of the Recovery Services vault on the **Migration and modernization** page. 1. Go to the **Azure Migrate** hub, and on the **Migration and modernization** tile, select **Overview**.-
- ![Screenshot that shows the Overview page on the Azure Migrate hub.](./media/how-to-use-azure-migrate-with-private-endpoints/hub-overview.png)
- 1. In the left pane, select **Properties**. Make a note of the Recovery Services vault name and managed identity ID. The vault will have **Private endpoint** as the **Connectivity type** and **Other** as the **Replication type**. You'll need this information when you provide access to the vault.
- ![Screenshot that shows the Migration and modernization tool Properties page.](./media/how-to-use-azure-migrate-with-private-endpoints/vault-info.png)
- **Permissions to access the storage account** To the managed identity of the vault, you must grant the following role permissions on the storage account required for replication. In this case, you must create the storage account in advance.
The role permissions for the Azure Resource Manager vary depending on the type o
1. Go to the replication/cache storage account selected for replication. In the left pane, select **Access control (IAM)**. 1. Select **+ Add**, and select **Add role assignment**.-
- ![Screenshot that shows Add role assignment.](./media/how-to-use-azure-migrate-with-private-endpoints/storage-role-assignment.png)
- 1. On the **Add role assignment** page in the **Role** box, select the appropriate role from the permissions list previously mentioned. Enter the name of the vault noted previously and select **Save**.-
- ![Screenshot that shows the Add role assignment page.](./media/how-to-use-azure-migrate-with-private-endpoints/storage-role-assignment-select-role.png)
- 1. In addition to these permissions, you must also allow access to Microsoft trusted services. If your network access is restricted to selected networks, on the **Networking** tab in the **Exceptions** section, select **Allow trusted Microsoft services to access this storage account**.-
- ![Screenshot that shows the Allow trusted Microsoft services to access this storage account option.](./media/how-to-use-azure-migrate-with-private-endpoints/exceptions.png)
- ## Create a private endpoint for the storage account To replicate by using ExpressRoute with private peering, [create a private endpoint](../private-link/tutorial-private-endpoint-storage-portal.md#create-storage-account-with-a-private-endpoint) for the cache/replication storage accounts (target subresource: *blob*).
If the user who created the private endpoint is also the storage account owner,
Review the status of the private endpoint connection state before you continue.
-![Screenshot that shows the Private endpoint approval status.](./media/how-to-use-azure-migrate-with-private-endpoints/private-endpoint-connection-state.png)
- After you've created the private endpoint, use the dropdown list in **Replicate** > **Target settings** > **Cache storage account** to select the storage account for replicating over a private link. Ensure that the on-premises replication appliance has network connectivity to the storage account on its private endpoint. To validate the private link connection, perform a DNS resolution of the storage account endpoint (private link resource FQDN) from the on-premises server hosting the replication appliance and ensure that it resolves to a private IP address. Learn how to verify [network connectivity.](./troubleshoot-network-connectivity.md#verify-dns-resolution)
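To script the DNS validation step, you can resolve the storage account's private link FQDN from the appliance host and confirm that it returns a private IP address. The following is a minimal Python sketch with a placeholder storage account name; it isn't part of the Azure Migrate tooling.

```python
import ipaddress
import socket

# Placeholder FQDN; replace with your replication/cache storage account's blob endpoint.
STORAGE_FQDN = "mystorageaccount.blob.core.windows.net"

def resolves_to_private_ip(fqdn: str) -> bool:
    """Resolve the FQDN from this host and check that every returned address is private."""
    infos = socket.getaddrinfo(fqdn, 443, proto=socket.IPPROTO_TCP)
    addresses = {info[4][0] for info in infos}
    print(f"{fqdn} resolved to: {', '.join(sorted(addresses))}")
    return all(ipaddress.ip_address(addr).is_private for addr in addresses)

if resolves_to_private_ip(STORAGE_FQDN):
    print("Private IP returned; the private endpoint DNS configuration looks correct.")
else:
    print("Public IP returned; review the private DNS zone configuration for the private endpoint.")
```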
migrate Tutorial Migrate Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-hyper-v.md
After discovery is finished, you can begin the replication of Hyper-V VMs to Azu
1. In **Review and start replication**, review the settings and select **Replicate** to start the initial replication for the servers. > [!NOTE]
-> You can update replication settings any time before replication starts in **Manage** > **Replicating machines**. Settings can't be changed after replication starts.
+> You can update replication settings any time before replication starts in **Manage** > **Replicated machines**. Settings can't be changed after replication starts.
## Provision for the first time
If this is the first VM you're replicating in the Azure Migrate project, the Mig
You can track job status in the portal notifications.
-You can monitor replication status by selecting **Replicating servers** in **Migration and modernization**.
-
-![Screenshot that shows Monitor replication.](./media/tutorial-migrate-hyper-v/replicating-servers.png)
+You can monitor replication status by selecting **Replicated servers** in **Migration and modernization**.
## Run a test migration
When delta replication begins, you can run a test migration for the VMs before y
To do a test migration:
-1. In **Migration goals**, select **Servers, databases, and web apps** > **Migration and modernization** > **Test migrated servers**.
-
- ![Screenshot that shows Test migrated servers in Migration and modernization.](./media/tutorial-migrate-hyper-v/test-migrated-servers.png)
-
-1. Right-click the VM to test and select **Test migrate**.
+1. In **Migration goals**, select **Servers, databases, and web apps** > **Migration and modernization**, and then select **Replicated servers** under **Replications**.
- ![Screenshot that shows the Test migration screen.](./media/tutorial-migrate-hyper-v/test-migrate.png)
+1. In the **Replicating machines** tab, right-click the VM to test and select **Test migrate**.
1. In **Test Migration**, select the Azure virtual network in which the Azure VM will be located after the migration. We recommend that you use a nonproduction virtual network. 1. You can upgrade the Windows Server OS during test migration. For Hyper-V VMs, automatic detection of an OS isn't yet supported. To upgrade, select the **Check for upgrade** option. In the pane that appears, select the current OS version and the target version to which you want to upgrade. If the target version is available, it's processed accordingly. [Learn more](how-to-upgrade-windows.md). 1. The Test Migration job starts. Monitor the job in the portal notifications. 1. After the migration finishes, view the migrated Azure VM in **Virtual Machines** in the Azure portal. The machine name has the suffix **-Test**. 1. After the test is finished, right-click the Azure VM in **Replications** and select **Clean up test migration**.-
- ![Screenshot that shows the Clean up migration option.](./media/tutorial-migrate-hyper-v/clean-up.png)
> [!NOTE] > You can now register your servers running SQL Server with SQL VM RP to take advantage of automated patching, automated backup, and simplified license management by using the SQL IaaS Agent Extension. >- Select **Manage** > **Replications** > **Machine containing SQL server** > **Compute and Network** and select **yes** to register with SQL VM RP.
To do a test migration:
After you verify that the test migration works as expected, you can migrate the on-premises machines.
-1. In the Azure Migrate project, select **Servers, databases, and web apps** > **Migration and modernization** > **Replicating servers**.
-
- ![Screenshot that shows Replicating servers.](./media/tutorial-migrate-hyper-v/replicate-servers.png)
+1. In the Azure Migrate project, select **Servers, databases, and web apps** > **Migration and modernization**, and then select **Replicated servers** under **Replications**.
-1. In **Replicating machines**, right-click the VM and select **Migrate**.
+1. In the **Replicating machines** tab, right-click the VM you want to migrate and select **Migrate**.
1. In **Migrate** > **Shut down virtual machines and perform a planned migration with no data loss**, select **Yes** > **OK**. - By default, Azure Migrate and Modernize shuts down the on-premises VM and runs an on-demand replication to synchronize any VM changes that occurred since the last replication occurred. This action ensures no data loss.
After you verify that the test migration works as expected, you can migrate the
1. After the migration is finished, right-click the VM and select **Stop replication**. This action: - Stops replication for the on-premises machine.
- - Removes the machine from the **Replicating servers** count in the Migration and modernization tool.
+ - Removes the machine from the **Replicated servers** count in the Migration and modernization tool.
- Cleans up replication state information for the VM. 1. Verify and [troubleshoot any Windows activation issues on the Azure VM](/troubleshoot/azure/virtual-machines/troubleshoot-activation-problems). 1. Perform any post-migration app tweaks, such as updating host names, database connection strings, and web server configurations.
migrate Tutorial Migrate Physical Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-physical-virtual-machines.md
The first step of migration is to set up the replication appliance. To set up th
The mobility service agent must be installed on the servers to get them discovered by using the replication appliance. Discovered machines appear in **Azure Migrate: Server Migration**. As VMs are discovered, the **Discovered servers** count rises.
-![Screenshot that shows Discovered servers.](./media/tutorial-migrate-physical-virtual-machines/discovered-servers.png)
- > [!NOTE] > We recommend that you perform discovery and assessment prior to the migration by using the Azure Migrate: Discovery and assessment tool, a separate lightweight Azure Migrate appliance. You can deploy the appliance as a physical server to continuously discover servers and performance metadata. For detailed steps, see [Discover physical servers](tutorial-discover-physical.md).
Now, select machines for migration.
> You can replicate up to 10 machines together. If you need to replicate more, replicate them simultaneously in batches of 10. 1. In the Azure Migrate project, select **Servers, databases, and web apps** > **Migration and modernization** > **Replicate**.-
- :::image type="content" source="./media/tutorial-migrate-physical-virtual-machines/select-replicate.png" alt-text="Screenshot that shows selecting Replicate.":::
- 1. In **Replicate**, > **Source settings** > **Are your machines virtualized?**, select **Physical or other (AWS, GCP, Xen, etc.)**. 1. In **On-premises appliance**, select the name of the Azure Migrate appliance that you set up. 1. In **Process Server**, select the name of the replication appliance.
Now, select machines for migration.
You can track job status in the portal notifications. You can monitor replication status by selecting **Replicating servers** in **Azure Migrate: Server Migration**.
-![Screenshot that shows the Replicating servers option.](./media/tutorial-migrate-physical-virtual-machines/replicating-servers.png)
## Run a test migration
When delta replication begins, you can run a test migration for the VMs before y
To do a test migration:
-1. In **Migration goals**, select **Servers** > **Migration and modernization** > **Test migrated servers**.
+1. In **Migration goals**, select **Servers** > **Migration and modernization**, and then select **Replicated servers** under **Replications**.
- :::image type="content" source="./media/tutorial-migrate-physical-virtual-machines/test-migrated-servers.png" alt-text="Screenshot that shows Test migrated servers.":::
-
-1. Right-click the VM you want to test and select **Test migrate**.
+1. In the **Replicating machines** tab, right-click the VM to test and select **Test migrate**.
:::image type="content" source="./media/tutorial-migrate-physical-virtual-machines/test-migrate-inline.png" alt-text="Screenshot that shows the result after selecting Test migrate." lightbox="./media/tutorial-migrate-physical-virtual-machines/test-migrate-expanded.png":::
To do a test migration:
After you verify that the test migration works as expected, you can migrate the on-premises machines.
-1. In the Azure Migrate project, select **Servers, databases, and web apps** > **Migration and modernization** > **Replicating servers**.
-
- ![Screenshot that shows Replicating servers.](./media/tutorial-migrate-physical-virtual-machines/replicate-servers.png)
-
+1. In the Azure Migrate project, select **Servers, databases, and web apps** > **Migration and modernization**, and then select **Replicated servers** under **Replications**.
1. In **Replicating machines**, right-click the VM and select **Migrate**. 1. In **Migrate** > **Shut down virtual machines and perform a planned migration with no data loss**, select **No** > **OK**.
After you verify that the test migration works as expected, you can migrate the
1. After the migration is finished, right-click the VM and select **Stop replication**. This action: - Stops replication for the on-premises machine.
- - Removes the machine from the **Replicating servers** count in the Migration and modernization tool.
+ - Removes the machine from the **Replicated servers** count in the Migration and modernization tool.
- Cleans up replication state information for the machine. 1. Verify and [troubleshoot any Windows activation issues on the Azure VM](/troubleshoot/azure/virtual-machines/troubleshoot-activation-problems). 1. Perform any post-migration app tweaks, such as updating host names, database connection strings, and web server configurations.
migrate Tutorial Discover Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/vmware/tutorial-discover-vmware.md
Requirement | Details
| **vCenter Server/ESXi host** | You need a server running vCenter Server version 8.0, 7.0, 6.7, 6.5, 6.0, or 5.5.<br /><br /> Servers must be hosted on an ESXi host running version 5.5 or later.<br /><br /> On the vCenter Server, allow inbound connections on TCP port 443 so that the appliance can collect configuration and performance metadata.<br /><br /> The appliance connects to vCenter Server on port 443 by default. If the server running vCenter Server listens on a different port, you can modify the port when you provide the vCenter Server details in the appliance configuration manager.<br /><br /> On the ESXi hosts, make sure that inbound access is allowed on TCP port 443 for discovery of installed applications and for agentless dependency analysis on servers. **Azure Migrate appliance** | vCenter Server must have these resources to allocate to a server that hosts the Azure Migrate appliance:<br /><br /> - 32 GB of RAM, 8 vCPUs, and approximately 80 GB of disk storage.<br /><br /> - An external virtual switch and internet access on the appliance server, directly or via a proxy.
-**Servers** | All Windows and Linux OS versions are supported for discovery of configuration and performance metadata. <br /><br /> For application discovery on servers, all Windows and Linux OS versions are supported. Check the [OS versions supported for agentless dependency analysis](https://learn.microsoft.com/azure/migrate/vmware/migrate-support-matrix-vmware?pivots=dependency-analysis-agentless-requirements&tabs=businesscase).<br /><br /> For discovery of installed applications and for agentless dependency analysis, VMware Tools (version 10.2.1 or later) must be installed and running on servers. Windows servers must have PowerShell version 2.0 or later installed.<br /><br /> To discover SQL Server instances and databases, check [supported SQL Server and Windows OS versions and editions](https://learn.microsoft.com/azure/migrate/vmware/migrate-support-matrix-vmware?pivots=sql-server-instance-database-discovery-requirements&tabs=businesscase) and Windows authentication mechanisms.<br /><br /> To discover ASP.NET web apps running on IIS web server, check [supported Windows OS and IIS versions](https://learn.microsoft.com/azure/migrate/vmware/migrate-support-matrix-vmware?pivots=web-apps-discovery&tabs=businesscase).<br /><br /> To discover Java web apps running on Apache Tomcat web server, check [supported Linux OS and Tomcat versions](https://learn.microsoft.com/azure/migrate/vmware/migrate-support-matrix-vmware?pivots=web-apps-discovery&tabs=businesscase).
-**SQL Server access** | To discover SQL Server instances and databases, the Windows account, or SQL Server account [requires these permissions](https://learn.microsoft.com/azure/migrate/vmware/migrate-support-matrix-vmware?pivots=sql-server-instance-database-discovery-requirements&tabs=businesscase) for each SQL Server instance. You can use the [account provisioning utility](../least-privilege-credentials.md) to create custom accounts or use any existing account that is a member of the sysadmin server role for simplicity.
+**Servers** | All Windows and Linux OS versions are supported for discovery of configuration and performance metadata. <br /><br /> For application discovery on servers, all Windows and Linux OS versions are supported. Check the [OS versions supported for agentless dependency analysis](/azure/migrate/vmware/migrate-support-matrix-vmware?pivots=dependency-analysis-agentless-requirements&tabs=businesscase).<br /><br /> For discovery of installed applications and for agentless dependency analysis, VMware Tools (version 10.2.1 or later) must be installed and running on servers. Windows servers must have PowerShell version 2.0 or later installed.<br /><br /> To discover SQL Server instances and databases, check [supported SQL Server and Windows OS versions and editions](/azure/migrate/vmware/migrate-support-matrix-vmware?pivots=sql-server-instance-database-discovery-requirements&tabs=businesscase) and Windows authentication mechanisms.<br /><br /> To discover ASP.NET web apps running on IIS web server, check [supported Windows OS and IIS versions](/azure/migrate/vmware/migrate-support-matrix-vmware?pivots=web-apps-discovery&tabs=businesscase).<br /><br /> To discover Java web apps running on Apache Tomcat web server, check [supported Linux OS and Tomcat versions](/azure/migrate/vmware/migrate-support-matrix-vmware?pivots=web-apps-discovery&tabs=businesscase).
+**SQL Server access** | To discover SQL Server instances and databases, the Windows account, or SQL Server account [requires these permissions](/azure/migrate/vmware/migrate-support-matrix-vmware?pivots=sql-server-instance-database-discovery-requirements&tabs=businesscase) for each SQL Server instance. You can use the [account provisioning utility](../least-privilege-credentials.md) to create custom accounts or use any existing account that is a member of the sysadmin server role for simplicity.
## Prepare an Azure user account
mysql Quickstart Create Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/quickstart-create-server-portal.md
Title: "Quickstart: Create a flexible server by using the Azure portal"
+ Title: "Quickstart: Create a flexible server using the Azure portal"
description: In this quickstart, learn how to deploy a database in an instance of Azure Database for MySQL - Flexible Server by using the Azure portal. Previously updated : 06/18/2024- Last updated : 07/31/2024+
- mode-ui
-# Quickstart: Create an instance of Azure Database for MySQL - Flexible Server by using the Azure portal
+# Quickstart: Create an instance of Azure Database for MySQL with the Azure portal
-Azure Database for MySQL - Flexible Server is a managed service that you can use to run, manage, and scale highly available MySQL servers in the cloud. This quickstart shows you how to create an Azure Database for MySQL flexible server by using the Azure portal.
+Azure Database for MySQL is a managed service for running, managing, and scaling highly available MySQL servers in the cloud. This article shows you how to use the Azure portal to create an Azure Database for MySQL flexible server instance. You create an instance of Azure Database for MySQL flexible server using a defined set of [compute and storage resources](./concepts-compute-storage.md).
-If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin.
+## Prerequisites
+
+- [Azure subscription](https://azure.microsoft.com/free/)
+- Access to the Azure portal
+- Basic knowledge of Azure Database for MySQL flexible server deployment options and configurations
## Sign in to the Azure portal
-In the [Azure portal](https://portal.azure.com), enter your credentials to sign in to the portal. The default view is your service dashboard.
+Enter your credentials to sign in to the [Azure portal](https://portal.azure.com).
## Create an Azure Database for MySQL flexible server
-You create an instance of Azure Database for MySQL - Flexible Server by using a defined set of [compute and storage resources](./concepts-compute-storage.md) to create a flexible server. Create the server within an [Azure resource group](../../azure-resource-manager/management/overview.md).
+Create the server within an [Azure resource group](../../azure-resource-manager/management/overview.md).
Complete these steps to create an Azure Database for MySQL flexible server:
-1. In the Azure portal, search for and then select **Azure Database for MySQL servers**:
+1. In the Azure portal, search for and then select **Azure Database for MySQL flexible servers**.
- :::image type="content" source="./media/quickstart-create-server-portal/find-mysql-portal.png" alt-text="Screenshot that shows a search for Azure Database for MySQL servers.":::
+ :::image type="content" source="media/quickstart-create-server-portal/find-mysql-portal.png" alt-text="Screenshot that shows a search for Azure Database for MySQL servers.":::
1. Select **Create**. 1. On the **Select Azure Database for MySQL deployment option** pane, select **Flexible server** as the deployment option:
- :::image type="content" source="./media/quickstart-create-server-portal/azure-mysql-deployment-option.png" alt-text="Screenshot that shows the Flexible server option.":::
+ :::image type="content" source="media/quickstart-create-server-portal/azure-mysql-deployment-option.png" alt-text="Screenshot that shows the Flexible server option." lightbox="media/quickstart-create-server-portal/azure-mysql-deployment-option.png":::
1. On the **Basics** tab, enter or select the following information:
- |Setting|Suggested value|Description|
- ||||
- |**Subscription**|Your subscription name|The Azure subscription that you want to use for your server. If you have multiple subscriptions, choose the subscription in which you want to be billed for the resource.|
- |**Resource group**|**myresourcegroup**| Create a new resource group name, or select an existing resource group from your subscription.|
- |**Server name** |**mydemoserver**|A unique name that identifies your instance of Azure Database for MySQL - Flexible Server. The domain name `mysql.database.azure.com` is appended to the server name that you enter. The server name can contain only lowercase letters, numbers, and the hyphen (`-`) character. It must contain between 3 and 63 characters.|
- |**Region**|The region closest to your users| The location that's closest to your users.|
- |**Workload type**| Development | For production workload, you can select **Small/Medium-size** or **Large-size** depending on [max_connections](concepts-server-parameters.md#max_connections) requirements|
- |**Availability zone**| No preference | If your application client is provisioned in a specific availability zone, you can set your Azure Database for MySQL flexible server to the same availability zone to colocate the application and reduce network latency.|
- |**High availability**| Cleared | For production servers, choose between [zone-redundant high availability](concepts-high-availability.md#zone-redundant-ha-architecture) and [same-zone high availability](concepts-high-availability.md#same-zone-ha-architecture). We recommend that you use high availability for business continuity and protection against virtual machine (VM) failure.|
- |**Standby availability zone**| No preference| Choose the standby server zone location. Colocate the server with the application standby server in case a zone failure occurs. |
- |**MySQL version**|**5.7**| A major version of MySQL.|
- |**Admin username** |**mydemouser**| Your own sign-in account to use when you connect to the server. The admin username can't be **azure_superuser**, **admin**, **administrator**, **root**, **guest**, **sa**, or **public**. The maximum number of characters that are allowed is 32. |
- |**Password** |Your password| A new password for the server admin account. It must contain between 8 and 128 characters. It also must contain characters from three of the following categories: English uppercase letters, English lowercase letters, numbers (0 through 9), and nonalphanumeric characters (`!`, `$`, `#`, `%`, and so on).|
- |**Compute + storage** | **Burstable**, **Standard_B1ms**, **10 GiB**, **100 iops**, **7 days** | The compute, storage, input/output operations per second (IOPS), and backup configurations for your new server. On the **Configure server** pane, the default values for **Compute tier**, **Compute size**, **Storage size**, **iops**, and **Retention period** (for backup) are **Burstable**, **Standard_B1ms**, **10 GiB**, **100 iops**, and **7 days**. You can keep the default values or modify these values. For faster data loads during migration, we recommend that you increase IOPS to the maximum size that's supported for the compute size that you selected. Later, scale it back to minimize cost. To save the compute and storage selection, select **Save** to continue with the configuration. <!-- The following screenshot shows compute and storage options. -->|
+ | Setting | Suggested value | Description |
+ | | | |
+ | **Subscription** | Your subscription name | The Azure subscription you want to use for your server. If you have multiple subscriptions, choose the subscription in which you want to be billed for the resource. |
+ | **Resource group** | *myresourcegroup* | Create a new resource group name, or select an existing resource group from your subscription. |
+ | **Server name** | *mydemoserver-quickstart* | A unique name that identifies your instance of Azure Database for MySQL - Flexible Server. The domain name `mysql.database.azure.com` is appended to the server name you enter. The server name can contain only lowercase letters, numbers, and the hyphen (`-`) character. It must contain between 3 and 63 characters. |
+ | **Region** | The region closest to your users | The location closest to your users. |
+ | **MySQL version** | 8.0 | The major engine version. |
+ | **Workload type** | Development | For production workloads, you can select **Small/Medium-size** or **Large-size** depending on [max_connections](concepts-server-parameters.md#max_connections) requirements. |
+ | **Compute + storage** | **Burstable**, **Standard_B1ms**, **10 GiB**, **100 iops**, **7 days** | The compute, storage, input/output operations per second (IOPS), and backup configurations for your new server. On the **Configure server** pane, the default values for **Compute tier**, **Compute size**, **Storage size**, **iops**, and **Retention period** (for backup) are **Burstable**, **Standard_B1ms**, **10 GiB**, **100 iops**, and **7 days**. You can keep the default values or modify these values. For faster data loads during migration, we recommend increasing IOPS to the maximum size supported for the compute size you selected. Later, scale it back to minimize cost. To save the compute and storage selection, select **Save** to continue with the configuration. |
+ | **Availability zone** | No preference | If your application client is provisioned in a specific availability zone, you can set your Azure Database for MySQL flexible server to the same availability zone to colocate the application and reduce network latency. |
+ | **High availability** | Cleared | For production servers, choose between [zone-redundant high availability](concepts-high-availability.md#zone-redundant-ha-architecture) and [same-zone high availability](concepts-high-availability.md#same-zone-ha-architecture). We recommend high availability for business continuity and protection against virtual machine (VM) failure. |
+ | **Authentication method** | **MySQL and Microsoft Entra authentication** | Select the authentication methods you would like to support for accessing this MySQL server. |
+ | **Admin username** | **mydemouser** | The sign-in account to use when you connect to the server. The admin username can't be **azure_superuser**, **admin**, **administrator**, **root**, **guest**, **sa**, or **public**. The maximum number of characters that are allowed is 32. |
+ | **Password** | Your password | A new password for the server admin account. It must contain between 8 and 128 characters. It also must contain characters from three of the following categories: English uppercase letters, English lowercase letters, numbers (0 through 9), and nonalphanumeric characters (`!`, `$`, `#`, `%`, and so on). |
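If you want to sanity-check names and credentials before filling in the form, the constraints listed in the preceding table can be expressed in a few lines of code. This Python sketch only illustrates the documented rules; the service may enforce additional restrictions, and this isn't an official validation routine.

```python
import re
import string

RESERVED_ADMIN_NAMES = {"azure_superuser", "admin", "administrator", "root", "guest", "sa", "public"}

def valid_server_name(name: str) -> bool:
    # 3-63 characters; lowercase letters, numbers, and hyphens only.
    return re.fullmatch(r"[a-z0-9-]{3,63}", name) is not None

def valid_admin_username(name: str) -> bool:
    # Not a reserved name, and at most 32 characters.
    return name.lower() not in RESERVED_ADMIN_NAMES and 1 <= len(name) <= 32

def valid_admin_password(password: str) -> bool:
    # 8-128 characters drawn from at least three of the four character categories.
    if not 8 <= len(password) <= 128:
        return False
    categories = [
        any(c in string.ascii_uppercase for c in password),
        any(c in string.ascii_lowercase for c in password),
        any(c in string.digits for c in password),
        any(not c.isalnum() for c in password),
    ]
    return sum(categories) >= 3

print(valid_server_name("mydemoserver-quickstart"))  # True
print(valid_admin_username("mydemouser"))            # True
print(valid_admin_password("Sup3r$ecretPass!"))      # True
```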
1. Next, configure networking options.
Complete these steps to create an Azure Database for MySQL flexible server:
- Public access (allowed IP addresses) - Private access (virtual network integration)
- When you use public access, access to your server is limited to the allowed IP addresses that you add to a firewall rule. Using this method prevents external applications and tools from connecting to the server and any databases on the server, unless you create a rule to open the firewall for a specific IP address or range of IP addresses. When you select **Create an azuredeploy.json file**, access to your server is limited to your virtual network. For more information about private access, see the [concepts](./concepts-networking.md) article.
+ When you use public access, access to your server is limited to the allowed IP addresses you add to a firewall rule. This method prevents external applications and tools from connecting to the server and any databases on the server unless you create a rule to open the firewall for a specific IP address or range of IP addresses. When you select **Private access (virtual network integration)**, access to your server is limited to your virtual network. For more information about private access, see the [concepts](./concepts-networking.md) article.
In this quickstart, you learn how to set public access to connect to the server. On the **Networking** tab, for **Connectivity method**, select **Public access**. To set firewall rules, select **Add current client IP address**.
- > [!NOTE]
- > You can't change the connectivity method after you create the server. For example, if you select **Public access (allowed IP addresses)** when you create the server, you can't change the setting to **Private access (VNet Integration)** after the server is deployed. We highly recommend that you create your server to use private access to help secure access to your server via virtual network integration. For more information about private access, see the [concepts](./concepts-networking.md) article.
- >
- > :::image type="content" source="./media/quickstart-create-server-portal/networking.png" alt-text="Screenshot that shows the Networking tab.":::
- >
+ You can't change the connectivity method after you create the server. For example, if you select **Public access (allowed IP addresses)** when you create the server, you can't change the setting to **Private access (VNet Integration)** after the server is deployed. We highly recommend that you create your server to use private access to help secure access to your server via virtual network integration. For more information about private access, see the [concepts](./concepts-networking.md) article.
+
+ :::image type="content" source="media/quickstart-create-server-portal/networking.png" alt-text="Screenshot that shows the Networking tab.":::
1. Select **Review + create** to review your Azure Database for MySQL flexible server configuration. 1. Select **Create** to provision the server. Provisioning might take a few minutes.
-1. On the toolbar, select **Notifications** (the bell icon) to monitor the deployment process. When deployment is finished, you can select **Pin to dashboard** to create a tile for the Azure Database for MySQL flexible server on your Azure portal dashboard. This tile is a shortcut to the server's **Overview** pane. When you select **Go to resource**, the **Overview** pane for the flexible server opens.
+1. On the toolbar, select **Notifications** (the bell icon) to monitor the deployment process. After deployment finishes, you can select **Pin to dashboard** to create a tile for the Azure Database for MySQL flexible server on your Azure portal dashboard. This tile is a shortcut to the server's **Overview** pane. When you select **Go to resource**, the **Overview** pane for the flexible server opens.
-By default, these databases are created under your server: **information_schema**, **mysql**, **performance_schema**, and **sys**.
+These databases are created by default under your server: **information_schema**, **mysql**, **performance_schema**, and **sys**.
-> [!NOTE]
-> To avoid connectivity problems, check whether your network allows outbound traffic through port 3306, the port that Azure Database for MySQL - Flexible Server uses.
+> [!NOTE]
+> To avoid connectivity problems, check whether your network allows outbound traffic through port 3306, which Azure Database for MySQL - Flexible Server uses.
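A quick way to confirm that outbound traffic on port 3306 is allowed is to attempt a plain TCP connection to the server. The Python sketch below uses a placeholder server name and only verifies that the port is reachable; it doesn't authenticate.

```python
import socket

# Placeholder FQDN; replace with your flexible server's name.
SERVER = "mydemoserver-quickstart.mysql.database.azure.com"
PORT = 3306

try:
    # Open and immediately close a TCP connection to verify outbound access on port 3306.
    with socket.create_connection((SERVER, PORT), timeout=5):
        print(f"Outbound TCP connectivity to {SERVER}:{PORT} looks fine.")
except OSError as err:
    print(f"Could not reach {SERVER}:{PORT}: {err}")
```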
## Connect to the server
-Before you get started, download the [public SSL certificate](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem) to use for certificate authority verification.
+Before you start, download the [public SSL certificate](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem) for certificate authority verification.
-If you deploy Azure Database for MySQL - Flexible Server by using the public access connectivity method, you can get started quickly by using the built-in MySQL command-line client tool or Azure Cloud Shell. To use the command-line tool, on the menu bar on the **Overview** pane, select **Connect**.
+If you deploy Azure Database for MySQL using the public access connectivity method, you can get started quickly by using the built-in MySQL command-line client tool or Azure Cloud Shell. To use the command-line tool, on the menu bar in the **Overview** pane, select **Connect**.
+> [!NOTE]
+> You can also use the [MySQL extension](/azure-data-studio/extensions/mysql-extension) in Azure Data Studio to connect to your Azure Database for MySQL flexible server.
-After you select **Connect**, you can see details about how to connect locally by using the Azure Database for MySQL - Flexible Server client tool and how to initiate data import and export operations.
+After you select **Connect**, you can see details about connecting locally using the Azure Database for MySQL - Flexible Server client tool and how to initiate data import and export operations.
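For example, a typical connection from a local MySQL command-line client looks like the following sketch. The server name and admin user are placeholders, and the command assumes you downloaded the DigiCert root certificate mentioned earlier to the current directory.

```powershell
# Placeholder values - replace with your server's FQDN and admin user name.
# The -p switch prompts for the password interactively.
mysql -h "<your-server-name>.mysql.database.azure.com" -u "<admin-user>" -p `
      --ssl-mode=REQUIRED --ssl-ca=".\DigiCertGlobalRootCA.crt.pem"
```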
-> [!IMPORTANT]
+> [!IMPORTANT]
> If you see the following error message when you connect to your Azure Database for MySQL flexible server, either you didn't select the **Allow public access from any Azure service within Azure to this server** checkbox when you set up your firewall rules, or the option isn't saved. Set the firewall rules, and then try again.
->
+>
> `ERROR 2002 (HY000): Can't connect to MySQL server on <servername> (115)`

## Clean up resources
-When you no longer need the resources that you created to use in this quickstart, you can delete the resource group that contains the Azure Database for MySQL - Flexible Server instance. Select the resource group for the Azure Database for MySQL - Flexible Server resource, and then select **Delete**. Enter the name of the resource group that you want to delete.
-
-## Next step
+When you no longer need the resources you created for this quickstart, you can delete the resource group that contains the Azure Database for MySQL flexible server instance. Select the resource group for the Azure Database for MySQL resource, and then select **Delete**. Enter the name of the resource group that you want to delete.
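If you prefer to script the cleanup, a single command removes the resource group and everything in it. This is a minimal sketch, assuming the Az PowerShell module is installed; the resource group name is a placeholder.

```powershell
# Placeholder resource group name - deletes the group and all resources inside it.
Remove-AzResourceGroup -Name "<your-resource-group>" -Force
```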
-To learn about other ways to deploy a flexible server, go to the next quickstart. You also can learn how to [build a PHP (Laravel) web app by using MySQL](tutorial-php-database-app.md).
+## Related content
-> [!div class="nextstepaction"]
-> [Connect to an instance of Azure Database for MySQL - Flexible Server in a virtual network](./quickstart-create-connect-server-vnet.md)
+- [Connect to an instance of Azure Database for MySQL - Flexible Server in a virtual network](./quickstart-create-connect-server-vnet.md)
+- [Azure Database for MySQL learning path on Microsoft Learn](/training/paths/introduction-to-azure-database-for-mysql/)
mysql Quickstart Mysql Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/quickstart-mysql-github-actions.md
Last updated 06/18/2024-+
mysql Concepts Migrate Mydumper Myloader https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/concepts-migrate-mydumper-myloader.md
Last updated 05/21/2024-+
mysql How To Decide On Right Migration Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/how-to-decide-on-right-migration-tools.md
Last updated 05/21/2024-+
mysql How To Migrate Single Flexible Minimum Downtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/how-to-migrate-single-flexible-minimum-downtime.md
Last updated 05/21/2024-+
mysql Whats Happening To Mysql Single Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/whats-happening-to-mysql-single-server.md
Last updated 05/21/2024-+
mysql Sample Change Server Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/scripts/sample-change-server-configuration.md
Title: CLI script - Change server parameters - Azure Database for MySQL
description: This sample CLI script lists all available server configurations and updates the value of innodb_lock_wait_timeout. -+ ms.devlang: azurecli
mysql Sample Create Server And Firewall Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/scripts/sample-create-server-and-firewall-rule.md
Title: CLI script - Create server - Azure Database for MySQL
description: This sample CLI script creates an Azure Database for MySQL server and configures a server-level firewall rule. -+ ms.devlang: azurecli
mysql Sample Point In Time Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/scripts/sample-point-in-time-restore.md
Title: CLI script - Restore server - Azure Database for MySQL
description: This sample Azure CLI script shows how to restore an Azure Database for MySQL server and its databases to a previous point in time. -+ ms.devlang: azurecli
mysql Sample Scale Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/scripts/sample-scale-server.md
Title: CLI script - Scale server - Azure Database for MySQL
description: This sample CLI script scales Azure Database for MySQL server to a different performance level after querying the metrics. -+ ms.devlang: azurecli
mysql Sample Server Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/scripts/sample-server-logs.md
Title: CLI script - Download slow query logs - Azure Database for MySQL
description: This sample Azure CLI script shows how to enable and download the server logs of an Azure Database for MySQL server. -+ ms.devlang: azurecli
nat-gateway Tutorial Hub Spoke Nat Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/tutorial-hub-spoke-nat-firewall.md
The hub virtual network contains the firewall subnet that is associated with the
1. Select **Next** to proceed to the **Security** tab.
-1. Select **Enable Bastion** in the **Azure Bastion** section of the **Security** tab.
+1. Select **Enable Azure Bastion** in the **Azure Bastion** section of the **Security** tab.
Azure Bastion uses your browser to connect to VMs in your virtual network over secure shell (SSH) or remote desktop protocol (RDP) by using their private IP addresses. The VMs don't need public IP addresses, client software, or special configuration. For more information about Azure Bastion, see [Azure Bastion](/azure/bastion/bastion-overview).
The hub virtual network contains the firewall subnet that is associated with the
| Setting | Value |
| - | -- |
| Azure Bastion host name | Enter **bastion**. |
- | Azure Bastion public IP address | Select **Create a public IP address**. </br> Enter **public-ip** in Name. </br> Select **OK**. |
+ | Azure Bastion public IP address | Select **Create a public IP address**. </br> Enter **public-ip-bastion** in Name. </br> Select **OK**. |
1. Select **Enable Azure Firewall** in the **Azure Firewall** section of the **Security** tab.
The hub virtual network contains the firewall subnet that is associated with the
| Policy | Select **Create new**. </br> Enter **firewall-policy** in Name. </br> Select **OK**. |
| Azure Firewall public IP address | Select **Create a public IP address**. </br> Enter **public-ip-firewall** in Name. </br> Select **OK**. |
+1. Select **Next** to proceed to the **IP addresses** tab.
+
16. Select **Review + create**.

17. Select **Create**.
The spoke virtual network contains the test virtual machine used to test the rou
1. Select **Next** to proceed to the **IP addresses** tab.
-1. In the **IP Addresses** tab in **IPv4 address space**, select the trash can to delete the address space that is auto populated.
+1. In the **IP Addresses** tab in **IPv4 address space**, select **Delete address space** to delete the address space that is auto populated.
+
+1. Select **+ Add IPv4 address space**.
1. In **IPv4 address space** enter **10.1.0.0**. Leave the default of **/16 (65,536 addresses)** in the mask selection.
The spoke virtual network contains the test virtual machine used to test the rou
| Setting | Value |
| - | -- |
- | **Subnet details** | |
- | Subnet template | Leave the default **Default**. |
+ | Subnet purpose | Leave the default **Default**. |
| Name | Enter **subnet-private**. |
- | Starting address | Enter **10.1.0.0**. |
- | Subnet size | Leave the default of **/24(256 addresses)**. |
+ | **IPv4** | |
+ | IPv4 address range| Leave the default of **10.1.0.0/16**. |
+ | Starting address | Leave the default of **10.1.0.0**. |
+ | Size | Leave the default of **/24 (256 addresses)**. |
1. Select **Add**.
A virtual network peering is used to connect the hub to the spoke and the spoke
1. Enter or select the following information in **Add peering**:

| Setting | Value |
- | - | -- |
- | **This virtual network** | |
- | Peering link name | Enter **vnet-hub-to-vnet-spoke**. |
- | Allow 'vnet-hub' to access 'vnet-spoke' | Leave the default of **Selected**. |
- | Allow 'vnet-hub' to receive forwarded traffic from 'vnet-spoke' | **Select** the checkbox. |
- | Allow gateway in 'vnet-hub' to forward traffic to 'vnet-spoke' | Leave the default of **Unselected**. |
- | Enable 'vnet-hub' to use 'vnet-spoke's' remote gateway | Leave the default of **Unselected**. |
- | **Remote virtual network** | |
+ | - | --
+ | **Remote virtual network summary** | |
| Peering link name | Enter **vnet-spoke-to-vnet-hub**. |
| Virtual network deployment model | Leave the default of **Resource manager**. |
| Subscription | Select your subscription. |
- | Virtual network | Select **vnet-spoke**. |
+ | Virtual network | Select **vnet-spoke (test-rg)**. |
+ | **Remote virtual network peering settings** | |
| Allow 'vnet-spoke' to access 'vnet-hub' | Leave the default of **Selected**. |
- | Allow 'vnet-spoke' to receive forwarded traffic from 'vnet-hub' | **Select** the checkbox. |
- | Allow gateway in 'vnet-spoke' to forward traffic to 'vnet-hub' | Leave the default of **Unselected**. |
- | Enable 'vnet-spoke' to use 'vnet-hub's' remote gateway | Leave the default of **Unselected**. |
-
+ | Allow 'vnet-spoke' to receive forwarded traffic from 'vnet-hub' | Select the checkbox. |
+ | Allow gateway or route server in 'vnet-spoke' to forward traffic to 'vnet-hub' | Leave the default of **Unselected**. |
+ | Enable 'vnet-spoke' to use 'vnet-hub's' remote gateway or route server | Leave the default of **Unselected**. |
+ | **Local virtual network summary** | |
+ | Peering link name | Enter **vnet-hub-to-vnet-spoke**. |
+ | **Local virtual network peering settings** | |
+ | Allow 'vnet-hub' to access 'vnet-spoke' | Leave the default of **Selected**. |
+ | Allow 'vnet-hub' to receive forwarded traffic from 'vnet-spoke' | Select the checkbox. |
+ | Allow gateway or route server in 'vnet-hub' to forward traffic to 'vnet-spoke' | Leave the default of **Unselected**. |
+ | Enable 'vnet-hub' to use 'vnet-spoke's' remote gateway or route server | Leave the default of **Unselected**. |
+
1. Select **Add**.

1. Select **Refresh** and verify **Peering status** is **Connected**.
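If you'd rather script the hub-and-spoke peering than step through the portal, the following sketch shows roughly equivalent calls with Az PowerShell. The resource group and virtual network names mirror the ones used in this tutorial, but treat them as assumptions and adjust them to your environment.

```powershell
# Assumed names from this tutorial - adjust if yours differ.
$rg    = "test-rg"
$hub   = Get-AzVirtualNetwork -Name "vnet-hub"   -ResourceGroupName $rg
$spoke = Get-AzVirtualNetwork -Name "vnet-spoke" -ResourceGroupName $rg

# Peer hub -> spoke and allow forwarded traffic from the spoke.
Add-AzVirtualNetworkPeering -Name "vnet-hub-to-vnet-spoke" -VirtualNetwork $hub `
    -RemoteVirtualNetworkId $spoke.Id -AllowForwardedTraffic

# Peer spoke -> hub and allow forwarded traffic from the hub.
Add-AzVirtualNetworkPeering -Name "vnet-spoke-to-vnet-hub" -VirtualNetwork $spoke `
    -RemoteVirtualNetworkId $hub.Id -AllowForwardedTraffic
```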
Traffic from the spoke through the hub must be allowed through and firewall poli
2. Select **firewall-policy**.
-3. In **Settings** select **Network rules**.
+3. Expand **Settings**, then select **Network rules**.
4. Select **+ Add a rule collection**.
The following procedure creates a test virtual machine (VM) named **vm-spoke** i
| Region | Select **(US) South Central US**. |
| Availability options | Select **No infrastructure redundancy required**. |
| Security type | Leave the default of **Standard**. |
- | Image | Select **Ubuntu Server 22.04 LTS - x64 Gen2**. |
+ | Image | Select **Ubuntu Server 24.04 LTS - x64 Gen2**. |
| VM architecture | Leave the default of **x64**. |
| Size | Select a size. |
| **Administrator account** | |
The following procedure creates a test virtual machine (VM) named **vm-spoke** i
| **Inbound port rules** | |
| Public inbound ports | Select **None**. |
-1. Select the **Networking** tab at the top of the page.
+1. Select the **Networking** tab at the top of the page, or select **Next: Disks**, then **Next: Networking**.
1. Enter or select the following information in the **Networking** tab:
The following procedure creates a test virtual machine (VM) named **vm-spoke** i
1. Review the settings and select **Create**.
+Wait for the virtual machine to finish deploying before proceeding to the next steps.
+ >[!NOTE] >Virtual machines in a virtual network with a bastion host don't need public IP addresses. Bastion provides the public IP, and the VMs use private IPs to communicate within the network. You can remove the public IPs from any VMs in bastion hosted virtual networks. For more information, see [Dissociate a public IP address from an Azure VM](../virtual-network/ip-services/remove-public-ip-address-vm.md).
Obtain the NAT gateway public IP address for verification of the steps later in
1. Select **vm-spoke**.
-1. In **Operations**, select **Bastion**.
+1. In **Overview**, select **Connect** then **Connect via Bastion**.
1. Enter the username and password entered during VM creation. Select **Connect**.
network-watcher Network Watcher Connectivity Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-connectivity-rest.md
- Title: Troubleshoot connections - Azure REST API-
-description: Learn how to use the connection troubleshoot capability of Azure Network Watcher using the Azure REST API.
---- Previously updated : 01/07/2021---
-# Troubleshoot connections with Azure Network Watcher using the Azure REST API
-
-Learn how to use connection troubleshoot to verify whether a direct TCP connection from a virtual machine to a given endpoint can be established.
-
-## Before you begin
-
-This article assumes you have the following resources:
-
-* An instance of Network Watcher in the region you want to troubleshoot a connection.
-* Virtual machines to troubleshoot connections with.
-
-> [!IMPORTANT]
-> Connection troubleshoot requires that the VM you troubleshoot from has the `AzureNetworkWatcherExtension` VM extension installed. For installing the extension on a Windows VM visit [Azure Network Watcher Agent virtual machine extension for Windows](../virtual-machines/extensions/network-watcher-windows.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json) and for Linux VM visit [Azure Network Watcher Agent virtual machine extension for Linux](../virtual-machines/extensions/network-watcher-linux.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json). The extension is not required on the destination endpoint.
-
-## Log in with ARMClient
-
-Log in to armclient with your Azure credentials.
-
-```powershell
-armclient login
-```
-
-## Retrieve a virtual machine
-
-Run the following script to return a virtual machine. This information is needed for running connectivity.
-
-The following code needs values for the following variables:
--- **subscriptionId** - The subscription ID to use.-- **resourceGroupName** - The name of a resource group that contains virtual machines.-
-```powershell
-$subscriptionId = '<subscription id>'
-$resourceGroupName = '<resource group name>'
-
-armclient get https://management.azure.com/subscriptions/${subscriptionId}/ResourceGroups/${resourceGroupName}/providers/Microsoft.Compute/virtualMachines?api-version=2015-05-01-preview
-```
-
-From the following output, the ID of the virtual machine is used in the following example:
-
-```json
-...
-,
- "type": "Microsoft.Compute/virtualMachines",
- "location": "westcentralus",
- "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/ContosoExampleRG/providers/Microsoft.Compute
-/virtualMachines/ContosoVM",
- "name": "ContosoVM"
- }
- ]
-}
-```
-
-## Check connectivity to a virtual machine
-
-This example checks connectivity to a destination virtual machine over port 80.
-
-### Example
-
-```powershell
-$subscriptionId = "00000000-0000-0000-0000-000000000000"
-$resourceGroupName = "NetworkWatcherRG"
-$networkWatcherName = "NetworkWatcher_westcentralus"
-$sourceResourceId = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/ContosoRG/providers/Microsoft.Compute/virtualMachines/MultiTierApp0"
-$destinationResourceId = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/ContosoRG/providers/Microsoft.Compute/virtualMachines/Database0"
-$destinationPort = "0"
-$requestBody = @"
-{
- 'source': {
- 'resourceId': '${sourceResourceId}',
- 'port': 0
- },
- 'destination': {
- 'resourceId': '${destinationResourceId}',
- 'port': ${destinationPort}
- }
-}
-"@
-
-$response = armclient post "https://management.azure.com/subscriptions/${subscriptionId}/ResourceGroups/${resourceGroupName}/providers/Microsoft.Network/networkWatchers/${networkWatcherName}/connectivityCheck?api-version=2017-03-01" $requestBody
-```
-
-Since this operation is long running, the URI for the result is returned in the response header as shown in the following response:
-
-**Important Values**
-
-* **Location** - This property contains the URI where the results are when the operation is complete
-
-```
-HTTP/1.1 202 Accepted
-Pragma: no-cache
-Retry-After: 10
-x-ms-request-id: f09b55fe-1d3a-4df7-817f-bceb8d2a94c8
-Strict-Transport-Security: max-age=31536000; includeSubDomains
-Cache-Control: no-cache
-Location: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.Network/locations/westcentralus/operationResults/f09b55fe-1d3a-4df7-817f-bceb8d2a94c8?api-version=2017-03-01
-Server: Microsoft-HTTPAPI/2.0; Microsoft-HTTPAPI/2.0
-x-ms-ratelimit-remaining-subscription-writes: 1199
-x-ms-correlation-request-id: 367a91aa-7142-436a-867d-d3a36f80bc54
-x-ms-routing-request-id: WESTUS2:20170602T202117Z:367a91aa-7142-436a-867d-d3a36f80bc54
-Date: Fri, 02 Jun 2017 20:21:16 GMT
-
-null
-```
-
-### Response
-
-The following response is from the previous example. In this response, the `ConnectionStatus` is **Unreachable**. You can see that all the probes sent failed. The connectivity failed at the virtual appliance due to a user-configured `NetworkSecurityRule` named **UserRule_Port80**, configured to block incoming traffic on port 80. This information can be used to research connection issues.
-
-```json
-{
- "hops": [
- {
- "type": "Source",
- "id": "0cb75c91-7ebf-4df8-8424-15594d6fb51c",
- "address": "10.1.1.4",
- "resourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/ContosoRG/providers/Microsoft.Network/networkInterfaces/appNic0/ipConfigurations/ipconfig1",
- "nextHopIds": [
- "06dee00a-9c4a-4fb1-b2ea-fa0a539ca684"
- ],
- "issues": []
- },
- {
- "type": "VirtualAppliance",
- "id": "06dee00a-9c4a-4fb1-b2ea-fa0a539ca684",
- "address": "10.1.2.4",
- "resourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/ContosoRG/providers/Microsoft.Network/networkInterfaces/fwNic/ipConfigurations/ipconfig1",
- "nextHopIds": [
- "75e0cfa5-f9d2-48d8-b705-2c7016f81570"
- ],
- "issues": []
- },
- {
- "type": "VirtualAppliance",
- "id": "75e0cfa5-f9d2-48d8-b705-2c7016f81570",
- "address": "10.1.3.4",
- "resourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/ContosoRG/providers/Microsoft.Network/networkInterfaces/auNic/ipConfigurations/ipconfig1",
- "nextHopIds": [
- "86caf6aa-33b0-48a1-b4da-f3c9ce785072"
- ],
- "issues": [
- {
- "origin": "Outbound",
- "severity": "Error",
- "type": "NetworkSecurityRule",
- "context": [
- {
- "key": "RuleName",
- "value": "UserRule_Port80"
- }
- ]
- }
- ]
- },
- {
- "type": "VnetLocal",
- "id": "86caf6aa-33b0-48a1-b4da-f3c9ce785072",
- "address": "10.1.4.4",
- "resourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/ContosoRG/providers/Microsoft.Network/networkInterfaces/dbNic0/ipConfigurations/ipconfig1",
- "nextHopIds": [],
- "issues": []
- }
- ],
- "connectionStatus": "Unreachable",
- "probesSent": 100,
- "probesFailed": 100
-}
-```
-
-## Validate routing issues
-
-The example checks connectivity between a virtual machine and a remote endpoint.
-
-### Example
-
-```powershell
-$subscriptionId = "00000000-0000-0000-0000-000000000000"
-$resourceGroupName = "NetworkWatcherRG"
-$networkWatcherName = "NetworkWatcher_westcentralus"
-$sourceResourceId = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/ContosoRG/providers/Microsoft.Compute/virtualMachines/MultiTierApp0"
-$destinationResourceId = "13.107.21.200"
-$destinationPort = "80"
-$requestBody = @"
-{
- 'source': {
- 'resourceId': '${sourceResourceId}',
- 'port': 0
- },
- 'destination': {
- 'address': '${destinationResourceId}',
- 'port': ${destinationPort}
- }
-}
-"@
-
-$response = armclient post "https://management.azure.com/subscriptions/${subscriptionId}/ResourceGroups/${resourceGroupName}/providers/Microsoft.Network/networkWatchers/${networkWatcherName}/connectivityCheck?api-version=2017-03-01" $requestBody
-```
-
-Since this operation is long running, the URI for the result is returned in the response header as shown in the following response:
-
-**Important Values**
-
-* **Location** - This property contains the URI where the results are when the operation is complete
-
-```
-HTTP/1.1 202 Accepted
-Pragma: no-cache
-Retry-After: 10
-x-ms-request-id: 15eeeb69-fcef-41db-bc4a-e2adcf2658e0
-Strict-Transport-Security: max-age=31536000; includeSubDomains
-Cache-Control: no-cache
-Location: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.Network/locations/westcentralus/operationResults/15eeeb69-fcef-41db-bc4a-e2adcf2658e0?api-version=2017-03-01
-Server: Microsoft-HTTPAPI/2.0; Microsoft-HTTPAPI/2.0
-x-ms-ratelimit-remaining-subscription-writes: 1199
-x-ms-correlation-request-id: 4370b798-cd8b-4d3e-ba28-22232bc81dc5
-x-ms-routing-request-id: WESTUS:20170602T202606Z:4370b798-cd8b-4d3e-ba28-22232bc81dc5
-Date: Fri, 02 Jun 2017 20:26:05 GMT
-
-null
-```
-
-### Response
-
-In the following example, the `connectionStatus` is shown as **Unreachable**. In the `hops` details, you can see under `issues` that the traffic was blocked due to a `UserDefinedRoute`.
-
-```json
-{
- "hops": [
- {
- "type": "Source",
- "id": "5528055a-b393-4751-97bc-353d8c0aaeff",
- "address": "10.1.1.4",
- "resourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/ContosoRG/providers/Microsoft.Network/networkInterfaces/appNic0/ipConfigurations/ipconfig1",
- "nextHopIds": [
- "66eefa79-5bfe-48b2-b6ca-eec8247457a3"
- ],
- "issues": [
- {
- "origin": "Outbound",
- "severity": "Error",
- "type": "UserDefinedRoute",
- "context": [
- {
- "key": "RouteType",
- "value": "User"
- }
- ]
- }
- ]
- },
- {
- "type": "Destination",
- "id": "66eefa79-5bfe-48b2-b6ca-eec8247457a3",
- "address": "13.107.21.200",
- "resourceId": "Unknown",
- "nextHopIds": [],
- "issues": []
- }
- ],
- "connectionStatus": "Unreachable",
- "probesSent": 100,
- "probesFailed": 100
-}
-```
-
-## Check website latency
-
-The following example checks the connectivity to a website.
-
-### Example
-
-```powershell
-$subscriptionId = "00000000-0000-0000-0000-000000000000"
-$resourceGroupName = "NetworkWatcherRG"
-$networkWatcherName = "NetworkWatcher_westcentralus"
-$sourceResourceId = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/ContosoRG/providers/Microsoft.Compute/virtualMachines/MultiTierApp0"
-$destinationResourceId = "https://bing.com"
-$destinationPort = "0"
-$requestBody = @"
-{
- 'source': {
- 'resourceId': '${sourceResourceId}',
- 'port': 0
- },
- 'destination': {
- 'address': '${destinationResourceId}',
- 'port': ${destinationPort}
- }
-}
-"@
-
-$response = armclient post "https://management.azure.com/subscriptions/${subscriptionId}/ResourceGroups/${resourceGroupName}/providers/Microsoft.Network/networkWatchers/${networkWatcherName}/connectivityCheck?api-version=2017-03-01" $requestBody
-```
-
-Since this operation is long running, the URI for the result is returned in the response header as shown in the following response:
-
-**Important Values**
-
-* **Location** - This property contains the URI where the results are when the operation is complete
-
-```
-HTTP/1.1 202 Accepted
-Pragma: no-cache
-Retry-After: 10
-x-ms-request-id: e49b12c7-c232-472c-b6d2-6c257ce80fa5
-Strict-Transport-Security: max-age=31536000; includeSubDomains
-Cache-Control: no-cache
-Location: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.Network/locations/westcentralus/operationResults/e49b12c7-c232-472c-b6d2-6c257ce80fa5?api-version=2017-03-01
-Server: Microsoft-HTTPAPI/2.0; Microsoft-HTTPAPI/2.0
-x-ms-ratelimit-remaining-subscription-writes: 1199
-x-ms-correlation-request-id: c3d9744f-5683-427d-bdd1-636b68ab01b6
-x-ms-routing-request-id: WESTUS:20170602T203101Z:c3d9744f-5683-427d-bdd1-636b68ab01b6
-Date: Fri, 02 Jun 2017 20:31:00 GMT
-
-null
-```
-
-### Response
-
-In the following response, you can see the `connectionStatus` shows as **Reachable**. When a connection is successful, latency values are provided.
-
-```json
-{
- "hops": [
- {
- "type": "Source",
- "id": "6adc0fe1-e384-4220-b1b1-f0d181220072",
- "address": "10.1.1.4",
- "resourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/ContosoRG/providers/Microsoft.Network/networkInterfaces/appNic0/ipConfigurations/ipconfig1",
- "nextHopIds": [
- "b50b7076-9ff2-4782-b40e-0b89cf758f74"
- ],
- "issues": []
- },
- {
- "type": "Internet",
- "id": "b50b7076-9ff2-4782-b40e-0b89cf758f74",
- "address": "204.79.197.200",
- "resourceId": "Internet",
- "nextHopIds": [],
- "issues": []
- }
- ],
- "connectionStatus": "Reachable",
- "avgLatencyInMs": 1,
- "minLatencyInMs": 0,
- "maxLatencyInMs": 7,
- "probesSent": 100,
- "probesFailed": 0
-}
-```
-
-## Check connectivity to a storage endpoint
-
-The following example checks the connectivity from a virtual machine to a blob storage account.
-
-### Example
-
-```powershell
-$subscriptionId = "00000000-0000-0000-0000-000000000000"
-$resourceGroupName = "NetworkWatcherRG"
-$networkWatcherName = "NetworkWatcher_westcentralus"
-$sourceResourceId = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/ContosoRG/providers/Microsoft.Compute/virtualMachines/MultiTierApp0"
-$destinationResourceId = "https://build2017nwdiag360.blob.core.windows.net/"
-$destinationPort = "0"
-$requestBody = @"
-{
- 'source': {
- 'resourceId': '${sourceResourceId}',
- 'port': 0
- },
- 'destination': {
- 'address': '${destinationResourceId}',
- 'port': ${destinationPort}
- }
-}
-"@
-
-$response = armclient post "https://management.azure.com/subscriptions/${subscriptionId}/ResourceGroups/${resourceGroupName}/providers/Microsoft.Network/networkWatchers/${networkWatcherName}/connectivityCheck?api-version=2017-03-01" $requestBody
-```
-
-Since this operation is long running, the URI for the result is returned in the response header as shown in the following response:
-
-**Important Values**
-
-* **Location** - This property contains the URI where the results are when the operation is complete
-
-```
-HTTP/1.1 202 Accepted
-Pragma: no-cache
-Retry-After: 10
-x-ms-request-id: c4ed3806-61ea-4a6b-abc1-9d6f2afc79c2
-Strict-Transport-Security: max-age=31536000; includeSubDomains
-Cache-Control: no-cache
-Location: https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.Network/locations/westcentralus/operationResults/c4ed3806-61ea-4a6b-abc1-9d6f2afc79c2?api-version=2017-03-01
-Server: Microsoft-HTTPAPI/2.0; Microsoft-HTTPAPI/2.0
-x-ms-ratelimit-remaining-subscription-writes: 1199
-x-ms-correlation-request-id: 93bf5af0-fef5-4b7a-bb9e-9976ba5cdb95
-x-ms-routing-request-id: WESTUS2:20170602T200504Z:93bf5af0-fef5-4b7a-bb9e-9976ba5cdb95
-Date: Fri, 02 Jun 2017 20:05:03 GMT
-
-null
-```
-
-### Response
-
-The following example is the response from running the previous API call. As the check is successful, the `connectionStatus` property shows as **Reachable**. You are provided the details regarding the number of hops required to reach the storage blob and latency.
-
-```json
-{
- "hops": [
- {
- "type": "Source",
- "id": "6adc0fe1-e384-4220-b1b1-f0d181220072",
- "address": "10.1.1.4",
- "resourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/ContosoRG/providers/Microsoft.Network/networkInterfaces/appNic0/ipConfigurations/ipconfig1",
- "nextHopIds": [
- "b50b7076-9ff2-4782-b40e-0b89cf758f74"
- ],
- "issues": []
- },
- {
- "type": "Internet",
- "id": "b50b7076-9ff2-4782-b40e-0b89cf758f74",
- "address": "13.71.200.248",
- "resourceId": "Internet",
- "nextHopIds": [],
- "issues": []
- }
- ],
- "connectionStatus": "Reachable",
- "avgLatencyInMs": 1,
- "minLatencyInMs": 0,
- "maxLatencyInMs": 7,
- "probesSent": 100,
- "probesFailed": 0
-}
-```
-
-## Next steps
-
-Learn how to automate packet captures with Virtual machine alerts by viewing [Create an alert triggered packet capture](network-watcher-alert-triggered-packet-capture.md).
-
-Find if certain traffic is allowed in or out of your VM by visiting [Check IP flow verify](diagnose-vm-network-traffic-filtering-problem.md).
network-watcher Network Watcher Intrusion Detection Open Source Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-intrusion-detection-open-source-tools.md
Title: Perform network intrusion detection by using open-source tools description: Learn how to use Azure Network Watcher and open-source tools to perform network intrusion detection.- Previously updated : 09/29/2023 Last updated : 07/30/2024 # Perform network intrusion detection by using Azure Network Watcher and open-source tools
For more instructions on installing Logstash, refer to the [official Elastic doc
This article provides a sample dashboard for you to view trends and details in your alerts. To use it:
-1. Download the [dashboard file](https://aka.ms/networkwatchersuricatadashboard), [visualization file](https://aka.ms/networkwatchersuricatavisualization), and [saved search file](https://aka.ms/networkwatchersuricatasavedsearch).
+1. Download the [dashboard file](https://github.com/Azure/NWPublicScripts/blob/main/nw-public-docs-artifacts/nsg-flow-logs/suricata/Sample_Suricata_Alert_Kibana_Dashboard.json), [visualization file](https://github.com/Azure/NWPublicScripts/blob/main/nw-public-docs-artifacts/nsg-flow-logs/suricata/Sample_Suricata_Alert_Visualizations.json), and [saved search file](https://github.com/Azure/NWPublicScripts/blob/main/nw-public-docs-artifacts/nsg-flow-logs/suricata/Sample_Suricata_Alert_Saved_Search.json).
1. On the **Management** tab of Kibana, go to **Saved Objects** and import all three files. Then, on the **Dashboard** tab, you can open and load the sample dashboard.
network-watcher Network Watcher Visualize Nsg Flow Logs Open Source Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-visualize-nsg-flow-logs-open-source-tools.md
Title: Visualize NSG flow logs - Elastic Stack
-description: Manage and analyze Network Security Group Flow Logs in Azure using Network Watcher and Elastic Stack.
-
+description: Manage and analyze network security group Flow Logs in Azure using Network Watcher and Elastic Stack.
Previously updated : 05/31/2024 Last updated : 07/30/2024 # Visualize Azure Network Watcher NSG flow logs using open source tools
-Network Security Group flow logs provide information that can be used understand ingress and egress IP traffic on Network Security Groups. These flow logs show outbound and inbound flows on a per rule basis, the NIC the flow applies to, 5-tuple information about the flow (Source/Destination IP, Source/Destination Port, Protocol), and if the traffic was allowed or denied.
+Network security group flow logs provide information that can be used to understand ingress and egress IP traffic on network security groups. These flow logs show outbound and inbound flows on a per rule basis, the NIC the flow applies to, 5-tuple information about the flow (Source/Destination IP, Source/Destination Port, Protocol), and whether the traffic was allowed or denied.
These flow logs can be difficult to manually parse and gain insights from. However, there are several open source tools that can help visualize this data. This article provides a solution to visualize these logs using the Elastic Stack, which allows you to quickly index and visualize your flow logs on a Kibana dashboard. ## Scenario
-In this article, we set up a solution that allows you to visualize Network Security Group flow logs using the Elastic Stack. A Logstash input plugin obtains the flow logs directly from the storage blob configured for containing the flow logs. Then, using the Elastic Stack, the flow logs are indexed and used to create a Kibana dashboard to visualize the information.
+In this article, we set up a solution that allows you to visualize network security group flow logs using the Elastic Stack. A Logstash input plugin obtains the flow logs directly from the storage blob configured for containing the flow logs. Then, using the Elastic Stack, the flow logs are indexed and used to create a Kibana dashboard to visualize the information.
-![Diagram shows a scenario that allows you to visualize Network Security Group flow logs using the Elastic Stack.][scenario]
+![Diagram shows a scenario that allows you to visualize network security group flow logs using the Elastic Stack.][scenario]
## Steps
-### Enable Network Security Group flow logging
+### Enable network security group flow logging
-For this scenario, you must have Network Security Group Flow Logging enabled on at least one Network Security Group in your account. For instructions on enabling Network Security Flow Logs, see the following article [Introduction to flow logging for Network Security Groups](nsg-flow-logs-overview.md).
+For this scenario, you must have network security group flow logging enabled on at least one network security group in your account. For instructions on enabling NSG flow logs, see [Introduction to flow logging for network security groups](nsg-flow-logs-overview.md).
### Set up the Elastic Stack
A sample dashboard to view trends and details in your alerts is shown in the fol
![figure 1][1]
-Download the [dashboard file](https://aka.ms/networkwatchernsgflowlogdashboard), the [visualization file](https://aka.ms/networkwatchernsgflowlogvisualizations), and the [saved search file](https://aka.ms/networkwatchernsgflowlogsearch).
+Download the [dashboard file](https://github.com/Azure/NWPublicScripts/blob/main/nw-public-docs-artifacts/nsg-flow-logs/kibana/Sample_NSG_Flowlog_Dashboard.json), the [visualization file](https://github.com/Azure/NWPublicScripts/blob/main/nw-public-docs-artifacts/nsg-flow-logs/kibana/Sample_NSG_Flowlog_Visualizations.json), and the [saved search file](https://github.com/Azure/NWPublicScripts/blob/main/nw-public-docs-artifacts/nsg-flow-logs/kibana/Sample_NSG_Flowlog_Saved_Search.json).
Under the **Management** tab of Kibana, navigate to **Saved Objects** and import all three files. Then from the **Dashboard** tab you can open and load the sample dashboard.
Using the query bar at the top of the dashboard, you can filter down the dashboa
## Conclusion
-By combining the Network Security Group flow logs with the Elastic Stack, we have come up with powerful and customizable way to visualize our network traffic. These dashboards allow you to quickly gain and share insights about your network traffic, and filter down and investigate on any potential anomalies. Using Kibana, you can tailor these dashboards and create specific visualizations to meet any security, audit, and compliance needs.
+By combining the network security group flow logs with the Elastic Stack, we have come up with a powerful and customizable way to visualize our network traffic. These dashboards allow you to quickly gain and share insights about your network traffic, and to filter down and investigate any potential anomalies. Using Kibana, you can tailor these dashboards and create specific visualizations to meet any security, audit, and compliance needs.
## Next steps
networking Networking Partners Msp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/networking-partners-msp.md
Title: 'Networking Partners: Azure Networking | Microsoft Docs'
description: Learn about Azure Networking Managed Service Provider Partner Program and find a list of partners that offer cloud and hybrid networking services. -+ Last updated 06/30/2023
Use the links in this section for more information about managed cloud networkin
|[Zertia](https://zertia.es/)||[ExpressRoute ΓÇô Intercloud connectivity](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/zertiatelecomunicacionessl1615311863650.zertia-inter-conect-of103?tab=Overview)|[Enterprise Connectivity Suite - Virtual WAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/zertiatelecomunicacionessl1615311863650.zertia-vw-suite-of101?tab=Overview); [Manage Virtual WAN ΓÇô SD-WAN Fortinet](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/zertiatelecomunicacionessl1615311863650.zertia-mvw-fortinet-of101?tab=Overview); [Manage Virtual WAN ΓÇô SD-WAN Cisco Meraki](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/zertiatelecomunicacionessl1615311863650.zertia-mvw-cisco-of101?tab=Overview)||| Azure Marketplace offers for Managed ExpressRoute, Virtual WAN, Security Services and Private Edge Zone Services from the following Azure Networking MSP Partners are on our roadmap:
-[Amdocs](https://www.amdocs.com/); [Cirrus Core Networks](https://cirruscorenetworks.com/); [Cognizant](https://www.cognizant.com/us/en/glossary/cloud-enablement); [InterCloud](https://www.intercloud.com/partners/microsoft-azure); [KINX](https://www.kinx.net/service/cloud/?lang=en); [OmniClouds](https://omniclouds.com/); [Sejong Telecom](https://www.sejongtelecom.net/en/pages/service/cloud_ms); [SES](https://www.ses.com/networks/cloud/ses-and-azure-expressroute);
+[Amdocs](https://www.amdocs.com/); [Cirrus Core Networks](https://cirruscorenetworks.com/); [Cognizant](https://www.cognizant.com/us/en/glossary/cloud-enablement); [InterCloud](https://www.intercloud.com/partners/microsoft-azure); [KINX](https://www.kinx.net/service/cloud/?lang=en); [OmniClouds](https://omniclouds.com/); [Sejong Telecom](https://www.sejongtelecom.net/); [SES](https://www.ses.com/networks/cloud/ses-and-azure-expressroute);
## <a name="expressroute"></a>ExpressRoute partners
operational-excellence Relocation Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/relocation-backup.md
When you relocate a VM that runs SQL or SAP HANA servers, you will no longer be
- If you're using network security group (NSG) rules to control outbound connectivity, create [these service tag rules](../resource-mover/support-matrix-move-region-azure-vm.md#nsg-rules).

1. Relocate your VM to the new region using [Azure Resource Mover](../resource-mover/tutorial-move-region-virtual-machines.md).
-
+1. Create a Recovery Services vault in the new region where the VM is relocated.
+1. Re-configure backup.
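As a rough sketch of the vault-creation step, you can create the Recovery Services vault in the target region with Az PowerShell; the names and region below are placeholders, not values from this article.

```powershell
# Placeholder names - create a Recovery Services vault in the target region.
New-AzRecoveryServicesVault -Name "<vault-name>" `
    -ResourceGroupName "<your-resource-group>" -Location "<target-region>"
```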
### Back up services for on-premises resources
postgresql Concepts Index Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-index-tuning.md
For more information on index tuning and related articles, see the documentation
### Supported regions
-Index tuning feature is available in the following regions:
--- Australia Southeast-- Canada Central-- Central India-- East Asia-- East US 2-- France Central-- Jio India Central-- Korea South-- North Central US-- North Europe-- South Africa North-- South Central US-- Southeast Asia-- Sweden Central-- Switzerland North-- UK South-- West Central US-- West US-- West US 3
+There's no regional limitation for the index tuning feature. It's available in [all regions](./overview.md#azure-regions) where Azure Database for PostgreSQL Flexible Server is available.
### Supported tiers and SKUs

Index tuning is supported on all [currently available tiers](concepts-compute.md): Burstable, General Purpose, and Memory Optimized, and on any [currently supported compute SKU](concepts-compute.md) with at least 4 vCores.

> [!IMPORTANT]
-> If a server has index tuning enabled and is scaled down to a compute with less than the minimum number of required vCores, the feature will remain enabled. Because the feature is not supported on servers with less than 4 vCores, ifyou plan to enable it in a server which is less than 4 vCores, or you plan to scale down your instance to less than 4 vCores, make sure you disable index tuning first, setting `index_tuning.mode`to `OFF`.
+> If a server has index tuning enabled and is scaled down to a compute with less than the minimum number of required vCores, the feature will remain enabled. Because the feature isn't supported on servers with fewer than 4 vCores, if you plan to enable it on a server with fewer than 4 vCores, or you plan to scale down your instance to fewer than 4 vCores, make sure you disable index tuning first by setting `index_tuning.mode` to `OFF`.
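A minimal sketch of disabling the feature from PowerShell is shown below. It assumes the `Az.PostgreSql` module's `Update-AzPostgreSqlFlexibleServerConfiguration` cmdlet is available, and the server and resource group names are placeholders.

```powershell
# Assumes the Az.PostgreSql module; names are placeholders.
Update-AzPostgreSqlFlexibleServerConfiguration -Name "index_tuning.mode" `
    -ResourceGroupName "<your-resource-group>" -ServerName "<your-server-name>" -Value "OFF"
```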
### Supported versions of PostgreSQL
postgresql Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-monitoring.md
description: Review the monitoring and metrics features in Azure Database for Po
Previously updated : 06/18/2024 Last updated : 07/31/2024
The following metrics are available for an Azure Database for PostgreSQL flexibl
|**CPU Credits Consumed** |`cpu_credits_consumed` |Count |Number of credits used by the flexible server. Applies to the Burstable tier. |Yes |
|**CPU Credits Remaining** |`cpu_credits_remaining` |Count |Number of credits available to burst. Applies to the Burstable tier. |Yes |
|**CPU percent** |`cpu_percent` |Percent |Percentage of CPU in use. |Yes |
-|**Database Size (preview)** |`database_size_bytes` |Bytes |Database size in bytes. |Yes |
+|**Database Size** |`database_size_bytes` |Bytes |Database size in bytes. |Yes |
|**Disk Queue Depth** |`disk_queue_depth` |Count |Number of outstanding I/O operations to the data disk. |Yes |
|**IOPS** |`iops` |Count |Number of I/O operations to disk per second. |Yes |
|**Maximum Used Transaction IDs**|`maximum_used_transactionIDs`|Count |Maximum number of transaction IDs in use. |Yes |
|**Memory percent** |`memory_percent` |Percent |Percentage of memory in use. |Yes |
-|**Network Out** |`network_bytes_egress` |Bytes |Amount of outgoing network traffic. |Yes |
-|**Network In** |`network_bytes_ingress` |Bytes |Amount of incoming network traffic. |Yes |
+|**Network Out** |`network_bytes_egress` |Bytes |Total sum of outgoing network traffic on the server for a selected period. This metric includes outgoing traffic from your database and from Azure Database for PostgreSQL flexible server features such as monitoring, logs, WAL archive, and replication. |Yes |
+|**Network In** |`network_bytes_ingress` |Bytes |Total sum of incoming network traffic on the server for a selected period. This metric includes incoming traffic to your database and to Azure Database for PostgreSQL flexible server features such as monitoring, logs, WAL archive, and replication. |Yes |
|**Read IOPS** |`read_iops` |Count |Number of data disk I/O read operations per second. |Yes |
|**Read Throughput** |`read_throughput` |Bytes |Bytes read per second from disk. |Yes |
|**Storage Free** |`storage_free` |Bytes |Amount of storage space that's available. |Yes |
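These metrics can also be pulled programmatically through Azure Monitor. The following sketch queries the `cpu_percent` metric for the last hour with `Get-AzMetric`; the resource ID is a placeholder for your flexible server.

```powershell
# Placeholder resource ID of your Azure Database for PostgreSQL flexible server.
$resourceId = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.DBforPostgreSQL/flexibleServers/<server-name>"

# Retrieve average CPU percentage at 5-minute granularity for the last hour.
Get-AzMetric -ResourceId $resourceId -MetricName "cpu_percent" `
    -TimeGrain 00:05:00 -StartTime (Get-Date).AddHours(-1) -EndTime (Get-Date) `
    -AggregationType Average
```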
postgresql Quickstart Create Server Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/quickstart-create-server-bicep.md
Last updated 04/27/2024-+
postgresql Automigration Single To Flexible Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/automigration-single-to-flexible-postgresql.md
The automigration provides a highly resilient and self-healing offline migration
- The **migrated Flexible Server is online** and can now be managed via Azure portal/CLI. -- The **updated connection strings** to connect to your old single server are shared with you by email. The connection strings can be used to log in to the Single server if you want to copy any settings to your new Flexible server.
+- The **updated connection strings** to connect to your old single server are shared with you by email if you've enabled Service Health notifications in the Azure portal. Alternatively, you can find the connection strings on the Single server portal page under **Settings** > **Connection strings**. The connection strings can be used to log in to the Single server if you want to copy any settings to your new Flexible server.
- The **legacy Single Server** is deleted **seven days** after the migration.
postgresql Application Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/application-best-practices.md
Title: App development best practices - Azure Database for PostgreSQL single server description: Learn about best practices for building an app by using Azure Database for PostgreSQL single server.-+
postgresql Concepts Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-aks.md
Title: Connect to Azure Kubernetes Service - Azure Database for PostgreSQL - Single Server description: Learn about connecting Azure Kubernetes Service (AKS) with Azure Database for PostgreSQL - Single Server-+
postgresql Concepts Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-audit.md
Title: Audit logging in Azure Database for PostgreSQL - Single Server description: Concepts for pgAudit audit logging in Azure Database for PostgreSQL - Single Server.-+
postgresql Concepts Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-azure-ad-authentication.md
Title: Active Directory authentication - Azure Database for PostgreSQL - Single Server description: Learn about the concepts of Microsoft Entra ID for authentication with Azure Database for PostgreSQL - Single Server-+
postgresql Concepts Azure Advisor Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-azure-advisor-recommendations.md
Title: Azure Advisor for PostgreSQL description: Learn about Azure Advisor recommendations for PostgreSQL.-+
postgresql Concepts Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-certificate-rotation.md
Title: Certificate rotation for Azure Database for PostgreSQL Single server description: Learn about the upcoming changes of root certificate changes that affect Azure Database for PostgreSQL Single server-+
postgresql Concepts Connection Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-connection-libraries.md
Title: Connection libraries - Azure Database for PostgreSQL - Single Server description: This article describes several libraries and drivers that you can use when coding applications to connect and query Azure Database for PostgreSQL - Single Server.-+
postgresql Concepts Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-connectivity-architecture.md
Title: Connectivity architecture - Azure Database for PostgreSQL - Single Server description: Describes the connectivity architecture of your Azure Database for PostgreSQL - Single Server.-+
postgresql Concepts Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-connectivity.md
Title: Handle transient connectivity errors - Azure Database for PostgreSQL - Single Server description: Learn how to handle transient connectivity errors for Azure Database for PostgreSQL - Single Server.-+
postgresql Concepts Data Access And Security Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-data-access-and-security-private-link.md
Title: Private Link - Azure Database for PostgreSQL - Single server description: Learn how Private link works for Azure Database for PostgreSQL - Single server.-+
postgresql Concepts Data Access And Security Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-data-access-and-security-vnet.md
Title: Virtual network rules - Azure Database for PostgreSQL - Single Server description: Learn how to use virtual network (vnet) service endpoints to connect to Azure Database for PostgreSQL - Single Server.-+
postgresql Concepts Data Encryption Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-data-encryption-postgresql.md
Title: Data encryption with customer-managed key - Azure Database for PostgreSQL - Single server description: Azure Database for PostgreSQL Single server data encryption with a customer-managed key enables you to Bring Your Own Key (BYOK) for data protection at rest. It also allows organizations to implement separation of duties in the management of keys and data.-+
postgresql Concepts Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-extensions.md
Title: Extensions - Azure Database for PostgreSQL - Single Server description: Learn about the available PostgreSQL extensions in Azure Database for PostgreSQL - Single Server-+
postgresql Concepts Firewall Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-firewall-rules.md
Title: Firewall rules - Azure Database for PostgreSQL - Single Server description: This article describes how to use firewall rules to connect to Azure Database for PostgreSQL - Single Server.-+
postgresql Concepts Infrastructure Double Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-infrastructure-double-encryption.md
Title: Infrastructure double encryption - Azure Database for PostgreSQL description: Learn about using Infrastructure double encryption to add a second layer of encryption with a service-managed keys.-+
postgresql Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-limits.md
Title: Limits - Azure Database for PostgreSQL - Single Server description: This article describes limits in Azure Database for PostgreSQL - Single Server, such as number of connection and storage engine options.-+
postgresql Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-monitoring.md
Title: Monitor and tune - Azure Database for PostgreSQL - Single Server description: This article describes monitoring and tuning features in Azure Database for PostgreSQL - Single Server.-+
postgresql Concepts Performance Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-performance-recommendations.md
Title: Performance Recommendations - Azure Database for PostgreSQL - Single Server description: This article describes the Performance Recommendation feature in Azure Database for PostgreSQL - Single Server.-+
postgresql Concepts Planned Maintenance Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-planned-maintenance-notification.md
Title: Planned maintenance notification - Azure Database for PostgreSQL - Single Server description: This article describes the Planned maintenance notification feature in Azure Database for PostgreSQL - Single Server-+
postgresql Concepts Pricing Tiers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-pricing-tiers.md
Title: Pricing tiers - Azure Database for PostgreSQL - Single Server description: This article describes the compute and storage options in Azure Database for PostgreSQL - Single Server.-+
postgresql Concepts Query Performance Insight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-query-performance-insight.md
Title: Query Performance Insight - Azure Database for PostgreSQL - Single Server description: This article describes the Query Performance Insight feature in Azure Database for PostgreSQL - Single Server.-+
postgresql Concepts Query Store Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-query-store-best-practices.md
Title: Query Store best practices in Azure Database for PostgreSQL - Single Server description: This article describes best practices for the Query Store in Azure Database for PostgreSQL - Single Server.-+
postgresql Concepts Query Store Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-query-store-scenarios.md
Title: Query Store scenarios - Azure Database for PostgreSQL - Single Server description: This article describes some scenarios for the Query Store in Azure Database for PostgreSQL - Single Server.-+
postgresql Concepts Query Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-query-store.md
Title: Query Store - Azure Database for PostgreSQL - Single Server description: This article describes the Query Store feature in Azure Database for PostgreSQL - Single Server.-+
postgresql Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-security.md
Title: Security in Azure Database for PostgreSQL - Single Server description: An overview of the security features in Azure Database for PostgreSQL - Single Server.-+
postgresql Concepts Server Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-server-logs.md
Title: Logs - Azure Database for PostgreSQL - Single Server description: Describes logging configuration, storage and analysis in Azure Database for PostgreSQL - Single Server-+
postgresql Concepts Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-servers.md
Title: Servers - Azure Database for PostgreSQL - Single Server description: This article provides considerations and guidelines for configuring and managing Azure Database for PostgreSQL - Single Server.-+
postgresql Concepts Ssl Connection Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-ssl-connection-security.md
Title: SSL/TLS - Azure Database for PostgreSQL - Single Server description: Instructions and information on how to configure TLS connectivity for Azure Database for PostgreSQL - Single Server.-+
postgresql Concepts Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-supported-versions.md
Title: Supported versions - Azure Database for PostgreSQL - Single Server description: Describes the supported Postgres major and minor versions in Azure Database for PostgreSQL - Single Server.-+
postgresql Connect Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/connect-csharp.md
Title: 'Quickstart: Connect with C# - Azure Database for PostgreSQL - Single Server' description: "This quickstart provides a C# (.NET) code sample you can use to connect and query data from Azure Database for PostgreSQL - Single Server."-+
postgresql Connect Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/connect-go.md
Title: 'Quickstart: Connect with Go - Azure Database for PostgreSQL - Single Server' description: This quickstart provides a Go programming language sample you can use to connect and query data from Azure Database for PostgreSQL - Single Server.-+
postgresql Connect Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/connect-java.md
Title: 'Quickstart: Use Java and JDBC with Azure Database for PostgreSQL' description: In this quickstart, you learn how to use Java and JDBC with an Azure Database for PostgreSQL.-+
postgresql Connect Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/connect-nodejs.md
Title: 'Quickstart: Use Node.js to connect to Azure Database for PostgreSQL - Single Server' description: This quickstart provides a Node.js code sample you can use to connect and query data from Azure Database for PostgreSQL - Single Server.-+
postgresql Connect Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/connect-php.md
Title: 'Quickstart: Connect with PHP - Azure Database for PostgreSQL - Single Server' description: This quickstart provides a PHP code sample you can use to connect and query data from Azure Database for PostgreSQL - Single Server.-+
postgresql Connect Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/connect-python.md
Title: 'Quickstart: Connect with Python - Azure Database for PostgreSQL - Single Server' description: This quickstart provides Python code samples that you can use to connect and query data from Azure Database for PostgreSQL - Single Server.-+
postgresql Connect Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/connect-ruby.md
Title: 'Quickstart: Connect with Ruby - Azure Database for PostgreSQL - Single Server' description: This quickstart provides a Ruby code sample you can use to connect and query data from Azure Database for PostgreSQL - Single Server.-+
postgresql Connect Rust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/connect-rust.md
Title: Use Rust to interact with Azure Database for PostgreSQL description: Learn to connect and query data in Azure Database for PostgreSQL Single Server using Rust code samples.-+
postgresql How To Alert On Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-alert-on-metric.md
Title: Configure alerts - Azure portal - Azure Database for PostgreSQL - Single Server description: This article describes how to configure and access metric alerts for Azure Database for PostgreSQL - Single Server from the Azure portal.-+
postgresql How To Auto Grow Storage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-auto-grow-storage-cli.md
Title: Auto-grow storage - Azure CLI - Azure Database for PostgreSQL - Single Server description: This article describes how you can configure storage auto-grow using the Azure CLI in Azure Database for PostgreSQL - Single Server.-+
postgresql How To Auto Grow Storage Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-auto-grow-storage-portal.md
Title: Auto grow storage - Azure portal - Azure Database for PostgreSQL - Single Server description: This article describes how you can configure storage auto-grow using the Azure portal in Azure Database for PostgreSQL - Single Server-+
postgresql How To Auto Grow Storage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-auto-grow-storage-powershell.md
Title: Auto grow storage in Azure Database for PostgreSQL using PowerShell description: Learn how to auto grow storage using PowerShell in Azure Database for PostgreSQL.-+
postgresql How To Configure Privatelink Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-configure-privatelink-cli.md
Title: Private Link - Azure CLI - Azure Database for PostgreSQL - Single server description: Learn how to configure private link for Azure Database for PostgreSQL- Single server from Azure CLI-+
postgresql How To Configure Privatelink Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-configure-privatelink-portal.md
Title: Private Link - Azure portal - Azure Database for PostgreSQL - Single server description: Learn how to configure private link for Azure Database for PostgreSQL- Single server from Azure portal-+
postgresql How To Configure Server Logs In Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-configure-server-logs-in-portal.md
Title: Manage logs - Azure portal - Azure Database for PostgreSQL - Single Server description: This article describes how to configure and access the server logs (.log files) in Azure Database for PostgreSQL - Single Server from the Azure portal.-+
postgresql How To Configure Server Logs Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-configure-server-logs-using-cli.md
Title: Manage logs - Azure CLI - Azure Database for PostgreSQL - Single Server description: This article describes how to configure and access the server logs (.log files) in Azure Database for PostgreSQL - Single Server by using the Azure CLI.-+
postgresql How To Configure Server Parameters Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-configure-server-parameters-using-cli.md
Title: Configure parameters - Azure Database for PostgreSQL - Single Server description: This article describes how to configure Postgres parameters in Azure Database for PostgreSQL - Single Server using the Azure CLI.-+
postgresql How To Configure Server Parameters Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-configure-server-parameters-using-portal.md
Title: Configure server parameters - Azure portal - Azure Database for PostgreSQL - Single Server description: This article describes how to configure the Postgres parameters in Azure Database for PostgreSQL through the Azure portal.-+
postgresql How To Configure Server Parameters Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-configure-server-parameters-using-powershell.md
Title: Configure server parameters - Azure PowerShell - Azure Database for PostgreSQL description: This article describes how to configure the service parameters in Azure Database for PostgreSQL using PowerShell.-+
postgresql How To Configure Sign In Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-configure-sign-in-azure-ad-authentication.md
Title: Use Microsoft Entra ID - Azure Database for PostgreSQL - Single Server description: Learn about how to set up Microsoft Entra ID for authentication with Azure Database for PostgreSQL - Single Server-+
postgresql How To Connect Query Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-connect-query-guide.md
Title: Connect and query - Single Server PostgreSQL description: Links to quickstarts showing how to connect to your Azure Database for PostgreSQL Single Server and run queries.-+
postgresql How To Connect With Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-connect-with-managed-identity.md
Title: Connect with Managed Identity - Azure Database for PostgreSQL - Single Server description: Learn about how to connect and authenticate using Managed Identity for authentication with Azure Database for PostgreSQL-+
postgresql How To Connection String Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-connection-string-powershell.md
Title: Generate a connection string with PowerShell - Azure Database for PostgreSQL description: This article provides an Azure PowerShell example to generate a connection string for connecting to Azure Database for PostgreSQL.-+
postgresql How To Create Manage Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-create-manage-server-portal.md
Title: Manage Azure Database for PostgreSQL - Azure portal description: Learn how to manage an Azure Database for PostgreSQL server from the Azure portal.-+
postgresql How To Create Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-create-users.md
Title: Create users - Azure Database for PostgreSQL - Single Server description: This article describes how you can create new user accounts to interact with an Azure Database for PostgreSQL - Single Server.-+
postgresql How To Data Encryption Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-data-encryption-cli.md
Title: Data encryption - Azure CLI - for Azure Database for PostgreSQL - Single server description: Learn how to set up and manage data encryption for your Azure Database for PostgreSQL Single server by using the Azure CLI.-+
postgresql How To Data Encryption Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-data-encryption-portal.md
Title: Data encryption - Azure portal - for Azure Database for PostgreSQL - Single server description: Learn how to set up and manage data encryption for your Azure Database for PostgreSQL Single server by using the Azure portal.-+
postgresql How To Data Encryption Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-data-encryption-troubleshoot.md
Title: Troubleshoot data encryption - Azure Database for PostgreSQL - Single Server description: Learn how to troubleshoot the data encryption on your Azure Database for PostgreSQL - Single Server-+
postgresql How To Data Encryption Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-data-encryption-validation.md
Title: How to ensure validation of the Azure Database for PostgreSQL - Data encryption description: Learn how to validate the encryption of the Azure Database for PostgreSQL - Data encryption using the customers managed key.-+
postgresql How To Deny Public Network Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-deny-public-network-access.md
Title: Deny Public Network Access - Azure portal - Azure Database for PostgreSQL - Single server description: Learn how to configure Deny Public Network Access using Azure portal for your Azure Database for PostgreSQL Single server -+
postgresql How To Double Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-double-encryption.md
Title: Infrastructure double encryption - Azure portal - Azure Database for PostgreSQL description: Learn how to set up and manage Infrastructure double encryption for your Azure Database for PostgreSQL.-+
postgresql How To Manage Firewall Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-manage-firewall-using-portal.md
Title: Manage firewall rules - Azure portal - Azure Database for PostgreSQL - Single Server description: Create and manage firewall rules for Azure Database for PostgreSQL - Single Server using the Azure portal-+
postgresql How To Manage Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-manage-server-cli.md
Title: Manage server - Azure CLI - Azure Database for PostgreSQL description: Learn how to manage an Azure Database for PostgreSQL server from the Azure CLI.-+
postgresql How To Manage Vnet Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-manage-vnet-using-cli.md
Title: Use virtual network rules - Azure CLI - Azure Database for PostgreSQL - Single Server description: This article describes how to create and manage VNet service endpoints and rules for Azure Database for PostgreSQL using Azure CLI command line.-+
postgresql How To Manage Vnet Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-manage-vnet-using-portal.md
Title: Use virtual network rules - Azure portal - Azure Database for PostgreSQL - Single Server description: Create and manage VNet service endpoints and rules Azure Database for PostgreSQL - Single Server using the Azure portal-+
postgresql How To Move Regions Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-move-regions-portal.md
Title: Move Azure regions - Azure portal - Azure Database for PostgreSQL - Single Server description: Move an Azure Database for PostgreSQL server from one Azure region to another using a read replica and the Azure portal.-+
postgresql How To Optimize Autovacuum https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-optimize-autovacuum.md
Title: Optimize autovacuum - Azure Database for PostgreSQL - Single Server description: This article describes how you can optimize autovacuum on an Azure Database for PostgreSQL - Single Server-+
postgresql How To Optimize Bulk Inserts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-optimize-bulk-inserts.md
Title: Optimize bulk inserts - Azure Database for PostgreSQL - Single Server description: This article describes how you can optimize bulk insert operations on an Azure Database for PostgreSQL - Single Server.-+
postgresql How To Optimize Query Stats Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-optimize-query-stats-collection.md
Title: Optimize query stats collection - Azure Database for PostgreSQL - Single Server description: This article describes how you can optimize query stats collection on an Azure Database for PostgreSQL - Single Server-+
postgresql How To Optimize Query Time With Toast Table Storage Strategy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-optimize-query-time-with-toast-table-storage-strategy.md
Title: Optimize query time by using the TOAST table storage strategy in Azure Database for PostgreSQL - Single Server description: This article describes how to optimize query time with the TOAST table storage strategy on an Azure Database for PostgreSQL - Single Server.-+
postgresql How To Restart Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-restart-server-cli.md
Title: Restart server - Azure CLI - Azure Database for PostgreSQL - Single Server description: This article describes how you can restart an Azure Database for PostgreSQL - Single Server using the Azure CLI-+
postgresql How To Restart Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-restart-server-portal.md
Title: Restart server - Azure portal - Azure Database for PostgreSQL - Single Server description: This article describes how you can restart an Azure Database for PostgreSQL - Single Server using the Azure portal.-+
postgresql How To Restart Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-restart-server-powershell.md
Title: Restart Azure Database for PostgreSQL using PowerShell description: Learn how to restart an Azure Database for PostgreSQL server using PowerShell.-+
postgresql How To Restore Dropped Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-restore-dropped-server.md
Title: Restore a dropped Azure Database for PostgreSQL server description: This article describes how to restore a dropped server in Azure Database for PostgreSQL using the Azure portal.-+
postgresql How To Tls Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-tls-configurations.md
Title: TLS configuration - Azure portal - Azure Database for PostgreSQL - Single server description: Learn how to set TLS configuration using Azure portal for your Azure Database for PostgreSQL Single server -+
postgresql How To Troubleshoot Common Connection Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-troubleshoot-common-connection-issues.md
Title: Troubleshoot connections - Azure Database for PostgreSQL - Single Server description: Learn how to troubleshoot connection issues to Azure Database for PostgreSQL - Single Server.-+
postgresql Overview Single Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/overview-single-server.md
Title: Azure Database for PostgreSQL Single Server description: Provides an overview of Azure Database for PostgreSQL Single Server.-+
postgresql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/policy-reference.md
Title: Built-in policy definitions for Azure Database for PostgreSQL description: Lists Azure Policy built-in policy definitions for Azure Database for PostgreSQL. These built-in policy definitions provide common approaches to managing your Azure resources.-+
postgresql Quickstart Create Postgresql Server Database Using Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/quickstart-create-postgresql-server-database-using-arm-template.md
Title: 'Quickstart: Create an Azure Database for PostgreSQL - ARM template' description: In this quickstart, learn how to create an Azure Database for PostgreSQL single server by using an Azure Resource Manager template.-+
postgresql Quickstart Create Postgresql Server Database Using Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/quickstart-create-postgresql-server-database-using-azure-powershell.md
Title: 'Quickstart: Create server - Azure PowerShell - Azure Database for PostgreSQL - Single Server' description: Quickstart guide to create an Azure Database for PostgreSQL - Single Server using Azure PowerShell.-+
postgresql Quickstart Create Postgresql Server Database Using Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/quickstart-create-postgresql-server-database-using-bicep.md
Title: 'Quickstart: Create an Azure Database for PostgreSQL - Bicep' description: In this quickstart, learn how to create an Azure Database for PostgreSQL single server using Bicep.-+
postgresql Quickstart Create Server Database Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/quickstart-create-server-database-azure-cli.md
Title: 'Quickstart: Create server - Azure CLI - Azure Database for PostgreSQL - single server' description: In this quickstart guide, you'll create an Azure Database for PostgreSQL server by using the Azure CLI.-+
postgresql Quickstart Create Server Database Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/quickstart-create-server-database-portal.md
Title: 'Quickstart: Create server - Azure portal - Azure Database for PostgreSQL - single server' description: In this quickstart guide, you'll create and manage an Azure Database for PostgreSQL server by using the Azure portal.-+
postgresql Quickstart Create Server Up Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/quickstart-create-server-up-azure-cli.md
Title: 'Quickstart: Create server - az postgres up - Azure Database for PostgreSQL - Single Server' description: Quickstart guide to create Azure Database for PostgreSQL - Single Server using Azure CLI (command-line interface) up command.-+
postgresql Sample Scripts Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/sample-scripts-azure-cli.md
Title: Azure CLI samples - Azure Database for PostgreSQL - Single Server | Microsoft Docs description: This article lists several Azure CLI code samples available for interacting with Azure Database for PostgreSQL - Single Server.-+
postgresql Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Database for PostgreSQL description: Lists Azure Policy Regulatory Compliance controls available for Azure Database for PostgreSQL. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.-+
postgresql Tutorial Design Database Using Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/tutorial-design-database-using-azure-cli.md
Title: 'Tutorial: Design an Azure Database for PostgreSQL - Single Server - Azure CLI' description: This tutorial shows how to create, configure, and query your first Azure Database for PostgreSQL - Single Server using Azure CLI.-+
postgresql Tutorial Design Database Using Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/tutorial-design-database-using-azure-portal.md
Title: 'Tutorial: Design an Azure Database for PostgreSQL - Single Server - Azure portal' description: This tutorial shows how to Design your first Azure Database for PostgreSQL - Single Server using the Azure portal.-+
postgresql Tutorial Design Database Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/tutorial-design-database-using-powershell.md
Title: 'Tutorial: Design an Azure Database for PostgreSQL - Single Server - Azure PowerShell' description: This tutorial shows how to create, configure, and query your first Azure Database for PostgreSQL - Single Server using Azure PowerShell.-+
postgresql Tutorial Monitor And Tune https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/tutorial-monitor-and-tune.md
Title: 'Tutorial: Monitor and tune - Azure Database for PostgreSQL - Single Server' description: This tutorial walks through monitoring and tuning in Azure Database for PostgreSQL - Single Server.-+
postgresql Whats Happening To Postgresql Single Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/whats-happening-to-postgresql-single-server.md
Last updated 03/30/2023-+
postgresql Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/videos.md
Title: Azure Database for PostgreSQL Videos description: This page lists video content relevant for learning Azure Database for PostgreSQL.-+
public-multi-access-edge-compute-mec Partner Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/public-multi-access-edge-compute-mec/partner-solutions.md
Title: 'Partner solutions available in Public MEC'
description: This article lists all the Partner solutions that can be deployed in Public MEC. -+ Last updated 11/22/2022
The table in this article provides information on Partner solutions that can be
| **Vendor** | **Product(s) Name** | **Market Place** | | | | | | **Checkpoint** | [CloudGuard Network Security](https://www.checkpoint.com/cloudguard/cloud-network-security/) | [CloudGuard Network Security](https://azuremarketplace.microsoft.com/marketplace/apps/checkpoint.vsec?tab=Overview) |
-| **Citrix** | [Application Delivery Controller](https://www.citrix.com/products/citrix-adc/) | [Citrix ADC](https://azuremarketplace.microsoft.com/marketplace/apps/citrix.netscalervpx-130?tab=Overview) |
+| **Citrix** | [Application Delivery Controller](https://www.citrix.com/products/citrix-adc/) | [Citrix ADC](https://azuremarketplace.microsoft.com/marketplace/apps/citrix.netscalervpx-1vm-3nic?tab=Overview) |
| **Couchbase** | [Server](https://www.couchbase.com/products/server), [Sync-Gateway](https://www.couchbase.com/products/sync-gateway) | [Couchbase Server Enterprise](https://azuremarketplace.microsoft.com/en/marketplace/apps/couchbase.couchbase-enterprise?tab=Overview) [Couchbase Sync Gateway Enterprise](https://azuremarketplace.microsoft.com/en/marketplace/apps/couchbase.couchbase-sync-gateway-enterprise?tab=Overview) | | **Fortinet** | [FortiGate](https://www.fortinet.com/products/private-cloud-security/fortigate-virtual-appliances) |[FortiGate](https://azuremarketplace.microsoft.com/marketplace/apps/fortinet.fortinet-fortigate?tab=Overview) | | **Fortinet** | [FortiWeb](https://www.fortinet.com/products/web-application-firewall/fortiweb?tab=saas) | [FortiWeb](https://azuremarketplace.microsoft.com/marketplace/apps/fortinet.fortinet_waas?tab=Overview) |
sentinel Connect Cef Syslog Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-cef-syslog-ama.md
If you're using a log forwarder, configure the syslog daemon to listen for messa
> [!NOTE] > To avoid [Full Disk scenarios](../azure-monitor/agents/azure-monitor-agent-troubleshoot-linux-vm-rsyslog.md) where the agent can't function, we recommend that you set the `syslog-ng` or `rsyslog` configuration not to store unneeded logs. A Full Disk scenario disrupts the function of the installed AMA.
- > For more information, see [RSyslog](https://www.rsyslog.com/doc/master/configuration/actions.html) or [Syslog-ng](https://www.syslog-ng.com/technical-documents/doc/syslog-ng-open-source-edition/3.26/administration-guide/34#TOPIC-1431029).
+ > For more information, see [RSyslog](https://www.rsyslog.com/doc/master/configuration/actions.html) or [Syslog-ng](https://syslog-ng.github.io/).
## Configure the security device or appliance
sentinel Digital Guardian Data Loss Prevention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/digital-guardian-data-loss-prevention.md
DigitalGuardianDLPEvent
> [!NOTE]
- > This data connector depends on a parser based on a Kusto Function to work as expected [**DigitalGuardianDLPEvent**](https://aka.ms/sentinel-DigitalGuardianDLP-parser) which is deployed with the Microsoft Sentinel Solution.
+ > This data connector depends on a parser based on a Kusto Function to work as expected [**DigitalGuardianDLPEvent**](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/Digital%20Guardian%20Data%20Loss%20Prevention/Parsers/DigitalGuardianDLPEvent.yaml) which is deployed with the Microsoft Sentinel Solution.
1. Configure Digital Guardian to forward logs via Syslog to remote server where you will install the agent.
sentinel Senservapro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/senservapro.md
let timeframe = 14d;
1. Setup the data connection
-Visit [Senserva Setup](https://www.senserva.com/senserva-microsoft-sentinel-edition-setup/) for information on setting up the Senserva data connection, support, or any other questions. The Senserva installation will configure a Log Analytics Workspace for output. Deploy Microsoft Sentinel onto the configured Log Analytics Workspace to finish the data connection setup by following [this onboarding guide.](/azure/sentinel/quickstart-onboard)
+Visit [Senserva Setup](https://blog.senserva.com/senserva-and-microsoft-sentinel-overview/) for information on setting up the Senserva data connection, support, or any other questions. The Senserva installation will configure a Log Analytics Workspace for output. Deploy Microsoft Sentinel onto the configured Log Analytics Workspace to finish the data connection setup by following [this onboarding guide.](/azure/sentinel/quickstart-onboard)
sentinel Forward Syslog Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/forward-syslog-monitor-agent.md
This script can make changes for both rsyslog.d and syslog-ng.
> [!NOTE] > To avoid [Full Disk scenarios](../azure-monitor/agents/azure-monitor-agent-troubleshoot-linux-vm-rsyslog.md) where the agent can't function, we recommend that you set the `syslog-ng` or `rsyslog` configuration not to store unneeded logs. A Full Disk scenario disrupts the function of the installed Azure Monitor Agent.
-> Read more about [rsyslog](https://www.rsyslog.com/doc/master/configuration/actions.html) or [syslog-ng](https://www.syslog-ng.com/technical-documents/doc/syslog-ng-open-source-edition/3.26/administration-guide/34#TOPIC-1431029).
+> Read more about [rsyslog](https://www.rsyslog.com/doc/master/configuration/actions.html) or [syslog-ng](https://www.syslog-ng.com/technical-documents).
## Verify Syslog data is forwarded to your Log Analytics workspace
sentinel Summary Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/summary-rules.md
This procedure describes a sample process for using summary rules with [auxiliar
1. Set up your custom CEF connector from Logstash:
- 1. Deploy our [ARM template](https://aka.ms/DeployCEFresources) to your Microsoft Sentinel workspace to create a custom table with data collection rules (DCR) and a data collection endpoint (DCE).
+ 1. Deploy the following ARM template to your Microsoft Sentinel workspace to create a custom table with data collection rules (DCR) and a data collection endpoint (DCE):
+
+ [![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2FAzure-Sentinel%2Fmaster%2FDataConnectors%2Fmicrosoft-sentinel-log-analytics-logstash-output-plugin%2Fexamples%2Fauxiliry-logs%2Farm-template%2Fdeploy-dcr-dce-cef-table.json)
1. Note the following details from the ARM template output:
This procedure describes a sample process for using summary rules with [auxiliar
1. Create a Microsoft Entra application, and note the application's **Client ID** and **Secret**. For more information, see [Tutorial: Send data to Azure Monitor Logs with Logs ingestion API (Azure portal)](/azure/azure-monitor/logs/tutorial-logs-ingestion-portal).
- 1. Use the following sample script to update your Logstash configuration file. The updates configure Logstash to send CEF logs to the custom table created by the ARM template, transforming JSON data to DCR format. In this script:
-
- - Replace placeholder values with your own values for the custom table and Microsoft Entra app you created earlier.
- - Add the Logstash ['prune' filter plugin](https://www.elastic.co/guide/en/logstash/current/plugins-filters-prune.html) to your filter section to include only the following field names in your events:
-
- :::row:::
- :::column:::
- - `Message`
- - `TimeGenerated`
- - `Activity`
- - `LogSeverity`
- - `CefVersion`
- :::column-end:::
- :::column:::
- - `DeviceVendor`
- - `DeviceProduct`
- - `DeviceVersion`
- - `DeviceEventClassID`
- :::column-end:::
- :::row-end:::
-
- ```json
- input {
- syslog {
- port => 514
- codec => cef
- }
- }
- filter{
- ruby {
- code => "
- require 'json'
- new_hash = event.to_hash
- event.set('Message', new_hash.to_json)
- "
- }
- mutate{
- rename => {"name" => "Activity"}
- rename => {"severity" => "LogSeverity"}
- rename => {"cefVersion" => "CefVersion"}
- rename => {"deviceVendor" => "DeviceVendor"}
- rename => {"deviceProduct" => "DeviceProduct"}
- rename => {"deviceVersion" => "DeviceVersion"}
- rename => {"deviceEventClassId" => "DeviceEventClassID"}
- rename => {"@timestamp" => "TimeGenerated"}
- add_field => {"LogstashVersion" => "${LOGSTASH_VERSION}"}
- }
- }
- output {
- microsoft-sentinel-log-analytics-logstash-output-plugin {
- client_app_Id => "00000000-0000-0000-0000-000000000000"
- client_app_secret => "00000000-0000-0000-0000-000000000000"
- tenant_id => "000000000-0000-0000-0000-000000000000"
- data_collection_endpoint => "https://xxxxxxxxxxxxx.ingest.monitor.azure.com"
- dcr_immutable_id => "dcr-x-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- dcr_stream_name => "Custom-LS-CefAux_CL"
- }
- }
- ```
+ 1. Use our [sample script](https://github.com/Azure/Azure-Sentinel/blob/master/DataConnectors/microsoft-sentinel-log-analytics-logstash-output-plugin/examples/auxiliry-logs/config/bronze.conf) to update your Logstash configuration file. The updates configure Logstash to send CEF logs to the custom table created by the ARM template, transforming JSON data to DCR format. In this script, make sure to replace placeholder values with your own values for the custom table and Microsoft Entra app you created earlier.
1. Check to see that your CEF data is flowing from Logstash as expected. For example, in Microsoft Sentinel, go to the **Logs** page and run the following query:
service-bus-messaging Service Bus Service Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-service-endpoints.md
Title: Configure virtual network service endpoints for Azure Service Bus
+ Title: Configure network service endpoints
description: This article provides information on how to add a Microsoft.ServiceBus service endpoint to a virtual network. - Previously updated : 07/24/2024+ Last updated : 07/31/2024
+# Customer intent: As a developer or IT Admin, I want to know how to allow access to my Service Bus namespace only from selected networks.
# Allow access to Azure Service Bus namespace from specific virtual networks
This section shows you how to use Azure portal to add a virtual network service
> [!NOTE] > You see the **Networking** tab only for **premium** namespaces. 1. On the **Networking** page, for **Public network access**, you can set one of the three following options. Choose **Selected networks** option to allow access from only specified IP addresses.
+ - **All networks** (default). This option enables public access from all networks using an access key. If you select the **All networks** option, Service Bus accepts connections from any IP address (using the access key). This setting is equivalent to a rule that accepts the 0.0.0.0/0 IP address range.
+
+ :::image type="content" source="./media/service-bus-ip-filtering/firewall-all-networks-selected.png" alt-text="Screenshot of the Networking tab of a Service Bus namespace with the default option All networks selected.":::
- **Disabled**. This option disables any public access to the namespace. The namespace is accessible only through [private endpoints](private-link-service.md). :::image type="content" source="./media/service-bus-ip-filtering/public-access-disabled-page.png" alt-text="Screenshot that shows the Networking page of a namespace with public access disabled.":::
This section shows you how to use Azure portal to add a virtual network service
> [!IMPORTANT] > If you choose **Selected networks**, add at least one IP firewall rule or a virtual network that will have access to the namespace. Choose **Disabled** if you want to restrict all traffic to this namespace over [private endpoints](private-link-service.md) only.
- - **All networks** (default). This option enables public access from all networks using an access key. If you select the **All networks** option, Service Bus accepts connections from any IP address (using the access key). This setting is equivalent to a rule that accepts the 0.0.0.0/0 IP address range.
2. To restrict access to specific virtual networks, select the **Selected networks** option if it isn't already selected. 1. In the **Virtual Network** section of the page, select **+Add existing virtual network**. Select **+ Create new virtual network** if you want to create a new virtual network.
Azure portal always uses the latest API version to get and set properties. If yo
:::image type="content" source="./media/service-bus-ip-filtering/firewall-all-networks-selected.png" alt-text="Screenshot of the Azure portal Networking page. The option to allow access from All networks is selected on the Firewalls and virtual networks tab.":::
-## Next steps
+## Related content
For more information about virtual networks, see the following links:
service-fabric Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/release-notes.md
Title: Azure Service Fabric releases description: Release notes for Azure Service Fabric. Includes information on the latest features and improvements in Service Fabric. Previously updated : 06/25/2024 Last updated : 07/31/2024
We're excited to announce that the 10.1 release of the Service Fabric runtime ha
- Service Fabric now emits a health event visible in SFX/SFE when Sessions are exhausted. - This allows the weight of InBuild Auxiliary replicas to be set when applied to InBuild throttling. A higher weight means that an InBuild Auxiliary replica will take up more of the InBuild limit, and likewise a lower weight would consume less of the limit, allowing more replicas to be placed InBuild before the limit is reached. - Starting with Cumulative Update 3.0 (CU3) of the Service Fabric 10.1 runtime, the .NET 8 runtime is supported.
+ - For those interested in using .NET 8, keep the following in mind:
+ - You need to rebuild and redeploy your applications with .NET 8. This step isn't necessary if you want to continue using older versions of .NET.
+ - If you deploy **self-contained** applications, know that applications are [no longer self-contained by default in .NET 8](/dotnet/core/compatibility/sdk/8.0/runtimespecific-app-default). You must explicitly add the `SelfContained` property to your projects and set it to `true` in .NET 8.
+ - Customers who use Service Fabric Remoting v1 must enable `BinaryFormatter`, which isn't enabled by default in .NET 8. For the procedure to enable `BinaryFormatter`, see the [BinaryFormatter Obsoletion Strategy GitHub page](https://github.com/dotnet/designs/blob/main/accepted/2020/better-obsoletion/binaryformatter-obsoletion.md).
### Service Fabric 10.1 releases | Release date | Release | More info |
service-fabric Service Fabric Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-versions.md
Previously updated : 06/24/2024 Last updated : 07/31/2024 # Service Fabric supported versions
-The tables in this article outline the Service Fabric and platform versions that are actively supported.
+The tables in this article outline the Service Fabric and platform versions that are actively supported. For a general summary of key announcements for each version, see the [Service Fabric versions page](service-fabric-versions.md). For more in-depth release notes, follow the link in the desired version's row in the following [listed versions tables](#listed-versions).
> [!NOTE] > For a list of all the Service Fabric runtime versions available for your subscription, follow the guidance in the [Check for supported cluster versions section of the Manage Service Fabric Cluster Upgrades guide](service-fabric-cluster-upgrade-version-azure.md#check-for-supported-cluster-versions). > > For the procedure to upgrade your Service Fabric runtime version, see [Upgrade the Service Fabric version that runs on your cluster](service-fabric-cluster-upgrade-windows-server.md).
-Use the following Windows/Linux tab selector to view the corresponding listed Service Fabric runtime versions for Windows and Linux.
+Use the following **Windows/Linux tab selector** to view the corresponding listed Service Fabric runtime versions for Windows and Linux.
# [Windows](#tab/windows)
service-health Alerts Activity Log Service Notifications Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/alerts-activity-log-service-notifications-portal.md
Title: Receive activity log alerts on Azure service notifications using Azure portal description: Learn how to use the Azure portal to set up activity log alerts for service health notifications by using the Azure portal. Previously updated : 06/27/2019 Last updated : 07/31/2024 # Create activity log alerts on service notifications using the Azure portal
To learn more about action groups, see [Create and manage action groups](../azur
For information on how to configure service health notification alerts by using Azure Resource Manager templates, see [Resource Manager templates](../azure-monitor/alerts/alerts-activity-log.md).
-## Watch a video on setting up your first Azure Service Health alert
-
->[!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE2OaXt]
- ## Create a Service Health alert using the Azure portal 1. In the [portal](https://portal.azure.com), select **Service Health**.
site-recovery Hyper V Azure Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-azure-troubleshoot.md
Title: Troubleshoot Hyper-V disaster recovery with Azure Site Recovery
description: Describes how to troubleshoot disaster recovery issues with Hyper-V to Azure replication using Azure Site Recovery -+ Last updated 03/02/2023
All Hyper-V replication events are logged in the Hyper-V-VMMS\Admin log, located
This tool can help with advanced troubleshooting: - For VMM, perform Site Recovery log collection using the [Support Diagnostics Platform (SDP) tool](https://social.technet.microsoft.com/wiki/contents/articles/28198.asr-data-collection-and-analysis-using-the-vmm-support-diagnostics-platform-sdp-tool.aspx).-- For Hyper-V without VMM, [download this tool](https://answers.microsoft.com/windows/forum/all/unable-to-open-diagcab-files/e7f8e4e5-b442-4e53-af7a-90e74985a73f), and run it on the Hyper-V host to collect the logs.
+- For Hyper-V without VMM, [download this tool](https://answers.microsoft.com/en-us/windows/forum/all/unable-to-open-diagcab-files/e7f8e4e5-b442-4e53-af7a-90e74985a73f), and run it on the Hyper-V host to collect the logs.
spring-apps Concepts For Java Memory Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/basic-standard/concepts-for-java-memory-management.md
+
+ Title: Java memory management
+
+description: Introduces concepts for Java memory management to help you understand Java applications in Azure Spring Apps.
++++ Last updated : 07/30/2024+++
+# Java memory management
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+
+**This article applies to:** ✔️ Basic/Standard ❌ Enterprise
+
+This article describes various concepts related to Java memory management to help you understand the behavior of Java applications hosted in Azure Spring Apps.
+
+## Java memory model
+
+A Java application's memory has several parts, and there are different ways to divide the parts. This article discusses Java memory as divided into heap memory, non-heap memory, and direct memory.
+
+### Heap memory
+
+Heap memory stores all class instances and arrays. Each Java virtual machine (JVM) has only one heap area, which is shared among threads.
+
+Spring Boot Actuator can observe the value of heap memory. Spring Boot Actuator takes the heap value as part of `jvm.memory.used/committed/max`. For more information, see the [jvm.memory.used/committed/max](../enterprise/tools-to-troubleshoot-memory-issues.md#jvmmemoryusedcommittedmax) section in [Tools to troubleshoot memory issues](../enterprise/tools-to-troubleshoot-memory-issues.md).
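+
+Because these figures are exposed as metrics, you can also cross-check them from inside the app with the standard `java.lang.management` API. The following minimal sketch (the class name `HeapUsageProbe` is only an illustration, not part of Azure Spring Apps) reads the same used/committed/max values for the heap:
+
+```java
+import java.lang.management.ManagementFactory;
+import java.lang.management.MemoryMXBean;
+import java.lang.management.MemoryUsage;
+
+public class HeapUsageProbe {
+    public static void main(String[] args) {
+        MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();
+        MemoryUsage heap = memoryBean.getHeapMemoryUsage();
+        // Roughly the heap portion of jvm.memory.used/committed/max.
+        System.out.printf("heap used=%d MB committed=%d MB max=%d MB%n",
+                heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);
+    }
+}
+```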
+
+Heap memory is divided into *young generation* and *old generation*. These terms are described in the following list, along with related terms.
+
+- *Young generation*: all new objects are allocated and aged in young generation.
+
+ - *Eden space*: new objects are allocated in Eden space.
+  - *Survivor space*: objects will be moved from Eden to survivor space after surviving one garbage collection cycle. Survivor space can be divided into two parts: s1 and s2.
+
+- *Old generation*: also called *tenured space*. Objects that have remained in the survivor spaces for a long time will be moved to old generation.
+
+Before Java 8, another section called *permanent generation* was also part of the heap. Starting with Java 8, permanent generation was replaced by metaspace in non-heap memory.
+
+### Non-heap memory
+
+Non-heap memory is divided into the following parts, which you can also enumerate at runtime as shown in the sketch after this list:
+
+- The part of non-heap memory that replaced the permanent generation (or *permGen*) starting with Java 8. Spring Boot Actuator observes this section and takes it as part of `jvm.memory.used/committed/max`. In other words, `jvm.memory.used/committed/max` is the sum of heap memory and the former permGen part of non-heap memory. The former permanent generation is composed of the following parts:
+
+ - *Metaspace*, which stores the class definitions loaded by class loaders.
+ - *Compressed class space*, which is for compressed class pointers.
+ - *Code cache*, which stores native code compiled by JIT.
+
+- Other memory such as the thread stack, which isn't observed by Spring Boot Actuator.
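+
+The following sketch lists the non-heap memory pools through the same management API; pool names such as `Metaspace` and `Compressed Class Space` vary by JVM vendor and version, so treat the output as illustrative only:
+
+```java
+import java.lang.management.ManagementFactory;
+import java.lang.management.MemoryPoolMXBean;
+import java.lang.management.MemoryType;
+
+public class NonHeapPools {
+    public static void main(String[] args) {
+        // Prints non-heap pools, for example Metaspace, Compressed Class Space, and code cache segments.
+        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
+            if (pool.getType() == MemoryType.NON_HEAP) {
+                System.out.printf("%s: used=%d bytes%n", pool.getName(), pool.getUsage().getUsed());
+            }
+        }
+    }
+}
+```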
+
+### Direct memory
+
+Direct memory is native memory allocated by `java.nio.DirectByteBuffer`, which is used in third-party libraries like nio and gzip.
+
+Spring Boot Actuator doesn't observe the value of direct memory.
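+
+The following sketch shows how direct memory is typically allocated; the 64-MB size and the class name are arbitrary values chosen for illustration:
+
+```java
+import java.nio.ByteBuffer;
+
+public class DirectBufferDemo {
+    public static void main(String[] args) {
+        // Allocates 64 MB of native memory outside the Java heap. This allocation
+        // doesn't appear in jvm.memory.used and counts against the limit set by
+        // -XX:MaxDirectMemorySize.
+        ByteBuffer buffer = ByteBuffer.allocateDirect(64 * 1024 * 1024);
+        buffer.putInt(0, 42);
+        System.out.println("direct buffer capacity: " + buffer.capacity() + " bytes");
+    }
+}
+```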
+
+The following diagram summarizes the Java memory model described in the previous section.
++
+## Java garbage collection
+
+There are three terms regarding Java Garbage Collection (GC): "Minor GC", "Major GC", and "Full GC". These terms aren't clearly defined in the JVM specification. Here, we consider "Major GC" and "Full GC" to be equivalent.
+
+Minor GC occurs when Eden space is full. It removes all dead objects in the young generation and moves live objects from Eden space to s1 of survivor space, or from s1 to s2.
+
+Full GC, or major GC, performs garbage collection in the entire heap. Full GC can also collect parts like metaspace and direct memory, which can be cleaned only by full GC.
+
+The maximum heap size influences the frequency of minor GC and full GC. The maximum metaspace and maximum direct memory size influence full GC.
+
+When you set the maximum heap size to a lower value, garbage collections occur more frequently, which slows the app a little but limits memory usage more strictly. When you set the maximum heap size to a higher value, garbage collections occur less frequently, which might create more out-of-memory (OOM) risk. For more information, see the [Types of out-of-memory issues](../enterprise/how-to-fix-app-restart-issues-caused-by-out-of-memory.md#types-of-out-of-memory-issues) section of [App restart issues caused by out-of-memory issues](../enterprise/how-to-fix-app-restart-issues-caused-by-out-of-memory.md).
+
+Metaspace and direct memory can be collected only by full GC. When metaspace or direct memory is full, full GC will occur.
+
+## Java memory configurations
+
+The following sections describe important aspects of Java memory configuration.
+
+### Java containerization
+
+Applications in Azure Spring Apps run in container environments. For more information, see [Containerize your Java applications](/azure/developer/java/containers/overview?toc=/azure/spring-cloud/toc.json&bc=/azure/spring-cloud/breadcrumb/toc.json).
+
+### Important JVM options
+
+You can configure the maximum size of each part of memory by using JVM options. You can set JVM options by using Azure CLI commands or through the Azure portal. For more information, see the [Modify configurations to fix problems](../enterprise/tools-to-troubleshoot-memory-issues.md#modify-configurations-to-fix-problems) section of [Tools to troubleshoot memory issues](../enterprise/tools-to-troubleshoot-memory-issues.md).
+
+The following list describes the JVM options; a sketch for checking the resulting settings at runtime follows the list:
+
+- Heap size configuration
+
+ - `-Xms` sets the initial heap size by absolute value.
+ - `-Xmx` sets the maximum heap size by absolute value.
+ - `-XX:InitialRAMPercentage` sets the initial heap size by the percentage of heap size / app memory size.
+ - `-XX:MaxRAMPercentage` sets the maximum heap size by the percentage of heap size / app memory size.
+
+- Direct memory size configuration
+
+ - `-XX:MaxDirectMemorySize` sets the maximum direct memory size by absolute value. For more information, see [MaxDirectMemorySize](https://docs.oracle.com/en/java/javase/11/tools/java.html#GUID-3B1CE181-CD30-4178-9602-230B800D4FAE__GUID-2E02B495-5C36-4C93-8597-0020EFDC9A9C) in the Oracle documentation.
+
+- Metaspace size configuration
+
+ - `-XX:MaxMetaspaceSize` sets the maximum metaspace size by absolute value.
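+
+As a sanity check after changing these options, you can confirm from inside the app which arguments were passed and how the JVM resolved the maximum heap size. This is only a sketch; it assumes you run it in the same JVM you configured:
+
+```java
+import java.lang.management.ManagementFactory;
+
+public class JvmOptionProbe {
+    public static void main(String[] args) {
+        // Raw JVM options the process was started with, for example -Xmx or -XX:MaxMetaspaceSize.
+        System.out.println("input arguments: " + ManagementFactory.getRuntimeMXBean().getInputArguments());
+
+        // Effective maximum heap size, reflecting -Xmx or -XX:MaxRAMPercentage.
+        System.out.println("max heap: " + Runtime.getRuntime().maxMemory() / (1024 * 1024) + " MB");
+    }
+}
+```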
+
+### Default maximum memory size
+
+The following sections describe how default maximum memory sizes are set.
+
+#### Default maximum heap size
+
+Azure Spring Apps sets the default maximum heap memory size to about 50%-80% of app memory for Java apps. Specifically, Azure Spring Apps uses the following settings, restated in the sketch after the list:
+
+- If the app memory < 1 GB, the default maximum heap size will be 50% of app memory.
+- If 1 GB <= the app memory < 2 GB, the default maximum heap size will be 60% of app memory.
+- If 2 GB <= the app memory < 3 GB, the default maximum heap size will be 70% of app memory.
+- If 3 GB <= the app memory, the default maximum heap size will be 80% of app memory.
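+
+The following sketch only restates these tiers as code so you can estimate the default for a given app memory size; the class and method names are made up for illustration and aren't part of any Azure Spring Apps API:
+
+```java
+public class DefaultHeapEstimate {
+    // Hypothetical helper that mirrors the documented percentage tiers.
+    static double defaultMaxHeapMb(double appMemoryGb) {
+        double percent;
+        if (appMemoryGb < 1) {
+            percent = 0.5;
+        } else if (appMemoryGb < 2) {
+            percent = 0.6;
+        } else if (appMemoryGb < 3) {
+            percent = 0.7;
+        } else {
+            percent = 0.8;
+        }
+        return appMemoryGb * 1024 * percent;
+    }
+
+    public static void main(String[] args) {
+        // A 2-GB app falls in the 70% tier: 2 * 1024 * 0.7 = 1433.6 MB.
+        System.out.println(defaultMaxHeapMb(2.0) + " MB");
+    }
+}
+```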
+
+#### Default maximum direct memory size
+
+When the maximum direct memory size isn't set using JVM options, the JVM automatically sets the maximum direct memory size to the value returned by [Runtime.getRuntime().maxMemory()](https://docs.oracle.com/javase/8/docs/api/java/lang/Runtime.html#maxMemory--). This value is approximately equal to the maximum heap memory size. For more information, see the [JDK 8 VM.java file](http://hg.openjdk.java.net/jdk8u/jdk8u/jdk/file/a71d26266469/src/share/classes/sun/misc/VM.java#l282).
+
+### Memory usage layout
+
+Heap size is influenced by your throughput. In general, you can keep the default maximum heap size, which leaves reasonable memory for the other parts.
+
+The metaspace size depends on the complexity of your code, such as the number of classes.
+
+The direct memory size depends on your throughput and your use of third-party libraries like nio and gzip.
+
+The following list describes a typical memory layout sample for 2-GB apps. You can refer to this list to configure your memory size settings.
+
+- Total Memory (2048M)
+- Heap memory: Xmx is 1433.6M (70% of total memory). The reference value of daily memory usage is 1200M.
+ - Young generation
+ - Survivor space (S0, S1)
+ - Eden space
+ - Old generation
+- Non-heap memory
+ - Observed part (observed by Spring Boot Actuator)
+ - Metaspace: the daily usage reference value is 50M-256M
+ - Code cache
+ - Compressed class space
+ - Not observed part (not observed by Spring Boot Actuator): the daily usage reference value is 150M-250M.
+ - Thread stack
+  - GC, internal symbols, and others
+- Direct memory: the daily usage reference value is 10M-200M.
+
+The following diagram shows the same information. Numbers in grey are the reference values of daily memory usage.
++
+Overall, when configuring maximum memory sizes, you should consider the usage of each part in memory, and the sum of all maximum sizes shouldn't exceed total available memory.
+
+## Java OOM
+
+OOM means the application is out of memory. There are two different concepts: container OOM and JVM OOM. For more information, see [App restart issues caused by out-of-memory issues](../enterprise/how-to-fix-app-restart-issues-caused-by-out-of-memory.md).
+
+## See also
+
+- [App restart issues caused by out-of-memory issues](../enterprise/how-to-fix-app-restart-issues-caused-by-out-of-memory.md)
+- [Tools to troubleshoot memory issues](../enterprise/tools-to-troubleshoot-memory-issues.md)
spring-apps How To Fix App Restart Issues Caused By Out Of Memory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-fix-app-restart-issues-caused-by-out-of-memory.md
The **Resource health** page on the Azure portal shows app restart events due to
The metrics *App memory Usage*, `jvm.memory.used`, and `jvm.memory.committed` provide a view of memory usage. For more information, see the [Metrics](tools-to-troubleshoot-memory-issues.md#metrics) section of [Tools to troubleshoot memory issues](tools-to-troubleshoot-memory-issues.md). Configure the maximum memory sizes in JVM options to ensure that memory is under the limit.
-The sum of the maximum memory sizes of all the parts in the [Java memory model](concepts-for-java-memory-management.md#java-memory-model) should be less than the real available app memory. To set your maximum memory sizes, see the typical memory layout described in the [Memory usage layout](concepts-for-java-memory-management.md#memory-usage-layout) section of [Java memory management](concepts-for-java-memory-management.md).
+The sum of the maximum memory sizes of all the parts in the [Java memory model](../basic-standard/concepts-for-java-memory-management.md#java-memory-model) should be less than the real available app memory. To set your maximum memory sizes, see the typical memory layout described in the [Memory usage layout](../basic-standard/concepts-for-java-memory-management.md#memory-usage-layout) section of [Java memory management](../basic-standard/concepts-for-java-memory-management.md).
Find a balance when you set the maximum memory size. When you set the maximum memory size too high, there's a risk of container OOM. When you set the maximum memory size too low, there's a risk of JVM OOM, and garbage collection will be frequent and will slow down the app.
Metaspace memory is usually stable.
## See also -- [Java memory management](concepts-for-java-memory-management.md)
+- [Java memory management](../basic-standard/concepts-for-java-memory-management.md)
- [Tools to troubleshoot memory issues](tools-to-troubleshoot-memory-issues.md)
spring-apps Tools To Troubleshoot Memory Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/tools-to-troubleshoot-memory-issues.md
App memory usage is a percentage equal to the app memory used divided by the app
For JVM memory, there are three metrics: `jvm.memory.used`, `jvm.memory.committed`, and `jvm.memory.max`, which are described in the following list.
-"JVM memory" isn't a clearly defined concept. Here, `jvm.memory` is the sum of [heap memory](concepts-for-java-memory-management.md#heap-memory) and former permGen part of [non-heap memory](concepts-for-java-memory-management.md#non-heap-memory). JVM memory doesn't include direct memory or other memory like the thread stack. Spring Boot Actuator gathers these three metrics and determines the scope of `jvm.memory`.
+"JVM memory" isn't a clearly defined concept. Here, `jvm.memory` is the sum of [heap memory](../basic-standard/concepts-for-java-memory-management.md#heap-memory) and former permGen part of [non-heap memory](../basic-standard/concepts-for-java-memory-management.md#non-heap-memory). JVM memory doesn't include direct memory or other memory like the thread stack. Spring Boot Actuator gathers these three metrics and determines the scope of `jvm.memory`.
- `jvm.memory.used` is the amount of used JVM memory, including used heap memory and used former permGen in non-heap memory.
For JVM memory, there are three metrics: `jvm.memory.used`, `jvm.memory.committe
- `jvm.memory.max` is the maximum amount of JVM memory, not to be confused with the real available amount.
- The value of `jvm.memory.max` can sometimes be confusing because it can be much higher than the available app memory. To clarify, `jvm.memory.max` is the sum of all maximum sizes of heap memory and the former permGen part of [non-heap memory](concepts-for-java-memory-management.md#non-heap-memory), regardless of the real available memory. For example, if an app is set with 1 GB of memory in the Azure Spring Apps portal, then the default heap memory size is 0.5 GB. For more information, see the [Default maximum heap size](concepts-for-java-memory-management.md#default-maximum-heap-size) section of [Java memory management](concepts-for-java-memory-management.md).
+ The value of `jvm.memory.max` can sometimes be confusing because it can be much higher than the available app memory. To clarify, `jvm.memory.max` is the sum of all maximum sizes of heap memory and the former permGen part of [non-heap memory](../basic-standard/concepts-for-java-memory-management.md#non-heap-memory), regardless of the real available memory. For example, if an app is set with 1 GB of memory in the Azure Spring Apps portal, then the default heap memory size is 0.5 GB. For more information, see the [Default maximum heap size](../basic-standard/concepts-for-java-memory-management.md#default-maximum-heap-size) section of [Java memory management](../basic-standard/concepts-for-java-memory-management.md).
If the default *compressed class space* size is 1 GB, then the value of `jvm.memory.max` is larger than 1.5 GB regardless of whether the app memory size is 1 GB. For more information, see [Java Platform, Standard Edition HotSpot Virtual Machine Garbage Collection Tuning Guide: Other Considerations](https://docs.oracle.com/javase/9/gctuning/other-considerations.htm) in the Oracle documentation. #### jvm.gc.memory.allocated/promoted
-These two metrics are for observing Java garbage collection (GC). For more information, see the [Java garbage collection](concepts-for-java-memory-management.md#java-garbage-collection) section of [Java memory management](concepts-for-java-memory-management.md). The maximum heap size influences the frequency of minor GC and full GC. The maximum metaspace and maximum direct memory size influence full GC. If you want to adjust the frequency of garbage collection, consider modifying the following maximum memory sizes.
+These two metrics are for observing Java garbage collection (GC). For more information, see the [Java garbage collection](../basic-standard/concepts-for-java-memory-management.md#java-garbage-collection) section of [Java memory management](../basic-standard/concepts-for-java-memory-management.md). The maximum heap size influences the frequency of minor GC and full GC. The maximum metaspace and maximum direct memory size influence full GC. If you want to adjust the frequency of garbage collection, consider modifying the following maximum memory sizes.
- `jvm.gc.memory.allocated` is the amount of increase in the size of the young generation memory pool after one GC and before the next. This value reflects minor GC.
For more information, see [Capture heap dump and thread dump manually and use Ja
## Modify configurations to fix problems
-Some issues you might identify include [container OOM](how-to-fix-app-restart-issues-caused-by-out-of-memory.md#fix-app-restart-issues-due-to-oom), heap memory that's too large, and abnormal garbage collection. If you identify any of these issues, you may need to configure the maximum memory size in the JVM options. For more information, see the [Important JVM options](concepts-for-java-memory-management.md#important-jvm-options) section of [Java memory management](concepts-for-java-memory-management.md#important-jvm-options).
+Some issues you might identify include [container OOM](how-to-fix-app-restart-issues-caused-by-out-of-memory.md#fix-app-restart-issues-due-to-oom), heap memory that's too large, and abnormal garbage collection. If you identify any of these issues, you may need to configure the maximum memory size in the JVM options. For more information, see the [Important JVM options](../basic-standard/concepts-for-java-memory-management.md#important-jvm-options) section of [Java memory management](../basic-standard/concepts-for-java-memory-management.md#important-jvm-options).
You can modify the JVM options by using the Azure portal or the Azure CLI.
az spring app update \
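For example, a hedged sketch of the full command (resource names are placeholders, and the memory sizes are illustrative values to tune for your own app):

```bash
# Sketch: set maximum heap and metaspace sizes for an app (placeholder names, illustrative values).
az spring app update \
    --resource-group <resource-group-name> \
    --service <Azure-Spring-Apps-instance-name> \
    --name <app-name> \
    --jvm-options="-Xms1024m -Xmx1536m -XX:MaxMetaspaceSize=128m"
```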
## See also -- [Java memory management](concepts-for-java-memory-management.md)
+- [Java memory management](../basic-standard/concepts-for-java-memory-management.md)
- [App restart issues caused by out-of-memory issues](how-to-fix-app-restart-issues-caused-by-out-of-memory.md)
spring-apps Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/whats-new.md
The following updates are now available in the Enterprise plan:
- **Richer log of Application Configuration Service**: The Git revision is a crucial piece of information that indicates the recency of configuration files. Currently, the Application Configuration Service logs the Git revision to enhance troubleshooting efficiency. For more information, see the [Examine Git revisions of the configuration files](how-to-enterprise-application-configuration-service.md#examine-git-revisions-of-the-configuration-files) section of [Use Application Configuration Service for Tanzu](how-to-enterprise-application-configuration-service.md). -- **Managed OSS Spring Cloud Config Server (preview)**: The open-source version of Spring Cloud Config Server provides a native Spring experience to developers. Now we offer managed Spring Config Server to dynamically retrieve configuration properties from central repositories. For more information, see [Configure a managed Spring Cloud Config Server in Azure Spring App](how-to-config-server.md).
+- **Managed OSS Spring Cloud Config Server (preview)**: The open-source version of Spring Cloud Config Server provides a native Spring experience to developers. Now we offer managed Spring Cloud Config Server to dynamically retrieve configuration properties from central repositories. For more information, see [Configure a managed Spring Cloud Config Server in Azure Spring Apps](how-to-config-server.md).
- **Custom actuator endpoint support**: Users might want to use a different port or path for the actuator due to security concerns, but this choice can result in the Application Live View being unable to connect to the app. This feature enables Application Live View to work with apps that have a non-default port or path for the actuator. For more information, see the [Configure customized Spring Boot actuator](how-to-use-application-live-view.md#configure-customized-spring-boot-actuator) section of [Use Application Live View with the Azure Spring Apps Enterprise plan](how-to-use-application-live-view.md). -- **Disable basic auth for the test endpoint of an app**: Azure Spring Apps provides basic authentication to protect the test endpoint of an application instance. When a user's app is integrated with their authentication server, this basic authentication becomes unnecessary. If the user has a good understanding of the application's security, this feature lets them disable the basic authentication provided by the Azure Spring Apps service, making the tests against the application closer to a real-world environment. For more information, see the second tip in [Set up a staging environment in Azure Spring Apps](how-to-staging-environment.md).
+- **Disable basic auth for the test endpoint of an app**: Azure Spring Apps provides basic authentication to protect the test endpoint of an application instance. When a user's app is integrated with their auth server, this basic authentication becomes unnecessary. If the user has a good understanding of the application's security, this feature lets them disable the basic authentication provided by the Azure Spring Apps service, making the tests against the application closer to a real-world environment. For more information, see the second tip in [Set up a staging environment in Azure Spring Apps](how-to-staging-environment.md).
- **Private storage access for virtual network injection**: The private storage access feature enables routing of traffic through a private network for backend storage hosting application assets like JAR files and logs. This feature enhances security and can potentially improve performance for users. For more information, see [Configure private network access for backend storage in your virtual network (Preview)](how-to-private-network-access-backend-storage.md).
storage Azcopy Cost Estimation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/azcopy-cost-estimation.md
Previously updated : 11/27/2023 Last updated : 07/30/2024
The following table calculates the number of write operations required to upload
| Calculation | Value | |--|-| | Number of MiB in 5 GiB | 5,120 |
-| PutBlock operations per blob (5,120 MiB / 8-MiB block) | 640 |
+| PutBlock operations per blob (5,120 MiB / 8 MiB block) | 640 |
| PutBlockList operations per blob | 1 | | **Total write operations (1,000 * 641)** | **641,000** |
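If you want to make the block size explicit when you run the transfer, a hedged sketch of the corresponding upload command looks like this (the account, container, and SAS token are placeholders; 8 MiB matches the block size assumed in this estimate):

```bash
# Sketch: upload a local folder using an explicit 8-MiB block size (placeholder account, container, and SAS).
azcopy copy "/local/data" "https://<account>.blob.core.windows.net/<container>?<SAS>" \
    --recursive \
    --block-size-mb 8
```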
After each blob is uploaded, AzCopy uses the [Get Blob Properties](/rest/api/sto
Using the [Sample prices](#sample-prices) that appear in this article, the following table calculates the cost to upload these blobs.
-| Price factor | Hot | Cool | Cold | Archive |
-||-|-|--|-|
-| Price of a single write operation (price / 10,000) | $0.0000055 | $0.00001 | $0.000018 | $0.00001 |
-| **Cost of write operations (641,000 * operation price)** | **$3.5255** | **$6.4100** | **$11.5380** | **$3.5255** |
-| Price of a single _other_ operation (price / 10,000) | $0.00000044 | $0.00000044 | $0.00000052 | $0.00000044 |
-| **Cost to get blob properties (1000 * _other_ operation price)** | **$0.0004** | **$0.0004** | **$0.0005** | **$0.0004** |
-| **Total cost (write + properties)** | **$3.53** | **$6.41** | **$11.54** | **$3.53** |
+| Price factor | Hot | Cool | Cold | Archive |
+||-|-|--|--|
+| Price of a single write operation (price / 10,000) | $0.0000055 | $0.00001 | $0.000018 | $0.000011 |
+| **Cost of write operations (641,000 * operation price)** | **$3.5255** | **$6.4100** | **$11.5380** | **$7.0510** |
+| Price of a single _other_ operation (price / 10,000) | $0.00000044 | $0.00000044 | $0.00000052 | $0.00000044 |
+| **Cost to get blob properties (1000 * _other_ operation price)** | **$0.0004** | **$0.0004** | **$0.0005** | **$0.00044** |
+| **Total cost (write + properties)** | **$3.53** | **$6.41** | **$11.54** | **$7.05** |
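As a quick sanity check on the hot-tier column, you can reproduce these figures with a short shell calculation that uses only the fictitious sample prices from this article:

```bash
# Back-of-the-envelope check of the hot-tier upload estimate, using the article's sample prices.
write_ops=$(( 1000 * (5120 / 8 + 1) ))   # 1,000 blobs * (640 PutBlock + 1 PutBlockList) = 641,000
awk -v ops="$write_ops" 'BEGIN {
    write_cost = ops * 0.055 / 10000      # hot write price per operation
    props_cost = 1000 * 0.0044 / 10000    # 1,000 Get Blob Properties calls
    printf "write operations: %d, estimated total: $%.4f\n", ops, write_cost + props_cost
}'
```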
> [!NOTE]
> If you upload to the archive tier, each [Put Block](/rest/api/storageservices/put-block) operation is charged at the price of a **hot** write operation. Each [Put Block List](/rest/api/storageservices/put-block-list) operation is charged the price of an **archive** write operation.

### Cost of uploading to the Data Lake Storage endpoint
-If you upload data to the Data Lake Storage endpoint, then AzCopy uploads each blob in 4-MiB blocks. This value is not configurable.
+If you upload data to the Data Lake Storage endpoint, then AzCopy uploads each blob in 4-MiB blocks. This value isn't configurable.
AzCopy uploads each block by using the [Path - Update](/rest/api/storageservices/datalakestoragegen2/path/update) operation with the action parameter set to `append`. After the final block is uploaded, AzCopy commits those blocks by using the [Path - Update](/rest/api/storageservices/datalakestoragegen2/path/update) operation with the action parameter set to `flush`. Both operations are billed as _write_ operations. The following table calculates the number of write operations required to upload these blobs.
-| Calculation | Value
-|||
-| Number of MiB in 5 GiB | 5,120 |
-| Path - Update (append) operations per blob (5,120 MiB / 4-MiB block) | 1,280 |
-| Path - Update (flush) operations per blob | 1 |
-| **Total write operations (1,000 * 1,281)** | **1,281,00** |
+| Calculation | Value |
+|-|--|
+| Number of MiB in 5 GiB | 5,120 |
+| Path - Update (append) operations per blob (5,120 MiB / 4 MiB block) | 1,280 |
+| Path - Update (flush) operations per blob | 1 |
+| **Total write operations (1,000 * 1,281)** | **1,281,000** |
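The same check for this endpoint, assuming the fixed 4-MiB block size:

```bash
# 1,000 blobs * (1,280 append operations + 1 flush operation) per blob
echo $(( 1000 * (5120 / 4 + 1) ))   # 1,281,000 write operations
```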
After each blob is uploaded, AzCopy uses the [Get Blob Properties](/rest/api/storageservices/get-blob-properties) operation as part of validating the upload. The [Get Blob Properties](/rest/api/storageservices/get-blob-properties) operation is billed as an _All other operations_ operation.
Using the [Sample prices](#sample-prices) that appear in this article, the follo
| Price factor | Hot | Cool | Cold | Archive | ||-|--|--|--|
-| Price of a single write operation (price / 10,000) | $0.00000715 | $0.000013 | $0.0000234 | $0.0000143 |
-| **Cost of write operations (1,281,000 * operation price)** | **$9.1592** | **$16.6530** | **$29.9754** | **$18.3183** |
-| Price of a single _other_ operation (price / 10,000) | $0.00000044 | $0.00000044 | $0.00000052 | $0.00000044 |
-| **Cost to get blob properties (1000 * operation price)** | **$0.0004** | **$0.0004** | **$0.0005** | **$0.0004** |
-| **Total cost (write + properties)** | **$9.16** | **$16.65** | **$29.98** | **$18.32** |
+| Price of a single write operation (price / 10,000) | $0.00000720 | $0.000013 | $0.0000234 | $0.0000143 |
+| **Cost of write operations (1,281,000 * operation price)** | **$9.2332** | **$16.6530** | **$29.9754** | **$18.3183** |
+| Price of a single _other_ operation (price / 10,000) | $0.00000044 | $0.00000044 | $0.00000068 | $0.00000044 |
+| **Cost to get blob properties (1000 * operation price)** | **$0.0004** | **$0.0004** | **$0.0007** | **$0.0004** |
+| **Total cost (write + properties)** | **$9.22** | **$16.65** | **$29.98** | **$18.32** |
## The cost to download
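As a point of reference, the download estimates in this section correspond to a command along these lines (a sketch; the account, container, and SAS token are placeholders):

```bash
# Sketch: download an entire container to a local folder (placeholder account, container, and SAS).
azcopy copy "https://<account>.blob.core.windows.net/<container>?<SAS>" "/local/download" --recursive
```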
Using the [Sample prices](#sample-prices) that appear in this article, the follo
| Price factor | Hot | Cool | Cold | |-|-|-|-| | Price of a single list operation (price/ 10,000) | $0.0000055 | $0.0000055 | $0.0000065 |
-| **Cost of listing operations (1 * operation price)** | **$0.0000055** | **$0.0000055** | **$0.0000065** |
-| Price of a single _other_ operation (price / 10,000) | $0.00000044 | $0.00000044 | $0.00000052 |
+| **Cost of listing operations (1 * operation price)** | **$0.0000055** | **$0.0000050** | **$0.0000065** |
+| Price of a single _other_ operation (price / 10,000) | $0.00000044 | $0.00000044 | $0.00000052 |
| **Cost to get blob properties (1000 * operation price)** | **$0.00044** | **$0.00044** | **$0.00052** | | Price of a single read operation (price / 10,000) | $0.00000044 | $0.000001 | $0.00001 | | **Cost of read operations (1000 * operation price)** | **$0.00044** | **$0.001** | **$0.01** |
The following table calculates the number of write operations required to upload
| Calculation | Value | |-|| | Number of MiB in 5 GiB | 5,120 |
-| Path - Update operations per blob (5,120 MiB / 4-MiB block) | 1,280 |
+| Path - Update operations per blob (5,120 MiB / 4 MiB block) | 1,280 |
| Total read operations (1000* 1,280) | **1,280,000** | Using the [Sample prices](#sample-prices) that appear in this article, the following table calculates the cost to download these blobs.
Using the [Sample prices](#sample-prices) that appear in this article, the follo
| Price factor | Hot | Cool | Cold | |--|-|-|-| | Price of a single list operation (price/ 10,000) | $0.0000055 | $0.0000055 | $0.0000065 |
-| **Cost of listing operations (1 * operation price)** | **$0.0000055** | **$0.0000055** | **$0.0000065** |
-| Price of a single _other_ operation (price / 10,000) | $0.00000044 | $0.00000044 | $0.00000052 |
+| **Cost of listing operations (1 * operation price)** | **$0.0000055** | **$0.0000050** | **$0.0000065** |
+| Price of a single _other_ operation (price / 10,000) | $0.00000044 | $0.00000044 | $0.00000052 |
| **Cost to get blob properties (1000 * operation price)** | **$0.00044** | **$0.00044** | **$0.00052** |
-| Price of a single read operation (price / 10,000) | $0.00000057 | $0.00000130 | $0.00001300 |
+| Price of a single read operation (price / 10,000) | $0.00000060 | $0.00000130 | $0.00001300 |
| **Cost of read operations (1,281,000 * operation price)** | **$0.73017** | **$1.6653** | **$16.653** | | Price of data retrieval (per GiB) | $0.00000000 | $0.01000000 | $0.03000000 | | **Cost of data retrieval (5 * operation price)** | **$0.00** | **$0.05** | **$0.15** |
This scenario is identical to the previous one except that you're also billed fo
| Price factor | Hot | Cool | Cold | |-|--|-|-|
-| **Total from previous section** | **$3.5309** | **$0.0064** | **$0.0110** |
+| **Total from previous section** | **$0.0064** | **$0.0109** | **$0.0190** |
| Price of a single read operation (price / 10,000) | $0.00000044 | $0.000001 | $0.00001 | | **Cost of read operations (1,000 * operation price)** | **$0.00044** | **$0.001** | **$0.01** | | Price of data retrieval (per GiB) | Free | $0.01 | $0.03 | | **Cost of data retrieval (5 * operation price)** | **$0.00** | **$.05** | **$.15** |
-| **Total cost (previous section + retrieval + read)** | **$3.53134** | **$0.0574** | **$0.171** |
+| **Total cost (previous section + retrieval + read)** | **$0.0068** | **$0.0619** | **$0.1719** |
### Cost of copying blobs to an account located in another region
-This scenario is identical to the previous one except you are billed for network egress charges.
+This scenario is identical to the previous one except you're billed for network egress charges.
-| Price factor | Hot | Cool | Cold |
-|--|--|-|-|
-| **Total cost from previous section** | **$3.53134** | **$0.0574** | **$0.171** |
-| Price of network egress (per GiB) | $0.02 | $0.02 | $0.02 |
-| **Total cost of network egress (5 * price of egress)** | **$.10** | **$.10** | **$.10** |
-| **Total cost (previous section + egress)** | **$3.5513** | **$0.0774** | **$0.191** |
+| Price factor | Hot | Cool | Cold |
+|--|-|-|-|
+| **Total cost from previous section** | **$0.0068** | **$0.0619** | **$0.1719** |
+| Price of network egress (per GiB) | $0.02 | $0.02 | $0.02 |
+| **Total cost of network egress (5 * price of egress)** | **$.10** | **$.10** | **$.10** |
+| **Total cost (previous section + egress)** | **$0.1068** | **$0.1619** | **$0.2790** |
## The cost to synchronize changes
When you run the [azcopy sync](../common/storage-use-azcopy-blobs-synchronize.md
### Cost to synchronize a container with a local file system
-If you want to keep a container updated with changes to a local file system, then AzCopy performs the exact same tasks as described in the [Cost of uploading to the Blob Service endpoint](#cost-of-uploading-to-the-blob-service-endpoint) section in this article. Blobs are uploaded only if the last modified time of a local file is different than the last modified time of the blob in the container. Therefore, you are billed _write_ transactions only for blobs that are uploaded.
+If you want to keep a container updated with changes to a local file system, then AzCopy performs the exact same tasks as described in the [Cost of uploading to the Blob Service endpoint](#cost-of-uploading-to-the-blob-service-endpoint) section in this article. Blobs are uploaded only if the last modified time of a local file is different than the last modified time of the blob in the container. Therefore, you're billed _write_ transactions only for blobs that are uploaded.
-If you want to keep a local file system updated with changes to a container, then AzCopy performs the exact same tasks as described in the [Cost of downloading from the Blob Service endpoint](#cost-of-downloading-from-the-blob-service-endpoint) section of this article. Blobs are downloaded only If the last modified time of a local blob is different than the last modified time of the blob in the container. Therefore, you are billed _read_ transactions only for blobs that are downloaded.
+If you want to keep a local file system updated with changes to a container, then AzCopy performs the exact same tasks as described in the [Cost of downloading from the Blob Service endpoint](#cost-of-downloading-from-the-blob-service-endpoint) section of this article. Blobs are downloaded only if the last modified time of a local blob is different than the last modified time of the blob in the container. Therefore, you're billed _read_ transactions only for blobs that are downloaded.
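A hedged sketch of the sync command for either direction follows (swap the source and destination arguments to go the other way; the account, container, and SAS token are placeholders):

```bash
# Sketch: keep a container in sync with a local folder (placeholder account, container, and SAS).
azcopy sync "/local/data" "https://<account>.blob.core.windows.net/<container>?<SAS>" --recursive
```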
### Cost to synchronize containers
-If you want to keep two containers synchronized, then AzCopy performs the exact same tasks as described in the [The cost to copy between containers](#the-cost-to-copy-between-containers) section in this article. A blob is copied only if the last modified time of a blob in the source container is different than the last modified time of a blob in the destination container. Therefore, you are billed _write_ and _read_ transactions only for blobs that are copied.
+If you want to keep two containers synchronized, then AzCopy performs the exact same tasks as described in the [The cost to copy between containers](#the-cost-to-copy-between-containers) section in this article. A blob is copied only if the last modified time of a blob in the source container is different than the last modified time of a blob in the destination container. Therefore, you're billed _write_ and _read_ transactions only for blobs that are copied.
The [azcopy sync](../common/storage-use-azcopy-blobs-synchronize.md?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json) command uses the [List Blobs](/rest/api/storageservices/list-blobs) operation on both source and destination accounts when synchronizing containers that exist in separate accounts.
The [azcopy sync](../common/storage-use-azcopy-blobs-synchronize.md?toc=/azure/s
The following table contains all of the estimates presented in this article. All estimates are based on transferring **1000** blobs that are each **5 GiB** in size and use the sample prices listed in the next section.
-| Scenario | Hot | Cool | Cold | Archive |
-||-||||
-| Upload blobs (Blob Service endpoint) | $3.53 | $6.41 | $11.54 | $3.53 |
-| Upload blobs (Data Lake Storage endpoint) | $9.16 | $16.65 | $29.98 | $18.32 |
-| Download blobs (Blob Service endpoint) | $0.001 | $0.051 | $0.161 | N/A |
-| Download blobs (Data Lake Storage endpoint) | $0.731 | $1.716 | $16.804 | N/A |
-| Copy blobs | $3.5309 | $0.0064 | $0.0110 | N/A |
-| Copy blobs to another account | $3.53134 | $0.0574 | $0.171 | N/A |
-| Copy blobs to an account in another region | $3.5513 | $0.0774 | $0.191 | N/A |
+| Scenario | Hot | Cool | Cold | Archive |
+||||||
+| Upload blobs (Blob Service endpoint) | $3.53 | $6.41 | $11.54 | $3.53 |
+| Upload blobs (Data Lake Storage endpoint) | $9.22 | $16.65 | $29.98 | $18.32 |
+| Download blobs (Blob Service endpoint) | $0.001 | $0.051 | $0.161 | N/A |
+| Download blobs (Data Lake Storage endpoint) | $0.731 | $1.716 | $16.804 | N/A |
+| Copy blobs | $0.0064 | $0.0109 | $0.0190 | N/A |
+| Copy blobs to another account | $0.0068 | $0.0619 | $0.1719 | N/A |
+| Copy blobs to an account in another region | $0.1068 | $0.1619 | $0.2790 | N/A |
## Sample prices
The following table includes sample (fictitious) prices for each request to the
| Price factor | Hot | Cool | Cold | Archive | |--||||| | Price of write transactions (per 10,000) | $0.055 | $0.10 | $0.18 | $0.10 |
-| Price of read transactions (per 10,000) | $0.0044 | $0.01 | $0.10 | $5.00 |
-| Price of data retrieval (per GiB) | Free | $0.01 | $0.03 | $0.02 |
-| List and container operations (per 10,000) | $0.055 | $0.055 | $0.065 | $0.055 |
+| Price of read transactions (per 10,000) | $0.0044 | $0.01 | $0.10 | $5.50 |
+| Price of data retrieval (per GiB) | Free | $0.01 | $0.03 | $0.022 |
+| List and container operations (per 10,000) | $0.055 | $0.050 | $0.065 | $0.055 |
| All other operations (per 10,000) | $0.0044 | $0.0044 | $0.0052 | $0.0044 |

The following table includes sample (fictitious) prices for each request to the Data Lake Storage endpoint (`dfs.core.windows.net`). For official prices, see [Azure Data Lake Storage pricing](https://azure.microsoft.com/pricing/details/storage/data-lake/).
-| Price factor | Hot | Cool | Cold | Archive |
-|--|-|-|-||
-| Price of write transactions (every 4MiB, per 10,000) | $0.0715 | $0.13 | $0.234 | $0.143 |
-| Price of read transactions (every 4MiB, per 10,000) | $0.0057 | $0.013 | $0.13 | $7.15 |
-| Price of data retrieval (per GiB) | Free | $0.01 | $0.03 | $0.022 |
-| Iterative Read operations (per 10,000) | $0.0715 | $0.0715 | $0.0845 | $0.0715 |
+| Price factor | Hot | Cool | Cold | Archive |
+||||||
+| Price of write transactions (every 4 MiB, per 10,000) | $0.0720 | $0.13 | $0.234 | $0.143 |
+| Price of read transactions (every 4 MiB, per 10,000) | $0.0057 | $0.013 | $0.13 | $7.15 |
+| Price of data retrieval (per GiB) | Free | $0.01 | $0.03 | $0.022 |
+| Iterative Read operations (per 10,000) | $0.0715 | $0.0715 | $0.0845 | $0.0715 |
## Operations used by AzCopy commands
storage Storage Use Azcopy V10 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-v10.md
description: AzCopy is a command-line utility that you can use to copy data to,
Previously updated : 09/29/2022 Last updated : 07/18/2024
This video shows you how to download and run the AzCopy utility.
The steps in the video are also described in the following sections.
-## Download AzCopy
+## Install AzCopy on Linux by using a package manager
-First, download the AzCopy V10 executable file to any directory on your computer. AzCopy V10 is just an executable file, so there's nothing to install.
+You can install AzCopy by using a Linux package that is hosted on the [Linux Software Repository for Microsoft Products](/linux/packages).
+
+### [dnf (RHEL)](#tab/dnf)
+
+1. Download the repository configuration package.
+
+ > [!IMPORTANT]
+ > Make sure to replace the distribution and version with the appropriate strings.
+
+ ```bash
+ curl -sSL -O https://packages.microsoft.com/config/<distribution>/<version>/packages-microsoft-prod.rpm
+ ```
+
+2. Install the repository configuration package.
+
+ ```bash
+ sudo rpm -i packages-microsoft-prod.rpm
+    ```
+
+3. Delete the repository configuration package after you've installed it.
+
+ ```bash
+ rm packages-microsoft-prod.rpm
+    ```
+
+4. Update the package index files.
+
+ ```bash
+ sudo dnf update
+ ```
+5. Install AzCopy.
+
+ ```bash
+ sudo dnf install azcopy
+ ```
++
+### [zypper (OpenSUSE, SLES)](#tab/zypper)
+
+1. Download the repository configuration package.
+
+ > [!IMPORTANT]
+ > Make sure to replace the distribution and version with the appropriate strings.
+
+ ```bash
+ curl -sSL -O https://packages.microsoft.com/config/<distribution>/<version>/packages-microsoft-prod.rpm
+ ```
+
+2. Install the repository configuration package.
+
+ ```bash
+ sudo rpm -i packages-microsoft-prod.rpm
+ ```
+
+3. Delete the repository configuration package after you've installed it.
+
+ ```bash
+ rm packages-microsoft-prod.rpm
+ ```
+
+4. Update the package index files.
+
+ ```bash
+    sudo zypper refresh
+ ```
+
+5. Install AzCopy.
+
+ ```bash
+ sudo zypper install -y azcopy
+ ```
+
+### [apt (Ubuntu, Debian)](#tab/apt)
+
+1. Download the repository configuration package.
+
+ > [!IMPORTANT]
+ > Make sure to replace the distribution and version with the appropriate strings.
+
+ ```bash
+ curl -sSL -O https://packages.microsoft.com/config/<distribution>/<version>/packages-microsoft-prod.deb
+ ```
+
+2. Install the repository configuration package.
+
+ ```bash
+ sudo dpkg -i packages-microsoft-prod.deb
+ ```
+
+3. Delete the repository configuration package after you've installed it.
+
+ ```bash
+ rm packages-microsoft-prod.deb
+ ```
+
+4. Update the package index files.
+
+ ```bash
+ sudo apt-get update
+ ```
+
+5. Install AzCopy.
+
+ ```bash
+ sudo apt-get install azcopy
+ ```
+
+### [tdnf (Azure Linux)](#tab/tdnf)
+
+Install AzCopy.
+
+```bash
+sudo tdnf install azcopy
+```
+++
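Regardless of which installation method you choose, a quick way to confirm that AzCopy is on your path and to check its version:

```bash
azcopy --version
```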
+<a id="download-azcopy"></a>
+
+## Download the AzCopy portable binary
+
+As an alternative to installing a package, you can download the AzCopy V10 executable file to any directory on your computer.
- [Windows 64-bit](https://aka.ms/downloadazcopy-v10-windows) (zip) - [Windows 32-bit](https://aka.ms/downloadazcopy-v10-windows-32bit) (zip)
storage Use Container Storage With Local Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/use-container-storage-with-local-disk.md
kubectl delete sp -n acstor <storage-pool-name>
### Optimize performance when using local NVMe
-Depending on your workloadΓÇÖs performance requirements, you can choose from three different performance tiers: **Basic**, **Standard**, and **Advanced**. These tiers offer a different range of IOPS, and your selection will impact the number of vCPUs that Azure Container Storage components consume in the nodes where it's installed. Standard is the default configuration if you don't update the performance tier.
+Depending on your workload's performance requirements, you can choose from three different performance tiers: **Basic**, **Standard**, and **Advanced**. Your selection will impact the number of vCPUs that Azure Container Storage components consume in the nodes where it's installed. Standard is the default configuration if you don't update the performance tier.
-| **Tier** | **Number of vCPUs** |
-||--|
-| `Basic` | 12.5% of total VM cores |
-| `Standard` (default) | 25% of total VM cores |
-| `Advanced` | 50% of total VM cores |
+These three tiers offer a different range of IOPS. The following table contains guidance on what you could expect with each of these tiers. We used [FIO](https://github.com/axboe/fio), a popular benchmarking tool, to achieve these numbers with the following configuration:
+- AKS: Node SKU - Standard_L16s_v3;
+- FIO: Block size - 4 KB; Queue depth - 32; Numjobs - number of cores assigned to container storage components; Access pattern - random; Working set size - 32G
+
+| **Tier** | **Number of vCPUs** | **100 % Read IOPS** | **100 % Write IOPS** |
+| | | | |
+| `Basic` | 12.5% of total VM cores | Up to 100,000 | Up to 90,000 |
+| `Standard` (default)| 25% of total VM cores | Up to 200,000 | Up to 180,000 |
+| `Advanced` | 50% of total VM cores | Up to 400,000 | Up to 360,000 |
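For reference, a hedged FIO invocation that approximates the configuration described above; the device path, job count, and runtime are placeholders, and your results will vary by node SKU:

```bash
# Sketch: 100% random-read benchmark approximating the stated configuration (placeholder device and job count).
sudo fio --name=acstor-read-bench \
    --filename=/dev/<nvme-device> \
    --rw=randread \
    --bs=4k \
    --iodepth=32 \
    --numjobs=<cores-assigned-to-container-storage> \
    --size=32G \
    --ioengine=libaio \
    --direct=1 \
    --time_based --runtime=60 \
    --group_reporting
```

Use `--rw=randwrite` instead for the 100% write case.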
> [!NOTE] > RAM and hugepages consumption will stay consistent across all tiers: 1 GiB of RAM and 2 GiB of hugepages.
synapse-analytics Get Started Add Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-add-admin.md
description: In this tutorial, you'll learn how to add another administrative us
-+ Last updated 04/02/2021
synapse-analytics Get Started Analyze Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-analyze-spark.md
description: In this tutorial, you'll learn to analyze data with Apache Spark.
-+ Last updated 11/18/2022
synapse-analytics Get Started Analyze Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-analyze-storage.md
description: In this tutorial, you'll learn how to analyze data located in a sto
-+ Last updated 11/18/2022
synapse-analytics Get Started Create Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-create-workspace.md
description: In this tutorial, you'll learn how to create a Synapse workspace, a
-+ Last updated 11/18/2022
synapse-analytics Get Started Knowledge Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-knowledge-center.md
description: In this tutorial, you'll learn how to use the Synapse Knowledge cen
-+ Last updated 04/04/2021
synapse-analytics Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started.md
description: In this tutorial, you'll learn the basic steps to set up and use Az
-+ Last updated 11/18/2022
synapse-analytics How To Analyze Complex Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/how-to-analyze-complex-schema.md
Title: Analyze schema with arrays and nested structures description: How to analyze arrays and nested structures with Apache Spark and SQL -+ Last updated 06/15/2020
synapse-analytics Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/known-issues.md
Title: Known issues description: Learn about the currently known issues with Azure Synapse Analytics and their possible workarounds or resolutions.-+ Last updated 04/08/2024
To learn more about Azure Synapse Analytics, see the [Azure Synapse Analytics Ov
|Azure Synapse Workspace|[REST API PUT operations or ARM/Bicep templates to update network settings fail](#rest-api-put-operations-or-armbicep-templates-to-update-network-settings-fail)|Has workaround| |Azure Synapse Workspace|[Known issue incorporating square brackets [] in the value of Tags](#known-issue-incorporating-square-brackets--in-the-value-of-tags)|Has workaround| |Azure Synapse Workspace|[Deployment Failures in Synapse Workspace using Synapse-workspace-deployment v1.8.0 in GitHub actions with ARM templates](#deployment-failures-in-synapse-workspace-using-synapse-workspace-deployment-v180-in-github-actions-with-arm-templates)|Has workaround|
+|Azure Synapse Workspace|[No `GET` API operation dedicated to the `Microsoft.Synapse/workspaces/trustedServiceBypassEnabled` setting](#no-get-api-operation-dedicated-to-the-microsoftsynapseworkspacestrustedservicebypassenabled-setting)|Has workaround|
The error message displayed is `Action failed - Error: Orchestrate failed - Synt
After applying either of these workarounds and successfully deploying, manually update the necessary configurations within the workspace to ensure everything is set up correctly. This might involve editing configuration files, adjusting settings, or performing other tasks relevant to the specific environment or application being deployed.
+### No 'GET' API operation dedicated to the "Microsoft.Synapse/workspaces/trustedServiceBypassEnabled" setting
+
+**Issue Summary:** In Azure Synapse Analytics, there is no dedicated 'GET' API operation for retrieving the state of the "trustedServiceBypassEnabled" setting at the resource scope "Microsoft.Synapse/workspaces/trustedServiceBypassEnabled". While users can set this configuration, they cannot directly retrieve its state via this specific resource scope.
+
+**Impact:** This limitation impacts Azure Policy definitions, as they cannot enforce a specific state for the "trustedServiceBypassEnabled" setting. Customers are unable to use Azure Policy to deny or manage this configuration.
+
+**Workaround:** There is no workaround available in Azure Policy to enforce the desired configuration state for this property. However, users can use the 'GET' workspace operation to audit the configuration state for reporting purposes.\
+This 'GET' workspace operation maps to the 'Microsoft.Synapse/workspaces/trustedServiceBypassEnabled' Azure Policy Alias.
+
+The Azure Policy Alias can be used to manage this property with a Deny Azure Policy Effect if the operation is a PUT request against the Microsoft.Synapse/workspaces resource, but it only functions for Audit purposes if the PUT request is sent directly to the Microsoft.Synapse/workspaces/trustedServiceByPassConfiguration child resource. The parent resource has a property, [properties.trustedServiceBypassEnabled], that maps the configuration from the child resource, which is why it can still be audited through the parent resource's Azure Policy Alias.
+
+Since the Microsoft.Synapse/workspaces/trustedServiceByPassConfiguration child resource has no GET operation available, Azure Policy cannot manage these requests, and Azure Policy cannot generate an Azure Policy Alias for it.
+
+**Parent Resource:** Microsoft.Synapse/workspaces
+
+**Child Resource:** Microsoft.Synapse/workspaces/trustedServiceByPassConfiguration
+
+The Azure portal makes the PUT request directly to the PUT API for the child resource and therefore the Azure portal, along with any other API requests made outside of the parent Microsoft.Synapse/workspaces APIs, cannot be managed by Azure Policy through a Deny or other actionable Azure Policy Effect.
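To illustrate the audit-only workaround, the following is a minimal sketch that reads the parent workspace resource and inspects the property; the subscription, resource group, and workspace names are placeholders, and the API version shown is an assumption:

```bash
# Sketch: audit the current trustedServiceBypassEnabled value via the parent workspace GET
# (placeholder IDs; API version is an assumption).
az rest --method get \
    --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Synapse/workspaces/<workspace-name>?api-version=2021-06-01" \
    --query "properties.trustedServiceBypassEnabled"
```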
+
## Azure Synapse Analytics serverless SQL pool active known issues summary

### Query failures from serverless SQL pool to Azure Cosmos DB analytical store
synapse-analytics Tutorial Build Applications Use Mmlspark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-build-applications-use-mmlspark.md
Title: 'Tutorial: Build machine learning applications using Synapse Machine Learning' description: Learn how to use Synapse Machine Learning to create machine learning applications in Azure Synapse Analytics.-+ Last updated 03/08/2021
synapse-analytics Tutorial Computer Vision Use Mmlspark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-computer-vision-use-mmlspark.md
Title: 'Tutorial: Vision with Azure AI services' description: Learn how to use Azure AI Vision in Azure Synapse Analytics.-+ Last updated 11/02/2021
synapse-analytics Tutorial Form Recognizer Use Mmlspark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-form-recognizer-use-mmlspark.md
Title: 'Tutorial: Document Intelligence with Azure AI services' description: Learn how to use Azure AI Document Intelligence in Azure Synapse Analytics.-+ Last updated 11/02/2021
synapse-analytics Tutorial Text Analytics Use Mmlspark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-text-analytics-use-mmlspark.md
Title: 'Tutorial: Text Analytics with Azure AI services' description: Learn how to use text analytics in Azure Synapse Analytics.-+ Last updated 11/02/2021
synapse-analytics Tutorial Translator Use Mmlspark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-translator-use-mmlspark.md
Title: 'Tutorial: Translator with Azure AI services' description: Learn how to use translator in Azure Synapse Analytics.-+ Last updated 11/02/2021
synapse-analytics Quickstart Apache Spark Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-apache-spark-notebook.md
description: This quickstart shows how to use the web tools to create a serverle
-+ Last updated 02/15/2022
synapse-analytics Quickstart Connect Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-connect-azure-data-explorer.md
Title: 'Quickstart: Connect Azure Data Explorer to an Azure Synapse Analytics workspace' description: Connect an Azure Data Explorer cluster to an Azure Synapse Analytics workspace by using Apache Spark for Azure Synapse Analytics. -+ Last updated 02/15/2022
synapse-analytics Quickstart Create Apache Spark Pool Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-create-apache-spark-pool-portal.md
Last updated 03/11/2024-+
synapse-analytics Quickstart Create Apache Spark Pool Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-create-apache-spark-pool-studio.md
Last updated 03/11/2024-+
synapse-analytics Quickstart Create Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-create-workspace.md
Title: 'Quickstart: create a Synapse workspace' description: Create an Synapse workspace by following the steps in this guide. -+ Last updated 03/23/2022
synapse-analytics Gateway Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/gateway-ip-addresses.md
Title: Gateway IP addresses description: An article that teaches you what are the IP addresses used in different regions. -+ Last updated 03/23/2023
synapse-analytics Apache Spark Delta Lake Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-delta-lake-overview.md
Title: Overview of how to use Linux Foundation Delta Lake in Apache Spark for Az
description: Learn how to use Delta Lake in Apache Spark for Azure Synapse Analytics, to create, and use tables with ACID properties. -+
synapse-analytics Apache Spark External Metastore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-external-metastore.md
Title: Use external Hive Metastore for Azure Synapse Spark Pool description: Learn how to set up external Hive Metastore for Azure Synapse Spark Pool. keywords: external Hive Metastore,share,Synapse-+
synapse-analytics Apache Spark History Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-history-server.md
Title: Use the extended Spark history server to debug apps
description: Use the extended Spark history server to debug and diagnose Spark applications in Azure Synapse Analytics. -+ Last updated 02/15/2022
synapse-analytics Apache Spark Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-overview.md
Title: Apache Spark in Azure Synapse Analytics overview
description: This article provides an introduction to Apache Spark in Azure Synapse Analytics and the different scenarios in which you can use Spark. -+ Last updated 12/06/2022
synapse-analytics Apache Spark Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-performance.md
Title: Optimize Spark jobs for performance
description: This article provides an introduction to Apache Spark in Azure Synapse Analytics. -+ Last updated 02/15/2022
synapse-analytics Runtime For Apache Spark Lifecycle And Supportability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/runtime-for-apache-spark-lifecycle-and-supportability.md
Last updated 03/08/2024-+
synapse-analytics Spark Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/spark-dotnet.md
Title: Use .NET for Apache Spark
description: Learn about using .NET and Apache Spark to do batch processing, real-time streaming, machine learning, and write ad-hoc queries in Azure Synapse Analytics notebooks. -+
synapse-analytics Tutorial Use Pandas Spark Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/tutorial-use-pandas-spark-pool.md
Title: 'Tutorial: Use Pandas to read/write ADLS data in serverless Apache Spark pool in Synapse Analytics' description: Tutorial for how to use Pandas in a PySpark notebook to read/write ADLS data in a serverless Apache Spark pool.-+
synapse-analytics Use Prometheus Grafana To Monitor Apache Spark Application Level Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/use-prometheus-grafana-to-monitor-apache-spark-application-level-metrics.md
description: Tutorial - Learn how to deploy the Apache Spark application metrics
-+ Last updated 01/22/2021
synapse-analytics Sql Data Warehouse Workload Classification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-workload-classification.md
Workload management classification allows workload policies to be applied to req
While there are many ways to classify data warehousing workloads, the simplest and most common classification is load and query. You load data with insert, update, and delete statements. You query the data using selects. A data warehousing solution will often have a workload policy for load activity, such as assigning a higher resource class with more resources. A different workload policy could apply to queries, such as lower importance compared to load activities.
-You can also subclassify your load and query workloads. Subclassification gives you more control of your workloads. For example, query workloads can consist of cube refreshes, dashboard queries or ad hoc queries. You can classify each of these query workloads with different resource classes or importance settings. Load can also benefit from subclassification. Large transformations can be assigned to larger resource classes. Higher importance can be used to ensure key sales data is loader before weather data or a social data feed.
+You can also subclassify your load and query workloads. Subclassification gives you more control of your workloads. For example, query workloads can consist of cube refreshes, dashboard queries or ad hoc queries. You can classify each of these query workloads with different resource classes or importance settings. Load can also benefit from subclassification. Large transformations can be assigned to larger resource classes. Higher importance can be used to ensure key sales data is loaded before weather data or a social data feed.
Not all statements are classified as they do not require resources or need importance to influence execution. `DBCC` commands, `BEGIN`, `COMMIT`, and `ROLLBACK TRANSACTION` statements are not classified.
synapse-analytics Synapse Notebook Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-notebook-activity.md
Title: Transform data by running a Synapse notebook
description: In this article, you learn how to create and develop a Synapse notebook activity and a Synapse pipeline. -+ Last updated 05/19/2021
update-manager Guidance Migration Automation Update Management Azure Update Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/guidance-migration-automation-update-management-azure-update-manager.md
- Title: Guidance to move virtual machines from Automation Update Management to Azure Update Manager
-description: Guidance overview on migration from Automation Update Management to Azure Update Manager
--- Previously updated : 05/09/2024---
-# Move from Automation Update Management to Azure Update Manager
-
-**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers
-
-This article provides guidance to move virtual machines from Automation Update Management to Azure Update Manager.
-
-Azure Update Manager provides a SaaS solution to manage and govern software updates to Windows and Linux machines across Azure, on-premises, and multicloud environments. It's an evolution of [Azure Automation Update management solution](../automation/update-management/overview.md) with new features and functionality, for assessment and deployment of software updates on a single machine or on multiple machines at scale.
-
-For the Azure Update Manager, both AMA and MMA aren't a requirement to manage software update workflows as it relies on the Microsoft Azure VM Agent for Azure VMs and Azure connected machine agent for Arc-enabled servers. When you perform an update operation for the first time on a machine, an extension is pushed to the machine and it interacts with the agents to assess missing updates and install updates.
--
-> [!NOTE]
-> - If you are using Azure Automation Update Management Solution, we recommend that you don't remove MMA agents from the machines without completing the migration to Azure Update Manager for the machine's patch management needs. If you remove the MMA agent from the machine without moving to Azure Update Manager, it would break the patching workflows for that machine.
->
-> - All capabilities of Azure Automation Update Management will be available on Azure Update Manager before the deprecation date.
-
-## Azure portal experience
-
-This section explains how to use the portal experience to move schedules and machines from Automation Update Management to Azure Update Manager. With minimal clicks and automated way to move your resources, it's the easiest way to move if you don't have customizations built on top of your Automation Update Management solution.
-
-To access the portal migration experience, you can use several entry points.
-
-Select the **Migrate Now** button present on the following entry points. After the selection, you're guided through the process of moving your schedules and machines to Azure Update Manager. This process is designed to be user-friendly and straightforward to allow you to complete the migration with minimal effort.
-
-You can migrate from any of the following entry points:
-
-#### [Automation Update Management](#tab/update-mgmt)
-
-Select the **Migrate Now** button and a migration blade opens. It contains a summary of all resources including machines, and schedules in the Automation account. By default, the Automation account from which you accessed this blade is preselected if you go by this route.
-
- :::image type="content" source="./media/guidance-migration-automation-update-management-azure-update-manager/migrate-from-update-management.png" alt-text="Screenshot that shows how to migrate from Automation Update Management entry point." lightbox="./media/guidance-migration-automation-update-management-azure-update-manager/migrate-from-update-management.png":::
-
-Here, you can see how many of Azure, Arc-enabled servers, non-Azure non Arc-enabled servers, and schedules are enabled in Automation Update Management and need to be moved to Azure Update Manager. You can also view the details of these resources.
-
-The migration blade provides an overview of the resources that will be moved, allowing you to review and confirm the migration before proceeding. Once you're ready, you can proceed with the migration process to move your schedules and machines to Azure Update Manager.
--
-After you review the resources that must be moved, you can proceed with the migration process which is a three-step process:
-
-1. **Prerequisites**
-
- This includes two steps:
-
- a. **Onboard non-Azure non-Arc-enabled machines to Arc** - This is because Arc connectivity is a prerequisite for Azure Update Manager. Onboarding your machines to Azure Arc is free of cost, and once you do so, you can avail all management services as you can do for any Azure machine. For more information, see [Azure Arc documentation](../azure-arc/servers/onboard-service-principal.md)
- on how to onboard your machines.
-
- b. **Download and run PowerShell script locally** - This is required for the creation of a user identity and appropriate role assignments so that the migration can take place. This script gives proper RBAC to the User Identity on the subscription to which the automation account belongs, machines onboarded to Automation Update Management, scopes that are part of dynamic queries etc. so that the configuration can be assigned to the machines, MRP configurations can be created and updates solution can be removed. For more information, see [Azure Update Manager documentation](guidance-migration-automation-update-management-azure-update-manager.md#prerequisite-2-create-user-identity-and-role-assignments-by-running-powershell-script).
-
- :::image type="content" source="./media/guidance-migration-automation-update-management-azure-update-manager/prerequisite-migration-update-manager.png" alt-text="Screenshot that shows the prerequisites for migration." lightbox="./media/guidance-migration-automation-update-management-azure-update-manager/prerequisite-migration-update-manager.png":::
-
-1. **Move resources in Automation account to Azure Update Manager**
-
- The next step in the migration process is to enable Azure Update Manager on the machines to be moved and create equivalent maintenance configurations for the schedules to be migrated. When you select the **Migrate Now** button, it imports the *MigrateToAzureUpdateManager* runbook into your Automation account and sets the verbose logging to **True**.
-
- :::image type="content" source="./media/guidance-migration-automation-update-management-azure-update-manager/step-two-migrate-workload.png" alt-text="Screenshot that shows how to migrate workload in your Automation account." lightbox="./media/guidance-migration-automation-update-management-azure-update-manager/step-two-migrate-workload.png":::
-
- Select **Start** runbook, which presents the parameters that must be passed to the runbook.
-
- :::image type="content" source="./media/guidance-migration-automation-update-management-azure-update-manager/start-runbook-migration.png" alt-text="Screenshot that shows how to start runbook to allow the parameters to be passed to the runbook." lightbox="./media/guidance-migration-automation-update-management-azure-update-manager/start-runbook-migration.png":::
-
- For more information on the parameters to fetch and the location from where it must be fetched, see [migration of machines and schedules](#step-1-migration-of-machines-and-schedules). Once you start the runbook after passing in all the parameters, Azure Update Manager will begin to get enabled on machines and maintenance configuration in Azure Update Manager will start getting created. You can monitor Azure runbook logs for the status of execution and migration of schedules.
--
-1. **Deboard resources from Automation Update management**
-
- Run the clean-up script to deboard machines from the Automation Update Management solution and disable Automation Update Management schedules.
-
- After you select the **Run clean-up script** button, the runbook *DeboardFromAutomationUpdateManagement* will be imported into your Automation account, and its verbose logging is set to **True**.
-
- :::image type="content" source="./media/guidance-migration-automation-update-management-azure-update-manager/run-clean-up-script.png" alt-text="Screenshot that shows how to perform post migration." lightbox="./media/guidance-migration-automation-update-management-azure-update-manager/run-clean-up-script.png":::
-
- When you select **Start**, the runbook asks for the parameters to be passed to it. For more information about how to fetch these parameter values, see [Deboarding from Automation Update Management solution](#step-2-deboarding-from-automation-update-management-solution).
-
- :::image type="content" source="./media/guidance-migration-automation-update-management-azure-update-manager/deboard-update-management-start-runbook.png" alt-text="Screenshot that shows how to deboard from Automation Update Management and starting the runbook." lightbox="./media/guidance-migration-automation-update-management-azure-update-manager/deboard-update-management-start-runbook.png":::
-
-#### [Azure Update Manager](#tab/update-manager)
-
-You can initiate migration from Azure Update Manager. At the top of the screen, you can see a deprecation banner with a **Migrate Now** button.
-
- :::image type="content" source="./media/guidance-migration-automation-update-management-azure-update-manager/migration-entry-update-manager.png" alt-text="Screenshot that shows how to migrate from Azure Update Manager entry point." lightbox="./media/guidance-migration-automation-update-management-azure-update-manager/migration-entry-update-manager.png":::
-
-Select **Migrate Now** button to view the migration blade that allows you to select the Automation account whose resources you want to move from Automation Update Management to Azure Update Manager. You must select subscription, resource group, and finally the Automation account name. After you select, you will view the summary of machines and schedules to be migrated to Azure Update Manager. From here, follow the migration steps listed in [Automation Update Management](#azure-portal-experience).
-
-#### [Virtual machine](#tab/virtual-machine)
-
-To initiate migration from a single VM **Updates** view, follow these steps:
-
-1. Select the machine that is enabled for Automation Update Management and under **Operations**, select **Updates**.
-1. In the deprecation banner, select the **Migrate Now** button.
-
- :::image type="content" source="./media/guidance-migration-automation-update-management-azure-update-manager/migrate-single-virtual-machine.png" alt-text="Screenshot that shows how to migrate from single virtual machine entry point." lightbox="./media/guidance-migration-automation-update-management-azure-update-manager/migrate-single-virtual-machine.png":::
-
- You can see that the Automation account to which the machine belongs is preselected and a summary of all resources in the Automation account is presented. This allows you to migrate the resources from Automation Update Management to Azure Update Manager.
-
- :::image type="content" source="./media/guidance-migration-automation-update-management-azure-update-manager/single-vm-migrate-now.png" alt-text="Screenshot that shows how to migrate the resources from single virtual machine entry point." lightbox="./media/guidance-migration-automation-update-management-azure-update-manager/single-vm-migrate-now.png":::
-
- From here, follow the migration steps listed in [Automation Update Management](#azure-portal-experience).
-
- For more information on how the scripts are executed in the backend, and their behavior see, [Migration scripts](#migration-scripts).
---
-## Migration scripts
-
-Using migration runbooks, you can automatically migrate all workloads (machines and schedules) from Automation Update Management to Azure Update Manager. This section details on how to run the script, what the script does at the backend, expected behavior, and any limitations, if applicable. The script can migrate all the machines and schedules in one automation account at one go. If you have multiple automation accounts, you have to run the runbook for all the automation accounts.
-
-At a high level, you need to follow the below steps to migrate your machines and schedules from Automation Update Management to Azure Update Manager.
-
-### Prerequisites summary
-
-1. Onboard [non-Azure machines on to Azure Arc](../azure-arc/servers/onboard-service-principal.md).
-1. Download and run the PowerShell script for the creation of User Identity and Role Assignments locally on your system. See detailed instructions in the [step-by-step guide](#step-by-step-guide) as it also has certain prerequisites.
-
-### Steps summary
-
-1. Run migration automation runbook for migrating machines and schedules from Automation Update Management to Azure Update Manager. See detailed instructions in the [step-by-step guide](#step-by-step-guide).
-1. Run cleanup scripts to deboard from Automation Update Management. See detailed instructions in the [step-by-step guide](#step-by-step-guide).
-
-### Unsupported scenarios
--- Non-Azure Saved Search Queries won't be migrated; these have to be migrated manually.-
-For the complete list of limitations and things to note, see the last section of this article.
-
-### Step-by-step guide
-
-The information mentioned in each of the above steps is explained in detail below.
-
-#### Prerequisite 1: Onboard Non-Azure Machines to Arc
-
-**What to do**
-
-Migration automation runbook ignores resources that aren't onboarded to Arc. It's therefore a prerequisite to onboard all non-Azure machines on to Azure Arc before running the migration runbook. Follow the steps to [onboard machines on to Azure Arc](../azure-arc/servers/onboard-service-principal.md).
-
-#### Prerequisite 2: Create User Identity and Role Assignments by running PowerShell script
--
-**A. Prerequisites to run the script**
-
- - Run the command `Install-Module -Name Az -Repository PSGallery -Force` in PowerShell. The prerequisite script depends on Az.Modules. This step is required if Az.Modules aren't present or updated.
- - To run this prerequisite script, you must have *Microsoft.Authorization/roleAssignments/write* permissions on all the subscriptions that contain Automation Update Management resources such as machines, schedules, log analytics workspace, and automation account. See [how to assign an Azure role](../role-based-access-control/role-assignments-rest.md#assign-an-azure-role).
- - You must have the [Update Management Permissions](../automation/automation-role-based-access-control.md).
-
 :::image type="content" source="./media/guidance-migration-automation-update-management-azure-update-manager/prerequisite-install-module.png" alt-text="Screenshot that shows the command to install the module." lightbox="./media/guidance-migration-automation-update-management-azure-update-manager/prerequisite-install-module.png":::
--
-**B. Run the script**
-
- Download and run the PowerShell script [`MigrationPrerequisiteScript`](https://github.com/azureautomation/Preqrequisite-for-Migration-from-Azure-Automation-Update-Management-to-Azure-Update-Manager/blob/main/MigrationPrerequisites.ps1) locally. This script takes AutomationAccountResourceId of the Automation account to be migrated and AutomationAccountAzureEnvironment as the inputs. The accepted values for AutomationAccountAzureEnvironment are AzureCloud, AzureUSGovernment and AzureChina signifying the cloud to which the automation account belongs.
-
- :::image type="content" source="./media/guidance-migration-automation-update-management-azure-update-manager/run-script.png" alt-text="Screenshot that shows how to download and run the script." lightbox="./media/guidance-migration-automation-update-management-azure-update-manager/run-script.png":::
-
- You can fetch AutomationAccountResourceId by going to **Automation Account** > **Properties**.
-
- :::image type="content" source="./media/guidance-migration-automation-update-management-azure-update-manager/fetch-resource-id.png" alt-text="Screenshot that shows how to fetch the resource ID." lightbox="./media/guidance-migration-automation-update-management-azure-update-manager/fetch-resource-id.png":::
-
-**C. Verify**
-
- After you run the script, verify that a user managed identity is created in the automation account. **Automation account** > **Identity** > **User Assigned**.
-
- :::image type="content" source="./media/guidance-migration-automation-update-management-azure-update-manager/script-verification.png" alt-text="Screenshot that shows how to verify that a user managed identity is created." lightbox="./media/guidance-migration-automation-update-management-azure-update-manager/script-verification.png":::
-
-**D. Backend operations by the script**
-
- 1. Updates the Az.Modules for the Automation account, which are required for running the migration and deboarding scripts.
- 1. Creates an automation variable named AutomationAccountAzureEnvironment that stores the Azure cloud environment to which the Automation account belongs.
- 1. Creates a user identity in the same subscription and resource group as the Automation account. The name of the user identity looks like *AutomationAccount_aummig_umsi*.
- 1. Attaches the user identity to the Automation account.
- 1. Assigns the following permissions to the user managed identity: [Update Management Permissions Required](../automation/automation-role-based-access-control.md#update-management-permissions).
--
- 1. To do this, the script fetches all the machines onboarded to Automation Update Management under this automation account and parses their subscription IDs to grant the required RBAC to the user identity.
- 1. The script grants the user identity the proper RBAC on the subscription to which the automation account belongs so that the MRP configs can be created there.
- 1. The script assigns the required roles for the Log Analytics workspace and solution.
- 1. Registers the required subscriptions to the Microsoft.Maintenance and Microsoft.EventGrid resource providers.
-
-#### Step 1: Migration of machines and schedules
-
-This step involves using an automation runbook to migrate all the machines and schedules from an automation account to Azure Update Manager.
-
-**Follow these steps:**
-
-1. Import the [migration runbook](https://github.com/azureautomation/Migrate-from-Azure-Automation-Update-Management-to-Azure-Update-Manager/blob/main/Migration.ps1) from the Runbooks gallery: search for **azure automation update** in the browse gallery, import the runbook named **Migrate from Azure Automation Update Management to Azure Update Manager**, and publish it.
-
- :::image type="content" source="./media/guidance-migration-automation-update-management-azure-update-manager/migrate-from-automation-update-management.png" alt-text="Screenshot that shows how to migrate from Automation Update Management." lightbox="./media/guidance-migration-automation-update-management-azure-update-manager/migrate-from-automation-update-management.png":::
-
- Runbook supports PowerShell 5.1.
-
- :::image type="content" source="./media/guidance-migration-automation-update-management-azure-update-manager/runbook-support.png" alt-text="Screenshot that shows runbook supports PowerShell 5.1 while importing." lightbox="./media/guidance-migration-automation-update-management-azure-update-manager/runbook-support.png":::
-
-1. Set Verbose Logging to True for the runbook.
-
- :::image type="content" source="./media/guidance-migration-automation-update-management-azure-update-manager/verbose-log-records.png" alt-text="Screenshot that shows how to set verbose log records." lightbox="./media/guidance-migration-automation-update-management-azure-update-manager/verbose-log-records.png":::
-
-1. Run the runbook and pass the required parameters like AutomationAccountResourceId, UserManagedServiceIdentityClientId, etc.
-
- :::image type="content" source="./media/guidance-migration-automation-update-management-azure-update-manager/run-runbook-parameters.png" alt-text="Screenshot that shows how to run the runbook and pass the required parameters." lightbox="./media/guidance-migration-automation-update-management-azure-update-manager/run-runbook-parameters.png":::
-
- 1. You can fetch AutomationAccountResourceId from **Automation Account** > **Properties**.
-
- :::image type="content" source="./media/guidance-migration-automation-update-management-azure-update-manager/fetch-resource-id-portal.png" alt-text="Screenshot that shows how to fetch Automation account resource ID." lightbox="./media/guidance-migration-automation-update-management-azure-update-manager/fetch-resource-id-portal.png":::
-
- 1. You can fetch UserManagedServiceIdentityClientId from **Automation Account** > **Identity** > **User Assigned** > **Identity** > **Properties** > **Client ID**.
-
- :::image type="content" source="./media/guidance-migration-automation-update-management-azure-update-manager/fetch-client-id.png" alt-text="Screenshot that shows how to fetch client ID." lightbox="./media/guidance-migration-automation-update-management-azure-update-manager/fetch-client-id.png":::
-
- 1. Setting **EnablePeriodicAssessmentForMachinesOnboardedToUpdateManagement** to **TRUE** would enable periodic assessment property on all the machines onboarded to Automation Update Management.
-
- 1. Setting **MigrateUpdateSchedulesAndEnablePeriodicAssessmentonLinkedMachines** to **TRUE** would migrate all the update schedules in Automation Update Management to Azure Update Manager and would also turn on periodic assessment property to **True** on all the machines linked to these schedules.
-
- 1. You need to specify **ResourceGroupForMaintenanceConfigurations** where all the maintenance configurations in Azure Update Manager would be created. If you supply a new name, a resource group would be created where all the maintenance configurations would be created. However, if you supply a name with which a resource group already exists, all the maintenance configurations would be created in the existing resource group.
-
-1. Check Azure Runbook Logs for the status of execution and migration status of SUCs.
-
 :::image type="content" source="./media/guidance-migration-automation-update-management-azure-update-manager/log-status.png" alt-text="Screenshot that shows the runbook logs." lightbox="./media/guidance-migration-automation-update-management-azure-update-manager/log-status.png":::
-
-**Runbook operations in backend**
-
-The migration runbook performs the following tasks:
-
-- Enables periodic assessment on all machines.
-- All schedules in the automation account are migrated to Azure Update Manager and a corresponding maintenance configuration is created for each of them, having the same properties.
-
-**About the script**
-
-The following is the behavior of the migration script:
-- Checks whether a resource group with the name taken as input is already present in the subscription of the automation account. If not, it creates a resource group with the name specified by the customer. This resource group is used for creating the MRP configs for V2.
-- The RebootOnly setting isn't available in Azure Update Manager. Schedules that have the RebootOnly setting aren't migrated.
-- Filters out SUCs that are in the errored/expired/provisioningFailed/disabled state, marks them as **Not Migrated**, and prints the appropriate logs indicating that such SUCs aren't migrated.
-- The config assignment name is a string in the format **AUMMig_AAName_SUCName**.
-- Checks against Azure Resource Graph whether this dynamic scope is already assigned to the maintenance configuration. Only if it isn't already assigned does the script assign it, with an assignment name in the format **AUMMig_AAName_SUCName_SomeGUID**.
-- For schedules that have pre/post tasks configured, the script creates an automation webhook for the runbooks in the pre/post tasks and Event Grid subscriptions for the pre/post maintenance events. For more information, see [how pre/post works in Azure Update Manager](tutorial-webhooks-using-runbooks.md).
-- A summarized set of logs is printed to the output stream to give an overall status of machines and SUCs.
-- Detailed logs are printed to the verbose stream.
-- Post-migration, a Software Update Configuration can have any one of the following four migration statuses:
-
- - **MigrationFailed**
- - **PartiallyMigrated**
- - **NotMigrated**
- - **Migrated**
-
-The following table shows the scenarios associated with each migration status.
-
-| **MigrationFailed** | **PartiallyMigrated** | **NotMigrated** | **Migrated** |
-|||||
|Failed to create Maintenance Configuration for the Software Update Configuration.| Non-Zero number of Machines where Patch-Settings failed to apply.| Failed to get the software update configuration from the API due to a client/server error such as an **Internal Service Error**.| |
| | Non-Zero number of Machines with failed Configuration Assignments.| Software Update Configuration has its reboot setting as reboot only. This isn't supported today in Azure Update Manager.| |
| | Non-Zero number of Dynamic Queries that failed to resolve, that is, failed to execute the query against Azure Resource Graph.| | |
| | Non-Zero number of Dynamic Scope Configuration assignment failures.| Software Update Configuration doesn't have a succeeded provisioning state in the DB.| |
| | Software Update Configuration has Saved Search Queries.| Software Update Configuration is in an errored state in the DB.| |
| | Software Update Configuration has pre/post tasks that haven't been migrated successfully. | Schedule associated with the Software Update Configuration was already expired at the time of migration.| |
| | | Schedule associated with the Software Update Configuration is disabled.| |
| | | Unhandled exception while migrating the software update configuration.| Zero Machines where Patch-Settings failed to apply.<br><br> **And** <br><br> Zero Machines with failed Configuration Assignments. <br><br> **And** <br><br> Zero Dynamic Queries that failed to resolve, that is, failed to execute the query against Azure Resource Graph. <br><br> **And** <br><br> Zero Dynamic Scope Configuration assignment failures. <br><br> **And** <br><br> Software Update Configuration has zero Saved Search Queries.|
-
-To determine which scenario or scenarios in the preceding table explain why a software update configuration has a specific status, look at the verbose/failed/warning logs to get the error code and error message.
-
-You can also search with the name of the update schedule to get logs specific to it for debugging.
--
-#### Step 2: Deboarding from Automation Update Management solution
-
-**Follow these steps:**
-
-1. Import the deboarding runbook from the Runbooks gallery: search for **azure automation update** in the browse gallery, import the runbook named **Deboard from Azure Automation Update Management**, and publish it.
-
 :::image type="content" source="./media/guidance-migration-automation-update-management-azure-update-manager/deboard-from-automation-update-management.png" alt-text="Screenshot that shows how to import the deboarding runbook." lightbox="./media/guidance-migration-automation-update-management-azure-update-manager/deboard-from-automation-update-management.png":::
-
- Runbook supports PowerShell 5.1.
-
- :::image type="content" source="./media/guidance-migration-automation-update-management-azure-update-manager/deboard-runbook-support.png" alt-text="Screenshot that shows the runbook supports PowerShell 5.1 while deboarding." lightbox="./media/guidance-migration-automation-update-management-azure-update-manager/deboard-runbook-support.png":::
-
-1. Set Verbose Logging to **True** for the Runbook.
-
- :::image type="content" source="./media/guidance-migration-automation-update-management-azure-update-manager/verbose-log-records-deboard.png" alt-text="Screenshot that shows log verbose records setting while deboarding." lightbox="./media/guidance-migration-automation-update-management-azure-update-manager/verbose-log-records-deboard.png":::
-
-1. Start the runbook and pass parameters such as AutomationAccountResourceId, UserManagedServiceIdentityClientId, etc.
-
- :::image type="content" source="./media/guidance-migration-automation-update-management-azure-update-manager/deboard-runbook-parameters.png" alt-text="Screenshot that shows how to start runbook and pass parameters while deboarding." lightbox="./media/guidance-migration-automation-update-management-azure-update-manager/deboard-runbook-parameters.png":::
-
- You can fetch AutomationAccountResourceId from **Automation Account** > **Properties**.
-
 :::image type="content" source="./media/guidance-migration-automation-update-management-azure-update-manager/fetch-resource-id-deboard.png" alt-text="Screenshot that shows how to fetch resource ID while deboarding." lightbox="./media/guidance-migration-automation-update-management-azure-update-manager/fetch-resource-id-deboard.png":::
-
- You can fetch UserManagedServiceIdentityClientId from **Automation Account** > **Identity** > **User Assigned** > **Identity** > **Properties** > **Client ID**.
-
- :::image type="content" source="./media/guidance-migration-automation-update-management-azure-update-manager/deboard-fetch-client-id.png" alt-text="Screenshot that shows how to fetch client ID while deboarding." lightbox="./media/guidance-migration-automation-update-management-azure-update-manager/deboard-fetch-client-id.png":::
-
-1. Check Azure runbook logs for the status of deboarding of machines and schedules.
-
 :::image type="content" source="./media/guidance-migration-automation-update-management-azure-update-manager/deboard-debug-logs.png" alt-text="Screenshot that shows the runbook logs while deboarding." lightbox="./media/guidance-migration-automation-update-management-azure-update-manager/deboard-debug-logs.png":::
-
-**Deboarding script operations in the backend**
-
-- Disable all the underlying schedules for all the software update configurations present in this Automation account. This is done to ensure that the Patch-MicrosoftOMSComputers runbook isn't triggered for SUCs that were partially migrated to V2.
-- Delete the Updates solution from the linked Log Analytics workspace for the Automation account being deboarded from Automation Update Management in V1.
-- A summarized log of all SUCs disabled and the status of removing the Updates solution from the linked Log Analytics workspace is also printed to the output stream.
-- Detailed logs are printed to the verbose stream.
-
-**Callouts for the migration process:**
-
-- Non-Azure Saved Search Queries won't be migrated.
-- The Migration and Deboarding Runbooks need to have the Az.Modules updated to work.
-- The prerequisite script updates the Az.Modules to the latest version 8.0.0.
-- The StartTime of the MRP Schedule will be equal to the nextRunTime of the Software Update Configuration.
-- Data from Log Analytics won't be migrated.
-- User Managed Identities [don't support](/entra/identity/managed-identities-azure-resources/managed-identities-faq#can-i-use-a-managed-identity-to-access-a-resource-in-a-different-directorytenant) cross tenant scenarios.
-- RebootOnly Setting isn't available in Azure Update Manager. Schedules having RebootOnly Setting won't be migrated.
-- For Recurrence, Automation schedules support values from 1 to 100 for Hourly/Daily/Weekly/Monthly schedules, whereas Azure Update Manager's maintenance configuration supports values from 6 to 35 for Hourly and from 1 to 35 for Daily/Weekly/Monthly.
- - For example, if the automation schedule has a recurrence of every 100 Hours, then the equivalent maintenance configuration schedule has it for every 100/24 = 4.16 (Round to Nearest Value) -> Four days will be the recurrence for the maintenance configuration.
- - For example, if the automation schedule has a recurrence of every 1 hour, then the equivalent maintenance configuration schedule will have it every 6 hours.
- - Apply the same convention for Weekly and Daily.
- - If the automation schedule has daily recurrence of say 100 days, then 100/7 = 14.28 (Round to Nearest Value) -> 14 weeks will be the recurrence for the maintenance configuration schedule.
- - If the automation schedule has weekly recurrence of say 100 weeks, then 100/4.34 = 23.04 (Round to Nearest Value) -> 23 Months will be the recurrence for the maintenance configuration schedule.
- - If the automation schedule that should recur Every 100 Weeks and has to be Executed on Fridays. When translated to maintenance configuration, it will be Every 23 Months (100/4.34). But there's no way in Azure Update Manager to say that execute every 23 Months on all Fridays of that Month, so the schedule won't be migrated.
- - If an automation schedule has a recurrence of more than 35 Months, then in maintenance configuration it will always have 35 Months Recurrence.
- - SUC supports between 30 Minutes to six Hours for the Maintenance Window. MRP supports between 1 hour 30 minutes to 4 hours.
- - For example, if SUC has a Maintenance Window of 30 Minutes, then the equivalent MRP schedule will have it for 1 hour 30 minutes.
- - For example, if SUC has a Maintenance Window of 6 hours, then the equivalent MRP schedule will have it for 4 hours.
-- When the migration runbook is executed multiple times, say you did Migrate All automation schedules and then again tried to migrate all the schedules, then the migration runbook will run the same logic. Doing it again will update the MRP schedule if any new change is present in SUC. It won't make duplicate config assignments. Also, operations are carried out only for automation schedules having enabled schedules. If an SUC was **Migrated** earlier, it will be skipped in the next turn as its underlying schedule will be **Disabled**.
-- In the end, you can resolve more machines from Azure Resource Graph as in Azure Update Manager; you can't check if the Hybrid Runbook Worker is reporting or not, unlike in Automation Update Management where it was an intersection of Dynamic Queries and Hybrid Runbook Worker.
-
-
-## Manual migration guidance
-
-Guidance to move various capabilities is provided in the following table:
-
-**S.No** | **Capability** | **Automation Update Management** | **Azure Update Manager** | **Steps using Azure portal** | **Steps using API/script** |
- | | | | | |
-1 | Patch management for Off-Azure machines. | Could run with or without Arc connectivity. | Azure Arc is a prerequisite for non-Azure machines. | 1. [Create service principal](../app-service/quickstart-php.md#1get-the-sample-repository) </br> 2. [Generate installation script](../azure-arc/servers/onboard-service-principal.md#generate-the-installation-script-from-the-azure-portal) </br> 3. [Install agent and connect to Azure](../azure-arc/servers/onboard-service-principal.md#install-the-agent-and-connect-to-azure) | 1. [Create service principal](../azure-arc/servers/onboard-service-principal.md#azure-powershell) <br> 2. [Generate installation script](../azure-arc/servers/onboard-service-principal.md#generate-the-installation-script-from-the-azure-portal) </br> 3. [Install agent and connect to Azure](../azure-arc/servers/onboard-service-principal.md#install-the-agent-and-connect-to-azure) |
-2 | Enable periodic assessment to check for latest updates automatically every few hours. | Machines automatically receive the latest updates every 12 hours for Windows and every 3 hours for Linux. | Periodic assessment is an update setting on your machine. If it's turned on, the Update Manager fetches updates every 24 hours for the machine and shows the latest update status. | 1. [Single machine](manage-update-settings.md#configure-settings-on-a-single-vm) </br> 2. [At scale](manage-update-settings.md#configure-settings-at-scale) </br> 3. [At scale using policy](periodic-assessment-at-scale.md) | 1. [For Azure VM](../virtual-machines/automatic-vm-guest-patching.md#azure-powershell-when-updating-a-windows-vm) </br> 2.[For Arc-enabled VM](/powershell/module/az.connectedmachine/update-azconnectedmachine) |
-3 | Static Update deployment schedules (Static list of machines for update deployment). | Automation Update management had its own schedules. | Azure Update Manager creates a [maintenance configuration](../virtual-machines/maintenance-configurations.md) object for a schedule. So, you need to create this object, copying all schedule settings from Automation Update Management to Azure Update Manager schedule. | 1. [Single VM](scheduled-patching.md#schedule-recurring-updates-on-a-single-vm) </br> 2. [At scale](scheduled-patching.md#schedule-recurring-updates-at-scale) </br> 3. [At scale using policy](scheduled-patching.md#onboard-to-schedule-by-using-azure-policy) | [Create a static scope](manage-vms-programmatically.md) |
-4 | Dynamic Update deployment schedules (Defining scope of machines using resource group, tags, etc. that is evaluated dynamically at runtime).| Same as static update schedules. | Same as static update schedules. | [Add a dynamic scope](manage-dynamic-scoping.md#add-a-dynamic-scope) | [Create a dynamic scope]( tutorial-dynamic-grouping-for-scheduled-patching.md#create-a-dynamic-scope) |
-5 | Deboard from Azure Automation Update management. | After you complete the steps 1, 2, and 3, you need to clean up Azure Update management objects. | | [Remove Update Management solution](../automation/update-management/remove-feature.md#remove-updatemanagement-solution) </br> | NA |
6 | Reporting | Custom update reports using Log Analytics queries. | Update data is stored in Azure Resource Graph (ARG). Customers can query ARG data to build custom dashboards, workbooks etc. | The old Automation Update Management data stored in Log analytics can be accessed, but there's no provision to move data to ARG. You can write ARG queries to access data that will be stored to ARG after virtual machines are patched via Azure Update Manager. With ARG queries you can build dashboards and workbooks using the following instructions: </br> 1. [Log structure of Azure Resource graph updates data](query-logs.md) </br> 2. [Sample ARG queries](sample-query-logs.md) </br> 3. [Create workbooks](manage-workbooks.md) | NA |
-7 | Customize workflows using pre and post scripts. | Available as Automation runbooks. | We recommend that you try out the Public Preview for pre and post scripts on your non-production machines and use the feature on production workloads once the feature enters General Availability. |[Manage pre and post events (preview)](manage-pre-post-events.md) and [Tutorial: Create pre and post events using a webhook with Automation](tutorial-webhooks-using-runbooks.md) | |
-8 | Create alerts based on updates data for your environment | Alerts can be set up on updates data stored in Log Analytics. | We recommend that you try out the Public Preview for alerts on your non-production machines and use the feature on production workloads once the feature enters General Availability. |[Create alerts (preview)](manage-alerts.md) | |
--
-## Next steps
--- [Guidance on migrating Azure VMs from Microsoft Configuration Manager to Azure Update Manager](./guidance-migration-azure.md)
update-manager Guidance Migration Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/guidance-migration-azure.md
description: Patching guidance overview for Microsoft Configuration Manager to A
Previously updated : 07/05/2024 Last updated : 07/31/2024
Before initiating migration, you need to understand mapping between System Cente
| System Center Operations Manager (SCOM) | Azure Monitor SCOM Managed Instance |
| System Center Configuration Manager (SCCM), now called Microsoft Configuration Manager (MCM) | Azure Update Manager, </br> Change Tracking and Inventory, </br> Guest Config, </br> Azure Automation, </br> Desired State Configuration (DSC), </br> Defender for Cloud |
| System Center Virtual Machine Manager (SCVMM) | Arc enabled System Center VMM |
-| System Center Data Protection Manager (SCDPM) | Arc enabled DPM |
-| System Center Orchestrator (SCORCH) | Arc enabled DPM |
+| System Center Data Protection Manager (SCDPM) | DPM |
+| System Center Orchestrator (SCORCH) | Azure Automation |
| System Center Service Manager (SCSM) | - |

> [!NOTE]
update-manager Migration Key Points https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/migration-key-points.md
+
+ Title: Important details during migration either by portal or scripts in Azure Update Manager
+description: A summary of important pointers while migrating using Azure portal or migration scripts in Azure Update Manager
+++ Last updated : 07/30/2024+++
+# Key points for automated migration
+
+**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers
+
+This article lists the significant details that you must note when you're migrating using the portal migration tool or migration scripts.
+
+## Important reminders
+
+- Non-Azure Saved Search Queries aren't migrated.
+- The Migration and Deboarding Runbooks need to have the Az.Modules updated to work.
+- The prerequisite script updates the Az.Modules to the latest version 8.0.0.
+- The StartTime of the MRP Schedule will be equal to the nextRunTime of the Software Update Configuration.
+- Data from Log Analytics isn't migrated.
+- User Managed Identities [don't support](/entra/identity/managed-identities-azure-resources/managed-identities-faq#can-i-use-a-managed-identity-to-access-a-resource-in-a-different-directorytenant) cross tenant scenarios.
+- RebootOnly Setting isn't available in Azure Update Manager. Schedules with the RebootOnly setting aren't migrated.
+- For Recurrence, Automation schedules support values from 1 to 100 for Hourly/Daily/Weekly/Monthly schedules, whereas Azure Update Manager's maintenance configuration supports from 6 to 35 for Hourly and from 1 to 35 for Daily/Weekly/Monthly. See the following examples and the conversion sketch after the table:
+
+ | **Automation schedule recurrence** | **Maintenance configuration schedule recurrence calculation** |
+ |||
+ | **100 hours** | 100/24 = 4.16 (Round to Nearest Value) -> every four days |
+ | **1 hour** | Every 6 hours as it is the minimum value |
+ | **100 days** | 100/7 = 14.28 (Round to Nearest Value) -> every 14 weeks |
+ | **100 weeks** | 100/4.34 = 23.04 (Round to Nearest Value) -> every 23 Months |
+ | **Every 100 Weeks and must be Executed on Fridays** | 23 Months (100/4.34). But there's no way in Azure Update Manager to specify execution every 23 months on all Fridays of that month, so the schedule isn't migrated. |
+ | **More than 35 Months** | 35 months recurrence |
+
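As a quick illustration of the rounding for an hourly Automation recurrence, here is a hedged helper that mirrors the examples in the table above; it isn't the runbook's actual code, and the function name is made up for this sketch.

```powershell
# Convert an hourly Automation recurrence (1-100 hours) to the nearest supported
# maintenance configuration recurrence, mirroring the examples in the table above.
function Convert-HourlyRecurrence {
    param([int]$Hours)
    if ($Hours -le 35) {
        return "$([Math]::Max($Hours, 6)) hours"   # hourly minimum in maintenance configurations is 6
    }
    return "$([Math]::Round($Hours / 24)) days"    # for example, 100 hours -> 4 days
}

Convert-HourlyRecurrence -Hours 1     # "6 hours"
Convert-HourlyRecurrence -Hours 100   # "4 days"
```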
+- SUC supports a maintenance window from 30 minutes to six hours. MRP supports from 1 hour 30 minutes to 4 hours. See the following examples and the sketch after the table:
+
+ | **Maintenance window in Automation Update Management** | **Maintenance window in Azure Update Manager** |
+ |||
+ | **30 minutes** | 1 hour 30 minutes |
+ | **6 hours** | 4 hours |
+
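The clamping can be illustrated with a small hedged helper that maps an Automation Update Management maintenance window (in minutes) into the supported range; again, this is a sketch rather than the runbook's actual logic, and the function name is hypothetical.

```powershell
# Clamp a maintenance window (in minutes) into the range supported by
# maintenance configurations: 90 minutes (1 h 30 m) to 240 minutes (4 h).
function Convert-MaintenanceWindow {
    param([int]$Minutes)
    [Math]::Min([Math]::Max($Minutes, 90), 240)
}

Convert-MaintenanceWindow -Minutes 30    # 90  (1 hour 30 minutes)
Convert-MaintenanceWindow -Minutes 360   # 240 (4 hours)
```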
+- When the migration runbook is executed multiple times, say you migrated all automation schedules and then tried to migrate all the schedules again, the migration runbook runs the same logic. Doing it again updates the MRP schedule if any new change is present in the SUC. It doesn't make duplicate config assignments. Also, operations are carried out only for automation schedules that have enabled schedules. If an SUC was **Migrated** earlier, it's skipped in the next run because its underlying schedule is **Disabled**.
+- You might resolve more machines from Azure Resource Graph in Azure Update Manager because, unlike in Automation Update Management where the scope was the intersection of Dynamic Queries and reporting Hybrid Runbook Workers, you can't check whether the Hybrid Runbook Worker is reporting.
+- Machines that are unsupported in Azure Update Manager aren't migrated. Schedules that have such machines are partially migrated, and only the supported machines of the software update configuration are moved to Azure Update Manager. To prevent patching by both Automation Update Management and Azure Update Manager, remove migrated machines from deployment schedules in Automation Update Management.
+
+Post-migration, a Software Update Configuration can have any one of the following four migration statuses:
+
+- MigrationFailed
+- PartiallyMigrated
+- NotMigrated
+- Migrated
+
+The following table shows the scenarios associated with each Migration Status:
+
+| **MigrationFailed** | **PartiallyMigrated** |**NotMigrated** | **Migrated** |
+|||||
| Failed to create Maintenance Configuration for the Software Update Configuration.| Non-Zero number of Machines where Patch-Settings failed to apply. </br> For example, if a machine is unsupported in Azure Update Manager, then the status of the Software Update Configuration will be partially migrated. | Failed to get the software update configuration from the API due to a client/server error such as an **Internal Service Error**. | Zero Machines where Patch-Settings failed to apply </br> **And** </br> Zero Machines with failed Configuration Assignments. </br> **And** </br> Zero Dynamic Queries that failed to resolve, that is, failed to execute the query against Azure Resource Graph. </br> **And** </br> Zero Dynamic Scope Configuration assignment failures </br> **And** </br> Software Update Configuration has zero Saved Search Queries.|
| | Non-Zero number of Machines with failed Configuration Assignments. | Software Update Configuration has its reboot setting as reboot only. This isn't supported today in Azure Update Manager. | |
| | Non-Zero number of Dynamic Queries that failed to resolve, that is, failed to execute the query against Azure Resource Graph. | Software Update Configuration doesn't have a succeeded provisioning state in the DB. | |
| | Non-Zero number of Dynamic Scope Configuration assignment failures. | Software Update Configuration is in an errored state in the DB. | |
| | Software Update Configuration has Saved Search Queries. | Schedule associated with the Software Update Configuration was already expired at the time of migration. | |
| | Software Update Configuration has pre/post tasks that haven't been migrated successfully. | Schedule associated with the Software Update Configuration is disabled. | |
| | | Unhandled exception while migrating the software update configuration. | |
+
+## Next steps
+
+- [An overview of migration](migration-overview.md)
+- [Migration using Azure portal](migration-using-portal.md)
+- [Migration using runbook scripts](migration-using-runbook-scripts.md)
+- [Manual migration guidance](migration-manual.md)
update-manager Migration Manual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/migration-manual.md
+
+ Title: Manual migration from Automation Update Management to Azure Update Manager
+description: Guidance on manual migration while migrating from Automation Update Management to Azure Update Manager.
+++ Last updated : 05/09/2024+++
+# Manual migration guidance
+
+**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers
+
+The article provides the guidance to move various resources when you migrate manually.
+
+## Guidance to move various resources
++
+**S.No** | **Capability** | **Automation Update Management** | **Azure Update Manager** | **Steps using Azure portal** | **Steps using API/script** |
+ | | | | | |
+1 | Patch management for Off-Azure machines. | Could run with or without Arc connectivity. | Azure Arc is a prerequisite for non-Azure machines. | 1. [Create service principal](../app-service/quickstart-php.md#1get-the-sample-repository) </br> 2. [Generate installation script](../azure-arc/servers/onboard-service-principal.md#generate-the-installation-script-from-the-azure-portal) </br> 3. [Install agent and connect to Azure](../azure-arc/servers/onboard-service-principal.md#install-the-agent-and-connect-to-azure) | 1. [Create service principal](../azure-arc/servers/onboard-service-principal.md#azure-powershell) <br> 2. [Generate installation script](../azure-arc/servers/onboard-service-principal.md#generate-the-installation-script-from-the-azure-portal) </br> 3. [Install agent and connect to Azure](../azure-arc/servers/onboard-service-principal.md#install-the-agent-and-connect-to-azure) |
+2 | Enable periodic assessment to check for latest updates automatically every few hours. | Machines automatically receive the latest updates every 12 hours for Windows and every 3 hours for Linux. | Periodic assessment is an update setting on your machine. If it's turned on, the Update Manager fetches updates every 24 hours for the machine and shows the latest update status. | 1. [Single machine](manage-update-settings.md#configure-settings-on-a-single-vm) </br> 2. [At scale](manage-update-settings.md#configure-settings-at-scale) </br> 3. [At scale using policy](periodic-assessment-at-scale.md) | 1. [For Azure VM](../virtual-machines/automatic-vm-guest-patching.md#azure-powershell-when-updating-a-windows-vm) </br> 2. [For Arc-enabled VM](/powershell/module/az.connectedmachine/update-azconnectedmachine) |
+3 | Static Update deployment schedules (Static list of machines for update deployment). | Automation Update management had its own schedules. | Azure Update Manager creates a [maintenance configuration](../virtual-machines/maintenance-configurations.md) object for a schedule. So, you need to create this object, copying all schedule settings from Automation Update Management to Azure Update Manager schedule. | 1. [Single VM](scheduled-patching.md#schedule-recurring-updates-on-a-single-vm) </br> 2. [At scale](scheduled-patching.md#schedule-recurring-updates-at-scale) </br> 3. [At scale using policy](scheduled-patching.md#onboard-to-schedule-by-using-azure-policy) | [Create a static scope](manage-vms-programmatically.md) |
+4 | Dynamic Update deployment schedules (Defining scope of machines using resource group, tags, etc. that is evaluated dynamically at runtime).| Same as static update schedules. | Same as static update schedules. | [Add a dynamic scope](manage-dynamic-scoping.md#add-a-dynamic-scope) | [Create a dynamic scope]( tutorial-dynamic-grouping-for-scheduled-patching.md#create-a-dynamic-scope) |
+5 | Deboard from Azure Automation Update management. | After you complete the steps 1, 2, and 3, you need to clean up Azure Update management objects. | | [Remove Update Management solution](../automation/update-management/remove-feature.md#remove-updatemanagement-solution) </br> | NA |
+6 | Reporting | Custom update reports using Log Analytics queries. | Update data is stored in Azure Resource Graph (ARG). Customers can query ARG data to build custom dashboards, workbooks etc. | The old Automation Update Management data stored in Log analytics can be accessed, but there's no provision to move data to ARG. You can write ARG queries to access data that will be stored to ARG after virtual machines are patched via Azure Update Manager. With ARG queries you can build dashboards and workbooks using the following instructions (an illustrative query sketch follows this table): </br> 1. [Log structure of Azure Resource graph updates data](query-logs.md) </br> 2. [Sample ARG queries](sample-query-logs.md) </br> 3. [Create workbooks](manage-workbooks.md) | NA |
+7 | Customize workflows using pre and post scripts. | Available as Automation runbooks. | We recommend that you try out the Public Preview for pre and post scripts on your non-production machines and use the feature on production workloads once the feature enters General Availability. |[Manage pre and post events (preview)](manage-pre-post-events.md) and [Tutorial: Create pre and post events using a webhook with Automation](tutorial-webhooks-using-runbooks.md) | |
+8 | Create alerts based on updates data for your environment | Alerts can be set up on updates data stored in Log Analytics. | We recommend that you try out the Public Preview for alerts on your non-production machines and use the feature on production workloads once the feature enters General Availability. |[Create alerts (preview)](manage-alerts.md) | |
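To get started with row 6 (reporting), the following is a minimal sketch of querying Azure Resource Graph from PowerShell. It assumes the Az.ResourceGraph module is installed and that your update assessment data lands in the `patchassessmentresources` table described in the log-structure article linked above; adjust the query to your own needs.

```powershell
# Query Azure Resource Graph for recent update assessment data (illustrative).
Install-Module -Name Az.ResourceGraph -Repository PSGallery -Force

Search-AzGraph -Query "patchassessmentresources | project name, type, properties | limit 10"
```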
++
+## Next steps
+
+- [An overview of migration](migration-overview.md)
+- [Migration using Azure portal](migration-using-portal.md)
+- [Migration using runbook scripts](migration-using-runbook-scripts.md)
+- [Key points during migration](migration-key-points.md)
update-manager Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/migration-overview.md
+
+ Title: An overview on how to move virtual machines from Automation Update Management to Azure Update Manager
+description: A guidance overview on migration from Automation Update Management to Azure Update Manager
+++ Last updated : 05/09/2024+++
+# Overview on migration from Automation Update Management to Azure Update Manager
+
+**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers
+
+This article provides guidance to move virtual machines from Automation Update Management to Azure Update Manager.
+
+Azure Update Manager provides a SaaS solution to manage and govern software updates to Windows and Linux machines across Azure, on-premises, and multicloud environments. It's an evolution of [Azure Automation Update management solution](../automation/update-management/overview.md) with new features and functionality, for assessment and deployment of software updates on a single machine or on multiple machines at scale.
+
+> [!Note]
+> - On 31 August 2024, both Azure Automation Update Management and the Log Analytics agent it uses [will be retired](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). Therefore, if you are using the Azure Automation Update Management solution, we recommend that you move to Azure Update Manager for your software update needs. Follow guidance in this document to move your machines and schedules from Automation Update Management to Azure Update Manager. For more information, see the [FAQs on retirement](https://aka.ms/aum-migration-faqs). You can [sign up](https://aka.ms/AUMLive) for monthly live sessions on migration, including Q&A sessions.
+> - If you are using Azure Automation Update Management Solution, we recommend that you don't remove MMA agents from the machines without completing the migration to Azure Update Manager for the machine's patch management needs. If you remove the MMA agent from the machine without moving to Azure Update Manager, it will break the patching workflows for that machine.
+
+For Azure Update Manager, neither AMA nor MMA is a requirement for managing software update workflows, because it relies on the Microsoft Azure VM agent for Azure VMs and the Azure connected machine agent for Arc-enabled servers. When you perform an update operation for the first time on a machine, an extension is pushed to the machine, and it interacts with the agents to assess missing updates and install updates.
+
+We provide three methods to move from Automation Update Management to Azure Update Manager, each explained in detail in the following articles:
+- [Portal migration tool](migration-using-portal.md)
+- [Migration runbook scripts](migration-using-runbook-scripts.md)
+- [Manual migration](migration-manual.md)
++
+## Next steps
+
+- [Migration using Azure portal](migration-using-portal.md)
+- [Migration using runbook scripts](migration-using-runbook-scripts.md)
+- [Manual migration guidance](migration-manual.md)
+- [Key points during migration](migration-key-points.md)
update-manager Migration Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/migration-using-portal.md
+
+ Title: Use Azure portal to move schedules and machines from Automation Update Management to Azure Update Manager
+description: Guidance on how to use Azure portal to move schedules and machines from Automation Update Management to Azure Update Manager
+++ Last updated : 07/30/2024+++
+# Move schedules and machines from Automation Update Management to Azure Update Manager using Azure portal
+
+**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers
+
+
+This article explains how to use the Azure portal to move schedules and machines from Automation Update Management to Azure Update Manager. With minimal clicks and an automated way to move your resources, it's the easiest way to migrate if you don't have customizations built on top of your Automation Update Management solution. For more details on what this portal tool does in the backend, see [migration scripts](migration-using-runbook-scripts.md).
+
+## Azure portal experience
+
+To access the portal migration experience, you can use several entry points.
+
+Select the **Migrate Now** button on any of the following entry points. After the selection, you're guided through the process of moving your schedules and machines to Azure Update Manager. This process is designed to be user-friendly and straightforward so that you can complete the migration with minimal effort.
+
+You can migrate from any of the following entry points:
+
+#### [Automation Update Management](#tab/update-mgmt)
+
+Select the **Migrate Now** button.
+
+ :::image type="content" source="./media/migration-using-portal/migrate-from-update-management.png" alt-text="Screenshot that shows how to migrate from Automation Update Management entry point." lightbox="./media/migration-using-portal/migrate-from-update-management.png":::
+
+The migration blade opens. It contains a summary of all resources in the Automation account, including machines and schedules. By default, the Automation account from which you accessed this blade is preselected.
+
+Here, you can see how many Azure machines, Arc-enabled servers, non-Azure non-Arc-enabled servers, and schedules are enabled in Automation Update Management and need to be moved to Azure Update Manager. You can also view the details of these resources.
++
+After you review the resources that must be moved, you can proceed with the migration, which is a three-step process:
+
+1. **Prerequisites**
+
+ This includes two steps:
+
+ a. **Onboard non-Azure non-Arc-enabled machines to Arc** - This is because Arc connectivity is a prerequisite for Azure Update Manager. Onboarding your machines to Azure Arc is free of cost, and once you do so, you can use all the management services that are available for any Azure machine. For more information on how to onboard your machines, see the [Azure Arc documentation](../azure-arc/servers/onboard-service-principal.md).
+
+ b. **Download and run PowerShell script locally** - This is required for the creation of a user identity and the appropriate role assignments so that the migration can take place. The script gives the proper RBAC to the user identity on the subscription to which the automation account belongs, on the machines onboarded to Automation Update Management, on the scopes that are part of dynamic queries, and so on, so that configurations can be assigned to the machines, MRP configurations can be created, and the Updates solution can be removed.
+
+ :::image type="content" source="./media/migration-using-portal/prerequisite-migration-update-manager.png" alt-text="Screenshot that shows the prerequisites for migration." lightbox="./media/migration-using-portal/prerequisite-migration-update-manager.png":::
+
+1. **Move resources in Automation account to Azure Update Manager**
+
+ The next step in the migration process is to enable Azure Update Manager on the machines to be moved and create equivalent maintenance configurations for the schedules to be migrated. When you select the **Migrate Now** button, it imports the *MigrateToAzureUpdateManager* runbook into your Automation account and sets the verbose logging to **True**.
+
+ :::image type="content" source="./media/migration-using-portal/step-two-migrate-workload.png" alt-text="Screenshot that shows how to migrate workload in your Automation account." lightbox="./media/migration-using-portal/step-two-migrate-workload.png":::
+
+ Select **Start** runbook, which presents the parameters that must be passed to the runbook.
+
+ :::image type="content" source="./media/migration-using-portal/start-runbook-migration.png" alt-text="Screenshot that shows how to start runbook to allow the parameters to be passed to the runbook." lightbox="./media/migration-using-portal/start-runbook-migration.png":::
+
+ For more information on the parameters and where to fetch them, see [migration of machines and schedules](migration-using-runbook-scripts.md#step-1-migration-of-machines-and-schedules). After you start the runbook with all the parameters passed in, Azure Update Manager begins to be enabled on the machines and the maintenance configurations in Azure Update Manager start getting created. You can monitor the Azure runbook logs for the status of execution and migration of schedules. A hedged PowerShell sketch for starting the runbook outside the portal follows.
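If you prefer to start the imported runbook from PowerShell rather than from the portal, the following is a minimal sketch. The runbook name matches the *MigrateToAzureUpdateManager* runbook imported in the previous step, and the parameter names are the runbook inputs described in this article; all placeholder values are illustrative, not real resources.

```powershell
# Start the imported migration runbook with its required parameters (values are placeholders).
$params = @{
    AutomationAccountResourceId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Automation/automationAccounts/<automation-account>"
    UserManagedServiceIdentityClientId = "<client-id>"
    EnablePeriodicAssessmentForMachinesOnboardedToUpdateManagement = $true
    MigrateUpdateSchedulesAndEnablePeriodicAssessmentonLinkedMachines = $true
    ResourceGroupForMaintenanceConfigurations = "<maintenance-configuration-rg>"
}

Start-AzAutomationRunbook -ResourceGroupName "<resource-group>" `
    -AutomationAccountName "<automation-account>" `
    -Name "MigrateToAzureUpdateManager" `
    -Parameters $params
```

The same pattern applies to the *DeboardFromAutomationUpdateManagement* runbook used in the clean-up step, with the parameters described in the deboarding section.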
++
+1. **Deboard resources from Automation Update management**
+
+ Run the clean-up script to deboard machines from the Automation Update Management solution and disable Automation Update Management schedules.
+
+ After you select the **Run clean-up script** button, the runbook *DeboardFromAutomationUpdateManagement* will be imported into your Automation account, and its verbose logging is set to **True**.
+
+ :::image type="content" source="./media/migration-using-portal/run-clean-up-script.png" alt-text="Screenshot that shows how to perform post migration." lightbox="./media/migration-using-portal/run-clean-up-script.png":::
+
+ When you select **Start**, the runbook asks for the parameters to be passed to it. For more information on the parameters to pass and where to fetch them, see [Deboarding from Automation Update Management solution](migration-using-runbook-scripts.md#step-2-deboarding-from-automation-update-management-solution).
+
+ :::image type="content" source="./media/migration-using-portal/deboard-update-management-start-runbook.png" alt-text="Screenshot that shows how to deboard from Automation Update Management and starting the runbook." lightbox="./media/migration-using-portal/deboard-update-management-start-runbook.png":::
+
+#### [Azure Update Manager](#tab/update-manager)
+
+You can initiate migration from Azure Update Manager. At the top of the screen, you can see a deprecation banner with a **Migrate Now** button.
+
+ :::image type="content" source="./media/migration-using-portal/migration-entry-update-manager.png" alt-text="Screenshot that shows how to migrate from Azure Update Manager entry point." lightbox="./media/migration-using-portal/migration-entry-update-manager.png":::
+
+Select the **Migrate Now** button to view the migration blade, which allows you to select the Automation account whose resources you want to move from Automation Update Management to Azure Update Manager. You must select the subscription, resource group, and finally the Automation account name. After you make the selection, you see a summary of the machines and schedules to be migrated to Azure Update Manager. From here, follow the migration steps listed in [Automation Update Management](#azure-portal-experience).
+
+#### [Virtual machine](#tab/virtual-machine)
+
+To initiate migration from a single VM **Updates** view, follow these steps:
+
+1. Select the machine that is enabled for Automation Update Management and under **Operations**, select **Updates**.
+1. In the deprecation banner, select the **Migrate Now** button.
+
+ :::image type="content" source="./media/migration-using-portal/migrate-single-virtual-machine.png" alt-text="Screenshot that shows how to migrate from single virtual machine entry point." lightbox="./media/migration-using-portal/migrate-single-virtual-machine.png":::
+
+ You can see that the Automation account to which the machine belongs is preselected and a summary of all resources in the Automation account is presented. This allows you to migrate the resources from Automation Update Management to Azure Update Manager.
+
+ :::image type="content" source="./media/migration-using-portal/single-vm-migrate-now.png" alt-text="Screenshot that shows how to migrate the resources from single virtual machine entry point." lightbox="./media/migration-using-portal/single-vm-migrate-now.png":::
+++
+## Next steps
+
+- [An overview of migration](migration-overview.md)
+- [Migration using runbook scripts](migration-using-runbook-scripts.md)
+- [Manual migration guidance](migration-manual.md)
+- [Key points during migration](migration-key-points.md)
update-manager Migration Using Runbook Scripts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/migration-using-runbook-scripts.md
+
+ Title: Use migration runbooks to migrate workloads from Automation Update Management to Azure Update Manager
+description: Guidance on how to use migration runbooks to move schedules and machines from Automation Update Management to Azure Update Manager
+++ Last updated : 07/30/2024+++
+# Migration using Automated runbook scripts
+
+**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers
+
+This article details how you can use migration runbooks to automatically migrate all workloads (machines and schedules) from Automation Update Management to Azure Update Manager.
+
+The following sections detail how to run the script, what it does in the backend, the expected behavior, and any limitations, if applicable. The script can migrate all the machines and schedules in one automation account in one go. If you have multiple automation accounts, you have to run the runbook for each of them.
+
+At a high level, you onboard non-Azure machines to Azure Arc, run a prerequisite script to create the user identity and role assignments, run the migration runbook to migrate machines and schedules, and finally run the deboarding runbook, as detailed in the following sections.
++
+### Unsupported scenarios
+
+- Non-Azure Saved Search Queries won't be migrated; these have to be migrated manually.
+
+For the complete list of limitations and things to note, see the [key points in migration](migration-key-points.md).
+
+### Step-by-step guide
+
+Each of these steps is explained in detail below.
+
+#### Prerequisite 1: Onboard Non-Azure Machines to Arc
+
+**What to do**
+
+Migration automation runbook ignores resources that aren't onboarded to Arc. It's therefore a prerequisite to onboard all non-Azure machines on to Azure Arc before running the migration runbook. Follow the steps to [onboard machines on to Azure Arc](../azure-arc/servers/onboard-service-principal.md).
+
+#### Prerequisite 2: Create User Identity and Role Assignments by running PowerShell script
++
+**A. Prerequisites to run the script**
+
+ - Run the command `Install-Module -Name Az -Repository PSGallery -Force` in PowerShell. The prerequisite script depends on Az.Modules. This step is required if Az.Modules aren't present or updated.
+ - To run this prerequisite script, you must have *Microsoft.Authorization/roleAssignments/write* permissions on all the subscriptions that contain Automation Update Management resources such as machines, schedules, log analytics workspace, and automation account. See [how to assign an Azure role](../role-based-access-control/role-assignments-rest.md#assign-an-azure-role).
+ - You must have the [Update Management Permissions](../automation/automation-role-based-access-control.md).
+
 :::image type="content" source="./media/migration-using-runbook-scripts/prerequisite-install-module.png" alt-text="Screenshot that shows the command to install the module." lightbox="./media/migration-using-runbook-scripts/prerequisite-install-module.png":::
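If you want to avoid forcing a reinstall when the modules are already present, a minimal sketch follows. It assumes the PowerShellGet module (which provides `Get-InstalledModule`) is available in your session.

```powershell
# Install the Az modules only if they aren't already installed (illustrative check).
if (-not (Get-InstalledModule -Name Az -ErrorAction SilentlyContinue)) {
    Install-Module -Name Az -Repository PSGallery -Force
}
```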
++
+**B. Run the script**
+
+ Download and run the PowerShell script [`MigrationPrerequisiteScript`](https://github.com/azureautomation/Preqrequisite-for-Migration-from-Azure-Automation-Update-Management-to-Azure-Update-Manager/blob/main/MigrationPrerequisites.ps1) locally. This script takes AutomationAccountResourceId of the Automation account to be migrated and AutomationAccountAzureEnvironment as the inputs. The accepted values for AutomationAccountAzureEnvironment are AzureCloud, AzureUSGovernment and AzureChina signifying the cloud to which the automation account belongs.
+
+ :::image type="content" source="./media/migration-using-runbook-scripts/run-script.png" alt-text="Screenshot that shows how to download and run the script." lightbox="./media/migration-using-runbook-scripts/run-script.png":::
+
+ You can fetch AutomationAccountResourceId by going to **Automation Account** > **Properties**.
+
+ :::image type="content" source="./media/migration-using-runbook-scripts/fetch-resource-id.png" alt-text="Screenshot that shows how to fetch the resource ID." lightbox="./media/migration-using-runbook-scripts/fetch-resource-id.png":::
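The following is a minimal sketch of fetching the resource ID with Az PowerShell and invoking the downloaded script. The parameter names mirror the inputs described above, but the script itself defines the exact names, so treat them as assumptions and check the script before running it; the placeholder values are illustrative.

```powershell
# Fetch the Automation account resource ID (placeholders are illustrative).
$resourceId = (Get-AzResource -ResourceGroupName "<resource-group>" `
    -ResourceType "Microsoft.Automation/automationAccounts" `
    -Name "<automation-account>").ResourceId

# Invoke the downloaded prerequisite script; parameter names are assumed from the inputs above.
.\MigrationPrerequisites.ps1 `
    -AutomationAccountResourceId $resourceId `
    -AutomationAccountAzureEnvironment "AzureCloud"
```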
+
+**C. Verify**
+
+ After you run the script, verify that a user managed identity is created in the automation account. **Automation account** > **Identity** > **User Assigned**.
+
+ :::image type="content" source="./media/migration-using-runbook-scripts/script-verification.png" alt-text="Screenshot that shows how to verify that a user managed identity is created." lightbox="./media/migration-using-runbook-scripts/script-verification.png":::
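You can also check for the identity from PowerShell. A hedged sketch follows, assuming the Az.ManagedServiceIdentity module is installed; it also surfaces the client ID that the migration runbook needs later.

```powershell
# List user-assigned identities in the Automation account's resource group and
# pick out the one created by the script (its name ends in _aummig_umsi).
Get-AzUserAssignedIdentity -ResourceGroupName "<resource-group>" |
    Where-Object { $_.Name -like "*_aummig_umsi" } |
    Select-Object Name, ClientId, PrincipalId
```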
+
+**D. Backend operations by the script**
+
+ - Updates the Az.Modules for the Automation account, which are required for running the migration and deboarding scripts.
+ - Creates an automation variable named AutomationAccountAzureEnvironment that stores the Azure cloud environment to which the Automation account belongs.
+ - Creates a user identity in the same subscription and resource group as the Automation account. The name of the user identity looks like *AutomationAccount_aummig_umsi*.
+ - Attaches the user identity to the Automation account.
+ - Assigns the following permissions to the user managed identity: [Update Management Permissions Required](../automation/automation-role-based-access-control.md#update-management-permissions).
++
+ 1. To do this, the script fetches all the machines onboarded to Automation Update Management under this automation account and parses their subscription IDs to grant the required RBAC to the user identity.
+ 1. The script grants the user identity the proper RBAC on the subscription to which the automation account belongs so that the MRP configs can be created there.
+ 1. The script assigns the required roles for the Log Analytics workspace and solution.
+- Registers the required subscriptions to the Microsoft.Maintenance and Microsoft.EventGrid resource providers.
+
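+For context, the role assignments that the prerequisite script grants are ordinary Azure RBAC assignments. The following is an illustrative sketch only; the script performs the assignments for you, and the role name and scope shown here are placeholders rather than the exact roles it uses:
+
+```powershell
+# Illustrative only: the prerequisite script grants the Update Management roles for you.
+$identity = Get-AzUserAssignedIdentity -ResourceGroupName "<automation-account-rg>" -Name "<automation-account>_aummig_umsi"
+
+# "Contributor" and the subscription scope are placeholders for the required roles and scopes.
+New-AzRoleAssignment -ObjectId $identity.PrincipalId `
+    -RoleDefinitionName "Contributor" `
+    -Scope "/subscriptions/<subscription-id>"
+```
+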
+#### Step 1: Migration of machines and schedules
+
+This step involves using an automation runbook to migrate all the machines and schedules from an automation account to Azure Update Manager.
+
+**Follow these steps:**
+
+1. Import the [migration runbook](https://github.com/azureautomation/Migrate-from-Azure-Automation-Update-Management-to-Azure-Update-Manager/blob/main/Migration.ps1) from the Runbooks gallery. Search for **azure automation update** in the gallery, import the runbook named **Migrate from Azure Automation Update Management to Azure Update Manager**, and then publish it.
+
+ :::image type="content" source="./media/migration-using-runbook-scripts/migrate-from-automation-update-management.png" alt-text="Screenshot that shows how to migrate from Automation Update Management." lightbox="./media/migration-using-runbook-scripts/migrate-from-automation-update-management.png":::
+
+ The runbook supports PowerShell 5.1.
+
+ :::image type="content" source="./media/migration-using-runbook-scripts/runbook-support.png" alt-text="Screenshot that shows runbook supports PowerShell 5.1 while importing." lightbox="./media/migration-using-runbook-scripts/runbook-support.png":::
+
+1. Set Verbose Logging to True for the runbook.
+
+ :::image type="content" source="./media/migration-using-runbook-scripts/verbose-log-records.png" alt-text="Screenshot that shows how to set verbose log records." lightbox="./media/migration-using-runbook-scripts/verbose-log-records.png":::
+
+1. Run the runbook and pass the required parameters, such as AutomationAccountResourceId and UserManagedServiceIdentityClientId. (A scripted alternative is sketched after these steps.)
+
+ :::image type="content" source="./media/migration-using-runbook-scripts/run-runbook-parameters.png" alt-text="Screenshot that shows how to run the runbook and pass the required parameters." lightbox="./media/migration-using-runbook-scripts/run-runbook-parameters.png":::
+
+ 1. You can fetch AutomationAccountResourceId from **Automation Account** > **Properties**.
+
+ :::image type="content" source="./media/migration-using-runbook-scripts/fetch-resource-id-portal.png" alt-text="Screenshot that shows how to fetch Automation account resource ID." lightbox="./media/migration-using-runbook-scripts/fetch-resource-id-portal.png":::
+
+ 1. You can fetch UserManagedServiceIdentityClientId from **Automation Account** > **Identity** > **User Assigned** > **Identity** > **Properties** > **Client ID**.
+
+ :::image type="content" source="./media/migration-using-runbook-scripts/fetch-client-id.png" alt-text="Screenshot that shows how to fetch client ID." lightbox="./media/migration-using-runbook-scripts/fetch-client-id.png":::
+
+ 1. Setting **EnablePeriodicAssessmentForMachinesOnboardedToUpdateManagement** to **TRUE** enables the periodic assessment property on all the machines onboarded to Automation Update Management.
+
+ 1. Setting **MigrateUpdateSchedulesAndEnablePeriodicAssessmentonLinkedMachines** to **TRUE** migrates all the update schedules in Automation Update Management to Azure Update Manager and also sets the periodic assessment property to **True** on all the machines linked to these schedules.
+
+ 1. Specify **ResourceGroupForMaintenanceConfigurations**, the resource group in which all the maintenance configurations in Azure Update Manager are created. If you supply a new name, a new resource group is created and the maintenance configurations are placed in it. If a resource group with that name already exists, the maintenance configurations are created in the existing resource group.
+
+1. Check the Azure runbook logs for the execution status and the migration status of software update configurations (SUCs).
+
+ :::image type="content" source="./media/migration-using-runbook-scripts/log-status.png" alt-text="Screenshot that shows the runbook logs." lightbox="./media/migration-using-runbook-scripts/log-status.png":::
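+
+If you prefer to start the migration runbook from PowerShell instead of the portal, the following is a minimal sketch. The runbook name placeholder and parameter names mirror what's described in the preceding steps; confirm both against the runbook you imported and published.
+
+```powershell
+# Parameter names mirror the inputs described above; verify them against the imported runbook.
+$params = @{
+    AutomationAccountResourceId                                       = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Automation/automationAccounts/<automation-account>"
+    UserManagedServiceIdentityClientId                                = "<client-id-of-user-assigned-identity>"
+    EnablePeriodicAssessmentForMachinesOnboardedToUpdateManagement    = "true"
+    MigrateUpdateSchedulesAndEnablePeriodicAssessmentonLinkedMachines = "true"
+    ResourceGroupForMaintenanceConfigurations                         = "<resource-group-for-maintenance-configurations>"
+}
+
+Start-AzAutomationRunbook `
+    -ResourceGroupName "<automation-account-rg>" `
+    -AutomationAccountName "<automation-account>" `
+    -Name "<name-of-the-imported-migration-runbook>" `
+    -Parameters $params
+```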
+
+**Runbook operations in backend**
+
+The migration runbook does the following tasks:
+
+- Enables periodic assessment on all machines.
+- Migrates all schedules in the Automation account to Azure Update Manager and creates a corresponding maintenance configuration for each of them, with the same properties.
+
+**About the script**
+
+The following is the behavior of the migration script:
+
+- Checks whether a resource group with the name taken as input is already present in the subscription of the Automation account. If not, it creates a resource group with the name specified by the customer. This resource group is used for creating the MRP configs for V2.
+- The RebootOnly setting isn't available in Azure Update Manager, so schedules that have the RebootOnly setting aren't migrated.
+- Filters out SUCs that are in the errored, expired, provisioningFailed, or disabled state, marks them as **Not Migrated**, and prints logs indicating that such SUCs aren't migrated.
+- The configuration assignment name is a string in the format **AUMMig_AAName_SUCName**.
+- Checks against Azure Resource Graph whether the dynamic scope is already assigned to the maintenance configuration. The assignment is created only if it doesn't already exist, with a name in the format **AUMMig_AAName_SUCName_SomeGUID**.
+- For schedules that have pre/post tasks configured, the script creates an automation webhook for the runbooks in the pre/post tasks and Event Grid subscriptions for pre/post maintenance events. For more information, see [how pre/post works in Azure Update Manager](tutorial-webhooks-using-runbooks.md).
+- A summarized set of logs is printed to the output stream to give an overall status of machines and SUCs.
+- Detailed logs are printed to the verbose stream.
+- Post-migration, a Software Update Configuration can have any one of the following four migration statuses:
+
+ - **MigrationFailed**
+ - **PartiallyMigrated**
+ - **NotMigrated**
+ - **Migrated**
+
+The following table shows the scenarios associated with each migration status.
+
+| **MigrationFailed** | **PartiallyMigrated** | **NotMigrated** | **Migrated** |
+|||||
+| Failed to create a maintenance configuration for the software update configuration.| Nonzero number of machines where patch settings failed to apply.| Failed to get the software update configuration from the API because of a client or server error, such as an **internal service error**.| |
+| | Nonzero number of machines with failed configuration assignments.| The software update configuration has its reboot setting set to reboot only, which isn't supported in Azure Update Manager today.| |
+| | Nonzero number of dynamic queries that failed to resolve, that is, failed to execute against Azure Resource Graph.| | |
+| | Nonzero number of dynamic scope configuration assignment failures.| The software update configuration doesn't have a succeeded provisioning state in the database.| |
+| | The software update configuration has saved search queries.| The software update configuration is in an errored state in the database.| |
+| | The software update configuration has pre/post tasks that weren't migrated successfully. | The schedule associated with the software update configuration had already expired at the time of migration.| |
+| | | The schedule associated with the software update configuration is disabled.| |
+| | | Unhandled exception while migrating the software update configuration.| Zero machines where patch settings failed to apply.<br><br> **And** <br><br> Zero machines with failed configuration assignments. <br><br> **And** <br><br> Zero dynamic queries that failed to resolve, that is, failed to execute against Azure Resource Graph. <br><br> **And** <br><br> Zero dynamic scope configuration assignment failures. <br><br> **And** <br><br> The software update configuration has zero saved search queries.|
+
+To determine which scenario or scenarios in the preceding table explain why a software update configuration has a specific status, look at the verbose, failed, and warning logs to get the error code and error message.
+
+You can also search with the name of the update schedule to get logs specific to it for debugging.
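+
+After migration, you can also review what was created by querying Azure Resource Graph. A minimal sketch, assuming the Az.ResourceGraph module is installed and using the resource group you supplied for maintenance configurations:
+
+```powershell
+# List maintenance configurations created in the resource group supplied to the migration runbook.
+Search-AzGraph -Query @"
+resources
+| where type =~ 'microsoft.maintenance/maintenanceconfigurations'
+| where resourceGroup =~ '<resource-group-for-maintenance-configurations>'
+| project name, location, resourceGroup
+"@
+```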
++
+#### Step 2: Deboarding from Automation Update Management solution
+
+**Follow these steps:**
+
+1. Import the deboarding runbook from the Runbooks gallery. Search for **azure automation update** in the gallery, import the runbook named **Deboard from Azure Automation Update Management**, and then publish it.
+
+ :::image type="content" source="./media/migration-using-runbook-scripts/deboard-from-automation-update-management.png" alt-text="Screenshot that shows how to import the deboard migration runbook." lightbox="./media/migration-using-runbook-scripts/deboard-from-automation-update-management.png":::
+
+ The runbook supports PowerShell 5.1.
+
+ :::image type="content" source="./media/migration-using-runbook-scripts/deboard-runbook-support.png" alt-text="Screenshot that shows the runbook supports PowerShell 5.1 while deboarding." lightbox="./media/migration-using-runbook-scripts/deboard-runbook-support.png":::
+
+1. Set Verbose Logging to **True** for the runbook.
+
+ :::image type="content" source="./media/migration-using-runbook-scripts/verbose-log-records-deboard.png" alt-text="Screenshot that shows the verbose logging setting while deboarding." lightbox="./media/migration-using-runbook-scripts/verbose-log-records-deboard.png":::
+
+1. Start the runbook and pass parameters such as AutomationAccountResourceId and UserManagedServiceIdentityClientId. (A scripted alternative is sketched after these steps.)
+
+ :::image type="content" source="./media/migration-using-runbook-scripts/deboard-runbook-parameters.png" alt-text="Screenshot that shows how to start runbook and pass parameters while deboarding." lightbox="./media/migration-using-runbook-scripts/deboard-runbook-parameters.png":::
+
+ You can fetch AutomationAccountResourceId from **Automation Account** > **Properties**.
+
+ :::image type="content" source="./media/migration-using-runbook-scripts/fetch-resource-id-deboard.png" alt-text="Screenshot that shows how to fetch resource ID while deboarding." lightbox="./media/migration-using-runbook-scripts/fetch-resource-id-deboard.png":::
+
+ You can fetch UserManagedServiceIdentityClientId from **Automation Account** > **Identity** > **User Assigned** > **Identity** > **Properties** > **Client ID**.
+
+ :::image type="content" source="./media/migration-using-runbook-scripts/deboard-fetch-client-id.png" alt-text="Screenshot that shows how to fetch client ID while deboarding." lightbox="./media/migration-using-runbook-scripts/deboard-fetch-client-id.png":::
+
+1. Check Azure runbook logs for the status of deboarding of machines and schedules.
+
+ :::image type="content" source="./media/migration-using-runbook-scripts/deboard-debug-logs.png" alt-text="Screenshot that shows the runbook logs while deboarding." lightbox="./media/migration-using-runbook-scripts/deboard-debug-logs.png":::
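+
+As with migration, you can start the deboarding runbook from PowerShell instead of the portal. A minimal sketch, with placeholder names to confirm against the runbook you imported:
+
+```powershell
+# Parameter names mirror the inputs described above; verify them against the imported runbook.
+Start-AzAutomationRunbook `
+    -ResourceGroupName "<automation-account-rg>" `
+    -AutomationAccountName "<automation-account>" `
+    -Name "<name-of-the-imported-deboarding-runbook>" `
+    -Parameters @{
+        AutomationAccountResourceId        = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Automation/automationAccounts/<automation-account>"
+        UserManagedServiceIdentityClientId = "<client-id-of-user-assigned-identity>"
+    }
+```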
+
+**Deboarding script operations in the backend**
+
+- Disables all the underlying schedules for all the software update configurations present in this Automation account. This ensures that the Patch-MicrosoftOMSComputers runbook isn't triggered for SUCs that were only partially migrated to V2.
+- Deletes the Updates solution from the linked Log Analytics workspace for the Automation account being deboarded from Automation Update Management in V1.
+- Prints a summarized log of all disabled SUCs and the status of removing the Updates solution from the linked Log Analytics workspace to the output stream.
+- Prints detailed logs to the verbose stream.
++
+## Next steps
+
+- [An overview of migration](migration-overview.md)
+- [Migration using Azure portal](migration-using-portal.md)
+- [Manual migration guidance](migration-manual.md)
+- [Key points during migration](migration-key-points.md)
+++
+
update-manager Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/whats-new.md
Last updated 07/24/2024
### Support for Windows IoT Enterprise on Arc enabled IaaS VMs
-Public preview: Azure Update Manager now supports Windows IoT Enterprise on Arc enabled IaaS VMs. For more information, see [supported Windows IoT enterprise releases](https://learn.microsoft.com/azure/update-manager/support-matrix?tabs=winio-arc%2Cpublic%2Cthird-party-win#support-for-check-for-updatesone-time-updateperiodic-assessment-and-scheduled-patching).
+Public preview: Azure Update Manager now supports Windows IoT Enterprise on Arc enabled IaaS VMs. For more information, see [supported Windows IoT enterprise releases](/azure/update-manager/support-matrix?tabs=winio-arc%2Cpublic%2Cthird-party-win#support-for-check-for-updatesone-time-updateperiodic-assessment-and-scheduled-patching).
## June 2024
update-manager Workflow Update Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/workflow-update-manager.md
AUM performs the following steps:
## Updates data stored in Azure Resource Graph
-Update Manager extension pushes all the pending updates information and update installation results to [Azure Resource Graph](https://learn.microsoft.com/azure/governance/resource-graph/overview) where data is retained for below time periods:
+The Update Manager extension pushes all the pending updates information and update installation results to [Azure Resource Graph](/azure/governance/resource-graph/overview), where data is retained for the following time periods:
|Data | Retention period in Azure Resource graph | |||
virtual-machine-scale-sets Spot Priority Mix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/spot-priority-mix.md
Title: Get high availability and cost savings with Spot Priority Mix for Virtual
description: Learn how to run a mix of Spot VMs and uninterruptible standard VMs for Virtual Machine Scale Sets to achieve high availability and cost savings. --++ Last updated 06/14/2024
virtual-machine-scale-sets Spot Vm Size Recommendation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/spot-vm-size-recommendation.md
Title: Spot Virtual Machine Size Recommendation for Virtual Machine Scale Sets
description: Learn how to pick the right VM size when using Azure Spot for Virtual Machine Scale Sets. --++ Last updated 11/22/2022
virtual-machine-scale-sets Tutorial Use Custom Image Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-use-custom-image-cli.md
Title: Tutorial - Use a custom VM image in a scale set with Azure CLI description: Learn how to use the Azure CLI to create a custom VM image that you can use to deploy a Virtual Machine Scale Set -+ Last updated 06/14/2024
virtual-machine-scale-sets Tutorial Use Custom Image Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-use-custom-image-powershell.md
Title: Tutorial - Use a custom VM image in a scale set with Azure PowerShell description: Learn how to use Azure PowerShell to create a custom VM image that you can use to deploy a Virtual Machine Scale Set -+ Last updated 06/14/2024
virtual-machine-scale-sets Use Spot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/use-spot.md
description: Learn how to create Azure Virtual Machine Scale Sets that use Azure
--++ Last updated 06/14/2024
virtual-machine-scale-sets Virtual Machine Scale Sets Mvss Custom Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-mvss-custom-image.md
description: Learn how to add a custom image to an existing Azure Virtual Machin
-+ Last updated 06/14/2024
virtual-machine-scale-sets Virtual Machine Scale Sets Scale In Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-scale-in-policy.md
-+ Last updated 06/14/2024
virtual-machine-scale-sets Virtual Machine Scale Sets Terminate Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-terminate-notification.md
description: Learn how to enable termination notification for Azure Virtual Mach
-+ Last updated 06/14/2024
virtual-machines Acu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/acu.md
Title: Overview of the Azure Compute Unit description: Overview of the concept of the Azure compute units. The ACU provides a way of comparing CPU performance across Azure SKUs. -+ Last updated 04/27/2022
virtual-machines Av2 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/av2-series.md
Title: Av2-series description: Specifications for the Av2-series VMs. -+ Last updated 12/21/2022
virtual-machines Azure Hpc Vm Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/azure-hpc-vm-images.md
Title: Azure HPC VM images description: HPC VM images to be used on InfiniBand enabled H-series and GPU enabled N-series VMs.-+
virtual-machines B Series Cpu Credit Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/b-series-cpu-credit-model/b-series-cpu-credit-model.md
Title: B Series CPU Credit Model
description: Overview of B Series CPU Credit Model -+ Last updated 09/12/2023
virtual-machines Basv2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/basv2.md
Title: 'Basv2 Series (preview)' #Required; page title is displayed in search res
description: Overview of AMD Bsv2 Virtual Machine Series; #Required; this appears in search as the short description -+ Last updated 06/20/2022 #Required; mm/dd/yyyy format. Date the article was created or the last time it was tested and confirmed correct
virtual-machines Bpsv2 Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/bpsv2-arm.md
Title: 'Bpsv2 Series (preview)' #Required; page title is displayed in search res
description: Overview of Bpsv2 ARM series; this appears in search as the short description -+ Last updated 06/09/2023
virtual-machines Bsv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/bsv2-series.md
Title: 'Bsv2 Series (preview)' #Required; page title is displayed in search resu
description: Overview of Intel Bsv2 Virtual Machine Series; #Required; this appears in search as the short description -+ Last updated 06/20/2022 #Required; mm/dd/yyyy format. Date the article was created or the last time it was tested and confirmed correct
virtual-machines Dalsv6 Daldsv6 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dalsv6-daldsv6-series.md
Title: Dalsv6 and Daldsv6-series
description: Specifications for Dalsv6 and Daldsv6-series VMS -+ Last updated 01/29/2024
virtual-machines Dasv5 Dadsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dasv5-dadsv5-series.md
description: Specifications for the Dasv5 and Dadsv5-series VMs.
-+ Last updated 10/8/2021
virtual-machines Dasv6 Dadsv6 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dasv6-dadsv6-series.md
Title: 'Dasv6 and Dadsv6-series - Azure Virtual Machines'
description: Specifications for the Dasv6 and Dadsv6-series VMs. -+ Last updated 01/29/2024
virtual-machines Dav4 Dasv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dav4-dasv4-series.md
Title: Dav4 and Dasv4-series
description: Specifications for the Dav4 and Dasv4-series VMs. -+ Last updated 12/19/2022
virtual-machines Dcasccv5 Dcadsccv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dcasccv5-dcadsccv5-series.md
description: Specifications for Azure Confidential Computing's Azure DCas_cc_v5
-+ Last updated 03/29/2022
virtual-machines Dcasv5 Dcadsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dcasv5-dcadsv5-series.md
description: Specifications for Azure Confidential Computing's DCasv5 and DCadsv
-+ Last updated 11/15/2021
virtual-machines Dcesv5 Dcedsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dcesv5-dcedsv5-series.md
description: Specifications for Azure Confidential Computing's DCesv5 and DCedsv
-+ - ignite-2023
virtual-machines Dcv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dcv2-series.md
Title: DCsv2-series - Azure Virtual Machines description: Specifications for the DCsv2-series VMs. -+ Last updated 12/12/2022
virtual-machines Dcv3 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dcv3-series.md
Title: DCsv3 and DCdsv3-series description: Specifications for the DCsv3 and DCdsv3-series Azure Virtual Machines. -+ Last updated 05/24/2022
virtual-machines Ddv4 Ddsv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ddv4-ddsv4-series.md
-+ Last updated 06/01/2020
virtual-machines Ddv5 Ddsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ddv5-ddsv5-series.md
description: Specifications for the Ddv5 and Ddsv5-series VMs.
-+ Last updated 10/20/2021
virtual-machines Disk Encryption Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disk-encryption-overview.md
Last updated 07/17/2024 -+
virtual-machines Dlsv6 Dldsv6 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dlsv6-dldsv6-series.md
Title: Dlsv6 and Dldsv6-series
description: Specifications for the Dlsv6 and Dldsv6-series VMs -+ Last updated 07/16/2024
Disk throughput is measured in input/output operations per second (IOPS) and MBp
Data disks can operate in cached or uncached modes. For cached data disk operation, the host cache mode is set to **ReadOnly** or **ReadWrite**. For uncached data disk operation, the host cache mode is set to **None**.
-To learn how to get the best storage performance for your VMs, see [Virtual machine and disk performance](https://learn.microsoft.com/azure/virtual-machines/disks-performance).
+To learn how to get the best storage performance for your VMs, see [Virtual machine and disk performance](/azure/virtual-machines/disks-performance).
**Expected network bandwidth** is the maximum aggregated bandwidth allocated per VM type across all NICs, for all destinations. For more information, see [Virtual machine network bandwidth](../virtual-network/virtual-machine-network-throughput.md).
-Upper limits aren't guaranteed. Limits offer guidance for selecting the right VM type for the intended application. Actual network performance will depend on several factors including network congestion, application loads, and network settings. For information on optimizing network throughput, see [Optimize network throughput for Azure virtual machines](https://learn.microsoft.com/azure/virtual-network/virtual-network-optimize-network-bandwidth). To achieve the expected network performance on Linux or Windows, you may need to select a specific version or optimize your VM. For more information, see [Bandwidth/Throughput testing (NTTTCP)](https://learn.microsoft.com/azure/virtual-network/virtual-network-bandwidth-testing)
+Upper limits aren't guaranteed. Limits offer guidance for selecting the right VM type for the intended application. Actual network performance will depend on several factors including network congestion, application loads, and network settings. For information on optimizing network throughput, see [Optimize network throughput for Azure virtual machines](/azure/virtual-network/virtual-network-optimize-network-bandwidth). To achieve the expected network performance on Linux or Windows, you may need to select a specific version or optimize your VM. For more information, see [Bandwidth/Throughput testing (NTTTCP)](/azure/virtual-network/virtual-network-bandwidth-testing)
virtual-machines Dsv6 Ddsv6 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dsv6-ddsv6-series.md
Title: Dsv6 and Ddsv6-series
description: Specifications for Dsv6 and Ddsv6-series -+ Last updated 07/17/2024
virtual-machines Dv2 Dsv2 Series Memory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dv2-dsv2-series-memory.md
Title: Memory optimized Dv2 and Dsv2-series VMs - Azure Virtual Machines
description: Specifications for the Dv2 and DSv2-series VMs. -+ Last updated 12/21/2022
virtual-machines Dv2 Dsv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dv2-dsv2-series.md
Title: Dv2 and DSv2-series - Azure Virtual Machines
description: Specifications for the Dv2 and Dsv2-series VMs. -+ Last updated 02/02/2023
virtual-machines Dv3 Dsv3 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dv3-dsv3-series.md
Title: Dv3 and Dsv3-series
description: Specifications for the Dv3 and Dsv3-series VMs. -+ Last updated 11/11/2022
virtual-machines Dv4 Dsv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dv4-dsv4-series.md
Title: Dv4 and Dsv4-series - Azure Virtual Machines
description: Specifications for the Dv4 and Dsv4-series VMs. -+ Last updated 12/19/2022
virtual-machines Dv5 Dsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dv5-dsv5-series.md
Title: Dv5 and Dsv5-series - Azure Virtual Machines
description: Specifications for the Dv5 and Dsv5-series VMs. -+ Last updated 10/20/2021
virtual-machines Easv5 Eadsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/easv5-eadsv5-series.md
description: Specifications for the Easv5 and Eadsv5-series VMs.
-+ Last updated 10/8/2021
virtual-machines Easv6 Eadsv6 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/easv6-eadsv6-series.md
Title: 'Easv6 and Eadsv6-series - Azure Virtual Machines'
description: Specifications for the Easv6 and Eadsv6-series VMs. -+ Last updated 01/29/2024
virtual-machines Eav4 Easv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/eav4-easv4-series.md
Title: Eav4-series and Easv4-series description: Specifications for the Eav4 and Easv4-series VMs. -+ Last updated 12/21/2022
virtual-machines Ebdsv5 Ebsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ebdsv5-ebsv5-series.md
Title: Ebdsv5 and Ebsv5 series description: Specifications for the Ebdsv5-series and Ebsv5-series Azure virtual machines.-+ Last updated 07/08/2024
virtual-machines Ecasccv5 Ecadsccv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ecasccv5-ecadsccv5-series.md
description: Specifications for Azure Confidential Computing's Azure ECas_cc_v5
-+ Last updated 03/29/2022
virtual-machines Ecasv5 Ecadsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ecasv5-ecadsv5-series.md
description: Specifications for Azure Confidential Computing's ECasv5 and ECadsv
-+ Last updated 11/15/2021
virtual-machines Ecesv5 Ecedsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ecesv5-ecedsv5-series.md
description: Specifications for Azure Confidential Computing's ECesv5 and ECedsv
-+ - ignite-2023
virtual-machines Edv4 Edsv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/edv4-edsv4-series.md
Title: Edv4 and Edsv4-series
description: Specifications for the Ev4, Edv4, Esv4 and Edsv4-series VMs. -+ Last updated 10/20/2021
virtual-machines Edv5 Edsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/edv5-edsv5-series.md
Title: Edv5 and Edsv5-series - Azure Virtual Machines
description: Specifications for the Edv5 and Edsv5-series VMs. -+ Last updated 10/20/2021
virtual-machines Enable Nvme Interface https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/enable-nvme-interface.md
Title: Supported OS Images description: Get a list of supported operating system images for remote NVMe.-+ Last updated 06/25/2024
virtual-machines Error Codes Spot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/error-codes-spot.md
Title: Error codes for Azure Spot Virtual Machines and scale sets instances description: Learn about error codes that you could possibly see when using Azure Spot Virtual Machines and scale set instances. --++ Last updated 02/28/2023
virtual-machines Esv6 Edsv6 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/esv6-edsv6-series.md
Title: Esv6 and Edsv6-series
description: Specifications for Esv6 and Edsv6-series -+ Last updated 07/17/2024
virtual-machines Ev3 Esv3 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ev3-esv3-series.md
Title: Ev3-series and Esv3-series description: Specifications for the Ev3 and Esv3-series VMs.-+ Last updated 12/19/2022
virtual-machines Ev4 Esv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ev4-esv4-series.md
Title: Ev4 and Esv4-series - Azure Virtual Machines
description: Specifications for the Ev4, and Esv4-series VMs. -+ Last updated 12/21/2022
virtual-machines Ev5 Esv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ev5-esv5-series.md
Title: Ev5 and Esv5-series - Azure Virtual Machines
description: Specifications for the Ev5 and Esv5-series VMs. -+ Last updated 10/20/2021
virtual-machines Network Watcher Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/network-watcher-linux.md
Title: Manage Network Watcher Agent VM extension - Linux
description: Learn about the Network Watcher Agent virtual machine extension for Linux virtual machines and how to install and uninstall it. -+
virtual-machines Network Watcher Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/network-watcher-windows.md
Title: Manage Network Watcher Agent VM extension - Windows
description: Learn about the Network Watcher Agent virtual machine extension on Windows virtual machines and how to deploy it. -+
virtual-machines Fasv6 Falsv6 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/fasv6-falsv6-series.md
Title: Falsv6, Fasv6, and Famsv6-series
description: Specifications for Fasv6, Falsv6 and Famsv6 -+ Last updated 01/29/2024
virtual-machines Field Programmable Gate Arrays Attestation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/field-programmable-gate-arrays-attestation.md
Title: Azure FPGA Attestation Service description: Attestation service for the NP-series VMs.-+ Last updated 02/27/2023
virtual-machines Fsv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/fsv2-series.md
Title: Fsv2-series description: Specifications for the Fsv2-series VMs. -+ Last updated 12/19/2022
virtual-machines Fx Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/fx-series.md
Title: FX-series description: Specifications for the FX-series VMs. -+ Last updated 12/20/2022
virtual-machines Generation 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/generation-2.md
Title: Azure support for Generation 2 VMs description: Overview of Azure support for Generation 2 VMs -+ Last updated 03/04/2024
virtual-machines Hibernate Resume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hibernate-resume.md
VM sizes with up to 64-GB RAM from the following General Purpose VM series suppo
VM sizes with up to 112-GB RAM from the following GPU VM series support hibernation. - [NVv4-series](../virtual-machines/nvv4-series.md) (in preview)-- [NVadsA10v5-series](../virtual-machines/nva10v5-series.md) (in preview)
+- [NVadsA10v5-series](../virtual-machines/nva10v5-series.md) (in preview). If you're using any UVM-enabled compute applications, we recommend idling the application before initiating hibernation.
+-
> [!IMPORTANT] > Azure Virtual Machines - Hibernation for GPU VMs is currently in PREVIEW.
virtual-machines Lasv3 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/lasv3-series.md
Title: Lasv3-series - Azure Virtual Machines description: Specifications for the Lasv3-series of Azure Virtual Machines (Azure VMs). -+ Last updated 06/01/2022
virtual-machines Azure Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/azure-dns.md
Title: DNS Name resolution options for Linux VMs description: Name Resolution scenarios for Linux virtual machines in Azure IaaS, including provided DNS services, hybrid external DNS and Bring Your Own DNS server. -+
virtual-machines Compute Benchmark Scores https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/compute-benchmark-scores.md
Title: Compute benchmark scores for Azure Linux VMs description: Compare CoreMark compute benchmark scores for Azure VMs running Linux.-+
virtual-machines Disk Encryption Isolated Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-isolated-network.md
Title: Azure Disk Encryption on an isolated network description: In this article, learn about troubleshooting tips for Microsoft Azure Disk Encryption on Linux VMs. -+
virtual-machines Disk Encryption Key Vault Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-key-vault-aad.md
Title: Creating and configuring a key vault for Azure Disk Encryption with Microsoft Entra ID (previous release) description: This article provides prerequisites for using Microsoft Azure Disk Encryption for Linux VMs. -+
virtual-machines Disk Encryption Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-key-vault.md
Title: Creating and configuring a key vault for Azure Disk Encryption description: This article provides steps for creating and configuring a key vault for use with Azure Disk Encryption on a Linux VM.-+
virtual-machines Disk Encryption Linux Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-linux-aad.md
Title: Azure Disk Encryption with Microsoft Entra App Linux IaaS VMs (previous release) description: This article provides instructions on enabling Microsoft Azure Disk Encryption for Linux IaaS VMs. -+
virtual-machines Disk Encryption Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-linux.md
Title: Azure Disk Encryption scenarios on Linux VMs description: This article provides instructions on enabling Microsoft Azure Disk Encryption for Linux VMs for various scenarios -+
virtual-machines Disk Encryption Overview Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-overview-aad.md
Title: Azure Disk Encryption with Microsoft Entra app prerequisites (previous release) description: This article provides supplements to Azure Disk Encryption for Linux VMs with additional requirements and prerequisites for Azure Disk Encryption with Microsoft Entra ID. -+
virtual-machines Disk Encryption Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-overview.md
Title: Enable Azure Disk Encryption for Linux VMs description: This article provides instructions on enabling Microsoft Azure Disk Encryption for Linux VMs. -+
virtual-machines Disk Encryption Portal Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-portal-quickstart.md
Title: Create and encrypt a Linux VM with the Azure portal
description: In this quickstart, you learn how to use the Azure portal to create and encrypt a Linux virtual machine -+ Last updated 02/20/2024
virtual-machines Disk Encryption Powershell Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-powershell-quickstart.md
Title: Create and encrypt a Linux VM with Azure PowerShell
description: In this quickstart, you learn how to use Azure PowerShell to create and encrypt a Linux virtual machine -+
virtual-machines Disk Encryption Sample Scripts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-sample-scripts.md
Title: Azure Disk Encryption sample scripts description: This article is the appendix for Microsoft Azure Disk Encryption for Linux VMs. -+
virtual-machines Disk Encryption Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-troubleshooting.md
Title: Troubleshooting Azure Disk Encryption for Linux VMs description: This article provides troubleshooting tips for Microsoft Azure Disk Encryption for Linux VMs. -+
virtual-machines N Series Driver Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/n-series-driver-setup.md
Title: Azure N-series GPU driver setup for Linux
description: How to set up NVIDIA GPU drivers for N-series VMs running Linux in Azure -+
virtual-machines Oracle Create Upload Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/oracle-create-upload-vhd.md
Title: Create and upload an Oracle Linux VHD description: Learn to create and upload an Azure virtual hard disk (VHD) that contains an Oracle Linux operating system. -+
virtual-machines Redhat Create Upload Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/redhat-create-upload-vhd.md
Title: Create and upload a Red Hat Enterprise Linux VHD for use in Azure description: Learn to create and upload an Azure virtual hard disk (VHD) that contains a Red Hat Linux operating system. -+ vm-linux
virtual-machines Scheduled Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/scheduled-events.md
Title: Scheduled Events for Linux VMs in Azure description: Scheduled events using the Azure Metadata Service for your Linux virtual machines. -+
virtual-machines Spot Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/spot-cli.md
Title: Use CLI to deploy Azure Spot Virtual Machines description: Learn how to use the CLI to deploy Azure Spot Virtual Machines to save costs. --++ Last updated 05/31/2023
virtual-machines Spot Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/spot-template.md
Title: Use a template to deploy Azure Spot Virtual Machines description: Learn how to use a template to deploy Azure Spot Virtual Machines to save costs. --++ Last updated 05/31/2023
virtual-machines Storage Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/storage-performance.md
Title: Optimize performance on Lsv3, Lasv3, and Lsv2-series Linux VMs description: Learn how to optimize performance for your solution on the Lsv3, Lasv3, and Lsv2-series Linux virtual machines (VMs) on Azure. -+
virtual-machines Lsv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/lsv2-series.md
Title: Lsv2-series - Azure Virtual Machines description: Specifications for the Lsv2-series VMs. -+
virtual-machines Lsv3 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/lsv3-series.md
Title: Lsv3-series - Azure Virtual Machines description: Specifications for the Lsv3-series of Azure Virtual Machines (Azure VMs). -+ Last updated 06/01/2022
virtual-machines M Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/m-series.md
Title: M-series - Azure Virtual Machines description: Specifications for the M-series VMs. -+
virtual-machines Migration Managed Image To Compute Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/migration-managed-image-to-compute-gallery.md
Title: Migrate Managed image to Compute gallery description: Learn how to legacy Managed image to image version in Azure compute gallery. -+ Last updated 03/09/2024
virtual-machines Mitigate Se https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/mitigate-se.md
description: Learn more about Guidance for mitigating silicon based micro-archit
keywords: spectre,meltdown,specter-+ Last updated 02/26/2024
virtual-machines Msv2 Mdsv2 Isolated Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/msv2-mdsv2-isolated-retirement.md
Title: Msv2 and Mdsv2 Isolated Sizes Retirement
description: Migration guide for sizes -+ Last updated 01/10/2024
virtual-machines Msv2 Mdsv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/msv2-mdsv2-series.md
Title: Msv2/Mdsv2 Medium Memory Series - Azure Virtual Machines description: Specifications for the Msv2-series VMs. -+ Last updated 12/20/2022
virtual-machines Mv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/mv2-series.md
Title: Mv2-series - Azure Virtual Machines description: Specifications for the Mv2-series VMs. -+ Last updated 12/20/2022
virtual-machines N Series Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/n-series-migration.md
Title: Migration Guide for GPU Compute Workloads in Azure description: NC, ND, NCv2-series migration guide.-+ Last updated 02/27/2023
virtual-machines Nc A100 V4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nc-a100-v4-series.md
Title: NC A100 v4-series
description: Specifications for the NC A100 v4-series Azure VMs. These VMs include Linux, Windows, Flexible scale sets, and uniform scale sets.``` -+
virtual-machines Nc Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nc-series.md
Title: NC-series - Azure Virtual Machines description: Specifications for the NC-series VMs. -+ Last updated 12/21/2022
virtual-machines Ncads H100 V5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ncads-h100-v5.md
Title: NCads H100 v5-series
description: Specifications for the NCads H100 v5-series Azure VMs. These VMs include Linux, Windows, Flexible scale sets, and uniform scale sets.``` -+
virtual-machines Nct4 V3 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nct4-v3-series.md
Title: NCas T4 v3-series description: Specifications for the NCas T4 v3-series VMs.-+
virtual-machines Ncv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ncv2-series.md
Title: NCv2-series - Azure Virtual Machines description: Specifications for the NCv2-series VMs. -+ Last updated 02/03/2020
virtual-machines Ncv3 Nc24rs Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ncv3-nc24rs-retirement.md
Title: NCv3 and NC24rs Retirement
description: Migration guide for NC24rs_v3 sizes -+ Last updated 03/19/2024
virtual-machines Ncv3 Nc6s Nc12s Nc24s Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ncv3-nc6s-nc12s-nc24s-retirement.md
Title: NCv3 and NC6s NC12s NC24s Retirement
description: Migration guide for sizes NC6s_v3 NC12s_v3 NC24s_v3 -+ Last updated 03/19/2024
virtual-machines Ncv3 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ncv3-series.md
Title: NCv3-series - Azure Virtual Machines description: Specifications for the NCv3-series VMs. -+ Last updated 12/20/2023
virtual-machines Nd Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nd-series.md
Title: ND-series - Azure Virtual Machines description: Specifications for the ND-series VMs. -+ Last updated 12/20/2022
virtual-machines Nda100 V4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nda100-v4-series.md
Title: ND A100 v4-series description: Specifications for the ND A100 v4-series VMs.-+
virtual-machines Ndm A100 V4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ndm-a100-v4-series.md
Title: NDm A100 v4-series
description: Specifications for the NDm A100 v4-series VMs. -+ Last updated 03/13/2023
virtual-machines Ndv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ndv2-series.md
Title: NDv2-series description: Specifications for the NDv2-series VMs. -+ Last updated 12/20/2022
virtual-machines Ngads V 620 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ngads-v-620-series.md
Title: Overview of NGads V620 Series (preview)
description: Overview of NGads V620 series GPU-enabled virtual machines -+ Last updated 06/11/2023
virtual-machines Np Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/np-series.md
Title: NP-series - Azure Virtual Machines description: Specifications for the NP-series VMs. -+ Last updated 03/07/2023
VM Generation Support: Generation 1<br>
**Q:** What shell version is supported and how can I get the development files?
-**A:** The FPGAs in Azure NP VMs support Xilinx Shell 2.1 (gen3x16-xdma-shell_2.1). See Xilinx Page [Xilinx/Azure with Alveo U250](https://www.xilinx.com/microsoft-azure.html) to get the development shell files.
+**A:** The FPGAs in Azure NP VMs support Xilinx Shell 2.1 (gen3x16-xdma-shell_2.1). See Xilinx Page [Xilinx/Azure with Alveo U250](https://www.amd.com/en/where-to-buy/accelerators/alveo/cloud-solutions/microsoft-azure.html) to get the development shell files.
**Q:** Which file returned from attestation should I use when programming my FPGA in an NP VM?
VM Generation Support: Generation 1<br>
**Q:** Where should I get all the XRT / Platform files?
-**A:** Visit Xilinx's [Microsoft-Azure](https://www.xilinx.com/microsoft-azure.html) site for all files.
+**A:** Visit Xilinx's [Microsoft-Azure](https://www.amd.com/en/where-to-buy/accelerators/alveo/cloud-solutions/microsoft-azure.html) site for all files.
**Q:** What Version of XRT should I use?
virtual-machines Nv Series Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nv-series-migration-guide.md
Title: NV series migration guide description: NV series migration guide -+ Last updated 02/27/2023
virtual-machines Nv Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nv-series.md
Title: NV-series - Azure Virtual Machines description: Specifications for the NV-series VMs. -+ Last updated 03/29/2022
virtual-machines Nva10v5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nva10v5-series.md
Title: NV A10 v5-series description: Specifications for the NV A10 v5-series VMs. -+ Last updated 02/01/2022
virtual-machines Nvv3 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nvv3-series.md
Title: NVv3-series - Azure Virtual Machines
description: Specifications for the NVv3-series VMs. -+ Last updated 12/21/2022
virtual-machines Nvv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nvv4-series.md
Title: NVv4-series description: Specifications for the NVv4-series VMs. -+ Last updated 01/12/2020
virtual-machines Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/policy-reference.md
Last updated 02/06/2024
-+
virtual-machines Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/quotas.md
Title: vCPU quotas description: Check your vCPU quotas for Azure virtual-machines. -+ Last updated 02/15/2023
virtual-machines Security Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/security-policy.md
Title: Secure and use policies description: Learn about security and policies for virtual machines in Azure. -+ Last updated 02/26/2024
virtual-machines Sizes B Series Burstable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes-b-series-burstable.md
description: Describes the B-series of burstable Azure VM sizes.
-+ Last updated 02/03/2020
virtual-machines B Family https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/general-purpose/b-family.md
Title: B family VM size series description: List of size series in the B family. -+ Previously updated : 04/16/2024 Last updated : 07/29/2024
Read more about the [B-series CPU credit model](../../b-series-cpu-credit-model/
## Series in family
-### Bsv2-series
-
-[View the full Bsv2-series page](../../bsv2-series.md).
--- ### Basv2-series [!INCLUDE [basv2-series-summary](./includes/basv2-series-summary.md)]
-[View the full Basv2-series page](../../basv2.md).
+[View the full Basv2-series page](./basv2-series.md).
[!INCLUDE [basv2-series-specs](./includes/basv2-series-specs.md)]
Read more about the [B-series CPU credit model](../../b-series-cpu-credit-model/
### Bpsv2-series [!INCLUDE [bpsv2-series-summary](./includes/bpsv2-series-summary.md)]
-[View the full Bpsv2-series page](../../bpsv2-arm.md).
+[View the full Bpsv2-series page](./bpsv2-series.md).
[!INCLUDE [bpsv2-series-specs](./includes/bpsv2-series-specs.md)]
+### Bsv2-series
+
+[View the full Bsv2-series page](./bsv2-series.md).
++++ ### Previous-generation B family series For older sizes, see [previous generation sizes](../previous-gen-sizes-list.md#general-purpose-previous-gen-sizes).
virtual-machines Basv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/general-purpose/basv2-series.md
+
+ Title: Basv2 size series
+description: Information on and specifications of the Basv2-series sizes
++++ Last updated : 07/29/2024++++
+# Basv2 sizes series
++
+## Host specifications
+
+## Feature support
+[Premium Storage](../../premium-storage-performance.md): Supported <br>[Premium Storage caching](../../premium-storage-performance.md): Supported <br>[Live Migration](../../maintenance-and-updates.md): Supported <br>[Memory Preserving Updates](../../maintenance-and-updates.md): Supported <br>[Generation 2 VMs](../../generation-2.md): Supported <br>[Generation 1 VMs](../../generation-2.md): Not Supported <br>[Accelerated Networking](../../../virtual-network/create-vm-accelerated-networking-cli.md): Supported <br>[Ephemeral OS Disk](../../ephemeral-os-disks.md): Not Supported <br>[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not Supported <br>
+
+## Sizes in series
+
+### [Basics](#tab/sizebasic)
+
+vCPUs (Qty.) and Memory for each size
+
+| Size Name | vCPUs (Qty.) | Memory (GB) |
+| | | |
+| Standard_B2ats_v2 | 2 | 1 |
+| Standard_B2als_v2 | 2 | 4 |
+| Standard_B2as_v2 | 2 | 8 |
+| Standard_B4als_v2 | 4 | 8 |
+| Standard_B4as_v2 | 4 | 16 |
+| Standard_B8als_v2 | 8 | 16 |
+| Standard_B8as_v2 | 8 | 32 |
+| Standard_B16als_v2 | 16 | 32 |
+| Standard_B16as_v2 | 16 | 64 |
+| Standard_B32als_v2 | 32 | 64 |
+| Standard_B32as_v2 | 32 | 128 |
+
+#### VM Basics resources
+- [Check vCPU quotas](../../../virtual-machines/quotas.md)
++
+### [CPU Burst](#tab/sizeburstdata)
+
+Base CPU performance, Credits, and other CPU bursting related info
+
+| Size Name | Base CPU Performance Percentage | Initial Credits (Qty.) | Credits banked/hour (Qty.) | Max Banked Credits (Qty.) |
+| | | | | |
+| Standard_B2ats_v2 | 20% | 60 | 24 | 576 |
+| Standard_B2als_v2 | 30% | 60 | 36 | 864 |
+| Standard_B2as_v2 | 40% | 60 | 48 | 1152 |
+| Standard_B4als_v2 | 30% | 120 | 72 | 1728 |
+| Standard_B4as_v2 | 40% | 120 | 96 | 2304 |
+| Standard_B8als_v2 | 30% | 240 | 144 | 3456 |
+| Standard_B8as_v2 | 40% | 240 | 192 | 4608 |
+| Standard_B16als_v2 | 30% | 480 | 288 | 6912 |
+| Standard_B16as_v2 | 40% | 480 | 384 | 9216 |
+| Standard_B32als_v2 | 30% | 960 | 576 | 13824 |
+| Standard_B32as_v2 | 40% | 960 | 768 | 18432 |
+
+#### CPU Burst resources
+- Learn more about [CPU bursting](../../b-series-cpu-credit-model/b-series-cpu-credit-model.md)
+
+### [Local Storage](#tab/sizestoragelocal)
+
+Local (temp) storage info for each size
+
+> [!NOTE]
+> No local storage present in this series.
+>
+> For frequently asked questions, see [Azure VM sizes with no local temp disk](../../azure-vms-no-temp-disk.yml).
+++
+### [Remote Storage](#tab/sizestorageremote)
+
+Remote (uncached) storage info for each size
+
+| Size Name | Max Remote Storage Disks (Qty.) | Uncached Disk IOPS | Uncached Disk Speed (MBps) | Uncached Disk Burst<sup>1</sup> IOPS | Uncached Disk Burst<sup>1</sup> Speed (MBps) | Uncached Special<sup>2</sup> Disk IOPS | Uncached Special<sup>2</sup> Disk Speed (MBps) | Uncached Burst<sup>1</sup> Special<sup>2</sup> Disk IOPS | Uncached Burst<sup>1</sup> Special<sup>2</sup> Disk Speed (MBps) |
+| | | | | | | | | | |
+| Standard_B2ats_v2 | 4 | 3750 | 85 | 10,000 | 960 | | | | |
+| Standard_B2als_v2 | 4 | 3750 | 85 | 10,000 | 960 | | | | |
+| Standard_B2as_v2 | 4 | 3750 | 85 | 10,000 | 960 | | | | |
+| Standard_B4als_v2 | 8 | 6,400 | 145 | 20,000 | 960 | | | | |
+| Standard_B4as_v2 | 8 | 6,400 | 145 | 20,000 | 960 | | | | |
+| Standard_B8als_v2 | 16 | 12,800 | 290 | 20,000 | 960 | | | | |
+| Standard_B8as_v2 | 16 | 12,800 | 290 | 20,000 | 960 | | | | |
+| Standard_B16als_v2 | 32 | 25,600 | 600 | 40,000 | 960 | | | | |
+| Standard_B16as_v2 | 32 | 25,600 | 600 | 40,000 | 960 | | | | |
+| Standard_B32als_v2 | 32 | 25,600 | 600 | 80,000 | 960 | | | | |
+| Standard_B32as_v2 | 32 | 25,600 | 600 | 80,000 | 960 | | | | |
+
+#### Storage resources
+- [Introduction to Azure managed disks](../../../virtual-machines/managed-disks-overview.md)
+- [Azure managed disk types](../../../virtual-machines/disks-types.md)
+- [Share an Azure managed disk](../../../virtual-machines/disks-shared.md)
+
+#### Table definitions
+- <sup>1</sup>Some sizes support [bursting](../../disk-bursting.md) to temporarily increase disk performance. Burst speeds can be maintained for up to 30 minutes at a time.
+- <sup>2</sup>Special Storage refers to either [Ultra Disk](../../../virtual-machines/disks-enable-ultra-ssd.md) or [Premium SSD v2](../../../virtual-machines/disks-deploy-premium-v2.md) storage.
+- Storage capacity is shown in units of GiB or 1024^3 bytes. When you compare disks measured in GB (1000^3 bytes) to disks measured in GiB (1024^3) remember that capacity numbers given in GiB may appear smaller. For example, 1023 GiB = 1098.4 GB.
+- Disk throughput is measured in input/output operations per second (IOPS) and MBps where MBps = 10^6 bytes/sec.
+- Data disks can operate in cached or uncached modes. For cached data disk operation, the host cache mode is set to ReadOnly or ReadWrite. For uncached data disk operation, the host cache mode is set to None.
+- To learn how to get the best storage performance for your VMs, see [Virtual machine and disk performance](../../../virtual-machines/disks-performance.md).
++
+### [Network](#tab/sizenetwork)
+
+Network interface info for each size
+
+| Size Name | Max NICs (Qty.) | Max Bandwidth (Mbps) |
+| | | |
+| Standard_B2ats_v2 | 2 | 6250 |
+| Standard_B2als_v2 | 2 | 6250 |
+| Standard_B2as_v2 | 2 | 6250 |
+| Standard_B4als_v2 | 2 | 6250 |
+| Standard_B4as_v2 | 2 | 6250 |
+| Standard_B8als_v2 | 2 | 6250 |
+| Standard_B8as_v2 | 2 | 6250 |
+| Standard_B16als_v2 | 4 | 6250 |
+| Standard_B16as_v2 | 4 | 6250 |
+| Standard_B32als_v2 | 4 | 6250 |
+| Standard_B32as_v2 | 4 | 6250 |
+
+#### Networking resources
+- [Virtual networks and virtual machines in Azure](../../../virtual-network/network-overview.md)
+- [Virtual machine network bandwidth](../../../virtual-network/virtual-machine-network-throughput.md)
+
+#### Table definitions
+- Expected network bandwidth is the maximum aggregated bandwidth allocated per VM type across all NICs, for all destinations. For more information, see [Virtual machine network bandwidth](../../../virtual-network/virtual-machine-network-throughput.md)
+- Upper limits aren't guaranteed. Limits offer guidance for selecting the right VM type for the intended application. Actual network performance will depend on several factors including network congestion, application loads, and network settings. For information on optimizing network throughput, see [Optimize network throughput for Azure virtual machines](../../../virtual-network/virtual-network-optimize-network-bandwidth.md).
+- To achieve the expected network performance on Linux or Windows, you may need to select a specific version or optimize your VM. For more information, see [Bandwidth/Throughput testing (NTTTCP)](../../../virtual-network/virtual-network-bandwidth-testing.md).
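
The NTTTCP article linked above describes the recommended way to validate these numbers. As a rough, hedged illustration of the same idea using iperf3 instead (the receiver IP, stream count, and duration are placeholder assumptions):

```bash
# On the receiving VM: start an iperf3 server.
iperf3 -s

# On the sending VM: drive 8 parallel streams for 30 seconds, then compare the
# reported throughput with the "Max Bandwidth (Mbps)" value in the table above.
iperf3 -c 10.0.0.4 -P 8 -t 30
```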
+
+### [Accelerators](#tab/sizeaccelerators)
+
+Accelerator (GPUs, FPGAs, etc.) info for each size
+
+> [!NOTE]
+> No accelerators are present in this series.
+++
virtual-machines Bpsv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/general-purpose/bpsv2-series.md
+
+ Title: Bpsv2 size series
+description: Information on and specifications of the Bpsv2-series sizes
++++ Last updated : 07/29/2024++++
+# Bpsv2 sizes series
++
+## Host specifications
+
+## Feature support
+[Premium Storage](../../premium-storage-performance.md): Supported <br>[Premium Storage caching](../../premium-storage-performance.md): Supported <br>[Live Migration](../../maintenance-and-updates.md): Supported <br>[Memory Preserving Updates](../../maintenance-and-updates.md): Supported <br>[Generation 2 VMs](../../generation-2.md): Supported <br>[Generation 1 VMs](../../generation-2.md): Not Supported <br>[Accelerated Networking](../../../virtual-network/create-vm-accelerated-networking-cli.md): Supported <br>[Ephemeral OS Disk](../../ephemeral-os-disks.md): Not Supported <br>[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not Supported <br>
+
+## Sizes in series
+
+### [Basics](#tab/sizebasic)
+
+vCPUs (Qty.) and Memory for each size
+
+| Size Name | vCPUs (Qty.) | Memory (GB) |
+| | | |
+| Standard_B2pts_v2 | 2 | 1 |
+| Standard_B2pls_v2 | 2 | 4 |
+| Standard_B2ps_v2 | 2 | 8 |
+| Standard_B4pls_v2 | 4 | 8 |
+| Standard_B4ps_v2 | 4 | 16 |
+| Standard_B8pls_v2 | 8 | 16 |
+| Standard_B8ps_v2 | 8 | 32 |
+| Standard_B16pls_v2 | 16 | 32 |
+| Standard_B16ps_v2 | 16 | 64 |
+
+#### VM Basics resources
+- [Check vCPU quotas](../../../virtual-machines/quotas.md)
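
Before deploying several Bpsv2 VMs, it can help to confirm the regional vCPU quota first. A minimal sketch with the Azure CLI (the region and the family-name filter are assumptions for illustration):

```bash
# Show current vCPU usage against quota for the target region.
az vm list-usage --location eastus --output table

# Optionally narrow the output to B-series family quotas (family names vary by series).
az vm list-usage --location eastus \
  --query "[?contains(name.value, 'B')].{Family:name.value, Current:currentValue, Limit:limit}" \
  --output table
```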
++
+### [CPU Burst](#tab/sizeburstdata)
+
+Base CPU performance, Credits, and other CPU bursting related info
+
+| Size Name | Base CPU Performance Percentage | Initial Credits (Qty.) | Credits banked/hour (Qty.) | Max Banked Credits (Qty.) |
+| | | | | |
+| Standard_B2pts_v2 | 20% | 60 | 24 | 576 |
+| Standard_B2pls_v2 | 30% | 60 | 36 | 864 |
+| Standard_B2ps_v2 | 40% | 60 | 48 | 1152 |
+| Standard_B4pls_v2 | 30% | 120 | 72 | 1728 |
+| Standard_B4ps_v2 | 40% | 120 | 96 | 2304 |
+| Standard_B8pls_v2 | 30% | 240 | 144 | 3456 |
+| Standard_B8ps_v2 | 40% | 240 | 192 | 4608 |
+| Standard_B16pls_v2 | 30% | 480 | 288 | 6912 |
+| Standard_B16ps_v2 | 40% | 480 | 384 | 9216 |
+
+#### CPU Burst resources
+- Learn more about [CPU bursting](../../b-series-cpu-credit-model/b-series-cpu-credit-model.md)
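
The table above reads as a simple accrual model: an idle VM banks credits at the hourly rate, capped at the maximum, and spends them whenever it runs above its base CPU performance. A back-of-the-envelope sketch using the Standard_B2pts_v2 row:

```bash
# Standard_B2pts_v2 from the table: 24 credits banked per hour, 576 max.
credits_per_hour=24
max_banked=576
hours_idle=36

banked=$(( credits_per_hour * hours_idle ))       # 24 * 36 = 864
(( banked > max_banked )) && banked=$max_banked   # capped at 576

echo "Credits available after ${hours_idle} idle hours: ${banked}"
```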
+
+### [Local Storage](#tab/sizestoragelocal)
+
+Local (temp) storage info for each size
+
+> [!NOTE]
+> No local storage present in this series.
+>
+> For frequently asked questions, see [Azure VM sizes with no local temp disk](../../azure-vms-no-temp-disk.yml).
+++
+### [Remote Storage](#tab/sizestorageremote)
+
+Remote (uncached) storage info for each size
+
+| Size Name | Max Remote Storage Disks (Qty.) | Uncached Disk IOPS | Uncached Disk Speed (MBps) | Uncached Disk Burst<sup>1</sup> IOPS | Uncached Disk Burst<sup>1</sup> Speed (MBps) | Uncached Special<sup>2</sup> Disk IOPS | Uncached Special<sup>2</sup> Disk Speed (MBps) | Uncached Burst<sup>1</sup> Special<sup>2</sup> Disk IOPS | Uncached Burst<sup>1</sup> Special<sup>2</sup> Disk Speed (MBps) |
+| | | | | | | | | | |
+| Standard_B2pts_v2 | 4 | 3,750 | 85 | 10,000 | 960 | | | | |
+| Standard_B2pls_v2 | 4 | 3,750 | 85 | 10,000 | 960 | | | | |
+| Standard_B2ps_v2 | 4 | 3,750 | 85 | 10,000 | 960 | | | | |
+| Standard_B4pls_v2 | 8 | 6,400 | 145 | 20,000 | 960 | | | | |
+| Standard_B4ps_v2 | 8 | 6,400 | 145 | 20,000 | 960 | | | | |
+| Standard_B8pls_v2 | 16 | 12,800 | 290 | 20,000 | 960 | | | | |
+| Standard_B8ps_v2 | 16 | 12,800 | 290 | 20,000 | 960 | | | | |
+| Standard_B16pls_v2 | 32 | 25,600 | 600 | 40,000 | 960 | | | | |
+| Standard_B16ps_v2 | 32 | 25,600 | 600 | 40,000 | 960 | | | | |
+
+#### Storage resources
+- [Introduction to Azure managed disks](../../../virtual-machines/managed-disks-overview.md)
+- [Azure managed disk types](../../../virtual-machines/disks-types.md)
+- [Share an Azure managed disk](../../../virtual-machines/disks-shared.md)
+
+#### Table definitions
+- <sup>1</sup>Some sizes support [bursting](../../disk-bursting.md) to temporarily increase disk performance. Burst speeds can be maintained for up to 30 minutes at a time.
+- <sup>2</sup>Special Storage refers to either [Ultra Disk](../../../virtual-machines/disks-enable-ultra-ssd.md) or [Premium SSD v2](../../../virtual-machines/disks-deploy-premium-v2.md) storage.
+- Storage capacity is shown in units of GiB or 1024^3 bytes. When you compare disks measured in GB (1000^3 bytes) to disks measured in GiB (1024^3) remember that capacity numbers given in GiB may appear smaller. For example, 1023 GiB = 1098.4 GB.
+- Disk throughput is measured in input/output operations per second (IOPS) and MBps where MBps = 10^6 bytes/sec.
+- Data disks can operate in cached or uncached modes. For cached data disk operation, the host cache mode is set to ReadOnly or ReadWrite. For uncached data disk operation, the host cache mode is set to None.
+- To learn how to get the best storage performance for your VMs, see [Virtual machine and disk performance](../../../virtual-machines/disks-performance.md).
++
+### [Network](#tab/sizenetwork)
+
+Network interface info for each size
+
+| Size Name | Max NICs (Qty.) | Max Bandwidth (Mbps) |
+| | | |
+| Standard_B2pts_v2 | 2 | 6250 |
+| Standard_B2pls_v2 | 2 | 6250 |
+| Standard_B2ps_v2 | 2 | 6250 |
+| Standard_B4pls_v2 | 2 | 6250 |
+| Standard_B4ps_v2 | 2 | 6250 |
+| Standard_B8pls_v2 | 2 | 6250 |
+| Standard_B8ps_v2 | 2 | 6250 |
+| Standard_B16pls_v2 | 4 | 6250 |
+| Standard_B16ps_v2 | 4 | 6250 |
+
+#### Networking resources
+- [Virtual networks and virtual machines in Azure](../../../virtual-network/network-overview.md)
+- [Virtual machine network bandwidth](../../../virtual-network/virtual-machine-network-throughput.md)
+
+#### Table definitions
+- Expected network bandwidth is the maximum aggregated bandwidth allocated per VM type across all NICs, for all destinations. For more information, see [Virtual machine network bandwidth](../../../virtual-network/virtual-machine-network-throughput.md)
+- Upper limits aren't guaranteed. Limits offer guidance for selecting the right VM type for the intended application. Actual network performance will depend on several factors including network congestion, application loads, and network settings. For information on optimizing network throughput, see [Optimize network throughput for Azure virtual machines](../../../virtual-network/virtual-network-optimize-network-bandwidth.md).
+- To achieve the expected network performance on Linux or Windows, you may need to select a specific version or optimize your VM. For more information, see [Bandwidth/Throughput testing (NTTTCP)](../../../virtual-network/virtual-network-bandwidth-testing.md).
+
+### [Accelerators](#tab/sizeaccelerators)
+
+Accelerator (GPUs, FPGAs, etc.) info for each size
+
+> [!NOTE]
+> No accelerators are present in this series.
+++
virtual-machines Bsv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/general-purpose/bsv2-series.md
+
+ Title: Bsv2 size series
+description: Information on and specifications of the Bsv2-series sizes
++++ Last updated : 07/29/2024++++
+# Bsv2 sizes series
++
+## Host specifications
+
+## Feature support
+[Premium Storage](../../premium-storage-performance.md): Supported <br>[Premium Storage caching](../../premium-storage-performance.md): Supported <br>[Live Migration](../../maintenance-and-updates.md): Supported <br>[Memory Preserving Updates](../../maintenance-and-updates.md): Supported <br>[Generation 2 VMs](../../generation-2.md): Supported <br>[Generation 1 VMs](../../generation-2.md): Not Supported <br>[Accelerated Networking](../../../virtual-network/create-vm-accelerated-networking-cli.md): Supported <br>[Ephemeral OS Disk](../../ephemeral-os-disks.md): Not Supported <br>[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not Supported <br>
+
+## Sizes in series
+
+### [Basics](#tab/sizebasic)
+
+vCPUs (Qty.) and Memory for each size
+
+| Size Name | vCPUs (Qty.) | Memory (GB) |
+| | | |
+| Standard_B2ts_v2 | 2 | 1 |
+| Standard_B2ls_v2 | 2 | 4 |
+| Standard_B2s_v2 | 2 | 8 |
+| Standard_B4ls_v2 | 4 | 8 |
+| Standard_B4s_v2 | 4 | 16 |
+| Standard_B8ls_v2 | 8 | 16 |
+| Standard_B8s_v2 | 8 | 32 |
+| Standard_B16ls_v2 | 16 | 32 |
+| Standard_B16s_v2 | 16 | 64 |
+| Standard_B32ls_v2 | 32 | 64 |
+| Standard_B32s_v2 | 32 | 128 |
+
+#### VM Basics resources
+- [Check vCPU quotas](../../../virtual-machines/quotas.md)
++
+### [CPU Burst](#tab/sizeburstdata)
+
+Base CPU performance, Credits, and other CPU bursting related info
+
+| Size Name | Base CPU Performance Percentage | Initial Credits (Qty.) | Credits banked/hour (Qty.) | Max Banked Credits (Qty.) |
+| | | | | |
+| Standard_B2ts_v2 | 20% | 60 | 24 | 576 |
+| Standard_B2ls_v2 | 30% | 60 | 36 | 864 |
+| Standard_B2s_v2 | 40% | 60 | 48 | 1152 |
+| Standard_B4ls_v2 | 30% | 120 | 72 | 1728 |
+| Standard_B4s_v2 | 40% | 120 | 96 | 2304 |
+| Standard_B8ls_v2 | 30% | 240 | 144 | 3456 |
+| Standard_B8s_v2 | 40% | 240 | 192 | 4608 |
+| Standard_B16ls_v2 | 30% | 480 | 288 | 6912 |
+| Standard_B16s_v2 | 40% | 480 | 384 | 9216 |
+| Standard_B32ls_v2 | 30% | 960 | 576 | 13824 |
+| Standard_B32s_v2 | 40% | 960 | 768 | 18432 |
+
+#### CPU Burst resources
+- Bsv2-series virtual machines can burst their disk performance and get up to their bursting max for up to 30 minutes at a time.
+- Learn more about [CPU bursting](../../b-series-cpu-credit-model/b-series-cpu-credit-model.md)
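
Because burst headroom depends on the current credit balance, it's worth watching the platform metrics for it. A hedged Azure CLI sketch (the resource ID is a placeholder, and the metric name shown is the one typically emitted for burstable sizes):

```bash
# Check the remaining CPU credit balance for a Bsv2 VM over 5-minute intervals.
az monitor metrics list \
  --resource "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM" \
  --metric "CPU Credits Remaining" \
  --interval PT5M \
  --output table
```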
+
+### [Local Storage](#tab/sizestoragelocal)
+
+Local (temp) storage info for each size
+
+> [!NOTE]
+> No local storage present in this series.
+>
+> For frequently asked questions, see [Azure VM sizes with no local temp disk](../../azure-vms-no-temp-disk.yml).
+++
+### [Remote Storage](#tab/sizestorageremote)
+
+Remote (uncached) storage info for each size
+
+| Size Name | Max Remote Storage Disks (Qty.) | Uncached Disk IOPS | Uncached Disk Speed (MBps) | Uncached Disk Burst<sup>1</sup> IOPS | Uncached Disk Burst<sup>1</sup> Speed (MBps) | Uncached Special<sup>2</sup> Disk IOPS | Uncached Special<sup>2</sup> Disk Speed (MBps) | Uncached Burst<sup>1</sup> Special<sup>2</sup> Disk IOPS | Uncached Burst<sup>1</sup> Special<sup>2</sup> Disk Speed (MBps) |
+| | | | | | | | | | |
+| Standard_B2ts_v2 | 4 | 3,750 | 85 | 10,000 | 960 | | | | |
+| Standard_B2ls_v2 | 4 | 3,750 | 85 | 10,000 | 960 | | | | |
+| Standard_B2s_v2 | 4 | 3,750 | 85 | 10,000 | 960 | | | | |
+| Standard_B4ls_v2 | 8 | 6,400 | 145 | 20,000 | 960 | | | | |
+| Standard_B4s_v2 | 8 | 6,400 | 145 | 20,000 | 960 | | | | |
+| Standard_B8ls_v2 | 16 | 12,800 | 290 | 20,000 | 960 | | | | |
+| Standard_B8s_v2 | 16 | 12,800 | 290 | 20,000 | 960 | | | | |
+| Standard_B16ls_v2 | 32 | 25,600 | 600 | 40,000 | 960 | | | | |
+| Standard_B16s_v2 | 32 | 25,600 | 600 | 40,000 | 960 | | | | |
+| Standard_B32ls_v2 | 32 | 51,200 | 600 | 80,000 | 960 | | | | |
+| Standard_B32s_v2 | 32 | 51,200 | 600 | 80,000 | 960 | | | | |
+
+#### Storage resources
+- [Introduction to Azure managed disks](../../../virtual-machines/managed-disks-overview.md)
+- [Azure managed disk types](../../../virtual-machines/disks-types.md)
+- [Share an Azure managed disk](../../../virtual-machines/disks-shared.md)
+
+#### Table definitions
+- <sup>1</sup>Some sizes support [bursting](../../disk-bursting.md) to temporarily increase disk performance. Burst speeds can be maintained for up to 30 minutes at a time.
+- <sup>2</sup>Special Storage refers to either [Ultra Disk](../../../virtual-machines/disks-enable-ultra-ssd.md) or [Premium SSD v2](../../../virtual-machines/disks-deploy-premium-v2.md) storage.
+- Storage capacity is shown in units of GiB or 1024^3 bytes. When you compare disks measured in GB (1000^3 bytes) to disks measured in GiB (1024^3) remember that capacity numbers given in GiB may appear smaller. For example, 1023 GiB = 1098.4 GB.
+- Disk throughput is measured in input/output operations per second (IOPS) and MBps where MBps = 10^6 bytes/sec.
+- Data disks can operate in cached or uncached modes. For cached data disk operation, the host cache mode is set to ReadOnly or ReadWrite. For uncached data disk operation, the host cache mode is set to None.
+- To learn how to get the best storage performance for your VMs, see [Virtual machine and disk performance](../../../virtual-machines/disks-performance.md).
++
+### [Network](#tab/sizenetwork)
+
+Network interface info for each size
+
+| Size Name | Max NICs (Qty.) | Max Bandwidth (Mbps) |
+| | | |
+| Standard_B2ts_v2 | 2 | 6250 |
+| Standard_B2ls_v2 | 2 | 6250 |
+| Standard_B2s_v2 | 2 | 6250 |
+| Standard_B4ls_v2 | 2 | 6250 |
+| Standard_B4s_v2 | 2 | 6250 |
+| Standard_B8ls_v2 | 2 | 6250 |
+| Standard_B8s_v2 | 2 | 6250 |
+| Standard_B16ls_v2 | 4 | 6250 |
+| Standard_B16s_v2 | 4 | 6250 |
+| Standard_B32ls_v2 | 4 | 6250 |
+| Standard_B32s_v2 | 4 | 6250 |
+
+#### Networking resources
+- [Virtual networks and virtual machines in Azure](../../../virtual-network/network-overview.md)
+- [Virtual machine network bandwidth](../../../virtual-network/virtual-machine-network-throughput.md)
+
+#### Table definitions
+- Expected network bandwidth is the maximum aggregated bandwidth allocated per VM type across all NICs, for all destinations. For more information, see [Virtual machine network bandwidth](../../../virtual-network/virtual-machine-network-throughput.md)
+- Upper limits aren't guaranteed. Limits offer guidance for selecting the right VM type for the intended application. Actual network performance will depend on several factors including network congestion, application loads, and network settings. For information on optimizing network throughput, see [Optimize network throughput for Azure virtual machines](../../../virtual-network/virtual-network-optimize-network-bandwidth.md).
+- To achieve the expected network performance on Linux or Windows, you may need to select a specific version or optimize your VM. For more information, see [Bandwidth/Throughput testing (NTTTCP)](../../../virtual-network/virtual-network-bandwidth-testing.md).
+
+### [Accelerators](#tab/sizeaccelerators)
+
+Accelerator (GPUs, FPGAs, etc.) info for each size
+
+> [!NOTE]
+> No accelerators are present in this series.
+++
virtual-machines Bv1 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/general-purpose/bv1-series.md
+
+ Title: Bv1 size series
+description: Information on and specifications of the Bv1-series sizes
++++ Last updated : 07/29/2024++++
+# Bv1 sizes series
++
+## Host specifications
+
+## Feature support
+[Premium Storage](../../premium-storage-performance.md): Supported <br>[Premium Storage caching](../../premium-storage-performance.md): Not Supported <br>[Live Migration](../../maintenance-and-updates.md): Supported <br>[Memory Preserving Updates](../../maintenance-and-updates.md): Supported <br>[Generation 2 VMs](../../generation-2.md): Supported <br>[Generation 1 VMs](../../generation-2.md): Supported <br>[Accelerated Networking](../../../virtual-network/create-vm-accelerated-networking-cli.md): Supported <br>[Ephemeral OS Disk](../../ephemeral-os-disks.md): Supported <br>[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not Supported <br>
+
+> [!NOTE]
+> Accelerated Networking is only supported for Standard_B12ms, Standard_B16ms and Standard_B20ms.
+
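For the three sizes called out in the note, accelerated networking is turned on at the NIC rather than chosen through the VM size itself. A minimal, hedged sketch (the NIC and resource group names are placeholders; enabling it on an attached NIC typically requires the VM to be deallocated first):

```bash
# Enable accelerated networking on the NIC attached to a Standard_B12ms,
# Standard_B16ms, or Standard_B20ms VM.
az network nic update \
  --resource-group myResourceGroup \
  --name myVmNic \
  --accelerated-networking true
```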
+## Sizes in series
+
+### [Basics](#tab/sizebasic)
+
+vCPUs (Qty.) and Memory for each size
+
+| Size Name | vCPUs (Qty.) | Memory (GB) |
+| | | |
+| Standard_B1ls | 1 | 0.5 |
+| Standard_B1s | 1 | 1 |
+| Standard_B1ms | 1 | 2 |
+| Standard_B2s | 2 | 4 |
+| Standard_B2ms | 2 | 8 |
+| Standard_B4ms | 4 | 16 |
+| Standard_B8ms | 8 | 32 |
+| Standard_B12ms | 12 | 48 |
+| Standard_B16ms | 16 | 64 |
+| Standard_B20ms | 20 | 80 |
+
+#### VM Basics resources
+- [Check vCPU quotas](../../../virtual-machines/quotas.md)
++
+### [CPU Burst](#tab/sizeburstdata)
+
+Base CPU performance, Credits, and other CPU bursting related info
+
+| Size Name | Base CPU Performance Percentage | Initial Credits (Qty.) | Credits banked/hour (Qty.) | Max Banked Credits (Qty.) |
+| | | | | |
+| Standard_B1ls | 5% | 30 | 3 | 72 |
+| Standard_B1s | 10% | 30 | 6 | 144 |
+| Standard_B1ms | 20% | 30 | 12 | 288 |
+| Standard_B2s | 20% | 60 | 24 | 576 |
+| Standard_B2ms | 30% | 60 | 36 | 864 |
+| Standard_B4ms | 22.5% | 120 | 54 | 1296 |
+| Standard_B8ms | 17% | 240 | 81 | 1994 |
+| Standard_B12ms | 17% | 360 | 121 | 2908 |
+| Standard_B16ms | 17% | 480 | 162 | 3888 |
+| Standard_B20ms | 17% | 600 | 202 | 4867 |
+
+#### CPU Burst resources
+- B-series VMs can burst their disk performance and get up to their bursting max for up to 30 minutes at a time.
+- B1ls is supported only on Linux
+- Learn more about [CPU bursting](../../b-series-cpu-credit-model/b-series-cpu-credit-model.md)
+
+### [Local Storage](#tab/sizestoragelocal)
+
+Local (temp) storage info for each size
+
+| Size Name | Max Temp Storage Disks (Qty.) | Temp Disk Size (GiB) | Temp Disk Random Read (RR)<sup>1</sup> IOPS | Temp Disk Random Read (RR)<sup>1</sup> Speed (MBps) | Temp Disk Random Write (RW)<sup>1</sup> IOPS | Temp Disk Random Write (RW)<sup>1</sup> Speed (MBps) |
+| | | | | | | |
+| Standard_B1ls | 1 | 4 | | | | |
+| Standard_B1s | 1 | 4 | | | | |
+| Standard_B1ms | 1 | 4 | | | | |
+| Standard_B2s | 1 | 8 | | | | |
+| Standard_B2ms | 1 | 16 | | | | |
+| Standard_B4ms | 1 | 32 | | | | |
+| Standard_B8ms | 1 | 64 | | | | |
+| Standard_B12ms | 1 | 96 | | | | |
+| Standard_B16ms | 1 | 128 | | | | |
+| Standard_B20ms | 1 | 160 | | | | |
+
+#### Storage resources
+- [Introduction to Azure managed disks](../../../virtual-machines/managed-disks-overview.md)
+- [Azure managed disk types](../../../virtual-machines/disks-types.md)
+- [Share an Azure managed disk](../../../virtual-machines/disks-shared.md)
+
+#### Table definitions
+- <sup>1</sup>Temp disk speed often differs between RR (Random Read) and RW (Random Write) operations. RR operations are typically faster than RW operations. The RW speed is usually slower than the RR speed on series where only the RR speed value is listed.
+- Storage capacity is shown in units of GiB or 1024^3 bytes. When you compare disks measured in GB (1000^3 bytes) to disks measured in GiB (1024^3) remember that capacity numbers given in GiB may appear smaller. For example, 1023 GiB = 1098.4 GB.
+- Disk throughput is measured in input/output operations per second (IOPS) and MBps where MBps = 10^6 bytes/sec.
+- To learn how to get the best storage performance for your VMs, see [Virtual machine and disk performance](../../../virtual-machines/disks-performance.md).
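
The GiB-to-GB comparison in the definitions above is only a units conversion; the 1023 GiB example can be checked in one line:

```bash
# 1023 GiB expressed in GB (10^9 bytes): 1023 * 1024^3 / 10^9 ≈ 1098.4
python3 -c "print(round(1023 * 1024**3 / 1e9, 1))"   # -> 1098.4
```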
+
+### [Remote Storage](#tab/sizestorageremote)
+
+Remote (uncached) storage info for each size
+
+| Size Name | Max Remote Storage Disks (Qty.) | Uncached Disk IOPS | Uncached Disk Speed (MBps) | Uncached Disk Burst<sup>1</sup> IOPS | Uncached Disk Burst<sup>1</sup> Speed (MBps) | Uncached Special<sup>2</sup> Disk IOPS | Uncached Special<sup>2</sup> Disk Speed (MBps) | Uncached Burst<sup>1</sup> Special<sup>2</sup> Disk IOPS | Uncached Burst<sup>1</sup> Special<sup>2</sup> Disk Speed (MBps) |
+| | | | | | | | | | |
+| Standard_B1ls | 2 | 160 | 10 | 4000 | 100 | | | | |
+| Standard_B1s | 2 | 320 | 10 | 4000 | 100 | | | | |
+| Standard_B1ms | 2 | 640 | 10 | 4000 | 100 | | | | |
+| Standard_B2s | 4 | 1280 | 15 | 4000 | 100 | | | | |
+| Standard_B2ms | 4 | 1920 | 22.5 | 4000 | 100 | | | | |
+| Standard_B4ms | 8 | 2880 | 35 | 8000 | 200 | | | | |
+| Standard_B8ms | 16 | 4320 | 50 | 8000 | 200 | | | | |
+| Standard_B12ms | 16 | 4320 | 50 | 16000 | 400 | | | | |
+| Standard_B16ms | 32 | 4320 | 50 | 16000 | 400 | | | | |
+| Standard_B20ms | 32 | 4320 | 50 | 16000 | 400 | | | | |
+
+#### Storage resources
+- [Introduction to Azure managed disks](../../../virtual-machines/managed-disks-overview.md)
+- [Azure managed disk types](../../../virtual-machines/disks-types.md)
+- [Share an Azure managed disk](../../../virtual-machines/disks-shared.md)
+
+#### Table definitions
+- <sup>1</sup>Some sizes support [bursting](../../disk-bursting.md) to temporarily increase disk performance. Burst speeds can be maintained for up to 30 minutes at a time.
+- <sup>2</sup>Special Storage refers to either [Ultra Disk](../../../virtual-machines/disks-enable-ultra-ssd.md) or [Premium SSD v2](../../../virtual-machines/disks-deploy-premium-v2.md) storage.
+- Storage capacity is shown in units of GiB or 1024^3 bytes. When you compare disks measured in GB (1000^3 bytes) to disks measured in GiB (1024^3) remember that capacity numbers given in GiB may appear smaller. For example, 1023 GiB = 1098.4 GB.
+- Disk throughput is measured in input/output operations per second (IOPS) and MBps where MBps = 10^6 bytes/sec.
+- Data disks can operate in cached or uncached modes. For cached data disk operation, the host cache mode is set to ReadOnly or ReadWrite. For uncached data disk operation, the host cache mode is set to None.
+- To learn how to get the best storage performance for your VMs, see [Virtual machine and disk performance](../../../virtual-machines/disks-performance.md).
++
+### [Network](#tab/sizenetwork)
+
+Network interface info for each size
+
+| Size Name | Max NICs (Qty.) | Max Bandwidth (Mbps) |
+| | | |
+| Standard_B1ls | 2 | |
+| Standard_B1s | 2 | |
+| Standard_B1ms | 2 | |
+| Standard_B2s | 3 | |
+| Standard_B2ms | 3 | |
+| Standard_B4ms | 4 | |
+| Standard_B8ms | 4 | |
+| Standard_B12ms | 6 | |
+| Standard_B16ms | 8 | |
+| Standard_B20ms | 8 | |
+
+#### Networking resources
+- [Virtual networks and virtual machines in Azure](../../../virtual-network/network-overview.md)
+- [Virtual machine network bandwidth](../../../virtual-network/virtual-machine-network-throughput.md)
+
+#### Table definitions
+- Expected network bandwidth is the maximum aggregated bandwidth allocated per VM type across all NICs, for all destinations. For more information, see [Virtual machine network bandwidth](../../../virtual-network/virtual-machine-network-throughput.md)
+- Upper limits aren't guaranteed. Limits offer guidance for selecting the right VM type for the intended application. Actual network performance will depend on several factors including network congestion, application loads, and network settings. For information on optimizing network throughput, see [Optimize network throughput for Azure virtual machines](../../../virtual-network/virtual-network-optimize-network-bandwidth.md).
+- To achieve the expected network performance on Linux or Windows, you may need to select a specific version or optimize your VM. For more information, see [Bandwidth/Throughput testing (NTTTCP)](../../../virtual-network/virtual-network-bandwidth-testing.md).
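
The per-size limits in these tables can also be pulled programmatically, which helps when a cell (such as Max Bandwidth here) is left blank. A hedged sketch using the resource SKUs API through the Azure CLI (the region and size filter are assumptions; capability names come from the SKU metadata):

```bash
# List the advertised capabilities (max NICs, max data disks, memory, and so on)
# for a specific B-series size in one region.
az vm list-skus \
  --location eastus \
  --size Standard_B2ms \
  --all \
  --query "[0].capabilities" \
  --output table
```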
+
+### [Accelerators](#tab/sizeaccelerators)
+
+Accelerator (GPUs, FPGAs, etc.) info for each size
+
+> [!NOTE]
+> No accelerators are present in this series.
+++
virtual-machines D Family https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/general-purpose/d-family.md
Title: D-family size series description: List of sizes in the D family. -+ Previously updated : 04/16/2024 Last updated : 07/30/2024
### Dasv6 and Dadsv6-series
+#### [Dasv6-series](#tab/dasv6)
[!INCLUDE [dasv6-series-summary](./includes/dasv6-series-summary.md)]
-[View the full Dasv6 and Dadsv6-series page](../../dasv6-dadsv6-series.md).
+[View the full Dasv6-series page](./dasv6-series.md).
+#### [Dadsv6-series](#tab/dadsv6)
+[View the full Dadsv6-series page](./dadsv6-series.md).
+++
### Dalsv6 and Daldsv6-series
+#### [Dalsv6-series](#tab/dalsv6)
+
+[View the full Dalsv6-series page](./dalsv6-series.md).
+
-[View the full Dalsv6 and Daldsv6-series page](../../dalsv6-daldsv6-series.md).
+#### [Daldsv6-series](#tab/daldsv6)
+[View the full Daldsv6-series page](./daldsv6-series.md).
+
### Dv5 and Dsv5-series
#### [Dv5-series](#tab/dv5)
[!INCLUDE [dv5-series-summary](./includes/dv5-series-summary.md)]
#### [Dasv5-series](#tab/dasv5)
[!INCLUDE [dasv5-series-summary](./includes/dasv5-series-summary.md)]
-[View the full Dasv5-series page](../../dasv5-dadsv5-series.md).
+[View the full Dasv5-series page](./dasv5-series.md).
#### [Dadsv5-series](#tab/dadsv5)
-[View the full Dasv5 and Dadsv5-series page](../../dasv5-dadsv5-series.md).
+[View the full Dadsv5-series page](./dadsv5-series.md).
### Dpsv5 and Dpdsv5-series
### Dav4 and Dasv4-series
+#### [Dav4-series](#tab/dav4)
+
+[View the full Dav4-series page](./dav4-series.md).
-[View the full Dav4 and Dasv4-series page](../../dav4-dasv4-series.md).
+#### [Dasv4-series](#tab/dasv4)
+[View the full Dasv4-series page](./dasv4-series.md).
++
### Ddv4 and Ddsv4-series
#### [Ddv4-series](#tab/ddv4)
[!INCLUDE [ddv4-series-summary](./includes/ddv4-series-summary.md)]
virtual-machines Dadsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/general-purpose/dadsv5-series.md
+
+ Title: Dadsv5 size series
+description: Information on and specifications of the Dadsv5-series sizes
++++ Last updated : 07/29/2024++++
+# Dadsv5 sizes series
++
+## Host specifications
+
+## Feature support
+[Premium Storage](../../premium-storage-performance.md): Supported <br>[Premium Storage caching](../../premium-storage-performance.md): Supported <br>[Live Migration](../../maintenance-and-updates.md): Supported <br>[Memory Preserving Updates](../../maintenance-and-updates.md): Supported <br>[Generation 2 VMs](../../generation-2.md): Supported <br>[Generation 1 VMs](../../generation-2.md): Supported <br>[Accelerated Networking](../../../virtual-network/create-vm-accelerated-networking-cli.md): Supported <br>[Ephemeral OS Disk](../../ephemeral-os-disks.md): Supported <br>[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
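
Because this series pairs a local temp disk with ephemeral OS disk support, the OS disk can be placed on local storage at creation time. A hedged Azure CLI sketch (the image alias, names, and size are illustrative assumptions; ephemeral OS disks require ReadOnly caching):

```bash
# Create a Dadsv5 VM whose OS disk lives on the local (ephemeral) storage.
az vm create \
  --resource-group myResourceGroup \
  --name myEphemeralVm \
  --size Standard_D8ads_v5 \
  --image Ubuntu2204 \
  --ephemeral-os-disk true \
  --os-disk-caching ReadOnly
```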
+
+## Sizes in series
+
+### [Basics](#tab/sizebasic)
+
+vCPUs (Qty.) and Memory for each size
+
+| Size Name | vCPUs (Qty.) | Memory (GB) |
+| | | |
+| Standard_D2ads_v5 | 2 | 8 |
+| Standard_D4ads_v5 | 4 | 16 |
+| Standard_D8ads_v5 | 8 | 32 |
+| Standard_D16ads_v5 | 16 | 64 |
+| Standard_D32ads_v5 | 32 | 128 |
+| Standard_D48ads_v5 | 48 | 192 |
+| Standard_D64ads_v5 | 64 | 256 |
+| Standard_D96ads_v5 | 96 | 384 |
+
+#### VM Basics resources
+- [Check vCPU quotas](../../../virtual-machines/quotas.md)
+
+### [Local Storage](#tab/sizestoragelocal)
+
+Local (temp) storage info for each size
+
+| Size Name | Max Temp Storage Disks (Qty.) | Temp Disk Size (GiB) | Temp Disk Random Read (RR)<sup>1</sup> IOPS | Temp Disk Random Read (RR)<sup>1</sup> Speed (MBps) | Temp Disk Random Write (RW)<sup>1</sup> IOPS | Temp Disk Random Write (RW)<sup>1</sup> Speed (MBps) |
+| | | | | | | |
+| Standard_D2ads_v5 | 1 | 75 | 9000 | 125 | | |
+| Standard_D4ads_v5 | 1 | 150 | 19000 | 250 | | |
+| Standard_D8ads_v5 | 1 | 300 | 38000 | 500 | | |
+| Standard_D16ads_v5 | 1 | 600 | 75000 | 1000 | | |
+| Standard_D32ads_v5 | 1 | 1200 | 150000 | 2000 | | |
+| Standard_D48ads_v5 | 1 | 1800 | 225000 | 3000 | | |
+| Standard_D64ads_v5 | 1 | 2400 | 300000 | 4000 | | |
+| Standard_D96ads_v5 | 1 | 3600 | 450000 | 4000 | | |
+
+#### Storage resources
+- [Introduction to Azure managed disks](../../../virtual-machines/managed-disks-overview.md)
+- [Azure managed disk types](../../../virtual-machines/disks-types.md)
+- [Share an Azure managed disk](../../../virtual-machines/disks-shared.md)
+
+#### Table definitions
+- <sup>1</sup>Temp disk speed often differs between RR (Random Read) and RW (Random Write) operations. RR operations are typically faster than RW operations. The RW speed is usually slower than the RR speed on series where only the RR speed value is listed.
+- Storage capacity is shown in units of GiB or 1024^3 bytes. When you compare disks measured in GB (1000^3 bytes) to disks measured in GiB (1024^3) remember that capacity numbers given in GiB may appear smaller. For example, 1023 GiB = 1098.4 GB.
+- Disk throughput is measured in input/output operations per second (IOPS) and MBps where MBps = 10^6 bytes/sec.
+- To learn how to get the best storage performance for your VMs, see [Virtual machine and disk performance](../../../virtual-machines/disks-performance.md).
+
+### [Remote Storage](#tab/sizestorageremote)
+
+Remote (uncached) storage info for each size
+
+| Size Name | Max Remote Storage Disks (Qty.) | Uncached Disk IOPS | Uncached Disk Speed (MBps) | Uncached Disk Burst<sup>1</sup> IOPS | Uncached Disk Burst<sup>1</sup> Speed (MBps) | Uncached Special<sup>2</sup> Disk IOPS | Uncached Special<sup>2</sup> Disk Speed (MBps) | Uncached Burst<sup>1</sup> Special<sup>2</sup> Disk IOPS | Uncached Burst<sup>1</sup> Special<sup>2</sup> Disk Speed (MBps) |
+| | | | | | | | | | |
+| Standard_D2ads_v5 | 4 | 3750 | 82 | 10000 | 600 | | | | |
+| Standard_D4ads_v5 | 8 | 6400 | 144 | 20000 | 600 | | | | |
+| Standard_D8ads_v5 | 16 | 12800 | 200 | 20000 | 600 | | | | |
+| Standard_D16ads_v5 | 32 | 25600 | 384 | 40000 | 800 | | | | |
+| Standard_D32ads_v5 | 32 | 51200 | 768 | 80000 | 1000 | | | | |
+| Standard_D48ads_v5 | 32 | 76800 | 1152 | 80000 | 2000 | | | | |
+| Standard_D64ads_v5 | 32 | 80000 | 1200 | 80000 | 2000 | | | | |
+| Standard_D96ads_v5 | 32 | 80000 | 1600 | 80000 | 2000 | | | | |
+
+#### Storage resources
+- [Introduction to Azure managed disks](../../../virtual-machines/managed-disks-overview.md)
+- [Azure managed disk types](../../../virtual-machines/disks-types.md)
+- [Share an Azure managed disk](../../../virtual-machines/disks-shared.md)
+
+#### Table definitions
+- <sup>1</sup>Some sizes support [bursting](../../disk-bursting.md) to temporarily increase disk performance. Burst speeds can be maintained for up to 30 minutes at a time.
+- <sup>2</sup>Special Storage refers to either [Ultra Disk](../../../virtual-machines/disks-enable-ultra-ssd.md) or [Premium SSD v2](../../../virtual-machines/disks-deploy-premium-v2.md) storage.
+- Storage capacity is shown in units of GiB or 1024^3 bytes. When you compare disks measured in GB (1000^3 bytes) to disks measured in GiB (1024^3) remember that capacity numbers given in GiB may appear smaller. For example, 1023 GiB = 1098.4 GB.
+- Disk throughput is measured in input/output operations per second (IOPS) and MBps where MBps = 10^6 bytes/sec.
+- Data disks can operate in cached or uncached modes. For cached data disk operation, the host cache mode is set to ReadOnly or ReadWrite. For uncached data disk operation, the host cache mode is set to None.
+- To learn how to get the best storage performance for your VMs, see [Virtual machine and disk performance](../../../virtual-machines/disks-performance.md).
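
One way to read the uncached limits above: both the VM cap and the sum of the attached disks' caps apply, and the smaller of the two wins. A back-of-the-envelope sketch using the Standard_D8ads_v5 row and Premium SSD P30 disks (the disk SKU and count are assumptions):

```bash
vm_uncached_iops=12800   # Standard_D8ads_v5 limit from the table above
disk_iops=5000           # one P30 premium SSD
disk_count=4

total_disk_iops=$(( disk_iops * disk_count ))                                     # 20000
effective=$(( total_disk_iops < vm_uncached_iops ? total_disk_iops : vm_uncached_iops ))
echo "Effective uncached IOPS: ${effective}"   # 12800 -> the VM limit is the bottleneck
```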
++
+### [Network](#tab/sizenetwork)
+
+Network interface info for each size
+
+| Size Name | Max NICs (Qty.) | Max Bandwidth (Mbps) |
+| | | |
+| Standard_D2ads_v5 | 2 | 12500 |
+| Standard_D4ads_v5 | 2 | 12500 |
+| Standard_D8ads_v5 | 4 | 12500 |
+| Standard_D16ads_v5 | 8 | 12500 |
+| Standard_D32ads_v5 | 8 | 16000 |
+| Standard_D48ads_v5 | 8 | 24000 |
+| Standard_D64ads_v5 | 8 | 32000 |
+| Standard_D96ads_v5 | 8 | 40000 |
+
+#### Networking resources
+- [Virtual networks and virtual machines in Azure](../../../virtual-network/network-overview.md)
+- [Virtual machine network bandwidth](../../../virtual-network/virtual-machine-network-throughput.md)
+
+#### Table definitions
+- Expected network bandwidth is the maximum aggregated bandwidth allocated per VM type across all NICs, for all destinations. For more information, see [Virtual machine network bandwidth](../../../virtual-network/virtual-machine-network-throughput.md)
+- Upper limits aren't guaranteed. Limits offer guidance for selecting the right VM type for the intended application. Actual network performance will depend on several factors including network congestion, application loads, and network settings. For information on optimizing network throughput, see [Optimize network throughput for Azure virtual machines](../../../virtual-network/virtual-network-optimize-network-bandwidth.md).
+- To achieve the expected network performance on Linux or Windows, you may need to select a specific version or optimize your VM. For more information, see [Bandwidth/Throughput testing (NTTTCP)](../../../virtual-network/virtual-network-bandwidth-testing.md).
+
+### [Accelerators](#tab/sizeaccelerators)
+
+Accelerator (GPUs, FPGAs, etc.) info for each size
+
+> [!NOTE]
+> No accelerators are present in this series.
+++
virtual-machines Dadsv6 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/general-purpose/dadsv6-series.md
+
+ Title: Dadsv6 size series
+description: Information on and specifications of the Dadsv6-series sizes
++++ Last updated : 07/29/2024++++
+# Dadsv6 sizes series
+
+>[!NOTE]
+>This VM series is currently in **Preview**. See the [Preview Terms Of Use | Microsoft Azure](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
++
+## Host specifications
+
+## Feature support
+[Premium Storage](../../premium-storage-performance.md): Supported <br>[Premium Storage caching](../../premium-storage-performance.md): Supported <br>[Live Migration](../../maintenance-and-updates.md): Not Supported <br>[Memory Preserving Updates](../../maintenance-and-updates.md): Supported <br>[Generation 2 VMs](../../generation-2.md): Supported <br>[Generation 1 VMs](../../generation-2.md): Not Supported <br>[Accelerated Networking](../../../virtual-network/create-vm-accelerated-networking-cli.md): Supported <br>[Ephemeral OS Disk](../../ephemeral-os-disks.md): Not Supported <br>[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
+
+## Sizes in series
+
+### [Basics](#tab/sizebasic)
+
+vCPUs (Qty.) and Memory for each size
+
+| Size Name | vCPUs (Qty.) | Memory (GB) |
+| | | |
+| Standard_D2ads_v6 | 2 | 8 |
+| Standard_D4ads_v6 | 4 | 16 |
+| Standard_D8ads_v6 | 8 | 32 |
+| Standard_D16ads_v6 | 16 | 64 |
+| Standard_D32ads_v6 | 32 | 128 |
+| Standard_D48ads_v6 | 48 | 192 |
+| Standard_D64ads_v6 | 64 | 256 |
+| Standard_D96ads_v6 | 96 | 384 |
+
+#### VM Basics resources
+- [Check vCPU quotas](../../../virtual-machines/quotas.md)
+
+### [Local Storage](#tab/sizestoragelocal)
+
+Local (temp) storage info for each size
+
+| Size Name | Max Temp Storage Disks (Qty.) | Temp Disk Size (GiB) | Temp Disk Random Read (RR)<sup>1</sup> IOPS | Temp Disk Random Read (RR)<sup>1</sup> Speed (MBps) | Temp Disk Random Write (RW)<sup>1</sup> IOPS | Temp Disk Random Write (RW)<sup>1</sup> Speed (MBps) |
+| | | | | | | |
+| Standard_D2ads_v6 | 1 | 110 | 37500 | 180 | | |
+| Standard_D4ads_v6 | 1 | 220 | 75000 | 360 | | |
+| Standard_D8ads_v6 | 1 | 440 | 150000 | 720 | | |
+| Standard_D16ads_v6 | 2 | 440 | 300000 | 1440 | | |
+| Standard_D32ads_v6 | 4 | 440 | 600000 | 2880 | | |
+| Standard_D48ads_v6 | 6 | 440 | 900000 | 4320 | | |
+| Standard_D64ads_v6 | 4 | 880 | 1200000 | 5760 | | |
+| Standard_D96ads_v6 | 6 | 880 | 1800000 | 8640 | | |
+
+#### Storage resources
+- [Introduction to Azure managed disks](../../../virtual-machines/managed-disks-overview.md)
+- [Azure managed disk types](../../../virtual-machines/disks-types.md)
+- [Share an Azure managed disk](../../../virtual-machines/disks-shared.md)
+
+#### Table definitions
+- <sup>1</sup>Temp disk speed often differs between RR (Random Read) and RW (Random Write) operations. RR operations are typically faster than RW operations. The RW speed is usually slower than the RR speed on series where only the RR speed value is listed.
+- Storage capacity is shown in units of GiB or 1024^3 bytes. When you compare disks measured in GB (1000^3 bytes) to disks measured in GiB (1024^3) remember that capacity numbers given in GiB may appear smaller. For example, 1023 GiB = 1098.4 GB.
+- Disk throughput is measured in input/output operations per second (IOPS) and MBps where MBps = 10^6 bytes/sec.
+- To learn how to get the best storage performance for your VMs, see [Virtual machine and disk performance](../../../virtual-machines/disks-performance.md).
+
+### [Remote Storage](#tab/sizestorageremote)
+
+Remote (uncached) storage info for each size
+
+| Size Name | Max Remote Storage Disks (Qty.) | Uncached Disk IOPS | Uncached Disk Speed (MBps) | Uncached Disk Burst<sup>1</sup> IOPS | Uncached Disk Burst<sup>1</sup> Speed (MBps) | Uncached Special<sup>2</sup> Disk IOPS | Uncached Special<sup>2</sup> Disk Speed (MBps) | Uncached Burst<sup>1</sup> Special<sup>2</sup> Disk IOPS | Uncached Burst<sup>1</sup> Special<sup>2</sup> Disk Speed (MBps) |
+| | | | | | | | | | |
+| Standard_D2ads_v6 | 4 | 4000 | 90 | 20000 | 1250 | 4000 | 90 | 20000 | 1250 |
+| Standard_D4ads_v6 | 8 | 7600 | 180 | 20000 | 1250 | 7600 | 180 | 20000 | 1250 |
+| Standard_D8ads_v6 | 16 | 15200 | 360 | 20000 | 1250 | 15200 | 360 | 20000 | 1250 |
+| Standard_D16ads_v6 | 32 | 30400 | 720 | 40000 | 1250 | 30400 | 720 | 40000 | 1250 |
+| Standard_D32ads_v6 | 32 | 57600 | 1440 | 80000 | 1700 | 57600 | 1440 | 80000 | 1700 |
+| Standard_D48ads_v6 | 32 | 86400 | 2160 | 90000 | 2550 | 86400 | 2160 | 90000 | 2550 |
+| Standard_D64ads_v6 | 32 | 115200 | 2880 | 120000 | 3400 | 115200 | 2880 | 120000 | 3400 |
+| Standard_D96ads_v6 | 32 | 175000 | 4320 | 175000 | 5090 | 175000 | 4320 | 175000 | 5090 |
+
+#### Storage resources
+- [Introduction to Azure managed disks](../../../virtual-machines/managed-disks-overview.md)
+- [Azure managed disk types](../../../virtual-machines/disks-types.md)
+- [Share an Azure managed disk](../../../virtual-machines/disks-shared.md)
+
+#### Table definitions
+- <sup>1</sup>Some sizes support [bursting](../../disk-bursting.md) to temporarily increase disk performance. Burst speeds can be maintained for up to 30 minutes at a time.
+- <sup>2</sup>Special Storage refers to either [Ultra Disk](../../../virtual-machines/disks-enable-ultra-ssd.md) or [Premium SSD v2](../../../virtual-machines/disks-deploy-premium-v2.md) storage.
+- Storage capacity is shown in units of GiB or 1024^3 bytes. When you compare disks measured in GB (1000^3 bytes) to disks measured in GiB (1024^3) remember that capacity numbers given in GiB may appear smaller. For example, 1023 GiB = 1098.4 GB.
+- Disk throughput is measured in input/output operations per second (IOPS) and MBps where MBps = 10^6 bytes/sec.
+- Data disks can operate in cached or uncached modes. For cached data disk operation, the host cache mode is set to ReadOnly or ReadWrite. For uncached data disk operation, the host cache mode is set to None.
+- To learn how to get the best storage performance for your VMs, see [Virtual machine and disk performance](../../../virtual-machines/disks-performance.md).
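
The Special Storage columns are populated for this series, so the Premium SSD v2 limits above apply when such disks are attached. A hedged sketch of provisioning one (names, zone, and the performance targets are placeholder assumptions):

```bash
# Create a Premium SSD v2 disk with explicit performance targets, then attach it.
az disk create \
  --resource-group myResourceGroup \
  --name pssdv2-data-01 \
  --location eastus \
  --zone 1 \
  --sku PremiumV2_LRS \
  --size-gb 512 \
  --disk-iops-read-write 16000 \
  --disk-mbps-read-write 600

az vm disk attach \
  --resource-group myResourceGroup \
  --vm-name myVM \
  --name pssdv2-data-01
```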
++
+### [Network](#tab/sizenetwork)
+
+Network interface info for each size
+
+| Size Name | Max NICs (Qty.) | Max Bandwidth (Mbps) |
+| | | |
+| Standard_D2ads_v6 | 2 | 12500 |
+| Standard_D4ads_v6 | 2 | 12500 |
+| Standard_D8ads_v6 | 4 | 12500 |
+| Standard_D16ads_v6 | 8 | 16000 |
+| Standard_D32ads_v6 | 8 | 20000 |
+| Standard_D48ads_v6 | 8 | 28000 |
+| Standard_D64ads_v6 | 8 | 36000 |
+| Standard_D96ads_v6 | 8 | 40000 |
+
+#### Networking resources
+- [Virtual networks and virtual machines in Azure](../../../virtual-network/network-overview.md)
+- [Virtual machine network bandwidth](../../../virtual-network/virtual-machine-network-throughput.md)
+
+#### Table definitions
+- Expected network bandwidth is the maximum aggregated bandwidth allocated per VM type across all NICs, for all destinations. For more information, see [Virtual machine network bandwidth](../../../virtual-network/virtual-machine-network-throughput.md)
+- Upper limits aren't guaranteed. Limits offer guidance for selecting the right VM type for the intended application. Actual network performance will depend on several factors including network congestion, application loads, and network settings. For information on optimizing network throughput, see [Optimize network throughput for Azure virtual machines](../../../virtual-network/virtual-network-optimize-network-bandwidth.md).
+- To achieve the expected network performance on Linux or Windows, you may need to select a specific version or optimize your VM. For more information, see [Bandwidth/Throughput testing (NTTTCP)](../../../virtual-network/virtual-network-bandwidth-testing.md).
+
+### [Accelerators](#tab/sizeaccelerators)
+
+Accelerator (GPUs, FPGAs, etc.) info for each size
+
+> [!NOTE]
+> No accelerators are present in this series.
+++
virtual-machines Daldsv6 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/general-purpose/daldsv6-series.md
+
+ Title: Daldsv6 size series
+description: Information on and specifications of the Daldsv6-series sizes
++++ Last updated : 07/29/2024++++
+# Daldsv6 sizes series
+
+>[!NOTE]
+>This VM series is currently in **Preview**. See the [Preview Terms Of Use | Microsoft Azure](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
++
+## Host specifications
+
+## Feature support
+[Premium Storage](../../premium-storage-performance.md): Supported <br>[Premium Storage caching](../../premium-storage-performance.md): Supported <br>[Live Migration](../../maintenance-and-updates.md): Not Supported <br>[Memory Preserving Updates](../../maintenance-and-updates.md): Supported <br>[Generation 2 VMs](../../generation-2.md): Supported <br>[Generation 1 VMs](../../generation-2.md): Not Supported <br>[Accelerated Networking](../../../virtual-network/create-vm-accelerated-networking-cli.md): Supported <br>[Ephemeral OS Disk](../../ephemeral-os-disks.md): Not Supported <br>[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
+
+## Sizes in series
+
+### [Basics](#tab/sizebasic)
+
+vCPUs (Qty.) and Memory for each size
+
+| Size Name | vCPUs (Qty.) | Memory (GB) |
+| | | |
+| Standard_D2alds_v6 | 2 | 4 |
+| Standard_D4alds_v6 | 4 | 8 |
+| Standard_D8alds_v6 | 8 | 16 |
+| Standard_D16alds_v6 | 16 | 32 |
+| Standard_D32alds_v6 | 32 | 64 |
+| Standard_D48alds_v6 | 48 | 96 |
+| Standard_D64alds_v6 | 64 | 128 |
+| Standard_D96alds_v6 | 96 | 192 |
+
+#### VM Basics resources
+- [Check vCPU quotas](../../../virtual-machines/quotas.md)
+
+### [Local Storage](#tab/sizestoragelocal)
+
+Local (temp) storage info for each size
+
+| Size Name | Max Temp Storage Disks (Qty.) | Temp Disk Size (GiB) | Temp Disk Random Read (RR)<sup>1</sup> IOPS | Temp Disk Random Read (RR)<sup>1</sup> Speed (MBps) | Temp Disk Random Write (RW)<sup>1</sup> IOPS | Temp Disk Random Write (RW)<sup>1</sup> Speed (MBps) |
+| | | | | | | |
+| Standard_D2alds_v6 | 1 | 110 | 37500 | 180 | | |
+| Standard_D4alds_v6 | 1 | 220 | 75000 | 360 | | |
+| Standard_D8alds_v6 | 1 | 440 | 150000 | 720 | | |
+| Standard_D16alds_v6 | 2 | 440 | 300000 | 1440 | | |
+| Standard_D32alds_v6 | 4 | 440 | 600000 | 2880 | | |
+| Standard_D48alds_v6 | 6 | 440 | 900000 | 4320 | | |
+| Standard_D64alds_v6 | 4 | 880 | 1200000 | 5760 | | |
+| Standard_D96alds_v6 | 6 | 880 | 1800000 | 8640 | | |
+
+#### Storage resources
+- [Introduction to Azure managed disks](../../../virtual-machines/managed-disks-overview.md)
+- [Azure managed disk types](../../../virtual-machines/disks-types.md)
+- [Share an Azure managed disk](../../../virtual-machines/disks-shared.md)
+
+#### Table definitions
+- <sup>1</sup>Temp disk speed often differs between RR (Random Read) and RW (Random Write) operations. RR operations are typically faster than RW operations. The RW speed is usually slower than the RR speed on series where only the RR speed value is listed.
+- Storage capacity is shown in units of GiB or 1024^3 bytes. When you compare disks measured in GB (1000^3 bytes) to disks measured in GiB (1024^3) remember that capacity numbers given in GiB may appear smaller. For example, 1023 GiB = 1098.4 GB.
+- Disk throughput is measured in input/output operations per second (IOPS) and MBps where MBps = 10^6 bytes/sec.
+- To learn how to get the best storage performance for your VMs, see [Virtual machine and disk performance](../../../virtual-machines/disks-performance.md).
+
+### [Remote Storage](#tab/sizestorageremote)
+
+Remote (uncached) storage info for each size
+
+| Size Name | Max Remote Storage Disks (Qty.) | Uncached Disk IOPS | Uncached Disk Speed (MBps) | Uncached Disk Burst<sup>1</sup> IOPS | Uncached Disk Burst<sup>1</sup> Speed (MBps) | Uncached Special<sup>2</sup> Disk IOPS | Uncached Special<sup>2</sup> Disk Speed (MBps) | Uncached Burst<sup>1</sup> Special<sup>2</sup> Disk IOPS | Uncached Burst<sup>1</sup> Special<sup>2</sup> Disk Speed (MBps) |
+| | | | | | | | | | |
+| Standard_D2alds_v6 | 4 | 4000 | 90 | 20000 | 1250 | 4000 | 90 | 20000 | 1250 |
+| Standard_D4alds_v6 | 8 | 7600 | 180 | 20000 | 1250 | 7600 | 180 | 20000 | 1250 |
+| Standard_D8alds_v6 | 16 | 15200 | 360 | 20000 | 1250 | 15200 | 360 | 20000 | 1250 |
+| Standard_D16alds_v6 | 32 | 30400 | 720 | 40000 | 1250 | 30400 | 720 | 40000 | 1250 |
+| Standard_D32alds_v6 | 32 | 57600 | 1440 | 80000 | 1700 | 57600 | 1440 | 80000 | 1700 |
+| Standard_D48alds_v6 | 32 | 86400 | 2160 | 90000 | 2550 | 86400 | 2160 | 90000 | 2550 |
+| Standard_D64alds_v6 | 32 | 115200 | 2880 | 120000 | 3400 | 115200 | 2880 | 120000 | 3400 |
+| Standard_D96alds_v6 | 32 | 175000 | 4320 | 175000 | 5090 | 175000 | 4320 | 175000 | 5090 |
+
+#### Storage resources
+- [Introduction to Azure managed disks](../../../virtual-machines/managed-disks-overview.md)
+- [Azure managed disk types](../../../virtual-machines/disks-types.md)
+- [Share an Azure managed disk](../../../virtual-machines/disks-shared.md)
+
+#### Table definitions
+- <sup>1</sup>Some sizes support [bursting](../../disk-bursting.md) to temporarily increase disk performance. Burst speeds can be maintained for up to 30 minutes at a time.
+- <sup>2</sup>Special Storage refers to either [Ultra Disk](../../../virtual-machines/disks-enable-ultra-ssd.md) or [Premium SSD v2](../../../virtual-machines/disks-deploy-premium-v2.md) storage.
+- Storage capacity is shown in units of GiB or 1024^3 bytes. When you compare disks measured in GB (1000^3 bytes) to disks measured in GiB (1024^3) remember that capacity numbers given in GiB may appear smaller. For example, 1023 GiB = 1098.4 GB.
+- Disk throughput is measured in input/output operations per second (IOPS) and MBps where MBps = 10^6 bytes/sec.
+- Data disks can operate in cached or uncached modes. For cached data disk operation, the host cache mode is set to ReadOnly or ReadWrite. For uncached data disk operation, the host cache mode is set to None.
+- To learn how to get the best storage performance for your VMs, see [Virtual machine and disk performance](../../../virtual-machines/disks-performance.md).
++
+### [Network](#tab/sizenetwork)
+
+Network interface info for each size
+
+| Size Name | Max NICs (Qty.) | Max Bandwidth (Mbps) |
+| | | |
+| Standard_D2alds_v6 | 2 | 12500 |
+| Standard_D4alds_v6 | 2 | 12500 |
+| Standard_D8alds_v6 | 4 | 12500 |
+| Standard_D16alds_v6 | 8 | 16000 |
+| Standard_D32alds_v6 | 8 | 20000 |
+| Standard_D48alds_v6 | 8 | 28000 |
+| Standard_D64alds_v6 | 8 | 36000 |
+| Standard_D96alds_v6 | 8 | 40000 |
+
+#### Networking resources
+- [Virtual networks and virtual machines in Azure](../../../virtual-network/network-overview.md)
+- [Virtual machine network bandwidth](../../../virtual-network/virtual-machine-network-throughput.md)
+
+#### Table definitions
+- Expected network bandwidth is the maximum aggregated bandwidth allocated per VM type across all NICs, for all destinations. For more information, see [Virtual machine network bandwidth](../../../virtual-network/virtual-machine-network-throughput.md)
+- Upper limits aren't guaranteed. Limits offer guidance for selecting the right VM type for the intended application. Actual network performance will depend on several factors including network congestion, application loads, and network settings. For information on optimizing network throughput, see [Optimize network throughput for Azure virtual machines](../../../virtual-network/virtual-network-optimize-network-bandwidth.md).
+- To achieve the expected network performance on Linux or Windows, you may need to select a specific version or optimize your VM. For more information, see [Bandwidth/Throughput testing (NTTTCP)](../../../virtual-network/virtual-network-bandwidth-testing.md).
+
+### [Accelerators](#tab/sizeaccelerators)
+
+Accelerator (GPUs, FPGAs, etc.) info for each size
+
+> [!NOTE]
+> No accelerators are present in this series.
+++
virtual-machines Dalsv6 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/general-purpose/dalsv6-series.md
+
+ Title: Dalsv6 size series
+description: Information on and specifications of the Dalsv6-series sizes
++++ Last updated : 07/29/2024++++
+# Dalsv6 sizes series
+
+>[!NOTE]
+>This VM series is currently in **Preview**. See the [Preview Terms Of Use | Microsoft Azure](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
++
+## Host specifications
+
+## Feature support
+[Premium Storage](../../premium-storage-performance.md): Supported <br>[Premium Storage caching](../../premium-storage-performance.md): Supported <br>[Live Migration](../../maintenance-and-updates.md): Not Supported <br>[Memory Preserving Updates](../../maintenance-and-updates.md): Supported <br>[Generation 2 VMs](../../generation-2.md): Supported <br>[Generation 1 VMs](../../generation-2.md): Not Supported <br>[Accelerated Networking](../../../virtual-network/create-vm-accelerated-networking-cli.md): Supported <br>[Ephemeral OS Disk](../../ephemeral-os-disks.md): Not Supported <br>[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
+
+## Sizes in series
+
+### [Basics](#tab/sizebasic)
+
+vCPUs (Qty.) and Memory for each size
+
+| Size Name | vCPUs (Qty.) | Memory (GB) |
+| | | |
+| Standard_D2als_v6 | 2 | 4 |
+| Standard_D4als_v6 | 4 | 8 |
+| Standard_D8als_v6 | 8 | 16 |
+| Standard_D16als_v6 | 16 | 32 |
+| Standard_D32als_v6 | 32 | 64 |
+| Standard_D48als_v6 | 48 | 96 |
+| Standard_D64als_v6 | 64 | 128 |
+| Standard_D96als_v6 | 96 | 192 |
+
+#### VM Basics resources
+- [Check vCPU quotas](../../../virtual-machines/quotas.md)
+
+### [Local Storage](#tab/sizestoragelocal)
+
+Local (temp) storage info for each size
+
+> [!NOTE]
+> No local storage present in this series.
+>
+> For frequently asked questions, see [Azure VM sizes with no local temp disk](../../azure-vms-no-temp-disk.yml).
+++
+### [Remote Storage](#tab/sizestorageremote)
+
+Remote (uncached) storage info for each size
+
+| Size Name | Max Remote Storage Disks (Qty.) | Uncached Disk IOPS | Uncached Disk Speed (MBps) | Uncached Disk Burst<sup>1</sup> IOPS | Uncached Disk Burst<sup>1</sup> Speed (MBps) | Uncached Special<sup>2</sup> Disk IOPS | Uncached Special<sup>2</sup> Disk Speed (MBps) | Uncached Burst<sup>1</sup> Special<sup>2</sup> Disk IOPS | Uncached Burst<sup>1</sup> Special<sup>2</sup> Disk Speed (MBps) |
+| | | | | | | | | | |
+| Standard_D2als_v6 | 4 | 4000 | 90 | 20000 | 1250 | 4000 | 90 | 20000 | 1250 |
+| Standard_D4als_v6 | 8 | 7600 | 180 | 20000 | 1250 | 7600 | 180 | 20000 | 1250 |
+| Standard_D8als_v6 | 16 | 15200 | 360 | 20000 | 1250 | 15200 | 360 | 20000 | 1250 |
+| Standard_D16als_v6 | 32 | 30400 | 720 | 40000 | 1250 | 30400 | 720 | 40000 | 1250 |
+| Standard_D32als_v6 | 32 | 57600 | 1440 | 80000 | 1700 | 57600 | 1440 | 80000 | 1700 |
+| Standard_D48als_v6 | 32 | 86400 | 2160 | 90000 | 2550 | 86400 | 2160 | 90000 | 2550 |
+| Standard_D64als_v6 | 32 | 115200 | 2880 | 120000 | 3400 | 115200 | 2880 | 120000 | 3400 |
+| Standard_D96als_v6 | 32 | 175000 | 4320 | 175000 | 5090 | 175000 | 4320 | 175000 | 5090 |
+
+#### Storage resources
+- [Introduction to Azure managed disks](../../../virtual-machines/managed-disks-overview.md)
+- [Azure managed disk types](../../../virtual-machines/disks-types.md)
+- [Share an Azure managed disk](../../../virtual-machines/disks-shared.md)
+
+#### Table definitions
+- <sup>1</sup>Some sizes support [bursting](../../disk-bursting.md) to temporarily increase disk performance. Burst speeds can be maintained for up to 30 minutes at a time.
+- <sup>2</sup>Special Storage refers to either [Ultra Disk](../../../virtual-machines/disks-enable-ultra-ssd.md) or [Premium SSD v2](../../../virtual-machines/disks-deploy-premium-v2.md) storage.
+- Storage capacity is shown in units of GiB or 1024^3 bytes. When you compare disks measured in GB (1000^3 bytes) to disks measured in GiB (1024^3) remember that capacity numbers given in GiB may appear smaller. For example, 1023 GiB = 1098.4 GB.
+- Disk throughput is measured in input/output operations per second (IOPS) and MBps where MBps = 10^6 bytes/sec.
+- Data disks can operate in cached or uncached modes. For cached data disk operation, the host cache mode is set to ReadOnly or ReadWrite. For uncached data disk operation, the host cache mode is set to None.
+- To learn how to get the best storage performance for your VMs, see [Virtual machine and disk performance](../../../virtual-machines/disks-performance.md).
++
+### [Network](#tab/sizenetwork)
+
+Network interface info for each size
+
+| Size Name | Max NICs (Qty.) | Max Bandwidth (Mbps) |
+| | | |
+| Standard_D2als_v6 | 2 | 12500 |
+| Standard_D4als_v6 | 2 | 12500 |
+| Standard_D8als_v6 | 4 | 12500 |
+| Standard_D16als_v6 | 8 | 16000 |
+| Standard_D32als_v6 | 8 | 20000 |
+| Standard_D48als_v6 | 8 | 28000 |
+| Standard_D64als_v6 | 8 | 36000 |
+| Standard_D96als_v6 | 8 | 40000 |
+
+#### Networking resources
+- [Virtual networks and virtual machines in Azure](../../../virtual-network/network-overview.md)
+- [Virtual machine network bandwidth](../../../virtual-network/virtual-machine-network-throughput.md)
+
+#### Table definitions
+- Expected network bandwidth is the maximum aggregated bandwidth allocated per VM type across all NICs, for all destinations. For more information, see [Virtual machine network bandwidth](../../../virtual-network/virtual-machine-network-throughput.md)
+- Upper limits aren't guaranteed. Limits offer guidance for selecting the right VM type for the intended application. Actual network performance will depend on several factors including network congestion, application loads, and network settings. For information on optimizing network throughput, see [Optimize network throughput for Azure virtual machines](../../../virtual-network/virtual-network-optimize-network-bandwidth.md).
+- To achieve the expected network performance on Linux or Windows, you may need to select a specific version or optimize your VM. For more information, see [Bandwidth/Throughput testing (NTTTCP)](../../../virtual-network/virtual-network-bandwidth-testing.md).
+
+### [Accelerators](#tab/sizeaccelerators)
+
+Accelerator (GPUs, FPGAs, etc.) info for each size
+
+> [!NOTE]
+> No accelerators are present in this series.
+++
virtual-machines Dasv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/general-purpose/dasv4-series.md
+
+ Title: Dasv4 size series
+description: Information on and specifications of the Dasv4-series sizes
++++ Last updated : 07/29/2024++++
+# Dasv4 sizes series
++
+## Host specifications
+
+## Feature support
+[Premium Storage](../../premium-storage-performance.md): Supported <br>[Premium Storage caching](../../premium-storage-performance.md): Supported <br>[Live Migration](../../maintenance-and-updates.md): Supported <br>[Memory Preserving Updates](../../maintenance-and-updates.md): Supported <br>[Generation 2 VMs](../../generation-2.md): Supported <br>[Generation 1 VMs](../../generation-2.md): Supported <br>[Accelerated Networking](../../../virtual-network/create-vm-accelerated-networking-cli.md): Supported <br>[Ephemeral OS Disk](../../ephemeral-os-disks.md): Supported <br>[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not Supported <br>
+
+## Sizes in series
+
+### [Basics](#tab/sizebasic)
+
+vCPUs (Qty.) and Memory for each size
+
+| Size Name | vCPUs (Qty.) | Memory (GB) |
+| | | |
+| Standard_D2as_v4 | 2 | 8 |
+| Standard_D4as_v4 | 4 | 16 |
+| Standard_D8as_v4 | 8 | 32 |
+| Standard_D16as_v4 | 16 | 64 |
+| Standard_D32as_v4 | 32 | 128 |
+| Standard_D48as_v4 | 48 | 192 |
+| Standard_D64as_v4 | 64 | 256 |
+| Standard_D96as_v4 | 96 | 384 |
+
+#### VM Basics resources
+- [Check vCPU quotas](../../../virtual-machines/quotas.md)
+
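+Before deploying several of the larger sizes, it can help to confirm how much of your regional vCPU quota is already consumed. A minimal sketch using the Azure CLI (the region shown is an assumed example):
+
+```bash
+# Show current vCPU usage and limits per VM family in a region.
+az vm list-usage --location eastus --output table
+```
+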
+### [Local Storage](#tab/sizestoragelocal)
+
+Local (temp) storage info for each size
+
+| Size Name | Max Temp Storage Disks (Qty.) | Temp Disk Size (GiB) | Temp Disk Random Read (RR)<sup>1</sup> IOPS | Temp Disk Random Read (RR)<sup>1</sup> Speed (MBps) | Temp Disk Random Write (RW)<sup>1</sup> IOPS | Temp Disk Random Write (RW)<sup>1</sup> Speed (MBps) |
+| | | | | | | |
+| Standard_D2as_v4 | 4 | 16 | 4000 | 32 | 4000 | 100 |
+| Standard_D4as_v4 | 8 | 32 | 8000 | 64 | 8000 | 200 |
+| Standard_D8as_v4 | 16 | 64 | 16000 | 128 | 16000 | 400 |
+| Standard_D16as_v4 | 32 | 128 | 32000 | 255 | 32000 | 800 |
+| Standard_D32as_v4 | 32 | 256 | 64000 | 510 | 64000 | 1600 |
+| Standard_D48as_v4 | 32 | 384 | 96000 | 1020 | 96000 | 2000 |
+| Standard_D64as_v4 | 32 | 512 | 128000 | 1020 | 128000 | 2000 |
+| Standard_D96as_v4 | 32 | 768 | 192000 | 1020 | 192000 | 2000 |
+
+#### Storage resources
+- [Introduction to Azure managed disks](../../../virtual-machines/managed-disks-overview.md)
+- [Azure managed disk types](../../../virtual-machines/disks-types.md)
+- [Share an Azure managed disk](../../../virtual-machines/disks-shared.md)
+
+#### Table definitions
+- <sup>1</sup>Temp disk speed often differs between RR (Random Read) and RW (Random Write) operations. RR operations are typically faster than RW operations. The RW speed is usually slower than the RR speed on series where only the RR speed value is listed.
+- Storage capacity is shown in units of GiB or 1024^3 bytes. When you compare disks measured in GB (1000^3 bytes) to disks measured in GiB (1024^3 bytes), remember that capacity numbers given in GiB may appear smaller. For example, 1023 GiB = 1098.4 GB.
+- Disk throughput is measured in input/output operations per second (IOPS) and MBps where MBps = 10^6 bytes/sec.
+- To learn how to get the best storage performance for your VMs, see [Virtual machine and disk performance](../../../virtual-machines/disks-performance.md).
+
+### [Remote Storage](#tab/sizestorageremote)
+
+Remote (uncached) storage info for each size
+
+| Size Name | Max Remote Storage Disks (Qty.) | Uncached Disk IOPS | Uncached Disk Speed (MBps) | Uncached Disk Burst<sup>1</sup> IOPS | Uncached Disk Burst<sup>1</sup> Speed (MBps) | Uncached Special<sup>2</sup> Disk IOPS | Uncached Special<sup>2</sup> Disk Speed (MBps) | Uncached Burst<sup>1</sup> Special<sup>2</sup> Disk IOPS | Uncached Burst<sup>1</sup> Special<sup>2</sup> Disk Speed (MBps) |
+| | | | | | | | | | |
+| Standard_D2as_v4 | 4 | 3200 | 48 | 4000 | 200 | | | | |
+| Standard_D4as_v4 | 8 | 6400 | 96 | 8000 | 200 | | | | |
+| Standard_D8as_v4 | 16 | 12800 | 192 | 16000 | 400 | | | | |
+| Standard_D16as_v4 | 32 | 25600 | 384 | 32000 | 800 | | | | |
+| Standard_D32as_v4 | 32 | 51200 | 768 | 64000 | 1600 | | | | |
+| Standard_D48as_v4 | 32 | 76800 | 1148 | 80000 | 2000 | | | | |
+| Standard_D64as_v4 | 32 | 80000 | 1200 | 80000 | 2000 | | | | |
+| Standard_D96as_v4 | 32 | 80000 | 1200 | 80000 | 2000 | | | | |
+
+#### Storage resources
+- [Introduction to Azure managed disks](../../../virtual-machines/managed-disks-overview.md)
+- [Azure managed disk types](../../../virtual-machines/disks-types.md)
+- [Share an Azure managed disk](../../../virtual-machines/disks-shared.md)
+
+#### Table definitions
+- <sup>1</sup>Some sizes support [bursting](../../disk-bursting.md) to temporarily increase disk performance. Burst speeds can be maintained for up to 30 minutes at a time.
+- <sup>2</sup>Special Storage refers to either [Ultra Disk](../../../virtual-machines/disks-enable-ultra-ssd.md) or [Premium SSD v2](../../../virtual-machines/disks-deploy-premium-v2.md) storage.
+- Storage capacity is shown in units of GiB or 1024^3 bytes. When you compare disks measured in GB (1000^3 bytes) to disks measured in GiB (1024^3 bytes), remember that capacity numbers given in GiB may appear smaller. For example, 1023 GiB = 1098.4 GB.
+- Disk throughput is measured in input/output operations per second (IOPS) and MBps where MBps = 10^6 bytes/sec.
+- Data disks can operate in cached or uncached modes. For cached data disk operation, the host cache mode is set to ReadOnly or ReadWrite. For uncached data disk operation, the host cache mode is set to None.
+- To learn how to get the best storage performance for your VMs, see [Virtual machine and disk performance](../../../virtual-machines/disks-performance.md).
++
+### [Network](#tab/sizenetwork)
+
+Network interface info for each size
+
+| Size Name | Max NICs (Qty.) | Max Bandwidth (Mbps) |
+| | | |
+| Standard_D2as_v4 | 2 | 2000 |
+| Standard_D4as_v4 | 2 | 4000 |
+| Standard_D8as_v4 | 4 | 8000 |
+| Standard_D16as_v4 | 8 | 10000 |
+| Standard_D32as_v4 | 8 | 16000 |
+| Standard_D48as_v4 | 8 | 24000 |
+| Standard_D64as_v4 | 8 | 32000 |
+| Standard_D96as_v4 | 8 | 40000 |
+
+#### Networking resources
+- [Virtual networks and virtual machines in Azure](../../../virtual-network/network-overview.md)
+- [Virtual machine network bandwidth](../../../virtual-network/virtual-machine-network-throughput.md)
+
+#### Table definitions
+- Expected network bandwidth is the maximum aggregated bandwidth allocated per VM type across all NICs, for all destinations. For more information, see [Virtual machine network bandwidth](../../../virtual-network/virtual-machine-network-throughput.md).
+- Upper limits aren't guaranteed. Limits offer guidance for selecting the right VM type for the intended application. Actual network performance will depend on several factors, including network congestion, application loads, and network settings. For information on optimizing network throughput, see [Optimize network throughput for Azure virtual machines](../../../virtual-network/virtual-network-optimize-network-bandwidth.md).
+- To achieve the expected network performance on Linux or Windows, you may need to select a specific version or optimize your VM. For more information, see [Bandwidth/Throughput testing (NTTTCP)](../../../virtual-network/virtual-network-bandwidth-testing.md).
+
+### [Accelerators](#tab/sizeaccelerators)
+
+Accelerator (GPUs, FPGAs, etc.) info for each size
+
+> [!NOTE]
+> No accelerators are present in this series.
+++
virtual-machines Dasv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/general-purpose/dasv5-series.md
+
+ Title: Dasv5 size series
+description: Information on and specifications of the Dasv5-series sizes
++++ Last updated : 07/29/2024++++
+# Dasv5 sizes series
++
+## Host specifications
+
+## Feature support
+[Premium Storage](../../premium-storage-performance.md): Supported <br>[Premium Storage caching](../../premium-storage-performance.md): Supported <br>[Live Migration](../../maintenance-and-updates.md): Supported <br>[Memory Preserving Updates](../../maintenance-and-updates.md): Supported <br>[Generation 2 VMs](../../generation-2.md): Supported <br>[Generation 1 VMs](../../generation-2.md): Supported <br>[Accelerated Networking](../../../virtual-network/create-vm-accelerated-networking-cli.md): Supported <br>[Ephemeral OS Disk](../../ephemeral-os-disks.md): Not Supported <br>[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
+
+## Sizes in series
+
+### [Basics](#tab/sizebasic)
+
+vCPUs (Qty.) and Memory for each size
+
+| Size Name | vCPUs (Qty.) | Memory (GB) |
+| | | |
+| Standard_D2as_v5 | 2 | 8 |
+| Standard_D4as_v5 | 4 | 16 |
+| Standard_D8as_v5 | 8 | 32 |
+| Standard_D16as_v5 | 16 | 64 |
+| Standard_D32as_v5 | 32 | 128 |
+| Standard_D48as_v5 | 48 | 192 |
+| Standard_D64as_v5 | 64 | 256 |
+| Standard_D96as_v5 | 96 | 384 |
+
+#### VM Basics resources
+- [Check vCPU quotas](../../../virtual-machines/quotas.md)
+
+### [Local Storage](#tab/sizestoragelocal)
+
+Local (temp) storage info for each size
+
+> [!NOTE]
+> No local storage present in this series.
+>
+> For frequently asked questions, see [Azure VM sizes with no local temp disk](../../azure-vms-no-temp-disk.yml).
+++
+### [Remote Storage](#tab/sizestorageremote)
+
+Remote (uncached) storage info for each size
+
+| Size Name | Max Remote Storage Disks (Qty.) | Uncached Disk IOPS | Uncached Disk Speed (MBps) | Uncached Disk Burst<sup>1</sup> IOPS | Uncached Disk Burst<sup>1</sup> Speed (MBps) | Uncached Special<sup>2</sup> Disk IOPS | Uncached Special<sup>2</sup> Disk Speed (MBps) | Uncached Burst<sup>1</sup> Special<sup>2</sup> Disk IOPS | Uncached Burst<sup>1</sup> Special<sup>2</sup> Disk Speed (MBps) |
+| | | | | | | | | | |
+| Standard_D2as_v5 | 4 | 3750 | 82 | 10000 | 600 | | | | |
+| Standard_D4as_v5 | 8 | 6400 | 144 | 20000 | 600 | | | | |
+| Standard_D8as_v5 | 16 | 12800 | 200 | 20000 | 600 | | | | |
+| Standard_D16as_v5 | 32 | 25600 | 384 | 40000 | 800 | | | | |
+| Standard_D32as_v5 | 32 | 51200 | 768 | 80000 | 1600 | | | | |
+| Standard_D48as_v5 | 32 | 76800 | 1152 | 80000 | 2000 | | | | |
+| Standard_D64as_v5 | 32 | 80000 | 1200 | 80000 | 2000 | | | | |
+| Standard_D96as_v5 | 32 | 80000 | 1600 | 80000 | 2000 | | | | |
+
+#### Storage resources
+- [Introduction to Azure managed disks](../../../virtual-machines/managed-disks-overview.md)
+- [Azure managed disk types](../../../virtual-machines/disks-types.md)
+- [Share an Azure managed disk](../../../virtual-machines/disks-shared.md)
+
+#### Table definitions
+- <sup>1</sup>Some sizes support [bursting](../../disk-bursting.md) to temporarily increase disk performance. Burst speeds can be maintained for up to 30 minutes at a time.
+- <sup>2</sup>Special Storage refers to either [Ultra Disk](../../../virtual-machines/disks-enable-ultra-ssd.md) or [Premium SSD v2](../../../virtual-machines/disks-deploy-premium-v2.md) storage.
+- Storage capacity is shown in units of GiB or 1024^3 bytes. When you compare disks measured in GB (1000^3 bytes) to disks measured in GiB (1024^3 bytes), remember that capacity numbers given in GiB may appear smaller. For example, 1023 GiB = 1098.4 GB.
+- Disk throughput is measured in input/output operations per second (IOPS) and MBps where MBps = 10^6 bytes/sec.
+- Data disks can operate in cached or uncached modes. For cached data disk operation, the host cache mode is set to ReadOnly or ReadWrite. For uncached data disk operation, the host cache mode is set to None.
+- To learn how to get the best storage performance for your VMs, see [Virtual machine and disk performance](../../../virtual-machines/disks-performance.md).
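+
+The cached and uncached modes in the definitions above correspond to the host caching setting chosen per data disk. As an illustrative sketch (placeholder resource names, Premium SSD assumed), the Azure CLI lets you pick the mode when attaching a disk:
+
+```bash
+# Attach one data disk in cached (ReadOnly) mode and one in uncached (None) mode.
+az vm disk attach --resource-group myResourceGroup --vm-name myDasv5Vm \
+  --name cachedDataDisk --new --size-gb 256 --sku Premium_LRS --caching ReadOnly
+
+az vm disk attach --resource-group myResourceGroup --vm-name myDasv5Vm \
+  --name uncachedDataDisk --new --size-gb 256 --sku Premium_LRS --caching None
+```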
++
+### [Network](#tab/sizenetwork)
+
+Network interface info for each size
+
+| Size Name | Max NICs (Qty.) | Max Bandwidth (Mbps) |
+| | | |
+| Standard_D2as_v5 | 2 | 12500 |
+| Standard_D4as_v5 | 2 | 12500 |
+| Standard_D8as_v5 | 4 | 12500 |
+| Standard_D16as_v5 | 8 | 12500 |
+| Standard_D32as_v5 | 8 | 16000 |
+| Standard_D48as_v5 | 8 | 24000 |
+| Standard_D64as_v5 | 8 | 32000 |
+| Standard_D96as_v5 | 8 | 40000 |
+
+#### Networking resources
+- [Virtual networks and virtual machines in Azure](../../../virtual-network/network-overview.md)
+- [Virtual machine network bandwidth](../../../virtual-network/virtual-machine-network-throughput.md)
+
+#### Table definitions
+- Expected network bandwidth is the maximum aggregated bandwidth allocated per VM type across all NICs, for all destinations. For more information, see [Virtual machine network bandwidth](../../../virtual-network/virtual-machine-network-throughput.md).
+- Upper limits aren't guaranteed. Limits offer guidance for selecting the right VM type for the intended application. Actual network performance will depend on several factors, including network congestion, application loads, and network settings. For information on optimizing network throughput, see [Optimize network throughput for Azure virtual machines](../../../virtual-network/virtual-network-optimize-network-bandwidth.md).
+- To achieve the expected network performance on Linux or Windows, you may need to select a specific version or optimize your VM. For more information, see [Bandwidth/Throughput testing (NTTTCP)](../../../virtual-network/virtual-network-bandwidth-testing.md).
+
+### [Accelerators](#tab/sizeaccelerators)
+
+Accelerator (GPUs, FPGAs, etc.) info for each size
+
+> [!NOTE]
+> No accelerators are present in this series.
+++
virtual-machines Dasv6 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/general-purpose/dasv6-series.md
+
+ Title: Dasv6 size series
+description: Information on and specifications of the Dasv6-series sizes
++++ Last updated : 07/29/2024++++
+# Dasv6 sizes series
+
+>[!NOTE]
+>This VM series is currently in **Preview**. See the [Preview Terms Of Use | Microsoft Azure](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
++
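+Because the series is in preview, regional availability is limited. A hedged way to check whether a given size is offered in your target region is the `az vm list-skus` command (the region and size shown here are examples):
+
+```bash
+# List availability and restrictions for a Dasv6 size in one region.
+az vm list-skus --location eastus --size Standard_D2as_v6 --output table
+```
+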
+## Host specifications
+
+## Feature support
+[Premium Storage](../../premium-storage-performance.md): Supported <br>[Premium Storage caching](../../premium-storage-performance.md): Supported <br>[Live Migration](../../maintenance-and-updates.md): Not Supported <br>[Memory Preserving Updates](../../maintenance-and-updates.md): Supported <br>[Generation 2 VMs](../../generation-2.md): Supported <br>[Generation 1 VMs](../../generation-2.md): Not Supported <br>[Accelerated Networking](../../../virtual-network/create-vm-accelerated-networking-cli.md): Supported <br>[Ephemeral OS Disk](../../ephemeral-os-disks.md): Not Supported <br>[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
+
+## Sizes in series
+
+### [Basics](#tab/sizebasic)
+
+vCPUs (Qty.) and Memory for each size
+
+| Size Name | vCPUs (Qty.) | Memory (GB) |
+| | | |
+| Standard_D2as_v6 | 2 | 8 |
+| Standard_D4as_v6 | 4 | 16 |
+| Standard_D8as_v6 | 8 | 32 |
+| Standard_D16as_v6 | 16 | 64 |
+| Standard_D32as_v6 | 32 | 128 |
+| Standard_D48as_v6 | 48 | 192 |
+| Standard_D64as_v6 | 64 | 256 |
+| Standard_D96as_v6 | 96 | 384 |
+
+#### VM Basics resources
+- [Check vCPU quotas](../../../virtual-machines/quotas.md)
+
+### [Local Storage](#tab/sizestoragelocal)
+
+Local (temp) storage info for each size
+
+> [!NOTE]
+> No local storage present in this series.
+>
+> For frequently asked questions, see [Azure VM sizes with no local temp disk](../../azure-vms-no-temp-disk.yml).
+++
+### [Remote Storage](#tab/sizestorageremote)
+
+Remote (uncached) storage info for each size
+
+| Size Name | Max Remote Storage Disks (Qty.) | Uncached Disk IOPS | Uncached Disk Speed (MBps) | Uncached Disk Burst<sup>1</sup> IOPS | Uncached Disk Burst<sup>1</sup> Speed (MBps) | Uncached Special<sup>2</sup> Disk IOPS | Uncached Special<sup>2</sup> Disk Speed (MBps) | Uncached Burst<sup>1</sup> Special<sup>2</sup> Disk IOPS | Uncached Burst<sup>1</sup> Special<sup>2</sup> Disk Speed (MBps) |
+| | | | | | | | | | |
+| Standard_D2as_v6 | 4 | 4000 | 90 | 20000 | 1250 | 4000 | 90 | 20000 | 1250 |
+| Standard_D4as_v6 | 8 | 7600 | 180 | 20000 | 1250 | 7600 | 180 | 20000 | 1250 |
+| Standard_D8as_v6 | 16 | 15200 | 360 | 20000 | 1250 | 15200 | 360 | 20000 | 1250 |
+| Standard_D16as_v6 | 32 | 30400 | 720 | 40000 | 1250 | 30400 | 720 | 40000 | 1250 |
+| Standard_D32as_v6 | 32 | 57600 | 1440 | 80000 | 1700 | 57600 | 1440 | 80000 | 1700 |
+| Standard_D48as_v6 | 32 | 86400 | 2160 | 90000 | 2550 | 86400 | 2160 | 90000 | 2550 |
+| Standard_D64as_v6 | 32 | 115200 | 2880 | 120000 | 3400 | 115200 | 2880 | 120000 | 3400 |
+| Standard_D96as_v6 | 32 | 175000 | 4320 | 175000 | 5090 | 175000 | 4320 | 175000 | 5090 |
+
+#### Storage resources
+- [Introduction to Azure managed disks](../../../virtual-machines/managed-disks-overview.md)
+- [Azure managed disk types](../../../virtual-machines/disks-types.md)
+- [Share an Azure managed disk](../../../virtual-machines/disks-shared.md)
+
+#### Table definitions
+- <sup>1</sup>Some sizes support [bursting](../../disk-bursting.md) to temporarily increase disk performance. Burst speeds can be maintained for up to 30 minutes at a time.
+- <sup>2</sup>Special Storage refers to either [Ultra Disk](../../../virtual-machines/disks-enable-ultra-ssd.md) or [Premium SSD v2](../../../virtual-machines/disks-deploy-premium-v2.md) storage.
+- Storage capacity is shown in units of GiB or 1024^3 bytes. When you compare disks measured in GB (1000^3 bytes) to disks measured in GiB (1024^3 bytes), remember that capacity numbers given in GiB may appear smaller. For example, 1023 GiB = 1098.4 GB.
+- Disk throughput is measured in input/output operations per second (IOPS) and MBps where MBps = 10^6 bytes/sec.
+- Data disks can operate in cached or uncached modes. For cached data disk operation, the host cache mode is set to ReadOnly or ReadWrite. For uncached data disk operation, the host cache mode is set to None.
+- To learn how to get the best storage performance for your VMs, see [Virtual machine and disk performance](../../../virtual-machines/disks-performance.md).
++
+### [Network](#tab/sizenetwork)
+
+Network interface info for each size
+
+| Size Name | Max NICs (Qty.) | Max Bandwidth (Mbps) |
+| | | |
+| Standard_D2as_v6 | 2 | 12500 |
+| Standard_D4as_v6 | 2 | 12500 |
+| Standard_D8as_v6 | 4 | 12500 |
+| Standard_D16as_v6 | 8 | 16000 |
+| Standard_D32as_v6 | 8 | 20000 |
+| Standard_D48as_v6 | 8 | 28000 |
+| Standard_D64as_v6 | 8 | 36000 |
+| Standard_D96as_v6 | 8 | 40000 |
+
+#### Networking resources
+- [Virtual networks and virtual machines in Azure](../../../virtual-network/network-overview.md)
+- [Virtual machine network bandwidth](../../../virtual-network/virtual-machine-network-throughput.md)
+
+#### Table definitions
+- Expected network bandwidth is the maximum aggregated bandwidth allocated per VM type across all NICs, for all destinations. For more information, see [Virtual machine network bandwidth](../../../virtual-network/virtual-machine-network-throughput.md).
+- Upper limits aren't guaranteed. Limits offer guidance for selecting the right VM type for the intended application. Actual network performance will depend on several factors, including network congestion, application loads, and network settings. For information on optimizing network throughput, see [Optimize network throughput for Azure virtual machines](../../../virtual-network/virtual-network-optimize-network-bandwidth.md).
+- To achieve the expected network performance on Linux or Windows, you may need to select a specific version or optimize your VM. For more information, see [Bandwidth/Throughput testing (NTTTCP)](../../../virtual-network/virtual-network-bandwidth-testing.md).
+
+### [Accelerators](#tab/sizeaccelerators)
+
+Accelerator (GPUs, FPGAs, etc.) info for each size
+
+> [!NOTE]
+> No accelerators are present in this series.
+++
virtual-machines Dav4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/general-purpose/dav4-series.md
+
+ Title: Dav4 size series
+description: Information on and specifications of the Dav4-series sizes
++++ Last updated : 07/29/2024++++
+# Dav4 sizes series
++
+## Host specifications
+
+## Feature support
+[Premium Storage](../../premium-storage-performance.md): Not Supported <br>[Premium Storage caching](../../premium-storage-performance.md): Not Supported <br>[Live Migration](../../maintenance-and-updates.md): Supported <br>[Memory Preserving Updates](../../maintenance-and-updates.md): Supported <br>[Generation 2 VMs](../../generation-2.md): Supported <br>[Generation 1 VMs](../../generation-2.md): Supported <br>[Accelerated Networking](../../../virtual-network/create-vm-accelerated-networking-cli.md): Supported <br>[Ephemeral OS Disk](../../ephemeral-os-disks.md): Supported <br>[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not Supported <br>
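+
+Because Premium Storage isn't supported on this series, data disks attached to Dav4 VMs should use Standard SSD or Standard HDD SKUs. A minimal sketch with placeholder names:
+
+```bash
+# Attach a new Standard SSD data disk to an existing Dav4 VM.
+az vm disk attach --resource-group myResourceGroup --vm-name myDav4Vm \
+  --name standardDataDisk --new --size-gb 128 --sku StandardSSD_LRS
+```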
+
+## Sizes in series
+
+### [Basics](#tab/sizebasic)
+
+vCPUs (Qty.) and Memory for each size
+
+| Size Name | vCPUs (Qty.) | Memory (GB) |
+| | | |
+| Standard_D2a_v4 | 2 | 8 |
+| Standard_D4a_v4 | 4 | 16 |
+| Standard_D8a_v4 | 8 | 32 |
+| Standard_D16a_v4 | 16 | 64 |
+| Standard_D32a_v4 | 32 | 128 |
+| Standard_D48a_v4 | 48 | 192 |
+| Standard_D64a_v4 | 64 | 256 |
+| Standard_D96a_v4 | 96 | 384 |
+
+#### VM Basics resources
+- [Check vCPU quotas](../../../virtual-machines/quotas.md)
+
+### [Local Storage](#tab/sizestoragelocal)
+
+Local (temp) storage info for each size
+
+| Size Name | Max Temp Storage Disks (Qty.) | Temp Disk Size (GiB) | Temp Disk Random Read (RR)<sup>1</sup> IOPS | Temp Disk Random Read (RR)<sup>1</sup> Speed (MBps) | Temp Disk Random Write (RW)<sup>1</sup> IOPS | Temp Disk Random Write (RW)<sup>1</sup> Speed (MBps) |
+| | | | | | | |
+| Standard_D2a_v4 | 1 | 50 | 3000 | 46 | | 23 |
+| Standard_D4a_v4 | 1 | 100 | 6000 | 93 | | 46 |
+| Standard_D8a_v4 | 1 | 200 | 12000 | 187 | | 93 |
+| Standard_D16a_v4 | 1 | 400 | 24000 | 375 | | 187 |
+| Standard_D32a_v4 | 1 | 800 | 48000 | 750 | | 375 |
+| Standard_D48a_v4 | 1 | 1200 | 96000 | 1000 | | 500 |
+| Standard_D64a_v4 | 1 | 1600 | 96000 | 1000 | | 500 |
+| Standard_D96a_v4 | 1 | 2400 | 96000 | 1000 | | 500 |
+
+#### Storage resources
+- [Introduction to Azure managed disks](../../../virtual-machines/managed-disks-overview.md)
+- [Azure managed disk types](../../../virtual-machines/disks-types.md)
+- [Share an Azure managed disk](../../../virtual-machines/disks-shared.md)
+
+#### Table definitions
+- <sup>1</sup>Temp disk speed often differs between RR (Random Read) and RW (Random Write) operations. RR operations are typically faster than RW operations. The RW speed is usually slower than the RR speed on series where only the RR speed value is listed.
+- Storage capacity is shown in units of GiB or 1024^3 bytes. When you compare disks measured in GB (1000^3 bytes) to disks measured in GiB (1024^3 bytes), remember that capacity numbers given in GiB may appear smaller. For example, 1023 GiB = 1098.4 GB.
+- Disk throughput is measured in input/output operations per second (IOPS) and MBps where MBps = 10^6 bytes/sec.
+- To learn how to get the best storage performance for your VMs, see [Virtual machine and disk performance](../../../virtual-machines/disks-performance.md).
+
+### [Remote Storage](#tab/sizestorageremote)
+
+Remote (uncached) storage info for each size
+
+| Size Name | Max Remote Storage Disks (Qty.) | Uncached Disk IOPS | Uncached Disk Speed (MBps) | Uncached Disk Burst<sup>1</sup> IOPS | Uncached Disk Burst<sup>1</sup> Speed (MBps) | Uncached Special<sup>2</sup> Disk IOPS | Uncached Special<sup>2</sup> Disk Speed (MBps) | Uncached Burst<sup>1</sup> Special<sup>2</sup> Disk IOPS | Uncached Burst<sup>1</sup> Special<sup>2</sup> Disk Speed (MBps) |
+| | | | | | | | | | |
+| Standard_D2a_v4 | 4 | 3200 | 48 | 4000 | 200 | | | | |
+| Standard_D4a_v4 | 8 | 6400 | 96 | 8000 | 200 | | | | |
+| Standard_D8a_v4 | 16 | 12800 | 192 | 16000 | 400 | | | | |
+| Standard_D16a_v4 | 32 | 25600 | 384 | 32000 | 800 | | | | |
+| Standard_D32a_v4 | 32 | 51200 | 768 | 64000 | 1600 | | | | |
+| Standard_D48a_v4 | 32 | 76800 | 1148 | 80000 | 2000 | | | | |
+| Standard_D64a_v4 | 32 | 80000 | 1200 | 80000 | 2000 | | | | |
+| Standard_D96a_v4 | 32 | 80000 | 1200 | 80000 | 2000 | | | | |
+
+#### Storage resources
+- [Introduction to Azure managed disks](../../../virtual-machines/managed-disks-overview.md)
+- [Azure managed disk types](../../../virtual-machines/disks-types.md)
+- [Share an Azure managed disk](../../../virtual-machines/disks-shared.md)
+
+#### Table definitions
+- <sup>1</sup>Some sizes support [bursting](../../disk-bursting.md) to temporarily increase disk performance. Burst speeds can be maintained for up to 30 minutes at a time.
+- <sup>2</sup>Special Storage refers to either [Ultra Disk](../../../virtual-machines/disks-enable-ultra-ssd.md) or [Premium SSD v2](../../../virtual-machines/disks-deploy-premium-v2.md) storage.
+- Storage capacity is shown in units of GiB or 1024^3 bytes. When you compare disks measured in GB (1000^3 bytes) to disks measured in GiB (1024^3 bytes), remember that capacity numbers given in GiB may appear smaller. For example, 1023 GiB = 1098.4 GB.
+- Disk throughput is measured in input/output operations per second (IOPS) and MBps where MBps = 10^6 bytes/sec.
+- Data disks can operate in cached or uncached modes. For cached data disk operation, the host cache mode is set to ReadOnly or ReadWrite. For uncached data disk operation, the host cache mode is set to None.
+- To learn how to get the best storage performance for your VMs, see [Virtual machine and disk performance](../../../virtual-machines/disks-performance.md).
++
+### [Network](#tab/sizenetwork)
+
+Network interface info for each size
+
+| Size Name | Max NICs (Qty.) | Max Bandwidth (Mbps) |
+| | | |
+| Standard_D2a_v4 | 2 | 2000 |
+| Standard_D4a_v4 | 2 | 4000 |
+| Standard_D8a_v4 | 4 | 8000 |
+| Standard_D16a_v4 | 8 | 10000 |
+| Standard_D32a_v4 | 8 | 16000 |
+| Standard_D48a_v4 | 8 | 24000 |
+| Standard_D64a_v4 | 8 | 32000 |
+| Standard_D96a_v4 | 8 | 40000 |
+
+#### Networking resources
+- [Virtual networks and virtual machines in Azure](../../../virtual-network/network-overview.md)
+- [Virtual machine network bandwidth](../../../virtual-network/virtual-machine-network-throughput.md)
+
+#### Table definitions
+- Expected network bandwidth is the maximum aggregated bandwidth allocated per VM type across all NICs, for all destinations. For more information, see [Virtual machine network bandwidth](../../../virtual-network/virtual-machine-network-throughput.md).
+- Upper limits aren't guaranteed. Limits offer guidance for selecting the right VM type for the intended application. Actual network performance will depend on several factors, including network congestion, application loads, and network settings. For information on optimizing network throughput, see [Optimize network throughput for Azure virtual machines](../../../virtual-network/virtual-network-optimize-network-bandwidth.md).
+- To achieve the expected network performance on Linux or Windows, you may need to select a specific version or optimize your VM. For more information, see [Bandwidth/Throughput testing (NTTTCP)](../../../virtual-network/virtual-network-bandwidth-testing.md).
+
+### [Accelerators](#tab/sizeaccelerators)
+
+Accelerator (GPUs, FPGAs, etc.) info for each size
+
+> [!NOTE]
+> No accelerators are present in this series.
+++
virtual-machines Ddsv6 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/general-purpose/ddsv6-series.md
+
+ Title: Ddsv6 size series
+description: Information on and specifications of the Ddsv6-series sizes
++++ Last updated : 07/29/2024++++
+# Ddsv6 sizes series
+
+>[!NOTE]
+>This VM series is currently in **Preview**. See the [Preview Terms Of Use | Microsoft Azure](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
++
+## Host specifications
+
+## Feature support
+[Premium Storage](../../premium-storage-performance.md): Supported <br>[Premium Storage caching](../../premium-storage-performance.md): Supported <br>[Live Migration](../../maintenance-and-updates.md): Not Supported <br>[Memory Preserving Updates](../../maintenance-and-updates.md): Supported <br>[Generation 2 VMs](../../generation-2.md): Supported <br>[Generation 1 VMs](../../generation-2.md): Not Supported <br>[Accelerated Networking](../../../virtual-network/create-vm-accelerated-networking-cli.md): Supported <br>[Ephemeral OS Disk](../../ephemeral-os-disks.md): Not Supported <br>[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
+
+## Sizes in series
+
+### [Basics](#tab/sizebasic)
+
+vCPUs (Qty.) and Memory for each size
+
+| Size Name | vCPUs (Qty.) | Memory (GB) |
+| | | |
+| Standard_D2ds_v6 | 2 | 8 |
+| Standard_D4ds_v6 | 4 | 16 |
+| Standard_D8ds_v6 | 8 | 32 |
+| Standard_D16ds_v6 | 16 | 64 |
+| Standard_D32ds_v6 | 32 | 128 |
+| Standard_D48ds_v6 | 48 | 192 |
+| Standard_D64ds_v6 | 64 | 256 |
+| Standard_D96ds_v6 | 96 | 384 |
+| Standard_D128ds_v6 | 128 | 512 |
+
+#### VM Basics resources
+- [Check vCPU quotas](../../../virtual-machines/quotas.md)
+
+### [Local Storage](#tab/sizestoragelocal)
+
+Local (temp) storage info for each size
+
+| Size Name | Max Temp Storage Disks (Qty.) | Temp Disk Size (GiB) | Temp Disk Random Read (RR)<sup>1</sup> IOPS | Temp Disk Random Read (RR)<sup>1</sup> Speed (MBps) | Temp Disk Random Write (RW)<sup>1</sup> IOPS | Temp Disk Random Write (RW)<sup>1</sup> Speed (MBps) |
+| | | | | | | |
+| Standard_D2ds_v6 | 1 | 110 | 37500 | 180 | 15000 | 90 |
+| Standard_D4ds_v6 | 1 | 220 | 75000 | 360 | 30000 | 180 |
+| Standard_D8ds_v6 | 1 | 440 | 150000 | 720 | 60000 | 360 |
+| Standard_D16ds_v6 | 2 | 440 | 300000 | 1440 | 120000 | 720 |
+| Standard_D32ds_v6 | 4 | 440 | 600000 | 2880 | 240000 | 1440 |
+| Standard_D48ds_v6 | 6 | 440 | 900000 | 4320 | 360000 | 2160 |
+| Standard_D64ds_v6 | 4 | 880 | 1200000 | 5760 | 480000 | 2880 |
+| Standard_D96ds_v6 | 6 | 880 | 1800000 | 8640 | 720000 | 4320 |
+| Standard_D128ds_v6 | 4 | 1760 | 2400000 | 11520 | 960000 | 5760 |
+
+#### Storage resources
+- [Introduction to Azure managed disks](../../../virtual-machines/managed-disks-overview.md)
+- [Azure managed disk types](../../../virtual-machines/disks-types.md)
+- [Share an Azure managed disk](../../../virtual-machines/disks-shared.md)
+
+#### Table definitions
+- <sup>1</sup>Temp disk speed often differs between RR (Random Read) and RW (Random Write) operations. RR operations are typically faster than RW operations. The RW speed is usually slower than the RR speed on series where only the RR speed value is listed.
+- Storage capacity is shown in units of GiB or 1024^3 bytes. When you compare disks measured in GB (1000^3 bytes) to disks measured in GiB (1024^3 bytes), remember that capacity numbers given in GiB may appear smaller. For example, 1023 GiB = 1098.4 GB.
+- Disk throughput is measured in input/output operations per second (IOPS) and MBps where MBps = 10^6 bytes/sec.
+- To learn how to get the best storage performance for your VMs, see [Virtual machine and disk performance](../../../virtual-machines/disks-performance.md).
+
+### [Remote Storage](#tab/sizestorageremote)
+
+Remote (uncached) storage info for each size
+
+| Size Name | Max Remote Storage Disks (Qty.) | Uncached Disk IOPS | Uncached Disk Speed (MBps) | Uncached Disk Burst<sup>1</sup> IOPS | Uncached Disk Burst<sup>1</sup> Speed (MBps) | Uncached Special<sup>2</sup> Disk IOPS | Uncached Special<sup>2</sup> Disk Speed (MBps) | Uncached Burst<sup>1</sup> Special<sup>2</sup> Disk IOPS | Uncached Burst<sup>1</sup> Special<sup>2</sup> Disk Speed (MBps) |
+| | | | | | | | | | |
+| Standard_D2ds_v6 | 8 | 3750 | 106 | 40000 | 1250 | 4167 | 124 | 44444 | 1463 |
+| Standard_D4ds_v6 | 12 | 6400 | 212 | 40000 | 1250 | 8333 | 248 | 52083 | 1463 |
+| Standard_D8ds_v6 | 24 | 12800 | 424 | 40000 | 1250 | 16667 | 496 | 52083 | 1463 |
+| Standard_D16ds_v6 | 48 | 25600 | 848 | 40000 | 1250 | 33333 | 992 | 52083 | 1463 |
+| Standard_D32ds_v6 | 64 | 51200 | 1696 | 80000 | 1696 | 66667 | 1984 | 104167 | 1984 |
+| Standard_D48ds_v6 | 64 | 76800 | 2544 | 80000 | 2544 | 100000 | 2976 | 104167 | 2976 |
+| Standard_D64ds_v6 | 64 | 102400 | 3392 | 102400 | 3392 | 133333 | 3969 | 133333 | 3969 |
+| Standard_D96ds_v6 | 64 | 153600 | 5088 | 153600 | 5088 | 200000 | 5953 | 200000 | 5953 |
+| Standard_D128ds_v6 | 64 | 204800 | 6782 | 204800 | 6782 | 266667 | 7935 | 266667 | 7935 |
+
+#### Storage resources
+- [Introduction to Azure managed disks](../../../virtual-machines/managed-disks-overview.md)
+- [Azure managed disk types](../../../virtual-machines/disks-types.md)
+- [Share an Azure managed disk](../../../virtual-machines/disks-shared.md)
+
+#### Table definitions
+- <sup>1</sup>Some sizes support [bursting](../../disk-bursting.md) to temporarily increase disk performance. Burst speeds can be maintained for up to 30 minutes at a time.
+- <sup>2</sup>Special Storage refers to either [Ultra Disk](../../../virtual-machines/disks-enable-ultra-ssd.md) or [Premium SSD v2](../../../virtual-machines/disks-deploy-premium-v2.md) storage.
+- Storage capacity is shown in units of GiB or 1024^3 bytes. When you compare disks measured in GB (1000^3 bytes) to disks measured in GiB (1024^3 bytes), remember that capacity numbers given in GiB may appear smaller. For example, 1023 GiB = 1098.4 GB.
+- Disk throughput is measured in input/output operations per second (IOPS) and MBps where MBps = 10^6 bytes/sec.
+- Data disks can operate in cached or uncached modes. For cached data disk operation, the host cache mode is set to ReadOnly or ReadWrite. For uncached data disk operation, the host cache mode is set to None.
+- To learn how to get the best storage performance for your VMs, see [Virtual machine and disk performance](../../../virtual-machines/disks-performance.md).
++
+### [Network](#tab/sizenetwork)
+
+Network interface info for each size
+
+| Size Name | Max NICs (Qty.) | Max Bandwidth (Mbps) |
+| | | |
+| Standard_D2ds_v6 | 2 | 12500 |
+| Standard_D4ds_v6 | 2 | 12500 |
+| Standard_D8ds_v6 | 4 | 12500 |
+| Standard_D16ds_v6 | 8 | 12500 |
+| Standard_D32ds_v6 | 8 | 16000 |
+| Standard_D48ds_v6 | 8 | 24000 |
+| Standard_D64ds_v6 | 8 | 30000 |
+| Standard_D96ds_v6 | 8 | 41000 |
+| Standard_D128ds_v6 | 8 | 54000 |
+
+#### Networking resources
+- [Virtual networks and virtual machines in Azure](../../../virtual-network/network-overview.md)
+- [Virtual machine network bandwidth](../../../virtual-network/virtual-machine-network-throughput.md)
+
+#### Table definitions
+- Expected network bandwidth is the maximum aggregated bandwidth allocated per VM type across all NICs, for all destinations. For more information, see [Virtual machine network bandwidth](../../../virtual-network/virtual-machine-network-throughput.md).
+- Upper limits aren't guaranteed. Limits offer guidance for selecting the right VM type for the intended application. Actual network performance will depend on several factors, including network congestion, application loads, and network settings. For information on optimizing network throughput, see [Optimize network throughput for Azure virtual machines](../../../virtual-network/virtual-network-optimize-network-bandwidth.md).
+- To achieve the expected network performance on Linux or Windows, you may need to select a specific version or optimize your VM. For more information, see [Bandwidth/Throughput testing (NTTTCP)](../../../virtual-network/virtual-network-bandwidth-testing.md).
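+
+The Max NICs column above is the per-size limit on network interfaces. If a size allows more than one NIC, you can create the NICs up front and pass them to the VM at creation time. The following sketch uses placeholder names and assumes the virtual network and subnet already exist:
+
+```bash
+# Create two NICs with accelerated networking, then a Ddsv6 VM that uses both.
+az network nic create --resource-group myResourceGroup --name nic1 \
+  --vnet-name myVnet --subnet mySubnet --accelerated-networking true
+az network nic create --resource-group myResourceGroup --name nic2 \
+  --vnet-name myVnet --subnet mySubnet --accelerated-networking true
+
+az vm create --resource-group myResourceGroup --name myDdsv6Vm \
+  --image Ubuntu2204 --size Standard_D16ds_v6 --nics nic1 nic2
+```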
+
+### [Accelerators](#tab/sizeaccelerators)
+
+Accelerator (GPUs, FPGAs, etc.) info for each size
+
+> [!NOTE]
+> No accelerators are present in this series.
+++
virtual-machines Dldsv6 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/general-purpose/dldsv6-series.md
+
+ Title: Dldsv6 size series
+description: Information on and specifications of the Dldsv6-series sizes
++++ Last updated : 07/29/2024++++
+# Dldsv6 sizes series
+
+>[!NOTE]
+>This VM series is currently in **Preview**. See the [Preview Terms Of Use | Microsoft Azure](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
++
+## Host specifications
+
+## Feature support
+[Premium Storage](../../premium-storage-performance.md): Supported <br>[Premium Storage caching](../../premium-storage-performance.md): Supported <br>[Live Migration](../../maintenance-and-updates.md): Not Supported <br>[Memory Preserving Updates](../../maintenance-and-updates.md): Supported <br>[Generation 2 VMs](../../generation-2.md): Supported <br>[Generation 1 VMs](../../generation-2.md): Not Supported <br>[Accelerated Networking](../../../virtual-network/create-vm-accelerated-networking-cli.md): Supported <br>[Ephemeral OS Disk](../../ephemeral-os-disks.md): Not Supported <br>[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
+
+## Sizes in series
+
+### [Basics](#tab/sizebasic)
+
+vCPUs (Qty.) and Memory for each size
+
+| Size Name | vCPUs (Qty.) | Memory (GB) |
+| | | |
+| Standard_D2lds_v6 | 2 | 4 |
+| Standard_D4lds_v6 | 4 | 8 |
+| Standard_D8lds_v6 | 8 | 16 |
+| Standard_D16lds_v6 | 16 | 32 |
+| Standard_D32lds_v6 | 32 | 64 |
+| Standard_D48lds_v6 | 48 | 96 |
+| Standard_D64lds_v6 | 64 | 128 |
+| Standard_D96lds_v6 | 96 | 192 |
+| Standard_D128lds_v6 | 128 | 256 |
+
+#### VM Basics resources
+- [Check vCPU quotas](../../../virtual-machines/quotas.md)
+
+### [Local Storage](#tab/sizestoragelocal)
+
+Local (temp) storage info for each size
+
+| Size Name | Max Temp Storage Disks (Qty.) | Temp Disk Size (GiB) | Temp Disk Random Read (RR)<sup>1</sup> IOPS | Temp Disk Random Read (RR)<sup>1</sup> Speed (MBps) | Temp Disk Random Write (RW)<sup>1</sup> IOPS | Temp Disk Random Write (RW)<sup>1</sup> Speed (MBps) |
+| | | | | | | |
+| Standard_D2lds_v6 | 1 | 110 | 37500 | 180 | 15000 | 90 |
+| Standard_D4lds_v6 | 1 | 220 | 75000 | 360 | 30000 | 180 |
+| Standard_D8lds_v6 | 1 | 440 | 150000 | 720 | 60000 | 360 |
+| Standard_D16lds_v6 | 2 | 440 | 300000 | 1440 | 120000 | 720 |
+| Standard_D32lds_v6 | 4 | 440 | 600000 | 2880 | 240000 | 1440 |
+| Standard_D48lds_v6 | 6 | 440 | 900000 | 4320 | 360000 | 2160 |
+| Standard_D64lds_v6 | 4 | 880 | 1200000 | 5760 | 480000 | 2880 |
+| Standard_D96lds_v6 | 6 | 880 | 1800000 | 8640 | 720000 | 4320 |
+| Standard_D128lds_v6 | 4 | 1760 | 2400000 | 11520 | 960000 | 5760 |
+
+#### Storage resources
+- [Introduction to Azure managed disks](../../../virtual-machines/managed-disks-overview.md)
+- [Azure managed disk types](../../../virtual-machines/disks-types.md)
+- [Share an Azure managed disk](../../../virtual-machines/disks-shared.md)
+
+#### Table definitions
+- <sup>1</sup>Temp disk speed often differs between RR (Random Read) and RW (Random Write) operations. RR operations are typically faster than RW operations. The RW speed is usually slower than the RR speed on series where only the RR speed value is listed.
+- Storage capacity is shown in units of GiB or 1024^3 bytes. When you compare disks measured in GB (1000^3 bytes) to disks measured in GiB (1024^3 bytes), remember that capacity numbers given in GiB may appear smaller. For example, 1023 GiB = 1098.4 GB.
+- Disk throughput is measured in input/output operations per second (IOPS) and MBps where MBps = 10^6 bytes/sec.
+- To learn how to get the best storage performance for your VMs, see [Virtual machine and disk performance](../../../virtual-machines/disks-performance.md).
+
+### [Remote Storage](#tab/sizestorageremote)
+
+Remote (uncached) storage info for each size
+
+| Size Name | Max Remote Storage Disks (Qty.) | Uncached Disk IOPS | Uncached Disk Speed (MBps) | Uncached Disk Burst<sup>1</sup> IOPS | Uncached Disk Burst<sup>1</sup> Speed (MBps) | Uncached Special<sup>2</sup> Disk IOPS | Uncached Special<sup>2</sup> Disk Speed (MBps) | Uncached Burst<sup>1</sup> Special<sup>2</sup> Disk IOPS | Uncached Burst<sup>1</sup> Special<sup>2</sup> Disk Speed (MBps) |
+| | | | | | | | | | |
+| Standard_D2lds_v6 | 8 | 3750 | 106 | 40000 | 1250 | 4167 | 124 | 44444 | 1463 |
+| Standard_D4lds_v6 | 12 | 6400 | 212 | 40000 | 1250 | 8333 | 248 | 52083 | 1463 |
+| Standard_D8lds_v6 | 24 | 12800 | 424 | 40000 | 1250 | 16667 | 496 | 52083 | 1463 |
+| Standard_D16lds_v6 | 48 | 25600 | 848 | 40000 | 1250 | 33333 | 992 | 52083 | 1463 |
+| Standard_D32lds_v6 | 64 | 51200 | 1696 | 80000 | 1696 | 66667 | 1984 | 104167 | 1984 |
+| Standard_D48lds_v6 | 64 | 76800 | 2544 | 80000 | 2544 | 100000 | 2976 | 104167 | 2976 |
+| Standard_D64lds_v6 | 64 | 102400 | 3392 | 102400 | 3392 | 133333 | 3969 | 133333 | 3969 |
+| Standard_D96lds_v6 | 64 | 153600 | 5088 | 153600 | 5088 | 200000 | 5953 | 200000 | 5953 |
+| Standard_D128lds_v6 | 64 | 204800 | 6782 | 204800 | 6782 | 266667 | 7935 | 266667 | 7935 |
+
+#### Storage resources
+- [Introduction to Azure managed disks](../../../virtual-machines/managed-disks-overview.md)
+- [Azure managed disk types](../../../virtual-machines/disks-types.md)
+- [Share an Azure managed disk](../../../virtual-machines/disks-shared.md)
+
+#### Table definitions
+- <sup>1</sup>Some sizes support [bursting](../../disk-bursting.md) to temporarily increase disk performance. Burst speeds can be maintained for up to 30 minutes at a time.
+- <sup>2</sup>Special Storage refers to either [Ultra Disk](../../../virtual-machines/disks-enable-ultra-ssd.md) or [Premium SSD v2](../../../virtual-machines/disks-deploy-premium-v2.md) storage.
+- Storage capacity is shown in units of GiB or 1024^3 bytes. When you compare disks measured in GB (1000^3 bytes) to disks measured in GiB (1024^3 bytes), remember that capacity numbers given in GiB may appear smaller. For example, 1023 GiB = 1098.4 GB.
+- Disk throughput is measured in input/output operations per second (IOPS) and MBps where MBps = 10^6 bytes/sec.
+- Data disks can operate in cached or uncached modes. For cached data disk operation, the host cache mode is set to ReadOnly or ReadWrite. For uncached data disk operation, the host cache mode is set to None.
+- To learn how to get the best storage performance for your VMs, see [Virtual machine and disk performance](../../../virtual-machines/disks-performance.md).
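+
+The burst figures in the table above describe VM-level bursting, which is credit based and applied automatically. Separately, larger Premium SSD data disks (greater than 512 GiB) can opt into on-demand disk bursting; this is a hedged sketch with a placeholder disk name:
+
+```bash
+# Enable on-demand bursting on an existing Premium SSD data disk larger than 512 GiB.
+az disk update --resource-group myResourceGroup --name myLargeDataDisk --enable-bursting true
+```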
++
+### [Network](#tab/sizenetwork)
+
+Network interface info for each size
+
+| Size Name | Max NICs (Qty.) | Max Bandwidth (Mbps) |
+| | | |
+| Standard_D2lds_v6 | 2 | 12500 |
+| Standard_D4lds_v6 | 2 | 12500 |
+| Standard_D8lds_v6 | 4 | 12500 |
+| Standard_D16lds_v6 | 8 | 12500 |
+| Standard_D32lds_v6 | 8 | 16000 |
+| Standard_D48lds_v6 | 8 | 24000 |
+| Standard_D64lds_v6 | 8 | 30000 |
+| Standard_D96lds_v6 | 8 | 41000 |
+| Standard_D128lds_v6 | 8 | 54000 |
+
+#### Networking resources
+- [Virtual networks and virtual machines in Azure](../../../virtual-network/network-overview.md)
+- [Virtual machine network bandwidth](../../../virtual-network/virtual-machine-network-throughput.md)
+
+#### Table definitions
+- Expected network bandwidth is the maximum aggregated bandwidth allocated per VM type across all NICs, for all destinations. For more information, see [Virtual machine network bandwidth](../../../virtual-network/virtual-machine-network-throughput.md).
+- Upper limits aren't guaranteed. Limits offer guidance for selecting the right VM type for the intended application. Actual network performance will depend on several factors, including network congestion, application loads, and network settings. For information on optimizing network throughput, see [Optimize network throughput for Azure virtual machines](../../../virtual-network/virtual-network-optimize-network-bandwidth.md).
+- To achieve the expected network performance on Linux or Windows, you may need to select a specific version or optimize your VM. For more information, see [Bandwidth/Throughput testing (NTTTCP)](../../../virtual-network/virtual-network-bandwidth-testing.md).
+
+### [Accelerators](#tab/sizeaccelerators)
+
+Accelerator (GPUs, FPGAs, etc.) info for each size
+
+> [!NOTE]
+> No accelerators are present in this series.
+++
virtual-machines Dlsv6 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/general-purpose/dlsv6-series.md
+
+ Title: Dlsv6 size series
+description: Information on and specifications of the Dlsv6-series sizes
++++ Last updated : 07/29/2024++++
+# Dlsv6 sizes series
+
+>[!NOTE]
+>This VM series is currently in **Preview**. See the [Preview Terms Of Use | Microsoft Azure](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
++
+## Host specifications
+
+## Feature support
+[Premium Storage](../../premium-storage-performance.md): Supported <br>[Premium Storage caching](../../premium-storage-performance.md): Supported <br>[Live Migration](../../maintenance-and-updates.md): Not Supported <br>[Memory Preserving Updates](../../maintenance-and-updates.md): Supported <br>[Generation 2 VMs](../../generation-2.md): Supported <br>[Generation 1 VMs](../../generation-2.md): Not Supported <br>[Accelerated Networking](../../../virtual-network/create-vm-accelerated-networking-cli.md): Supported <br>[Ephemeral OS Disk](../../ephemeral-os-disks.md): Not Supported <br>[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
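+
+If you already run workloads on an earlier Dls-family size, you can check which sizes a VM can move to and then resize it into this series, subject to preview availability in your region. Placeholder names are used below:
+
+```bash
+# List the sizes the VM can resize to, then resize it to a Dlsv6 size.
+az vm list-vm-resize-options --resource-group myResourceGroup --name myVm --output table
+az vm resize --resource-group myResourceGroup --name myVm --size Standard_D8ls_v6
+```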
+
+## Sizes in series
+
+### [Basics](#tab/sizebasic)
+
+vCPUs (Qty.) and Memory for each size
+
+| Size Name | vCPUs (Qty.) | Memory (GB) |
+| | | |
+| Standard_D2ls_v6 | 2 | 4 |
+| Standard_D4ls_v6 | 4 | 8 |
+| Standard_D8ls_v6 | 8 | 16 |
+| Standard_D16ls_v6 | 16 | 32 |
+| Standard_D32ls_v6 | 32 | 64 |
+| Standard_D48ls_v6 | 48 | 96 |
+| Standard_D64ls_v6 | 64 | 128 |
+| Standard_D96ls_v6 | 96 | 192 |
+| Standard_D128ls_v6 | 128 | 256 |
+
+#### VM Basics resources
+- [Check vCPU quotas](../../../virtual-machines/quotas.md)
+
+### [Local Storage](#tab/sizestoragelocal)
+
+Local (temp) storage info for each size
+
+> [!NOTE]
+> No local storage present in this series.
+>
+> For frequently asked questions, see [Azure VM sizes with no local temp disk](../../azure-vms-no-temp-disk.yml).
+++
+### [Remote Storage](#tab/sizestorageremote)
+
+Remote (uncached) storage info for each size
+
+| Size Name | Max Remote Storage Disks (Qty.) | Uncached Disk IOPS | Uncached Disk Speed (MBps) | Uncached Disk Burst<sup>1</sup> IOPS | Uncached Disk Burst<sup>1</sup> Speed (MBps) | Uncached Special<sup>2</sup> Disk IOPS | Uncached Special<sup>2</sup> Disk Speed (MBps) | Uncached Burst<sup>1</sup> Special<sup>2</sup> Disk IOPS | Uncached Burst<sup>1</sup> Special<sup>2</sup> Disk Speed (MBps) |
+| | | | | | | | | | |
+| Standard_D2ls_v6 | 8 | 3750 | 106 | 40000 | 1250 | 4167 | 124 | 44444 | 1463 |
+| Standard_D4ls_v6 | 12 | 6400 | 212 | 40000 | 1250 | 8333 | 248 | 52083 | 1463 |
+| Standard_D8ls_v6 | 24 | 12800 | 424 | 40000 | 1250 | 16667 | 496 | 52083 | 1463 |
+| Standard_D16ls_v6 | 48 | 25600 | 848 | 40000 | 1250 | 33333 | 992 | 52083 | 1463 |
+| Standard_D32ls_v6 | 64 | 51200 | 1696 | 80000 | 1696 | 66667 | 1984 | 104167 | 1984 |
+| Standard_D48ls_v6 | 64 | 76800 | 2544 | 80000 | 2544 | 100000 | 2976 | 104167 | 2976 |
+| Standard_D64ls_v6 | 64 | 102400 | 3392 | 102400 | 3392 | 133333 | 3969 | 133333 | 3969 |
+| Standard_D96ls_v6 | 64 | 153600 | 5088 | 153600 | 5088 | 200000 | 5953 | 200000 | 5953 |
+| Standard_D128ls_v6 | 64 | 204800 | 6782 | 204800 | 6782 | 266667 | 7935 | 266667 | 7935 |
+
+#### Storage resources
+- [Introduction to Azure managed disks](../../../virtual-machines/managed-disks-overview.md)
+- [Azure managed disk types](../../../virtual-machines/disks-types.md)
+- [Share an Azure managed disk](../../../virtual-machines/disks-shared.md)
+
+#### Table definitions
+- <sup>1</sup>Some sizes support [bursting](../../disk-bursting.md) to temporarily increase disk performance. Burst speeds can be maintained for up to 30 minutes at a time.
+- <sup>2</sup>Special Storage refers to either [Ultra Disk](../../../virtual-machines/disks-enable-ultra-ssd.md) or [Premium SSD v2](../../../virtual-machines/disks-deploy-premium-v2.md) storage.
+- Storage capacity is shown in units of GiB or 1024^3 bytes. When you compare disks measured in GB (1000^3 bytes) to disks measured in GiB (1024^3 bytes), remember that capacity numbers given in GiB may appear smaller. For example, 1023 GiB = 1098.4 GB.
+- Disk throughput is measured in input/output operations per second (IOPS) and MBps where MBps = 10^6 bytes/sec.
+- Data disks can operate in cached or uncached modes. For cached data disk operation, the host cache mode is set to ReadOnly or ReadWrite. For uncached data disk operation, the host cache mode is set to None.
+- To learn how to get the best storage performance for your VMs, see [Virtual machine and disk performance](../../../virtual-machines/disks-performance.md).
++
+### [Network](#tab/sizenetwork)
+
+Network interface info for each size
+
+| Size Name | Max NICs (Qty.) | Max Bandwidth (Mbps) |
+| | | |
+| Standard_D2ls_v6 | 2 | 12500 |
+| Standard_D4ls_v6 | 2 | 12500 |
+| Standard_D8ls_v6 | 4 | 12500 |
+| Standard_D16ls_v6 | 8 | 12500 |
+| Standard_D32ls_v6 | 8 | 16000 |
+| Standard_D48ls_v6 | 8 | 24000 |
+| Standard_D64ls_v6 | 8 | 30000 |
+| Standard_D96ls_v6 | 8 | 41000 |
+| Standard_D128ls_v6 | 8 | 54000 |
+
+#### Networking resources
+- [Virtual networks and virtual machines in Azure](../../../virtual-network/network-overview.md)
+- [Virtual machine network bandwidth](../../../virtual-network/virtual-machine-network-throughput.md)
+
+#### Table definitions
+- Expected network bandwidth is the maximum aggregated bandwidth allocated per VM type across all NICs, for all destinations. For more information, see [Virtual machine network bandwidth](../../../virtual-network/virtual-machine-network-throughput.md).
+- Upper limits aren't guaranteed. Limits offer guidance for selecting the right VM type for the intended application. Actual network performance will depend on several factors, including network congestion, application loads, and network settings. For information on optimizing network throughput, see [Optimize network throughput for Azure virtual machines](../../../virtual-network/virtual-network-optimize-network-bandwidth.md).
+- To achieve the expected network performance on Linux or Windows, you may need to select a specific version or optimize your VM. For more information, see [Bandwidth/Throughput testing (NTTTCP)](../../../virtual-network/virtual-network-bandwidth-testing.md).
+
+### [Accelerators](#tab/sizeaccelerators)
+
+Accelerator (GPUs, FPGAs, etc.) info for each size
+
+> [!NOTE]
+> No accelerators are present in this series.
+++
virtual-machines Dpdsv6 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/general-purpose/dpdsv6-series.md
Title: Dpdsv6 size series description: Information on and specifications of the Dpdsv6-series sizes -+ - build-2024
virtual-machines Dpldsv6 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/general-purpose/dpldsv6-series.md
Title: Dpldsv6 size series description: Information on and specifications of the Dpldsv6-series sizes -+ - build-2024
virtual-machines Dplsv6 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/general-purpose/dplsv6-series.md
Title: Dplsv6 size series description: Information on and specifications of the Dplsv6-series sizes -+ - build-2024
virtual-machines Dpsv6 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/general-purpose/dpsv6-series.md
Title: Dpsv6 size series description: Information on and specifications of the Dpsv6-series sizes -+ - build-2024
virtual-machines Dsv6 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/general-purpose/dsv6-series.md
+
+ Title: Dsv6 size series
+description: Information on and specifications of the Dsv6-series sizes
++++ Last updated : 07/29/2024++++
+# Dsv6 sizes series
+
+>[!NOTE]
+>This VM series is currently in **Preview**. See the [Preview Terms Of Use | Microsoft Azure](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
++
+## Host specifications
+
+## Feature support
+[Premium Storage](../../premium-storage-performance.md): Supported <br>[Premium Storage caching](../../premium-storage-performance.md): Supported <br>[Live Migration](../../maintenance-and-updates.md): Not Supported <br>[Memory Preserving Updates](../../maintenance-and-updates.md): Not Supported <br>[Generation 2 VMs](../../generation-2.md): Supported <br>[Generation 1 VMs](../../generation-2.md): Not Supported <br>[Accelerated Networking](../../../virtual-network/create-vm-accelerated-networking-cli.md): Supported <br>[Ephemeral OS Disk](../../ephemeral-os-disks.md): Not Supported <br>[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported <br>
+
+## Sizes in series
+
+### [Basics](#tab/sizebasic)
+
+vCPUs (Qty.) and Memory for each size
+
+| Size Name | vCPUs (Qty.) | Memory (GB) |
+| | | |
+| Standard_D2s_v6 | 2 | 8 |
+| Standard_D4s_v6 | 4 | 16 |
+| Standard_D8s_v6 | 8 | 32 |
+| Standard_D16s_v6 | 16 | 64 |
+| Standard_D32s_v6 | 32 | 128 |
+| Standard_D48s_v6 | 48 | 192 |
+| Standard_D64s_v6 | 64 | 256 |
+| Standard_D96s_v6 | 96 | 384 |
+| Standard_D128s_v6 | 128 | 512 |
+
+#### VM Basics resources
+- [Check vCPU quotas](../../../virtual-machines/quotas.md)
+
+### [Local Storage](#tab/sizestoragelocal)
+
+Local (temp) storage info for each size
+
+> [!NOTE]
+> No local storage present in this series.
+>
+> For frequently asked questions, see [Azure VM sizes with no local temp disk](../../azure-vms-no-temp-disk.yml).
+++
+### [Remote Storage](#tab/sizestorageremote)
+
+Remote (uncached) storage info for each size
+
+| Size Name | Max Remote Storage Disks (Qty.) | Uncached Disk IOPS | Uncached Disk Speed (MBps) | Uncached Disk Burst<sup>1</sup> IOPS | Uncached Disk Burst<sup>1</sup> Speed (MBps) | Uncached Special<sup>2</sup> Disk IOPS | Uncached Special<sup>2</sup> Disk Speed (MBps) | Uncached Burst<sup>1</sup> Special<sup>2</sup> Disk IOPS | Uncached Burst<sup>1</sup> Special<sup>2</sup> Disk Speed (MBps) |
+| | | | | | | | | | |
+| Standard_D2s_v6 | 8 | 3750 | 106 | 40000 | 1250 | 4167 | 124 | 44444 | 1463 |
+| Standard_D4s_v6 | 12 | 6400 | 212 | 40000 | 1250 | 8333 | 248 | 52083 | 1463 |
+| Standard_D8s_v6 | 24 | 12800 | 424 | 40000 | 1250 | 16667 | 496 | 52083 | 1463 |
+| Standard_D16s_v6 | 48 | 25600 | 848 | 40000 | 1250 | 33333 | 992 | 52083 | 1463 |
+| Standard_D32s_v6 | 64 | 51200 | 1696 | 80000 | 1696 | 66667 | 1984 | 104167 | 1984 |
+| Standard_D48s_v6 | 64 | 76800 | 2544 | 80000 | 2544 | 100000 | 2976 | 104167 | 2976 |
+| Standard_D64s_v6 | 64 | 102400 | 3392 | 102400 | 3392 | 133333 | 3969 | 133333 | 3969 |
+| Standard_D96s_v6 | 64 | 153600 | 5088 | 153600 | 5088 | 200000 | 5953 | 200000 | 5953 |
+| Standard_D128s_v6 | 64 | 204800 | 6782 | 204800 | 6782 | 266667 | 7935 | 266667 | 7935 |
+
+#### Storage resources
+- [Introduction to Azure managed disks](../../../virtual-machines/managed-disks-overview.md)
+- [Azure managed disk types](../../../virtual-machines/disks-types.md)
+- [Share an Azure managed disk](../../../virtual-machines/disks-shared.md)
+
+#### Table definitions
+- <sup>1</sup>Some sizes support [bursting](../../disk-bursting.md) to temporarily increase disk performance. Burst speeds can be maintained for up to 30 minutes at a time.
+- <sup>2</sup>Special Storage refers to either [Ultra Disk](../../../virtual-machines/disks-enable-ultra-ssd.md) or [Premium SSD v2](../../../virtual-machines/disks-deploy-premium-v2.md) storage.
+- Storage capacity is shown in units of GiB or 1024^3 bytes. When you compare disks measured in GB (1000^3 bytes) to disks measured in GiB (1024^3 bytes), remember that capacity numbers given in GiB may appear smaller. For example, 1023 GiB = 1098.4 GB.
+- Disk throughput is measured in input/output operations per second (IOPS) and MBps where MBps = 10^6 bytes/sec.
+- Data disks can operate in cached or uncached modes. For cached data disk operation, the host cache mode is set to ReadOnly or ReadWrite. For uncached data disk operation, the host cache mode is set to None.
+- To learn how to get the best storage performance for your VMs, see [Virtual machine and disk performance](../../../virtual-machines/disks-performance.md).
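+
+Because the table above lists Special Storage limits, these sizes can be paired with Premium SSD v2 data disks. The sketch below uses placeholder names; Premium SSD v2 is zonal and doesn't support host caching, so the VM and disk are assumed to be in the same zone:
+
+```bash
+# Create a Premium SSD v2 data disk and attach it to an existing Dsv6 VM in the same zone.
+az disk create --resource-group myResourceGroup --name myPv2DataDisk \
+  --size-gb 128 --sku PremiumV2_LRS --location eastus --zone 1
+az vm disk attach --resource-group myResourceGroup --vm-name myDsv6Vm --name myPv2DataDisk
+```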
++
+### [Network](#tab/sizenetwork)
+
+Network interface info for each size
+
+| Size Name | Max NICs (Qty.) | Max Bandwidth (Mbps) |
+| | | |
+| Standard_D2s_v6 | 2 | 12500 |
+| Standard_D4s_v6 | 2 | 12500 |
+| Standard_D8s_v6 | 4 | 12500 |
+| Standard_D16s_v6 | 8 | 12500 |
+| Standard_D32s_v6 | 8 | 16000 |
+| Standard_D48s_v6 | 8 | 24000 |
+| Standard_D64s_v6 | 8 | 30000 |
+| Standard_D96s_v6 | 8 | 41000 |
+| Standard_D128s_v6 | 8 | 54000 |
+
+#### Networking resources
+- [Virtual networks and virtual machines in Azure](../../../virtual-network/network-overview.md)
+- [Virtual machine network bandwidth](../../../virtual-network/virtual-machine-network-throughput.md)
+
+#### Table definitions
+- Expected network bandwidth is the maximum aggregated bandwidth allocated per VM type across all NICs, for all destinations. For more information, see [Virtual machine network bandwidth](../../../virtual-network/virtual-machine-network-throughput.md).
+- Upper limits aren't guaranteed. Limits offer guidance for selecting the right VM type for the intended application. Actual network performance depends on several factors, including network congestion, application loads, and network settings. For information on optimizing network throughput, see [Optimize network throughput for Azure virtual machines](../../../virtual-network/virtual-network-optimize-network-bandwidth.md). A minimal sketch of enabling accelerated networking follows this list.
+- To achieve the expected network performance on Linux or Windows, you may need to select a specific version or optimize your VM. For more information, see [Bandwidth/Throughput testing (NTTTCP)](../../../virtual-network/virtual-network-bandwidth-testing.md).
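As one common optimization, accelerated networking can help get closer to the listed bandwidth limits where the size and OS image support it. The following is a minimal sketch with placeholder resource names; it assumes the virtual network and subnet already exist.

```azurecli
# Create a NIC with accelerated networking enabled (resource names are placeholders).
az network nic create \
  --resource-group myResourceGroup \
  --name myNic \
  --vnet-name myVNet \
  --subnet mySubnet \
  --accelerated-networking true

# Or enable it on an existing NIC.
az network nic update \
  --resource-group myResourceGroup \
  --name myNic \
  --accelerated-networking true
```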
+
+### [Accelerators](#tab/sizeaccelerators)
+
+Accelerator (GPUs, FPGAs, etc.) info for each size
+
+> [!NOTE]
+> No accelerators are present in this series.
+++
virtual-machines Epdsv6 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/memory-optimized/epdsv6-series.md
Title: Epdsv6 size series description: Information on and specifications of the Epdsv6-series sizes -+ - build-2024
virtual-machines Epsv6 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/memory-optimized/epsv6-series.md
Title: Epsv6 size series description: Information on and specifications of the Epsv6-series sizes -+ - build-2024
virtual-machines Mbsv3 Mbdsv3 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/memory-optimized/mbsv3-mbdsv3-series.md
The increased remote storage performance of these VMs is ideal for storage throu
• IOPS/MBps listed here refer to uncached mode for data disks.
-• To learn how to get the best storage performance for your VMs, see [Virtual machine and disk performance](https://learn.microsoft.com/azure/virtual-machines/disks-performance).
+• To learn how to get the best storage performance for your VMs, see [Virtual machine and disk performance](/azure/virtual-machines/disks-performance).
• IOPS spec is defined using common small random block sizes like 4KiB or 8KiB. Maximum IOPS is defined as "up-to" and measured using 4KiB random read workloads.
virtual-machines Av1 Series Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/migration-guides/av1-series-retirement.md
Title: Av1-series retirement description: Retirement information for the Av1 series virtual machine sizes. Before retirement, migrate your workloads to Av2-series virtual machines. -+ Last updated 06/08/2022
virtual-machines Nc Series Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/migration-guides/nc-series-retirement.md
Title: NC-series retirement description: NC-series retirement by September 6, 2023 -+ Last updated 12/20/2022
virtual-machines Ncv2 Series Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/migration-guides/ncv2-series-retirement.md
Title: NCv2-series retirement description: NCv2-series retirement by September 6, 2023 -+ Last updated 11/21/2022
virtual-machines Nd Series Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/migration-guides/nd-series-retirement.md
Title: ND-series retirement description: ND-series retirement by September 6, 2023 -+ Last updated 02/27/2023
virtual-machines Nv Series Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/migration-guides/nv-series-retirement.md
Title: NV series retirement description: NV series retirement starting September 6, 2023 -+ Last updated 02/27/2023
virtual-machines Spot Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/spot-portal.md
Title: Use the portal to deploy Azure Spot Virtual Machines description: How to use the Portal to deploy Spot Virtual Machines --++ Last updated 02/28/2023
virtual-machines Spot Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/spot-vms.md
Title: Use Azure Spot Virtual Machines
description: Learn how to use Azure Spot Virtual Machines to save on costs. --++ Last updated 06/14/2024
virtual-machines Trusted Launch Existing Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/trusted-launch-existing-vm.md
description: Learn how to enable Trusted launch on existing Azure virtual machin
-+ Last updated 08/13/2023
virtual-machines Disk Encryption Cli Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disk-encryption-cli-quickstart.md
Title: Create and encrypt a Windows VM with Azure CLI
description: In this quickstart, you learn how to use Azure CLI to create and encrypt a Windows virtual machine -+
virtual-machines Disk Encryption Key Vault Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disk-encryption-key-vault-aad.md
Title: Create and configure a key vault for Azure Disk Encryption with Microsoft Entra ID (previous release) description: In this article, learn how to create and configure a key vault for Azure Disk Encryption with Microsoft Entra ID. -+
virtual-machines Disk Encryption Overview Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disk-encryption-overview-aad.md
Title: Azure Disk Encryption with Azure AD (previous release) description: This article provides prerequisites for using Microsoft Azure Disk Encryption for IaaS VMs. -+
virtual-machines Disk Encryption Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disk-encryption-overview.md
Title: Enable Azure Disk Encryption for Windows VMs description: This article provides instructions on enabling Microsoft Azure Disk Encryption for Windows VMs. -+
virtual-machines Disk Encryption Portal Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disk-encryption-portal-quickstart.md
Title: Create and encrypt a Windows VM with the Azure portal
description: In this quickstart, you learn how to use the Azure portal to create and encrypt a Windows virtual machine -+
virtual-machines Disk Encryption Powershell Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disk-encryption-powershell-quickstart.md
Title: Create and encrypt a Windows VM with Azure PowerShell
description: In this quickstart, you learn how to use Azure PowerShell to create and encrypt a Windows virtual machine -+
virtual-machines Disk Encryption Sample Scripts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disk-encryption-sample-scripts.md
Title: Azure Disk Encryption sample scripts for Windows VMs description: This article is the appendix for Microsoft Azure Disk Encryption for Windows VMs. -+
virtual-machines Disk Encryption Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disk-encryption-troubleshooting.md
Title: Azure Disk Encryption troubleshooting guide description: This article provides troubleshooting tips for Microsoft Azure Disk Encryption for Windows VMs. -+
virtual-machines Disk Encryption Windows Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disk-encryption-windows-aad.md
Title: Azure Disk Encryption with Microsoft Entra ID for Windows VMs (previous release) description: This article provides instructions on enabling Microsoft Azure Disk Encryption for Windows IaaS VMs. -+
virtual-machines Disk Encryption Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disk-encryption-windows.md
Title: Azure Disk Encryption scenarios on Windows VMs description: This article provides instructions on enabling Microsoft Azure Disk Encryption for Windows VMs for various scenarios -+
virtual-machines Key Vault Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/key-vault-setup.md
Title: Set up Key Vault using PowerShell description: How to set up Key Vault for use with a virtual machine using PowerShell. -+ Last updated 01/24/2017
virtual-machines N Series Amd Driver Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/n-series-amd-driver-setup.md
Title: Azure N-series AMD GPU driver setup for Windows
description: How to set up AMD GPU drivers for N-series VMs running Windows Server or Windows in Azure -+
virtual-machines N Series Driver Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/n-series-driver-setup.md
Title: Azure N-series NVIDIA GPU driver setup for Windows
description: How to set up NVIDIA GPU drivers for N-series VMs running Windows Server or Windows in Azure -+
virtual-machines Scheduled Event Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/scheduled-event-service.md
Title: Monitor scheduled events for your VMs in Azure description: Learn how to monitor your Azure virtual machines for scheduled events. -+ Last updated 08/20/2019
virtual-machines Scheduled Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/scheduled-events.md
Title: Scheduled Events for Windows VMs in Azure description: Scheduled events using the Azure Metadata Service for your Windows virtual machines. -+
virtual-machines Spot Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/spot-powershell.md
Title: Use PowerShell to deploy Azure Spot Virtual Machines description: Learn how to use Azure PowerShell to deploy Azure Spot Virtual Machines to save on costs. --++ Last updated 02/28/2023
virtual-machines Storage Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/storage-performance.md
Title: Optimize performance on Lsv3, Lasv3, and Lsv2-series Windows VMs description: Learn how to optimize performance for your solution on the Lsv2-series Windows virtual machines (VMs) on Azure. -+ Last updated 06/01/2022
virtual-machines Tutorial Secure Web Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/tutorial-secure-web-server.md
Title: "Tutorial: Secure a Windows web server with TLS certificates in Azure" description: Learn how to use Azure PowerShell to secure a Windows virtual machine that runs the IIS web server with TLS certificates stored in Azure Key Vault. -+
virtual-machines Centos End Of Life https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/centos/centos-end-of-life.md
Title: CentOS end-of-life (EOL) guidance description: Understand your options for moving CentOS workloads -+
virtual-machines Byos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/byos.md
Title: Red Hat Enterprise Linux bring-your-own-subscription Azure images | Microsoft Docs description: Learn about bring-your-own-subscription images for Red Hat Enterprise Linux on Azure. -+
virtual-machines Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/overview.md
Title: Red Hat workloads on Azure overview | Microsoft Docs description: Learn about the Red Hat product offerings available on Azure. -+
virtual-machines Redhat Extended Lifecycle Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/redhat-extended-lifecycle-support.md
Title: Red Hat Enterprise Linux Extended Life Cycle Support description: Learn about adding Red Hat Enterprise Extended Life Cycle Support Add-on -+
virtual-machines Redhat Imagelist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/redhat-imagelist.md
Title: Red Hat Enterprise Linux images available in Azure description: Learn about Red Hat Enterprise Linux images in Microsoft Azure -+
virtual-machines Redhat Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/redhat-images.md
Title: Overview of Red Hat Enterprise Linux images in Azure description: Learn about available Red Hat Enterprise Linux images in Azure Marketplace and policies around their naming and retention in Microsoft Azure. -+
virtual-machines Redhat In Place Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/redhat-in-place-upgrade.md
Title: In-place upgrade of Red Hat Enterprise Linux images on Azure description: Learn how to do an in-place upgrade from Red Hat Enterprise 7.x images to the latest 8.x version. -+
virtual-machines Redhat Rhui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/redhat-rhui.md
Title: Red Hat Update Infrastructure | Microsoft Docs description: Learn about Red Hat Update Infrastructure for on-demand Red Hat Enterprise Linux instances in Microsoft Azure. -+
virtual-network Network Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/network-overview.md
We welcome you to share your feedback about this feature in this [quick survey](
For more information about how-to configure multiple address prefixes on a subnet, see [Create multiple prefixes for a subnet](how-to-multiple-prefixes-subnet.md).
+> [!IMPORTANT]
+> There are two subnet properties for address space: **AddressPrefix** (string) and **AddressPrefixes** (list). The distinction and usage are as follows.
+> - The list property was introduced for dual-stack and is also used for scenarios with more than one subnet prefix, as discussed previously.
+> - As part of the Azure portal customer experience update, **AddressPrefixes** is the default property for subnet address space when a subnet is created through the portal.
+> - Any new subnets created through the portal default to the **AddressPrefixes** list property.
+> - Subnets in a dual-stack virtual network, or subnets with more than one prefix, are updated to use the list property.
+> - For existing deployments that use the string property, the current behavior is retained unless you make explicit changes in your virtual network that use the list property for subnet address prefixes, for example adding IPv6 address space or another prefix to the subnet.
+> - We recommend that you check for both properties on a subnet wherever applicable. A minimal sketch of inspecting both properties follows this note.
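For reference, a subnet's current address space properties can be inspected with the Azure CLI. This is a minimal sketch with placeholder resource names; depending on how the subnet was created, one of the two properties is typically null.

```azurecli
# Show both subnet address space properties (names are placeholders).
az network vnet subnet show \
  --resource-group myResourceGroup \
  --vnet-name myVNet \
  --name mySubnet \
  --query "{addressPrefix: addressPrefix, addressPrefixes: addressPrefixes}"
```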
++
## Network security groups

A [network security group (NSG)](../virtual-network/network-security-groups-overview.md) contains a list of Access Control List (ACL) rules that allow or deny network traffic to subnets, NICs, or both. NSGs can be associated with either subnets or individual NICs connected to a subnet. When an NSG is associated with a subnet, the ACL rules apply to all the VMs in that subnet. Traffic to an individual NIC can be restricted by associating an NSG directly to a NIC.
vpn-gateway About Gateway Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/about-gateway-skus.md
For information about working with the legacy gateway SKUs (Basic, Standard, and
You specify the gateway SKU when you create your VPN Gateway. See the following articles for steps:
* [Azure portal](tutorial-create-gateway-portal.md)
+* [PowerShell - Basic SKU](create-gateway-basic-sku-powershell.md)
* [PowerShell](create-gateway-powershell.md)
* [Azure CLI](create-routebased-vpn-gateway-cli.md)
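For a rough idea of where the SKU is specified at creation time, here's a minimal Azure CLI sketch. The resource names, the VpnGw2 SKU value, and the prerequisite public IP and GatewaySubnet are placeholder assumptions; the linked articles remain the authoritative steps.

```azurecli
# Create a route-based VPN gateway with an explicit SKU (names and SKU are placeholders).
# Assumes the virtual network already contains a subnet named GatewaySubnet and the public IP exists.
az network vnet-gateway create \
  --resource-group myResourceGroup \
  --name myVpnGateway \
  --vnet myVNet \
  --public-ip-address myGatewayPublicIP \
  --gateway-type Vpn \
  --vpn-type RouteBased \
  --sku VpnGw2 \
  --no-wait
```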