Updates from: 07/12/2024 01:10:46
Service Microsoft Docs article Related commit history on GitHub Change details
ai-services Concept Accuracy Confidence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-accuracy-confidence.md
- ignite-2023 Previously updated : 07/09/2024 Last updated : 07/11/2024
ai-services Concept Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-custom.md
To create a custom extraction model, label a dataset of documents with the value
> Starting with the version 4.0 (2024-02-29-preview) API, custom neural models now support **overlapping fields** and **table, row, and cell level confidence**. >
-The custom neural (custom document) model uses deep learning models and base model trained on a large collection of documents. This model is then fine-tuned or adapted to your data when you train the model with a labeled dataset. Custom neural models support structured, semi-structured, and unstructured documents to extract fields. Custom neural models currently support English-language documents. When you're choosing between the two model types, start with a neural model to determine if it meets your functional needs. See [neural models](concept-custom-neural.md) to learn more about custom document models.
+The custom neural (custom document) model uses deep learning models and a base model trained on a large collection of documents. This model is then fine-tuned or adapted to your data when you train the model with a labeled dataset. Custom neural models support structured, semi-structured, and unstructured documents to extract fields. When you're choosing between the two model types, start with a neural model to determine if it meets your functional needs. See [neural models](concept-custom-neural.md) to learn more about custom document models.
### Custom template model
The `build custom model` operation adds support for the *template* and *neural*
* Template models only accept documents that have the same basic page structure (a uniform visual appearance) or the same relative positioning of elements within the document.
-* Neural models support documents that have the same information, but different page structures. Examples of these documents include United States W2 forms, which share the same information, but vary in appearance across companies. Neural models currently only support English text.
+* Neural models support documents that have the same information, but different page structures. Examples of these documents include United States W2 forms, which share the same information, but vary in appearance across companies.
This table provides links to the build mode programming language SDK references and code samples on GitHub:
ai-services Create Sas Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/create-sas-tokens.md
- ignite-2023 Previously updated : 07/18/2023 Last updated : 07/11/2024
At a high level, here's how SAS tokens work:
-* Your application submits the SAS token to Azure Storage as part of a REST API request.
+* First, your application submits the SAS token to Azure Storage as part of a REST API request.
-* If the storage service verifies that the SAS is valid, the request is authorized.
-
-* If the SAS token is deemed invalid, the request is declined and the error code 403 (Forbidden) is returned.
+* Next, if the storage service verifies that the SAS is valid, the request is authorized. If the SAS token is deemed invalid, the request is declined and error code 403 (Forbidden) is returned.
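As a rough illustration of this flow (not part of the article), here's a minimal Python sketch that appends a SAS token to a blob URL and checks for a 403 response. The account, container, blob, and token values are placeholders.

```python
import requests

# Hypothetical values; substitute your own storage account, container, blob, and SAS token.
blob_url = "https://<storage-account>.blob.core.windows.net/<container>/<blob-name>"
sas_token = "sv=...&sp=r&sig=..."  # for example, generated in the Azure portal

# The SAS token is passed as the query string of the request URL.
response = requests.get(f"{blob_url}?{sas_token}")

if response.status_code == 403:
    # The storage service rejected the SAS: expired, malformed, or missing permissions.
    print("403 Forbidden: the SAS token was not accepted.")
else:
    response.raise_for_status()
    print(f"Downloaded {len(response.content)} bytes.")
```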
Azure Blob Storage offers three resource types:
The Azure portal is a web-based console that enables you to manage your Azure su
> > :::image type="content" source="media/sas-tokens/need-permissions.png" alt-text="Screenshot that shows the lack of permissions warning."::: >
- > * [Azure role-based access control](../../role-based-access-control/overview.md) (Azure RBAC) is the authorization system used to manage access to Azure resources. Azure RBAC helps you manage access and permissions for your Azure resources.
+ > * [Azure role-based access control](../../role-based-access-control/overview.md) (Azure RBAC) is the authorization system used to manage access to Azure resources. Azure RBAC helps you manage access and permissions for your Azure resources.
> * [Assign an Azure role for access to blob data](../../role-based-access-control/role-assignments-portal.yml?tabs=current) to assign a role that allows for read, write, and delete permissions for your Azure storage container. *See* [Storage Blob Data Contributor](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor). 1. Specify the signed key **Start** and **Expiry** times.
The Azure portal is a web-based console that enables you to manage your Azure su
* When you create a SAS token, the default duration is 48 hours. After 48 hours, you'll need to create a new token. * Consider setting a longer duration period for the time you're using your storage account for Document Intelligence Service operations. * The value of the expiry time is determined by whether you're using an **Account key** or **User delegation key** **Signing method**:
- * **Account key**: There's no imposed maximum time limit; however, best practices recommended that you configure an expiration policy to limit the interval and minimize compromise. [Configure an expiration policy for shared access signatures](/azure/storage/common/sas-expiration-policy).
- * **User delegation key**: The value for the expiry time is a maximum of seven days from the creation of the SAS token. The SAS is invalid after the user delegation key expires, so a SAS with an expiry time of greater than seven days will still only be valid for seven days. For more information,*see* [Use Microsoft Entra credentials to secure a SAS](/azure/storage/blobs/storage-blob-user-delegation-sas-create-cli#use-azure-ad-credentials-to-secure-a-sas).
+ * **Account key**: No imposed maximum time limit; however, best practice is to configure an expiration policy to limit the interval and minimize the risk of compromise. [Configure an expiration policy for shared access signatures](/azure/storage/common/sas-expiration-policy).
+ * **User delegation key**: The value for the expiry time is a maximum of seven days from the creation of the SAS token. The SAS is invalid after the user delegation key expires, so a SAS with an expiry time greater than seven days is still only valid for seven days. For more information, *see* [Use Microsoft Entra credentials to secure a SAS](/azure/storage/blobs/storage-blob-user-delegation-sas-create-cli#use-azure-ad-credentials-to-secure-a-sas).
-1. The **Allowed IP addresses** field is optional and specifies an IP address or a range of IP addresses from which to accept requests. If the request IP address doesn't match the IP address or address range specified on the SAS token, authorization fails. The IP address or a range of IP addresses must be public IPs, not private. For more information,*see*, [**Specify an IP address or IP range**](/rest/api/storageservices/create-account-sas#specify-an-ip-address-or-ip-range).
+1. The **Allowed IP addresses** field is optional and specifies an IP address or a range of IP addresses from which to accept requests. If the request IP address doesn't match the IP address or address range specified on the SAS token, authorization fails. The IP address or a range of IP addresses must be public IPs, not private. For more information, *see* [**Specify an IP address or IP range**](/rest/api/storageservices/create-account-sas#specify-an-ip-address-or-ip-range).
1. The **Allowed protocols** field is optional and specifies the protocol permitted for a request made with the SAS token. The default value is HTTPS.
The Azure portal is a web-based console that enables you to manage your Azure su
1. The **Blob SAS token** query string and **Blob SAS URL** appear in the lower area of the window. To use the Blob SAS token, append it to a storage service URI.
-1. Copy and paste the **Blob SAS token** and **Blob SAS URL** values in a secure location. They're displayed only once and can't be retrieved after the window is closed.
+1. Copy and paste the **Blob SAS token** and **Blob SAS URL** values in a secure location. The values are displayed only once and can't be retrieved after the window is closed.
1. To [construct a SAS URL](#use-your-sas-url-to-grant-access), append the SAS token (URI) to the URL for a storage service.
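The portal steps above are one way to produce the token. As an alternative sketch (an assumption, not part of this article's procedure), the `azure-storage-blob` Python package can generate a container SAS programmatically and append it to the container URL; the account details and expiry window below are illustrative.

```python
from datetime import datetime, timedelta, timezone

from azure.storage.blob import ContainerSasPermissions, generate_container_sas

# Hypothetical account details; replace with your own values.
account_name = "mystorageaccount"
account_key = "<account-key>"
container_name = "training-docs"

# Keep the expiry short; long-lived account-key SAS tokens increase the risk of compromise.
sas_token = generate_container_sas(
    account_name=account_name,
    container_name=container_name,
    account_key=account_key,
    permission=ContainerSasPermissions(read=True, write=True, list=True, delete=True),
    expiry=datetime.now(timezone.utc) + timedelta(hours=48),
)

# Construct the SAS URL by appending the token to the container URL.
sas_url = f"https://{account_name}.blob.core.windows.net/{container_name}?{sas_token}"
print(sas_url)
```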
To use your SAS URL with the [REST API](/rest/api/aiservices/document-models/bui
} ```
-That's it! You've learned how to create SAS tokens to authorize how clients access your data.
+That's it! You learned how to create SAS tokens to authorize how clients access your data.
## Next step
ai-services Deploy Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/deploy-label-tool.md
- ignite-2023 Previously updated : 07/18/2023 Last updated : 07/11/2024 monikerRange: 'doc-intel-2.1.0'
The Document Intelligence Sample Labeling tool is an application that provides a
* [Run the Sample Labeling tool locally](#run-the-sample-labeling-tool-locally) * [Deploy the Sample Labeling tool to an Azure Container Instance (ACI)](#deploy-with-azure-container-instances-aci)
-* [Use and contribute to the open-source OCR Form Labeling Tool](#open-source-on-github)
+* [Use and contribute to the open-source Form Labeling Tool](#open-source-on-github)
## Run the Sample Labeling tool locally
Follow these steps to create a new resource using the Azure portal:
### Continuous deployment
-After you've created your web app, you can enable the continuous deployment option:
+After you create your web app, you can enable the continuous deployment option:
* From the left pane, choose **Container settings**. * In the main window, navigate to Continuous deployment and toggle between the **On** and **Off** buttons to set your preference:
As an alternative to using the Azure portal, you can create a resource using the
There are a few things you need to know about this command:
-* `DNS_NAME_LABEL=aci-demo-$RANDOM` generates a random DNS name.
+* `DNS_NAME_LABEL=aci-demo-$RANDOM` generates a random Domain Name System (DNS) name label.
* This sample assumes that you have a resource group that you can use to create a resource. Replace `<resource_group_name>` with a valid resource group associated with your subscription. * You need to specify where you want to create the resource. Replace `<region name>` with your desired region for the web app.
-* This command automatically accepts EULA.
+* This command automatically accepts the End User License Agreement (EULA).
From the Azure CLI, run this command to create a web app resource for the Sample Labeling tool:
az container create \
### Connect to Microsoft Entra ID for authorization
-It's recommended that you connect your web app to Microsoft Entra ID. This connection ensures that only users with valid credentials can sign in and use your web app. Follow the instructions in [Configure your App Service app](../../app-service/configure-authentication-provider-aad.md) to connect to Microsoft Entra ID.
+We recommend that you connect your web app to Microsoft Entra ID. This connection ensures that only users with valid credentials can sign in and use your web app. Follow the instructions in [Configure your App Service app](../../app-service/configure-authentication-provider-aad.md) to connect to Microsoft Entra ID.
## Open source on GitHub
-The OCR Form Labeling Tool is also available as an open-source project on GitHub. The tool is a web application built using React + Redux, and is written in TypeScript. To learn more or contribute, see [OCR Form Labeling Tool](https://github.com/microsoft/OCR-Form-Tools/blob/master/README.md).
+The Form Labeling Tool is also available as an open-source project on GitHub. The tool is a web application built using React + Redux, and is written in TypeScript. To learn more or contribute, see [Form Labeling Tool](https://github.com/microsoft/OCR-Form-Tools/blob/master/README.md).
## Next steps
ai-services Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/label-tool.md
Title: "How-to: Analyze documents, Label forms, train a model, and analyze forms with Document Intelligence (formerly Form Recognizer)"
-description: How to use the Document Intelligence sample tool to analyze documents, invoices, receipts etc. Label and create a custom model to extract text, tables, selection marks, structure and key-value pairs from documents.
+description: How to use the Document Intelligence sample tool to analyze documents, invoices, receipts, and more. Label and create a custom model to extract text, tables, selection marks, structure, and key-value pairs from documents.
- ignite-2023 Previously updated : 07/18/2023 Last updated : 07/11/2024 monikerRange: 'doc-intel-2.1.0'
When you create or open a project, the main tag editor window opens. The tag edi
Select **Run Layout on unvisited documents** on the left pane to get the text and table layout information for each document. The labeling tool draws bounding boxes around each text element.
-The labeling tool also shows which tables have been automatically extracted. Select the table/grid icon on the left hand of the document to see the extracted table. In this quickstart, because the table content is automatically extracted, we don't label the table content, but rather rely on the automated extraction.
+The labeling tool also shows which tables were automatically extracted. To see extracted tables, select the table/grid icon on the left-hand side of the document. In this quickstart, because the table content is automatically extracted, we don't label the table content, but rather rely on the automated extraction.
:::image type="content" source="media/label-tool/table-extraction.png" alt-text="Table visualization in Sample Labeling tool.":::
At times, your data might lend itself better to being labeled as a table rather
:::image type="content" source="media/label-tool/table-tag.png" alt-text="Configuring a table tag.":::
-Once you've defined your table tag, tag the cell values.
+Once you define your table tag, tag the cell values.
:::image type="content" source="media/table-labeling.png" alt-text="Labeling a table."::: ## Train a custom model
-Choose the Train icon on the left pane to open the Training page. Then select the **Train** button to begin training the model. Once the training process completes, you see the following information:
+To open the Training page, choose the Train icon on the left pane. Then select the **Train** button to begin training the model. Once the training process completes, you see the following information:
* **Model ID** - The ID of the model that was created and trained. Each training call creates a new model with its own ID. Copy this string to a secure location; you need it if you want to do prediction calls through the [REST API](~/articles/ai-services/document-intelligence/how-to-guides/use-sdk-rest-api.md?view=doc-intel-2.1.0&preserve-view=true?pivots=programming-language-rest-api&tabs=preview%2cv2-1) or [client library guide](~/articles/ai-services/document-intelligence/how-to-guides/use-sdk-rest-api.md?view=doc-intel-2.1.0&preserve-view=true). * **Average Accuracy** - The model's average accuracy. You can improve model accuracy by adding and labeling more forms, then retraining to create a new model. We recommend starting by labeling five forms and adding more forms as needed.
Choose the Train icon on the left pane to open the Training page. Then select th
:::image type="content" source="media/label-tool/train-screen.png" alt-text="Training view.":::
-After training finishes, examine the **Average Accuracy** value. If it's low, you should add more input documents and repeat the labeling steps. The documents you've already labeled remain in the project index.
+After training finishes, examine the **Average Accuracy** value. If it's low, you should add more input documents and repeat the labeling steps. The documents you already labeled remain in the project index.
> [!TIP] > You can also run the training process with a REST API call. To learn how to do this, see [Train with labels using Python](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/rest/python-labeled-data.md).
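As a rough sketch of that REST path (assumptions: the v2.1 endpoint shape and a SAS URL for the labeled training container; verify the details against the linked sample), training with labels can be started like this:

```python
import requests

# Hypothetical values; replace with your resource endpoint, key, and training container SAS URL.
endpoint = "https://<your-resource>.cognitiveservices.azure.com"
key = "<your-key>"
training_container_sas_url = "https://<account>.blob.core.windows.net/<container>?<sas-token>"

# v2.1 "train with labels" request: the service reads documents, OCR files, and label files
# from the container and returns the new model's location in the Location header.
response = requests.post(
    f"{endpoint}/formrecognizer/v2.1/custom/models",
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json={"source": training_container_sas_url, "useLabelFile": True},
)
response.raise_for_status()
print("Model location:", response.headers.get("Location"))
```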
With Model Compose, you can compose up to 200 models to a single model ID. When
## Analyze a form
-Select the Analyze icon from the navigation bar to test your model. Select source *Local file*. Browse for a file and select a file from the sample dataset that you unzipped in the test folder. Then choose the **Run analysis** button to get key/value pairs, text and tables predictions for the form. The tool applies tags in bounding boxes and reports the confidence of each tag.
+To test your model, select the `Analyze` icon from the navigation bar. Select source *Local file*. Browse for a file and select a file from the sample dataset that you unzipped in the test folder. Then choose the **Run analysis** button to get key/value pair, text, and table predictions for the form. The tool applies tags in bounding boxes and reports the confidence of each tag.
:::image type="content" source="media/analyze.png" alt-text="Screenshot of analyze-a-custom-form window"::: > [!TIP]
-> You can also run the Analyze API with a REST call. To learn how to do this, see [Train with labels using Python](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/rest/python-labeled-data.md).
+> You can also run the `Analyze` API with a REST call. To learn how to do this, see [Train with labels using Python](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/rest/python-labeled-data.md).
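Along the same lines, here's a hedged sketch of the v2.1 `Analyze` call with polling (the endpoint shape, headers, and document URL are assumptions; confirm them with the linked sample):

```python
import time

import requests

# Hypothetical values; replace with your resource endpoint, key, model ID, and document URL.
endpoint = "https://<your-resource>.cognitiveservices.azure.com"
key = "<your-key>"
model_id = "<your-model-id>"
document_url = "https://<account>.blob.core.windows.net/<container>/test-form.pdf?<sas-token>"

# Submit the document for analysis; the operation URL comes back in the Operation-Location header.
submit = requests.post(
    f"{endpoint}/formrecognizer/v2.1/custom/models/{model_id}/analyze",
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json={"source": document_url},
)
submit.raise_for_status()
operation_url = submit.headers["Operation-Location"]

# Poll until the analysis finishes, then inspect the extracted fields and their confidence.
while True:
    result = requests.get(operation_url, headers={"Ocp-Apim-Subscription-Key": key}).json()
    if result["status"] in ("succeeded", "failed"):
        break
    time.sleep(2)
print(result["status"])
```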
## Improve results
-Depending on the reported accuracy, you may want to do further training to improve the model. After you've done a prediction, examine the confidence values for each of the applied tags. If the average accuracy training value is high, but the confidence scores are low (or the results are inaccurate), add the prediction file to the training set, label it, and train again.
+Depending on the reported accuracy, you may want to do further training to improve the model. After you complete a prediction, examine the confidence values for each of the applied tags. If the average accuracy training value is high, but the confidence scores are low (or the results are inaccurate), add the prediction file to the training set, label it, and train again.
The reported average accuracy, confidence scores, and actual accuracy can be inconsistent when the analyzed documents differ from documents used in training. Keep in mind that some documents look similar when viewed by people but can look distinct to the AI model. For example, you might train with a form type that has two variations, where the training set consists of 20% variation A and 80% variation B. During prediction, the confidence scores for documents of variation A are likely to be lower.
Finally, go to the main page (house icon) and select **Open Cloud Project**. The
## Next steps
-In this quickstart, you've learned how to use the Document Intelligence Sample Labeling tool to train a model with manually labeled data. If you'd like to build your own utility to label training data, use the REST APIs that deal with labeled data training.
+In this quickstart, you learned how to use the Document Intelligence Sample Labeling tool to train a model with manually labeled data. If you'd like to build your own utility to label training data, use the REST APIs that deal with labeled data training.
> [!div class="nextstepaction"] > [Train with labels using Python](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/rest/python-labeled-data.md)
ai-services Supervised Table Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/supervised-table-tags.md
- ignite-2023 Previously updated : 07/18/2023 Last updated : 07/11/2024 monikerRange: 'doc-intel-2.1.0' #Customer intent: As a user of the Document Intelligence custom model service, I want to ensure I'm training my model in the best way.
monikerRange: 'doc-intel-2.1.0'
> * You can refer to the [API migration guide](v3-1-migration-guide.md) for detailed information about migrating from v2.1 to v3.0. > * *See* our [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) or [**C#**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), [**Java**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), or [Python](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) SDK quickstarts to get started with version v3.0.
-In this article, you'll learn how to train your custom template model with table tags (labels). Some scenarios require more complex labeling than simply aligning key-value pairs. Such scenarios include extracting information from forms with complex hierarchical structures or encountering items that not automatically detected and extracted by the service. In these cases, you can use table tags to train your custom template model.
+In this article, learn how to train your custom template model with table tags (labels). Some scenarios require more complex labeling than simply aligning key-value pairs. Such scenarios include extracting information from forms with complex hierarchical structures or encountering items that aren't automatically detected and extracted by the service. In these cases, you can use table tags to train your custom template model.
## When should I use table tags? Here are some examples of when using table tags would be appropriate: -- There's data that you wish to extract presented as tables in your forms, and the structure of the tables are meaningful. For instance, each row of the table represents one item and each column of the row represents a specific feature of that item. In this case, you could use a table tag where a column represents features and a row represents information about each feature.-- There's data you wish to extract that isn't presented in specific form fields but semantically, the data could fit in a two-dimensional grid. For instance, your form has a list of people, and includes, a first name, a last name, and an email address. You would like to extract this information. In this case, you could use a table tag with first name, last name, and email address as columns and each row is populated with information about a person from your list.
+* There's data that you wish to extract that is presented as tables in your forms, and the structure of the tables is meaningful. For instance, each row of the table represents one item and each column of the row represents a specific feature of that item. In this case, you could use a table tag where a column represents features and a row represents information about each feature.
+* There's data you wish to extract that isn't presented in specific form fields but that, semantically, could fit in a two-dimensional grid. For instance, your form has a list of people and includes a first name, a surname, and an email address. You would like to extract this information. In this case, you could use a table tag with first name, surname, and email address as columns and each row is populated with information about a person from your list.
> [!NOTE] > Document Intelligence automatically finds and extracts all tables in your documents whether the tables are tagged or not. Therefore, you don't have to label every table from your form with a table tag and your table tags don't have to replicate the structure of every table found in your form. Tables extracted automatically by Document Intelligence will be included in the pageResults section of the JSON output.
ai-services V3 Error Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/v3-error-guide.md
Title: "Reference: Document Intelligence (formerly Form Recognizer) Errors" description: Learn how errors are represented in Document Intelligence and find a list of possible errors returned by the service.-+ - ignite-2023 Previously updated : 07/18/2023 Last updated : 07/11/2024 monikerRange: '>=doc-intel-3.0.0'
monikerRange: '>=doc-intel-3.0.0'
# Error guide v4.0, v3.1, and v3.0
-Document Intelligence uses a unified design to represent all errors encountered in the REST APIs. Whenever an API operation returns a 4xx or 5xx status code, additional information about the error is returned in the response JSON body as follows:
+Document Intelligence uses a unified design to represent all errors encountered in the REST APIs. Whenever an API operation returns a 4xx or 5xx status code, additional information about the error is returned in the response JSON body as follows:
```json {
Document Intelligence uses a unified design to represent all errors encountered
} ```
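As a small illustration of consuming this error shape (a sketch, not service code; the request URL and key are placeholders), client code can surface `error.code`, `error.message`, and any nested `error.details`:

```python
import requests

# Hypothetical request to a Document Intelligence endpoint; replace the URL and key with your own.
response = requests.get(
    "https://<your-resource>.cognitiveservices.azure.com/<document-intelligence-request-path>",
    headers={"Ocp-Apim-Subscription-Key": "<your-key>"},
)

if response.status_code >= 400:
    error = response.json().get("error", {})
    # The top-level code and message describe the most severe failure.
    print(f"{response.status_code}: {error.get('code')} - {error.get('message')}")
    # For long-running operations, individual failures appear under error.details,
    # each with a target that identifies what triggered the error.
    for detail in error.get("details", []):
        print(f"  {detail.get('target')}: {detail.get('code')} - {detail.get('message')}")
```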
-For long-running operations where multiple errors are encountered, the top-level error code is set to the most severe error, with the individual errors listed under the *error.details* property. In such scenarios, the *target* property of each individual error specifies the trigger of the error.
+For long-running operations where multiple errors are encountered, the top-level error code is set to the most severe error, with the individual errors listed under the *error.details* property. In such scenarios, the *target* property of each individual error specifies the trigger of the error.
```json {
The top-level *error.code* property can be one of the following error code messa
| Conflict | The request couldn't be completed due to a conflict. | 409 | | UnsupportedMediaType | Request content type isn't supported. | 415 | | InternalServerError | An unexpected error occurred. | 500 |
-| ServiceUnavailable | A transient error has occurred. Try again. | 503 |
+| ServiceUnavailable | A transient error occurred. Try again. | 503 |
When possible, more details are specified in the *inner error* property.
When possible, more details are specified in the *inner error* property.
| InvalidRequest | OperationNotCancellable | The operation can no longer be canceled. | | InvalidRequest | TrainingContentMissing | Training data is missing: {details} | | InvalidRequest | UnsupportedContent | Content isn't supported: {details} |
-| NotFound | ModelNotFound | The requested model wasn't found. It's deleted or still building. |
-| NotFound | OperationNotFound | The requested operation wasn't found. The identifier is invalid or the operation has expired. |
+| NotFound | ModelNotFound | The requested model wasn't found. It was deleted or is still building. |
+| NotFound | OperationNotFound | The requested operation wasn't found. The identifier is invalid or the operation expired. |
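Since `ServiceUnavailable` (503) is called out above as transient, a small retry sketch can help (illustrative only; the URL and key are placeholders, and you should tune the policy for your workload):

```python
import time

import requests


def call_with_retry(url: str, headers: dict, attempts: int = 3) -> requests.Response:
    """Retry a GET a few times when the service reports a transient 503 error."""
    for attempt in range(1, attempts + 1):
        response = requests.get(url, headers=headers)
        if response.status_code != 503:
            return response
        # ServiceUnavailable is transient; back off briefly and try again.
        time.sleep(2 ** attempt)
    return response


# Hypothetical usage; replace the URL and key with your own values.
resp = call_with_retry(
    "https://<your-resource>.cognitiveservices.azure.com/<request-path>",
    headers={"Ocp-Apim-Subscription-Key": "<your-key>"},
)
print(resp.status_code)
```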
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-support.md
Previously updated : 07/18/2023 Last updated : 07/08/2024
The following table provides links to language support reference articles by sup
|![Document Intelligence icon](~/reusable-content/ce-skilling/azure/medi) | Turn documents into intelligent data-driven solutions. | |![Immersive Reader icon](medi) | Help users read and comprehend text. | |![Language icon](~/reusable-content/ce-skilling/azure/medi) | Build apps with industry-leading natural language understanding capabilities. |
-|![Language Understanding icon](medi) (retired) | Understand natural language in your apps. |
+|![Language Understanding icon](medi) (retired) | Understand natural language in your apps. |
|![QnA Maker icon](medi) (retired) | Distill information into easy-to-navigate questions and answers. | |![Speech icon](~/reusable-content/ce-skilling/azure/medi)| Configure speech-to-text, text-to-speech, translation, and speaker recognition applications. | |![Translator icon](~/reusable-content/ce-skilling/azure/medi) | Translate more than 100 in-use, at-risk, and endangered languages and dialects.|
These Azure AI services are language agnostic and don't have limitations based o
## See also * [What are Azure AI services?](./what-are-ai-services.md)
-* [Create an account](multi-service-resource.md?pivots=azportal)
+* [How to create an account](multi-service-resource.md?pivots=azportal)
ai-services Model Retirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/model-retirements.md
description: Learn about the model deprecations and retirements in Azure OpenAI. Previously updated : 06/19/2024 Last updated : 07/10/2024
Azure OpenAI Service models are continually refreshed with newer and more capabl
* Deprecation * When a model is deprecated, it's no longer available for new customers. It continues to be available for use by customers with existing deployments until the model is retired.
-## Preretirement notification
+## Notifications
Azure OpenAI notifies customers of active Azure OpenAI Service deployments for models with upcoming retirements. We notify customers of upcoming retirements as follows for each deployment:
-* At least 60 days before retirement
-* At least 30 days before retirement
-* At retirement
+1. At model launch, we programmatically designate a "not sooner than" retirement date (typically six months to one year out).
+2. At least 60 days notice before model retirement for Generally Available (GA) models.
+3. At least 14 days notice before preview model version upgrades.
Retirements are done on a rolling basis, region by region.
+## Model availability
+
+1. At least one year of model availability for GA models after the release date of a model in at least one region worldwide.
+2. For global deployments, all future model versions (`N`) starting with `gpt-4o` and `gpt-4 0409` will be available alongside their succeeding model version (`N+1`) for comparison.
+3. Customers have 60 days to try out a new GA model in at least one global or standard region before any upgrades happen to a newer GA model.
+
+### Considerations for the Azure public cloud
+
+Be aware of the following:
+
+1. Not all model version combinations will be available in all regions.
+2. Model versions `N` and `N+1` might not always be available in the same region.
+3. GA model version `N` might upgrade to a future model version `N+X` in some regions based on capacity limitations, and without the new model version `N+X` separately being available to test in the same region. The new model version will be available to test in other regions before any upgrades are scheduled.
+4. Preview model versions and GA versions of the same model won't always be available to test together in the same region. There will be preview and GA versions available to test in different regions.
+5. We reserve the right to limit future customers using a particular region to balance service quality for existing customers.
+6. As always at Microsoft, security is of the utmost importance. If a model or model version is found to have compliance or security issues, we reserve the right to perform emergency retirements. See the terms of service for more information.
+
+### Special considerations for Azure Government clouds
+
+1. Global standard deployments won't be available in government clouds.
+2. Not all models or model versions available in commercial / public cloud will be available in government clouds.
+3. In the Azure Government clouds, we intend to support only one version of a given model at a time.
+ 1. For example, only one version of `gpt-35-turbo 0125` and `gpt-4o (2024-05-13)`.
+4. There will, however, be a 30-day overlap between new model versions, during which more than one version is available.
+ 1. For example, if `gpt-35-turbo 0125` or `gpt-4o (2024-05-13)` is updated to a future version, or
+ 2. for model family changes beyond version updates, such as when moving from `gpt-4 1106-preview` to `gpt-4o (2024-05-13)`.
+ ### Who is notified of upcoming retirements Azure OpenAI notifies those who are members of the following roles for each subscription with a deployment of a model with an upcoming retirement.
ai-services Document Translation Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/connector/document-translation-flow.md
Previously updated : 07/18/2023 Last updated : 07/09/2024
Next, assign a **`Storage Blob Data Contributor`** role to the managed identity
### Configure a Document Translation flow
-Now that you've completed the prerequisites and initial setup, let's get started using the Translator V3 connector to create your document translation flow:
+Now that you completed the prerequisites and initial setup, let's get started using the Translator V3 connector to create your document translation flow:
1. Sign in to [Power Automate](https://powerautomate.microsoft.com/).
Here are the steps to translate a file in Azure Blob Storage using the Translato
* **Translator resource name**. Enter the name of your Translator resource found at the top of your resource page in the Azure portal. Select **Create**.
- :::image type="content" source="../media/connectors/add-connection.png" alt-text="Screenshot showing the add connection window.":::
+ :::image type="content" source="../media/connectors/add-connection.png" alt-text="Screenshot showing the how-to-add connection window.":::
> [!NOTE] > After you've set up your connection, you won't be required to reenter your credentials for subsequent flows.
Here are the steps to translate a file in Azure Blob Storage using the Translato
* For **Storage type of the input documents**. Select **File** or **Folder**. * Select a **Source Language** from the dropdown menu or keep the default **Auto-detect** option.
- * **Location of the source documents**. Enter the URL for your document(s) in your Azure storage source document container.
+ * **Location of the source documents**. Enter the URL for your documents in your Azure storage source document container.
* **Location of the translated documents**. Enter the URL for your Azure storage target document container. To find your source and target URLs:
Here are the steps to translate a file in Azure Blob Storage using the Translato
## Get documents status
-Now that you've submitted your document(s) for translation, let's check the status of the operation.
+Now that you submitted your documents for translation, let's check the status of the operation.
1. Select **New step**.
Here are the steps to upload a file from your SharePoint site to Azure Blob Stor
##### Get file content
- 1. In the Choose an operation pop-up window, enter **SharePoint**, then select the **Get file content** content. Power Automate automatically signs you into your SharePoint account(s).
+ 1. In the Choose an operation pop-up window, enter **SharePoint**, then select the **Get file content** action. Power Automate automatically signs you into your SharePoint accounts.
:::image type="content" source="../media/connectors/get-file-content.png" alt-text="Screenshot of the SharePoint Get file content action."::: 1. On the **Get file content** step window, complete the following fields: * **Site Address**. Select the SharePoint site URL where your file is located from the dropdown list.
- * **File Identifier**. Select the folder icon and choose the document(s) for translation.
+ * **File Identifier**. Select the folder icon and choose the documents for translation.
##### Create a storage blob
Here are the steps to upload a file from your SharePoint site to Azure Blob Stor
1. Choose the Microsoft Entra account associated with your Azure Blob Storage and Translator resource accounts.
-1. After you have completed the **Azure Blob Storage** authentication, the **Create blob** step appears. Complete the fields as follows:
+1. After you complete the **Azure Blob Storage** authentication, the **Create blob** step appears. Complete the fields as follows:
* **Storage account name or blob endpoint**. Select **Enter custom value** and enter your storage account name. * **Folder path**. Select the folder icon and select your source document container.
Here are the steps to upload a file from your SharePoint site to Azure Blob Stor
* **Translator resource name**. Enter the name of your Translator resource found at the top of your resource page in the Azure portal. * Select **Create**.
- :::image type="content" source="../media/connectors/add-connection.png" alt-text="Screenshot showing the add connection window.":::
+ :::image type="content" source="../media/connectors/add-connection.png" alt-text="Screenshot showing the how-to-add connection window.":::
> [!NOTE] > After you've set up your connection, you won't be required to reenter your credentials for subsequent Translator flows.
Here are the steps to upload a file from your SharePoint site to Azure Blob Stor
##### Get documents status
-Prior to retrieving the documents status, let's schedule a 30-second delay to ensure that the file has been processed for translation:
+Before retrieving the documents status, let's schedule a 30-second delay to ensure that the file is processed for translation:
1. Select **New step**. Enter **Schedule** in the search box and choose **Delay**. * For **Count**. Enter **30**.
In this step, you retrieve the translated document from Azure Blob Storage and u
1. Select **Add an action**, enter **Azure Blob Storage** in the search box, and select the **Get blob content using path (V2)** action. 1. In the **Storage account name or blob endpoint** field, select **Enter custom value** and enter your storage account name.
-1. Select the **Blob path** field to show the **Dynamic content** window, select **Expression** and enter the following logic in the formula field:
+1. Select the **Blob path** field to show the **Dynamic content** window, select **Expression**, and enter the following logic in the formula field:
```powerappsfl
In this step, you retrieve the translated document from Azure Blob Storage and u
1. Select **Add an action**, enter **Azure Blob Storage** in the search box, and select the **Get Blob Metadata using path (V2)** action. 1. In the **Storage account name or blob endpoint** field, select **Enter custom value** and enter your storage account name.
- :::image type="content" source="../media/connectors/enter-custom-value.png" alt-text="Screenshot showing 'enter custom value' from the Create blob (V2) window.":::
+ :::image type="content" source="../media/connectors/enter-custom-value.png" alt-text="Screenshot showing 'enter custom value' from the create-blob-(V2) window.":::
1. Select the **Blob path** field to show the **Dynamic content** window, select **Expression** and enter the following logic in the formula field:
Let's check your document translation flow and results.
-That's it! You've learned to automate document translation processes using the Microsoft Translator V3 connector and Power Automate.
+That's it! You learned to automate document translation processes using the Microsoft Translator V3 connector and Power Automate.
## Next steps
ai-services Text Translator Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/connector/text-translator-flow.md
Previously updated : 07/18/2023 Last updated : 07/10/2024
To get started, you need an active Azure subscription. If you don't have an Azu
## Configure the Translator V3 connector
-Now that you've completed the prerequisites, let's get started.
+Now that you completed the prerequisites, let's get started.
1. Sign in to [Power Automate](https://powerautomate.microsoft.com/).
Let's select an action. Choose to translate or transliterate text.
* **Subscription Key**. Enter one of your keys that you copied from the Azure portal. * **Translator resource name**. Enter the name of your Translator resource found at the top of your resource page in the Azure portal. Select **Create**.
- :::image type="content" source="../media/connectors/add-connection.png" alt-text="Screenshot showing the add connection window.":::
+ :::image type="content" source="../media/connectors/add-connection.png" alt-text="Screenshot showing the add-connection window.":::
> [!NOTE] > After you've set up your connection, you won't be required to reenter your credentials for subsequent Translator flows.
Let's select an action. Choose to translate or transliterate text.
1. Enter the **Body Text**. 1. Select **Save**.
- :::image type="content" source="../media/connectors/translate-text-step.png" alt-text="Screenshot showing the translate text step.":::
+ :::image type="content" source="../media/connectors/translate-text-step.png" alt-text="Screenshot showing the translate-text step.":::
#### [Transliterate text](#tab/transliterate)
Let's select an action. Choose to translate or transliterate text.
* **Subscription Key**. Enter one of your keys that you copied from the Azure portal. * **Translator resource name**. Enter the name of your Translator resource found at the top of your resource page in the Azure portal. Select **Create**.
- :::image type="content" source="../media/connectors/add-connection.png" alt-text="Screenshot showing the add connection window.":::
+ :::image type="content" source="../media/connectors/add-connection.png" alt-text="Screenshot showing the add-connection window.":::
1. Next, the **Transliterate** action window appears. 1. **Language**. Select the language of the text that is to be converted.
Let's select an action. Choose to translate or transliterate text.
1. **Target script**. Select the name of transliterated text script. 1. Select **Save**.
- :::image type="content" source="../media/connectors/transliterate-text-step.png" alt-text="Screenshot showing the transliterate text step.":::
+ :::image type="content" source="../media/connectors/transliterate-text-step.png" alt-text="Screenshot showing the transliterate-text step.":::
ai-services Deploy User Managed Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/containers/deploy-user-managed-glossary.md
Previously updated : 08/15/2023 Last updated : 07/10/2024 recommendations: false
ai-services Beginners Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/beginners-guide.md
Previously updated : 07/18/2023 Last updated : 07/08/2024
-# Custom Translator for beginners
+# Custom Translator for beginners
[Custom Translator](overview.md) enables you to a build translation system that reflects your business, industry, and domain-specific terminology and style. Training and deploying a custom system is easy and doesn't require any programming skills. The customized translation system seamlessly integrates into your existing applications, workflows, and websites and is available on Azure through the same cloud-based [Microsoft Text Translation API](../reference/v3-0-translate.md?tabs=curl) service that powers billions of translations every day.
-The platform enables users to build and publish custom translation systems to and from English. The Custom Translator supports more than 60 languages that map directly to the languages available for NMT. For a complete list, *see* [Translator language support](../language-support.md).
+The platform enables users to build and publish custom translation systems to and from English. The Custom Translator supports more than 60 languages that map directly to the languages available for neural machine translation (NMT). For a complete list, *see* [Translator language support](../language-support.md).
## Is a custom translation model the right choice for me? A well-trained custom translation model provides more accurate domain-specific translations because it relies on previously translated in-domain documents to learn preferred translations. Translator uses these terms and phrases in context to produce fluent translations in the target language while respecting context-dependent grammar.
-Training a full custom translation model requires a substantial amount of data. If you don't have at least 10,000 sentences of previously trained documents, you won't be able to train a full-language translation model. However, you can either train a dictionary-only model or use the high-quality, out-of-the-box translations available with the Text Translation API.
+Training a full custom translation model requires a substantial amount of data. If you don't have at least 10,000 sentences of previously trained documents, you can't train a full-language translation model. However, you can either train a dictionary-only model or use the high-quality, out-of-the-box translations available with the Text Translation API.
:::image type="content" source="media/how-to/for-beginners.png" alt-text="Screenshot illustrating the difference between custom and general models.":::
Building a custom translation model requires:
* Obtaining in-domain translated data (preferably human translated).
-* The ability to assess translation quality or target language translations.
+* Assessing translation quality or target language translations.
## How do I evaluate my use-case? Having clarity on your use-case and what success looks like is the first step towards sourcing proficient training data. Here are a few considerations:
-* What is your desired outcome and how will you measure it?
+* Is your desired outcome specified, and how is it measured?
-* What is your business domain?
+* Is your business domain identified?
* Do you have in-domain sentences of similar terminology and style?
Having clarity on your use-case and what success looks like is the first step to
Finding in-domain quality data is often a challenging task that varies based on user classification. Here are some questions you can ask yourself as you evaluate what data may be available to you:
-* Enterprises often have a wealth of translation data that has accumulated over many years of using human translation. Does your company have previous translation data available that you can use?
+* Does your company have previous translation data available that you can use? Enterprises often have a wealth of translation data accumulated over many years of using human translation.
* Do you have a vast amount of monolingual data? Monolingual data is data in only one language. If so, can you get translations for this data?
Finding in-domain quality data is often a challenging task that varies based on
| Tuning documents | Trains the Neural Machine Translation parameters. | **Be strict**. Compose them to be optimally representative of what you are going to translate in the future. | | Test documents | Calculate the [BLEU score](concepts/bleu-score.md?WT.mc_id=aiml-43548-heboelma).| **Be strict**. Compose test documents to be optimally representative of what you plan to translate in the future. | | Phrase dictionary | Forces the given translation 100% of the time. | **Be restrictive**. A phrase dictionary is case-sensitive and any word or phrase listed is translated in the way you specify. In many cases, it's better to not use a phrase dictionary and let the system learn. |
-| Sentence dictionary | Forces the given translation 100% of the time. | **Be strict**. A sentence dictionary is case-insensitive and good for common in domain short sentences. For a sentence dictionary match to occur, the entire submitted sentence must match the source dictionary entry. If only a portion of the sentence matches, the entry won't match. |
+| Sentence dictionary | Forces the given translation 100% of the time. | **Be strict**. A sentence dictionary is case-insensitive and good for common in-domain short sentences. For a sentence dictionary match to occur, the entire submitted sentence must match the source dictionary entry. If only a portion of the sentence matches, the entry doesn't match. |
## What is a BLEU score?
-BLEU (Bilingual Evaluation Understudy) is an algorithm for evaluating the precision or accuracy of text that has been machine translated from one language to another. Custom Translator uses the BLEU metric as one way of conveying translation accuracy.
+BLEU (Bilingual Evaluation Understudy) is an algorithm for evaluating the precision or accuracy of text that is machine translated from one language to another. Custom Translator uses the BLEU metric as one way of conveying translation accuracy.
A BLEU score is a number between zero and 100. A score of zero indicates a low quality translation where nothing in the translation matched the reference. A score of 100 indicates a perfect translation that is identical to the reference. It's not necessary to attain a score of 100 - a BLEU score between 40 and 60 indicates a high-quality translation.
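Custom Translator computes the BLEU score for you during testing, but as a rough illustration of the metric itself (using NLTK, which isn't part of the service; the sentences are toy examples), you can score a candidate translation against a reference like this:

```python
from nltk.translate.bleu_score import SmoothingFunction, sentence_bleu

# Toy example: one reference translation and one machine-translated candidate.
reference = "the contract must be signed by both parties".split()
candidate = "the contract must be signed by the two parties".split()

# sentence_bleu returns a value between 0 and 1; scale it to the 0-100 range used above.
score = sentence_bleu([reference], candidate, smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score * 100:.1f}")
```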
A BLEU score is a number between zero and 100. A score of zero indicates a low q
## What happens if I don't submit tuning or testing data?
-Tuning and test sentences are optimally representative of what you plan to translate in the future. If you don't submit any tuning or testing data, Custom Translator will automatically exclude sentences from your training documents to use as tuning and test data.
+Tuning and test sentences are optimally representative of what you plan to translate in the future. If you don't submit any tuning or testing data, Custom Translator automatically excludes sentences from your training documents to use as tuning and test data.
| System-generated | Manual-selection | |||
Tuning and test sentences are optimally representative of what you plan to trans
## How is training material processed by Custom Translator?
-To prepare for training, documents undergo a series of processing and filtering steps. These steps are explained below. Knowledge of the filtering process may help with understanding the sentence count displayed as well as the steps you can take to prepare training documents for training with Custom Translator.
+To prepare for training, documents undergo a series of processing and filtering steps. Knowledge of the filtering process can help you understand the displayed sentence count and the steps you can take to prepare training documents for use with Custom Translator. The filtering steps are as follows:
* ### Sentence alignment
- If your document isn't in XLIFF, XLSX, TMX, or ALIGN format, Custom Translator aligns the sentences of your source and target documents to each other, sentence-by-sentence. Translator doesn't perform document alignmentΓÇöit follows your naming convention for the documents to find a matching document in the other language. Within the source text, Custom Translator tries to find the corresponding sentence in the target language. It uses document markup like embedded HTML tags to help with the alignment.
+ If your document isn't in `XLIFF`, `XLSX`, `TMX`, or `ALIGN` format, Custom Translator aligns the sentences of your source and target documents to each other, sentence-by-sentence. Translator doesn't perform document alignment; it follows your naming convention for the documents to find a matching document in the other language. Within the source text, Custom Translator tries to find the corresponding sentence in the target language. It uses document markup like embedded HTML tags to help with the alignment.
If you see a large discrepancy between the number of sentences in the source and target documents, your source document may not be parallel, or couldn't be aligned. The document pairs with a large difference (>10%) of sentences on each side warrant a second look to make sure they're indeed parallel.
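As a quick sanity check you can run yourself (a sketch, not part of Custom Translator), the >10% discrepancy rule might be approximated like this:

```python
def needs_review(source_sentences: list[str], target_sentences: list[str]) -> bool:
    """Flag a document pair whose sentence counts differ by more than 10%."""
    source_count, target_count = len(source_sentences), len(target_sentences)
    if source_count == 0 or target_count == 0:
        return True
    difference = abs(source_count - target_count) / max(source_count, target_count)
    return difference > 0.10


# Hypothetical counts: 950 target sentences against 1,200 source sentences is more than 10% apart.
print(needs_review(["s"] * 1200, ["t"] * 950))  # True: review this pair for parallelism
```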
-* ### Extracting tuning and testing data
+* ### Tuning and testing data extraction
- Tuning and testing data is optional. If you don't provide it, the system will remove an appropriate percentage from your training documents to use for tuning and testing. The removal happens dynamically as part of the training process. Since this step occurs as part of training, your uploaded documents aren't affected. You can see the final used sentence counts for each category of dataΓÇötraining, tuning, testing, and dictionaryΓÇöon the Model details page after training has succeeded.
+ Tuning and testing data is optional. If you don't provide it, the system removes an appropriate percentage from your training documents to use for tuning and testing. The removal happens dynamically as part of the training process. Since this step occurs as part of training, your uploaded documents aren't affected. You can see the final used sentence counts for each category of data (training, tuning, testing, and dictionary) on the Model details page after training succeeds.
* ### Length filter * Removes sentences with only one word on either side. * Removes sentences with more than 100 words on either side. Chinese, Japanese, Korean are exempt. * Removes sentences with fewer than three characters. Chinese, Japanese, Korean are exempt.
- * Removes sentences with more than 2000 characters for Chinese, Japanese, Korean.
+ * Removes sentences with more than 2,000 characters for Chinese, Japanese, and Korean.
* Removes sentences with less than 1% alphanumeric characters. * Removes dictionary entries containing more than 50 words.
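The length filter above can be approximated with a short sketch (a simplified illustration; the service's actual implementation isn't public, and the Chinese/Japanese/Korean exemptions are reduced to a flag here):

```python
def passes_length_filter(sentence: str, is_cjk: bool = False) -> bool:
    """Simplified version of the length rules listed above, applied to one side of a sentence pair."""
    words = sentence.split()
    if len(words) <= 1:                       # only one word
        return False
    if not is_cjk and len(words) > 100:       # more than 100 words (CJK exempt)
        return False
    if not is_cjk and len(sentence) < 3:      # fewer than three characters (CJK exempt)
        return False
    if is_cjk and len(sentence) > 2000:       # more than 2,000 characters for CJK
        return False
    alnum = sum(ch.isalnum() for ch in sentence)
    if alnum / max(len(sentence), 1) < 0.01:  # less than 1% alphanumeric characters
        return False
    return True


print(passes_length_filter("The invoice total is due within 30 days."))  # True
print(passes_length_filter("Total"))                                     # False: single word
```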
To prepare for training, documents undergo a series of processing and filtering
* Remove sentences with invalid encoding. * Remove Unicode control characters.
-* If feasible, align sentences (source-to-target).
+* Align sentences (source-to-target), if feasible.
* Remove source and target sentences that don't match the source and target languages. * When source and target sentences have mixed languages, ensure that untranslated words are intentional, for example, names of organizations and products.
-* Correct grammatical and typographical errors to prevent teaching these errors to your model.
-* Though our training process handles source and target lines containing multiple sentences, it's better to have one source sentence mapped to one target sentence.
+* Avoid teaching errors to your model by making certain that grammar and typography are correct.
+* Have one source sentence mapped to one target sentence. Although our training process handles source and target lines containing multiple sentences, one-to-one mapping is a best practice.
## How do I evaluate the results?
-After your model is successfully trained, you can view the model's BLEU score and baseline model BLEU score on the model details page. We use the same set of test data to generate both the model's BLEU score and the baseline BLEU score. This data will help you make an informed decision regarding which model would be better for your use-case.
+After your model is successfully trained, you can view the model's BLEU score and baseline model BLEU score on the model details page. We use the same set of test data to generate both the model's BLEU score and the baseline BLEU score. This data helps you make an informed decision regarding which model would be better for your use-case.
## Next steps
ai-services Bleu Score https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/concepts/bleu-score.md
Previously updated : 07/18/2023 Last updated : 07/09/2024 #Customer intent: As a Custom Translator user, I want to understand how BLEU score works so that I understand system test outcome better.
A more extensive discussion of BLEU scores is [here](https://youtu.be/-UqDljMymM
BLEU results depend strongly on the breadth of your domain; consistency of test, training and tuning data; and how much data you have
-available for training. If your models have been trained on a narrow domain, and
+available for training. If your models are trained within a narrow domain, and
your training data is consistent with your test data, you can expect a high BLEU score.
ai-services Customization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/concepts/customization.md
Previously updated : 07/18/2023 Last updated : 07/09/2024
The feature can also be used to customize speech translation when used with [Azu
## Custom Translator
-With Custom Translator, you can build neural translation systems that understand the terminology used in your own business and industry. The customized translation system will then integrate into existing applications, workflows, and websites.
+With Custom Translator, you can build neural translation systems that understand the terminology used in your own business and industry. The customized translation system integrates into existing applications, workflows, and websites.
### How does it work?
-Use your previously translated documents (leaflets, webpages, documentation, etc.) to build a translation system that reflects your domain-specific terminology and style, better than a standard translation system. Users can upload TMX, XLIFF, TXT, DOCX, and XLSX documents.
+Use your previously translated documents (leaflets, webpages, documentation, etc.) to build a translation system that reflects your domain-specific terminology and style better than a standard translation system. Users can upload `TMX`, `XLIFF`, `TXT`, `DOCX`, and `XLSX` documents.
-The system also accepts data that is parallel at the document level but isn't yet aligned at the sentence level. If users have access to versions of the same content in multiple languages but in separate documents, Custom Translator will be able to automatically match sentences across documents. The system can also use monolingual data in either or both languages to complement the parallel training data to improve the translations.
+The system also accepts data that is parallel at the document level but isn't yet aligned at the sentence level. If users have access to versions of the same content in multiple languages but in separate documents, Custom Translator is able to automatically match sentences across documents. The system can also use monolingual data in either or both languages to complement the parallel training data to improve the translations.
The customized system is then available through a regular call to Translator using the category parameter.
-Given the appropriate type and amount of training data it isn't uncommon to expect gains between 5 and 10, or even more BLEU points on translation quality by using Custom Translator.
+Given the appropriate type and amount of training data, it isn't uncommon to see gains of between 5 and 10 `BLEU` points, or even more, in translation quality by using Custom Translator.
More details about the various levels of customization based on available data can be found in the [Custom Translator User Guide](../overview.md).
ai-services Dictionaries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/concepts/dictionaries.md
description: How to create an aligned document that specifies a list of phrases
Previously updated : 07/18/2023 Last updated : 07/09/2024
The [neural phrase dictionary](../../neural-dictionary.md) extends our [dynamic
## Sentence dictionary
-A sentence dictionary is case-insensitive. The sentence dictionary allows you to specify an exact target translation for a source sentence. For a sentence dictionary match to occur, the entire submitted sentence must match the source dictionary entry. A source dictionary entry that ends with punctuation is ignored during the match. If only a portion of the sentence matches, the entry isn't matched. When a match is detected, the target entry of the sentence dictionary is returned.
+A sentence dictionary is case-insensitive. The sentence dictionary allows you to specify an exact target translation for a source sentence. For a sentence dictionary match to occur, the entire submitted sentence must match the source dictionary entry. A source dictionary entry that ends with punctuation is ignored during the match. If only a portion of the sentence matches, the entry isn't matched. When a match is detected, the target entry of the sentence dictionary is returned.
## Dictionary-only trainings
-You can train a model using only dictionary data. To do so, select only the dictionary document (or multiple dictionary documents) that you wish to include and select **Create model**. Since this training is dictionary-only, there's no minimum number of training sentences required. Your model typically completes training faster than a standard training. The resulting models use the Microsoft baseline models for translation with the addition of the dictionaries you add. You don't get a test report.
+You can train a model using only dictionary data. To do so, select only the dictionary document (or multiple dictionary documents) that you wish to include and select **Create model**. Since this training is dictionary-only, there's no minimum number of training sentences required. Your model typically completes training faster than a standard training. The resulting models use the Microsoft baseline models for translation with the addition of the dictionaries you add. You don't get a test report.
>[!Note] >Custom Translator doesn't sentence align dictionary files, so it is important that there are an equal number of source and target phrases/sentences in your dictionary documents and that they are precisely aligned.
You can train a model using only dictionary data. To do so, select only the dict
- The phrase dictionary should be used sparingly. When a phrase within a sentence is replaced, the context of that sentence is lost or limited for translating the rest of the sentence. The result is that, while the phrase or word within the sentence is translated according to the provided dictionary, the overall translation quality of the sentence often suffers. -- The phrase dictionary works well for compound nouns like product names ("_Microsoft SQL Server_"), proper names ("_City of Hamburg_"), or product features ("_pivot table_"). It doesn't work as well for verbs or adjectives because those words are typically highly contextual within the source or target language. The best practice is to avoid phrase dictionary entries for anything but compound nouns.
+- The phrase dictionary works well for compound nouns like product names ("_Microsoft SQL Server_"), proper names ("_City of Hamburg_"), or product features ("_pivot table_"). It doesn't work as well for verbs or adjectives because, typically, those words are highly contextual within the source or target language. The best practice is to avoid phrase dictionary entries for anything but compound nouns.
- If you're using a phrase dictionary, capitalization and punctuation are important. Dictionary entries are case- and punctuation-sensitive. Custom Translator only matches words and phrases in the input sentence that use exactly the same capitalization and punctuation marks as specified in the source dictionary file. Also, translations reflect the capitalization and punctuation provided in the target dictionary file. **Example** - If you're training an English-to-Spanish system that uses a phrase dictionary and you specify _SQL server_ in the source file and _Microsoft SQL Server_ in the target file. When you request the translation of a sentence that contains the phrase _SQL server_, Custom Translator matches the dictionary entry and the translation that contains _Microsoft SQL Server_.
- - When you request translation of a sentence that includes the same phrase but **doesn't** match what is in your source file, such as _sql server_, _sql Server_ or _SQL Server_, it **won't** return a match from your dictionary.
+ - When you request translation of a sentence that includes the same phrase but **doesn't** match what is in your source file, such as _sql server_, _sql Server_, or _SQL Server_, it **won't** return a match from your dictionary.
- The translation follows the rules of the target language as specified in your phrase dictionary. - For more information about neural phrase dictionary, _see_ [neural dictionary guidance and recommendations](../../neural-dictionary.md#guidance-and-recommendations).
You can train a model using only dictionary data. To do so, select only the dict
**Example**
- - If your source dictionary contains "_This sentence ends with punctuation!_", then any translation requests containing "_This sentence ends with punctuation_" matches.
+ - If your source dictionary contains "_This sentence ends with punctuation!_", then any translation request that contains "_This sentence ends with punctuation_" matches.
- Your dictionary should contain unique source lines. If a source line (a word, phrase, or sentence) appears more than once in a dictionary file, the system always uses the **last entry** provided and returns the target when a match is found. An illustrative sketch of these matching rules follows this list.
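The following Python sketch illustrates the matching rules described above (case and punctuation sensitivity for phrase dictionaries, case-insensitive whole-sentence matching for sentence dictionaries, and last-entry-wins for duplicates). It's an illustration of the documented behavior, not the Custom Translator implementation; the punctuation handling is a simplifying assumption.

```python
# Illustrative sketch of dictionary matching behavior -- not the service's code.
import string

def sentence_dictionary_lookup(sentence, entries):
    """Sentence dictionary: case-insensitive; the entire submitted sentence must match.
    Trailing punctuation on the source entry is ignored during the match.
    Later duplicate entries override earlier ones (the last entry wins)."""
    lookup = {}
    for source, target in entries:  # entries: list of (source, target) pairs
        key = source.rstrip(string.punctuation).strip().lower()
        lookup[key] = target        # later duplicates overwrite earlier ones
    return lookup.get(sentence.strip().lower())

def phrase_dictionary_matches(sentence, phrase):
    """Phrase dictionary: case- and punctuation-sensitive exact substring match."""
    return phrase in sentence

entries = [("This sentence ends with punctuation!", "<exact target translation from your dictionary>")]
print(sentence_dictionary_lookup("this sentence ends with punctuation", entries))  # matches
print(phrase_dictionary_matches("Install SQL server today.", "SQL server"))        # True
print(phrase_dictionary_matches("Install sql server today.", "SQL server"))        # False (case differs)
```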
ai-services Document Formats Naming Convention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/concepts/document-formats-naming-convention.md
description: This article is a guide to document formats and naming conventions
Previously updated : 07/18/2023 Last updated : 07/10/2024
This table includes all supported file formats that you can use to build your tr
| Format | Extensions | Description | |-|--|--|
-| XLIFF | .XLF, .XLIFF | A parallel document format, export of Translation Memory systems. The languages used are defined inside the file. |
-| TMX | .TMX | A parallel document format, export of Translation Memory systems. The languages used are defined inside the file. |
-| ZIP | .ZIP | ZIP is an archive file format. |
-| Locstudio | .LCL | A Microsoft format for parallel documents |
-| Microsoft Word | .DOCX | Microsoft Word document |
-| Adobe Acrobat | .PDF | Adobe Acrobat portable document |
-| HTML | .HTML, .HTM | HTML document |
-| Text file | .TXT | UTF-16 or UTF-8 encoded text files. The file name must not contain Japanese characters. |
-| Aligned text file | .ALIGN | The extension `.ALIGN` is a special extension that you can use if you know that the sentences in the document pair are perfectly aligned. If you provide a `.ALIGN` file, Custom Translator won't align the sentences for you. |
-| Excel file | .XLSX | Excel file (2013 or later). First line/ row of the spreadsheet should be language code. |
+| `XLIFF` | `.XLF`, `.XLIFF` | A parallel document format, export of Translation Memory systems. The languages used are defined inside the file. |
+| `TMX` | `.TMX` | A parallel document format, export of Translation Memory systems. The languages used are defined inside the file. |
+| `ZIP` | `.ZIP` | An archive file format. |
+| `Locstudio` | `.LCL` | A Microsoft format for parallel documents |
+| Microsoft Word | `.DOCX` | Microsoft Word document |
+| Adobe Acrobat | `.PDF` | Adobe Acrobat portable document |
+| `HTML` | `.HTML`, `.HTM` | HyperText Markup Language document |
+| Text file | `.TXT` | UTF-16 or UTF-8 encoded text files. The file name must not contain Japanese characters. |
+| Aligned text file | `.ALIGN` | The extension `.ALIGN` is a special extension that you can use if you know that the sentences in the document pair are perfectly aligned. If you provide a `.ALIGN` file, Custom Translator doesn't align the sentences for you. |
+| Excel file | `.XLSX` | Excel file (2013 or later). The first line/row of the spreadsheet should be the language code. |
## Dictionary formats For dictionaries, Custom Translator supports all file formats that are supported for training sets. If you're using an Excel dictionary, the first line/row of the spreadsheet should be the language codes.
-## Zip file formats
+## ZIP file formats
-Documents can be grouped into a single zip file and uploaded. The Custom Translator supports zip file formats (ZIP, GZ, and TGZ).
+Documents can be grouped into a single zip file and uploaded. The Custom Translator supports zip file formats (`ZIP`, `GZ`, and `TGZ`).
Each document in the zip file with the extension TXT, HTML, HTM, PDF, DOCX, ALIGN must follow this naming convention:
Each document in the zip file with the extension TXT, HTML, HTM, PDF, DOCX, ALIG
where {document name} is the name of your document, {language code} is the ISO LanguageID (two characters), indicating that the document contains sentences in that language. There must be an underscore (_) before the language code. For example, to upload two parallel documents within a zip for an English to
-Spanish system, the files should be named "data_en" and "data_es".
+Spanish system, the files should be named `data_en` and `data_es`.
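A minimal sketch of packaging parallel text documents with this naming convention, assuming `.txt` files and using local files `english.txt` and `spanish.txt` as placeholders:

```python
# Sketch: package parallel .txt documents as {document name}_{language code}.txt
# The local file names here are placeholders.
import zipfile

document_name = "data"                                 # {document name}
sources = {"en": "english.txt", "es": "spanish.txt"}   # {language code}: local file

with zipfile.ZipFile("training-data.zip", "w") as archive:
    for language_code, local_path in sources.items():
        # Produces data_en.txt and data_es.txt inside the archive.
        archive.write(local_path, arcname=f"{document_name}_{language_code}.txt")
```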
-Translation Memory files (TMX, XLF, XLIFF, LCL, XLSX) aren't required to follow the specific language-naming convention.
+Translation Memory files (`TMX`, `XLF`, `XLIFF`, `LCL`, `XLSX`) aren't required to follow the specific language-naming convention.
## Next steps
ai-services Model Training https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/concepts/model-training.md
Previously updated : 07/18/2023 Last updated : 07/10/2024 #Customer intent: As a Custom Translator user, I want to concept of a model and training, so that I can efficiently use training, tuning and testing datasets the helps me build a translation model.
A model is the system that provides translation for a specific language pair. The outcome of a successful training is a model. To train a model, three mutually exclusive document types are required: training, tuning, and testing. A dictionary document type can also be provided. For more information, _see_ [Sentence alignment](./sentence-alignment.md#suggested-minimum-number-of-sentences).
-If only training data is provided when queuing a training, Custom Translator will automatically assemble tuning and testing data. It will use a random subset of sentences from your training documents, and exclude these sentences from the training data itself.
+If only training data is provided when queuing a training, Custom Translator automatically assembles tuning and testing data. It uses a random subset of sentences from your training documents, and excludes these sentences from the training data itself.
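Custom Translator assembles the held-out sets for you; the following Python sketch only illustrates the idea of holding out random tuning and testing subsets and excluding them from training. The 2,500-sentence sizes mirror the guidance later in this article, but the exact sizes and sampling the service uses are assumptions here.

```python
# Illustrative sketch of holding out tuning/testing subsets -- not the service's logic.
import random

def split_training_data(sentence_pairs, tuning_size=2500, testing_size=2500, seed=0):
    """Hold out random tuning and testing subsets and exclude them from training."""
    rng = random.Random(seed)
    shuffled = list(sentence_pairs)
    rng.shuffle(shuffled)
    tuning = shuffled[:tuning_size]
    testing = shuffled[tuning_size:tuning_size + testing_size]
    training = shuffled[tuning_size + testing_size:]   # remaining pairs train the model
    return training, tuning, testing
```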
## Training document type for Custom Translator
You can run multiple trainings within a project and compare the [BLEU scores](bl
Parallel documents included in this set are used by the Custom Translator to tune the translation system for optimal results.
-The tuning data is used during training to adjust all parameters and weights of the translation system to the optimal values. Choose your tuning data carefully: the tuning data should be representative of the content of the documents you intend to translate in the future. The tuning data has a major influence on the quality of the translations produced. Tuning enables the translation system to provide translations that are closest to the samples you provide in the tuning data. You don't need more than 2500 sentences in your tuning data. For optimal translation quality, it's recommended to select the tuning set manually by choosing the most representative selection of sentences.
+The tuning data is used during training to adjust all parameters and weights of the translation system to the optimal values. Choose your tuning data carefully: the tuning data should be representative of the content of the documents you intend to translate in the future. The tuning data has a major influence on the quality of the translations produced. Tuning enables the translation system to provide translations that are closest to the samples you provide in the tuning data. You don't need more than 2,500 sentences in your tuning data. For optimal translation quality, we recommend selecting the tuning set manually by choosing the most representative selection of sentences.
-When creating your tuning set, choose sentences that are a meaningful and representative length of the future sentences that you expect to translate. Choose sentences that have words and phrases that you intend to translate in the approximate distribution that you expect in your future translations. In practice, a sentence length of 7 to 10 words will produce the best results. These sentences contain enough context to show inflection and provide a phrase length that is significant, without being overly complex.
+When creating your tuning set, choose sentences that are a meaningful and representative length of the future sentences that you expect to translate. Choose sentences that have words and phrases that you intend to translate in the approximate distribution that you expect in your future translations. In practice, a sentence length of 7 to 10 words produces the best results. These sentences contain enough context to show inflection and provide a phrase length that is significant, without being overly complex.
A good description of the type of sentences to use in the tuning set is prose: actual fluent sentences. Not table cells, not poems, not lists of things, not only punctuation, or numbers in a sentence - regular language. If you manually select your tuning data, it shouldn't have any of the same sentences as your training and testing data. The tuning data has a significant impact on the quality of the translations - choose the sentences carefully.
-If you aren't sure what to choose for your tuning data, just select the training data and let Custom Translator select the tuning data for you. When you let the Custom Translator choose the tuning data automatically, it will use a random subset of sentences from your bilingual training documents and exclude these sentences from the training material itself.
+If you aren't sure what to choose for your tuning data, just select the training data and let Custom Translator select the tuning data for you. When you let the Custom Translator choose the tuning data automatically, it uses a random subset of sentences from your bilingual training documents and excludes these sentences from the training material itself.
## Testing dataset for Custom Translator
Parallel documents included in the testing set are used to compute the BLEU (Bil
The BLEU score is a measurement of the delta between the automatic translation and the reference translation. Its value ranges from 0 to 100. A score of 0 indicates that not a single word of the reference appears in the translation. A score of 100 indicates that the automatic translation exactly matches the reference: the same word is in the exact same position. The score you receive is the BLEU score average for all sentences of the testing data.
-The test data should include parallel documents where the target language sentences are the most desirable translations of the corresponding source language sentences in the source-target pair. You may want to use the same criteria you used to compose the tuning data. However, the testing data has no influence over the quality of the translation system. It's used exclusively to generate the BLEU score for you.
+The test data should include parallel documents where the target language sentences are the most desirable translations of the corresponding source language sentences in the source-target pair. You may want to use the same criteria you used to compose the tuning data. However, the testing data has no influence over the quality of the translation system and is used exclusively to generate the BLEU score for you.
-You don't need more than 2,500 sentences as the testing data. When you let the system choose the testing set automatically, it will use a random subset of sentences from your bilingual training documents, and exclude these sentences from the training material itself.
+You don't need more than 2,500 sentences as the testing data. When you let the system choose the testing set automatically, it uses a random subset of sentences from your bilingual training documents, and excludes these sentences from the training material itself.
You can view the custom translations of the testing set, and compare them to the translations provided in your testing set, by navigating to the test tab within a model.
ai-services Parallel Documents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/concepts/parallel-documents.md
description: Parallel documents are pairs of documents where one is the translat
Previously updated : 07/18/2023 Last updated : 07/10/2024 #Customer intent: As a Custom Translator, I want to understand how to use parallel documents to build a custom translation model.
system in either direction.
## Requirements
-You'll need a minimum of 10,000 unique aligned parallel sentences to train a system. This limitation is a safety net to ensure your parallel sentences contain enough unique vocabulary to successfully train a translation model. As a best practice, continuously add more parallel content and retrain to improve the quality of your translation system. For more information, *see* [Sentence Alignment](./sentence-alignment.md).
+You need a minimum of 10,000 unique aligned parallel sentences to train a system. This limitation is a safety net to ensure your parallel sentences contain enough unique vocabulary to successfully train a translation model. As a best practice, continuously add more parallel content and retrain to improve the quality of your translation system. For more information, *see* [Sentence Alignment](./sentence-alignment.md).
-Microsoft requires that documents uploaded to the Custom Translator don't violate a third party's copyright or intellectual properties. For more information, please see the [Terms of Use](https://azure.microsoft.com/support/legal/cognitive-services-terms/). Uploading a document using the portal doesn't alter the ownership of the intellectual property in the document itself.
+Microsoft requires that documents uploaded to the Custom Translator don't violate a third party's copyright or intellectual property. For more information, see the [Terms of Use](https://azure.microsoft.com/support/legal/cognitive-services-terms/). Uploading a document using the portal doesn't alter the ownership of the intellectual property in the document itself.
## Use of parallel documents
Documents uploaded are private to each workspace and can be used in as many
projects or trainings as you like. Sentences extracted from your documents are stored separately in your repository as plain Unicode text files and are available for you to delete. Don't use the Custom Translator as a document
-repository, you won't be able to download the documents you uploaded in the
-format you uploaded them.
+repository; you can't download the documents in the same
+format in which you uploaded them.
## Next steps
ai-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/faq.md
description: This article contains answers to frequently asked questions about t
Previously updated : 07/18/2023 Last updated : 07/10/2024
This article contains answers to frequently asked questions about [Custom Transl
There are restrictions and limits with respect to file size, model training, and model deployment. Keep these restrictions in mind when setting up your training to build a model in Custom Translator. -- Submitted files must be less than 100 MB in size.-- Monolingual data isn't supported.
+- Files for translation must be less than 100 MB in size.
+- Monolingual data isn't supported. A monolingual file contains a single language and isn't paired with a file in a different language.
-## When should I request deployment for a translation system that has been trained?
+## When should I request deployment for a trained translation system?
-It may take several trainings to create the optimal translation system for your project. You may want to try using more training data or more carefully filtered data, if the BLEU score and/ or the test results aren't satisfactory. You should be strict and careful in designing your tuning set and your test set. Make certain your sets fully represent the terminology and style of material you want to translate. You can be more liberal in composing your training data, and experiment with different options. Request a system deployment when you're satisfied with the translations in your system test results and have no more data to add to improve your trained system.
+It may take several trainings to create the optimal translation system for your project. You may want to try using more training data or more carefully filtered data if the `BLEU` score and/or the test results aren't satisfactory. You should be strict and careful in designing your tuning set and your test set. Make certain your sets fully represent the terminology and style of material you want to translate. You can be more liberal in composing your training data, and experiment with different options. Request a system deployment when the translations in your system test results are satisfactory and you don't have more data to add to improve your trained system.
## How many trained systems can be deployed in a project?
-Only one trained system can be deployed per project. It may take several trainings to create a suitable translation system for your project and we encourage you to request deployment of a training that gives you the best result. You can determine the quality of the training by the BLEU score (higher is better), and by consulting with reviewers before deciding that the quality of translations is suitable for deployment.
+Only one trained system can be deployed per project. It may take several trainings to create a suitable translation system for your project and we encourage you to request deployment of a training that gives you the best result. You can determine the quality of the training by the `BLEU` score (higher is better), and by consulting with reviewers before deciding that the quality of translations is suitable for deployment.
## When can I expect my trainings to be deployed?
Deployed systems can be accessed via the Microsoft Translator Text API V3 by spe
## How do I skip alignment and sentence breaking if my data is already sentence aligned?
-The Custom Translator skips sentence alignment and sentence breaking for TMX files and for text files with the `.align` extension. `.align` files give users an option to skip Custom Translator's sentence breaking and alignment process for the files that are perfectly aligned, and need no further processing. We recommend using `.align` extension only for files that are perfectly aligned.
+The Custom Translator skips sentence alignment and sentence breaking for `TMX` files and for text files with the `.align` extension. `.align` files give users an option to skip Custom Translator's sentence breaking and alignment process for files that are perfectly aligned and need no further processing. We recommend using the `.align` extension only for files that are perfectly aligned.
-If the number of extracted sentences doesn't match the two files with the same base name, Custom Translator will still run the sentence aligner on `.align` files.
+If the number of extracted sentences doesn't match between the two files with the same base name, Custom Translator still runs the sentence aligner on `.align` files.
-## I tried uploading my TMX, but it says "document processing failed"
+## I tried uploading my Translation Memory Exchange (TMX) file, but it says "document processing failed"
Ensure that the TMX conforms to the [TMX 1.4b Specification](https://www.gala-global.org/tmx-14b).
ai-services Copy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/how-to/copy-model.md
description: This article explains how to copy a custom model to another workspa
Previously updated : 07/18/2023 Last updated : 07/10/2024
ai-services Create Manage Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/how-to/create-manage-project.md
description: How to create and manage a project in the Azure AI Translator Custo
Previously updated : 07/18/2023 Last updated : 07/10/2024
Creating a project is the first step in building and publishing a model.
1. Select **Create project**.
- :::image type="content" source="../media/how-to/create-project-dialog.png" alt-text="Screenshot illustrating the create project fields.":::
+ :::image type="content" source="../media/how-to/create-project-dialog.png" alt-text="Screenshot illustrating the create-project fields.":::
## Edit a project
To modify the project name, project description, or domain description:
1. Follow the [**Edit a project**](#edit-a-project) steps 1-3 above.
-1. Select **Delete** and read the delete message before you select **Delete project** to confirm.
+1. Select **Delete** and read the delete message before you select **Delete project** to confirm.
:::image type="content" source="../media/how-to/delete-project-1.png" alt-text="Screenshot illustrating delete project fields.":::
ai-services Create Manage Training Documents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/how-to/create-manage-training-documents.md
description: How to build and upload parallel documents (two documents where one
Previously updated : 07/18/2023 Last updated : 07/09/2024
[Parallel documents](../concepts/parallel-documents.md) are pairs of documents where one (target) is a translation of the other (source). One document in the pair contains sentences in the source language and the other document contains those sentences translated into the target language.
-Before uploading your documents, review the [document formats and naming convention guidance](../concepts/document-formats-naming-convention.md) to make sure your file format is supported by Custom Translator.
+Before uploading your documents, review the [document formats and naming convention guidance](../concepts/document-formats-naming-convention.md) to make sure Custom Translator supports your file format.
## How to create document sets Finding in-domain quality data is often a challenging task that varies based on user classification. Here are some questions you can ask yourself as you evaluate what data may be available to you: -- Enterprises often have a wealth of translation data that has accumulated over many years of using human translation. Does your company have previous translation data available that you can use?
+- Does your company have previous translation data available that you can use? Enterprises often have a wealth of translation data accumulated over many years of using human translation.
- Do you have a vast amount of monolingual data? Monolingual data is data in only one language. If so, can you get translations for this data?
Finding in-domain quality data is often a challenging task that varies based on
| Source | What it does | Rules to follow | ||||
-| Bilingual training documents | Teaches the system your terminology and style. | **Be liberal**. Any in-domain human translation is better than machine translation. Add and remove documents as you go and try to improve the [BLEU score](../concepts/bleu-score.md?WT.mc_id=aiml-43548-heboelma). |
+| Bilingual training documents | Teaches the system your terminology and style. | **Be liberal**. Any in-domain human translation is better than machine translation. Add and remove documents as you go and try to improve the [`BLEU` score](../concepts/bleu-score.md?WT.mc_id=aiml-43548-heboelma). |
| Tuning documents | Trains the Neural Machine Translation parameters. | **Be strict**. Compose them to be optimally representative of what you are going to translate in the future. |
-| Test documents | Calculate the [BLEU score](../beginners-guide.md#what-is-a-bleu-score).| **Be strict**. Compose test documents to be optimally representative of what you plan to translate in the future. |
+| Test documents | Calculate the [`BLEU` score](../beginners-guide.md#what-is-a-bleu-score).| **Be strict**. Compose test documents to be optimally representative of what you plan to translate in the future. |
| Phrase dictionary | Forces the given translation 100% of the time. | **Be restrictive**. A phrase dictionary is case-sensitive and any word or phrase listed is translated in the way you specify. In many cases, it's better to not use a phrase dictionary and let the system learn. |
-| Sentence dictionary | Forces the given translation 100% of the time. | **Be strict**. A sentence dictionary is case-insensitive and good for common in domain short sentences. For a sentence dictionary match to occur, the entire submitted sentence must match the source dictionary entry. If only a portion of the sentence matches, the entry won't match. |
+| Sentence dictionary | Forces the given translation 100% of the time. | **Be strict**. A sentence dictionary is case-insensitive and good for common in-domain short sentences. For a sentence dictionary match to occur, the entire submitted sentence must match the source dictionary entry. If only a portion of the sentence matches, the entry doesn't match. |
## How to upload documents
Document types are associated with the language pair selected when you create a
1. Select **Upload**.
-At this point, Custom Translator is processing your documents and attempting to extract sentences as indicated in the upload notification. Once done processing, you'll see the upload successful notification.
+At this point, Custom Translator is processing your documents and attempting to extract sentences as indicated in the upload notification. Once done processing, you see the upload successful notification.
:::image type="content" source="../media/quickstart/document-upload-notification.png" alt-text="Screenshot illustrating the upload document processing dialog window.":::
At this point, Custom Translator is processing your documents and attempting to
On the workspace page, you can view the history of all document uploads and details like document type, language pair, and upload status.
-1. From the [Custom Translator](https://portal.customtranslator.azure.ai) portal workspace page,
- click Upload History tab to view history.
+1. Select the upload history tab on the [Custom Translator](https://portal.customtranslator.azure.ai) portal workspace page to view history.
+ :::image type="content" source="../media/how-to/upload-history-tab.png" alt-text="Screenshot showing the upload history tab."::: 2. This page shows the status of all of your past uploads. It displays
- uploads from most recent to least recent. For each upload, it shows the document name, upload status, the upload date, the number of files uploaded, type of file uploaded, the language pair of the file, and created by. You can use Filter to quickly find documents by name, status, language, and date range.
+   uploads from most recent to least recent. Each upload record shows the document name, creator, upload status, upload date, number of files uploaded, type of file uploaded, and language pair. You can use the filter to quickly find documents by name, status, language, and date range.
:::image type="content" source="../media/how-to/upload-history-page.png" alt-text="Screenshot showing the upload history page.":::
-3. Select any upload history record. In upload history details page,
- you can view the files uploaded as part of the upload, uploaded status of the file, language of the file and error message (if there is any error in upload).
+3. The upload history details page shows the files uploaded as part of the upload, the upload status of each file, the language of the file, and an error message (if there's an error in the upload).
## Next steps
ai-services Enable Vnet Service Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/how-to/enable-vnet-service-endpoint.md
description: This article describes how to use Custom Translator service with an
Previously updated : 08/08/2023 Last updated : 07/10/2024
Use a billing region code, listed in the following table, with the 'Create a wor
|Billing Region Name|Billing Region Code| |:-|:-|
-|East Asia|AE|
-|Southeast Asia|ASE|
-|Australia East|AUE|
-|Brazil South|BRS|
-|Canada Central|CAC|
-|France Central|FC|
-|Global|GBL|
-|Central India|INC|
-|Japan East|JPE|
-|Japan West|JPW|
-|Korea Central|KC|
-|North Europe|NEU|
-|South Africa North|SAN|
-|Sweden Central|SWC|
-|UAE North|UAEN|
-|UK South|UKS|
-|Central US|USC|
-|East US|USE|
-|East US 2|USE2|
-|North Central US|USNC|
-|South Central US|USSC|
-|West US|USW|
-|West US 2|USW2|
-|West Central US|USWC|
-|West Europe|WEU|
+|East Asia|`AE`|
+|Southeast Asia|`ASE`|
+|Australia East|`AUE`|
+|Brazil South|`BRS`|
+|Canada Central|`CAC`|
+|France Central|`FC`|
+|Global|`GBL`|
+|Central India|`INC`|
+|Japan East|`JPE`|
+|Japan West|`JPW`|
+|Korea Central|`KC`|
+|North Europe|`NEU`|
+|South Africa North|`SAN`|
+|Sweden Central|`SWC`|
+|UAE North|`UAEN`|
+|UK South|`UKS`|
+|Central US|`USC`|
+|East US|`USE`|
+|East US 2|`USE2`|
+|North Central US|`USNC`|
+|South Central US|`USSC`|
+|West US|`USW`|
+|West US 2|`USW2`|
+|West Central US|`USWC`|
+|West Europe|`WEU`|
Congratulations! You learned how to use Azure VNet service endpoints with Custom Translator.
ai-services Publish Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/how-to/publish-model.md
description: This article explains how to publish a custom model using the Azure
Previously updated : 07/18/2023 Last updated : 07/10/2024
-# Publish a custom model
+# Publish a custom model
Publishing your model makes it available for use with the Translator API. A project might have one or many successfully trained models. You can only publish one model per project; however, you can publish a model to one or multiple regions depending on your needs. For more information, see [Translator pricing](https://azure.microsoft.com/pricing/details/cognitive-services/translator/#pricing). ## Publish your trained model You can publish one model per project to one or multiple regions.
-1. Select the **Publish model** blade.
-1. Select *en-de with sample data* and select **Publish**.
+1. Select the `Publish model` blade.
-1. Check the desired region(s).
+1. Select *en-de with sample data* and select `Publish`.
-1. Select **Publish**. The status should transition from _Deploying_ to _Deployed_.
+1. Check the desired regions.
- :::image type="content" source="../media/quickstart/publish-model.png" alt-text="Screenshot illustrating the publish model blade.":::
+1. Select `Publish`. The status should transition from _Deploying_ to _Deployed_.
+
+ :::image type="content" source="../media/quickstart/publish-model.png" alt-text="Screenshot illustrating the publish-model blade.":::
## Replace a published model
-To replace a published model, you can exchange the published model with a different model in the same region(s):
+To replace a published model, you can exchange the published model with a different model in the same region:
1. Select the replacement model.
-1. Select **Publish**.
+1. Select `Publish`.
-1. Select **publish** once more in the **Publish model** dialog window.
+1. Select `publish` once more in the `Publish model` dialog window.
## Next steps
ai-services Test Your Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/how-to/test-your-model.md
description: How to test your custom model BLEU score and evaluate translations
Previously updated : 07/18/2023 Last updated : 07/09/2024 # Test your model
-Once your model has successfully trained, you can use translations to evaluate the quality of your model. In order to make an informed decision about whether to use our standard model or your custom model, you should evaluate the delta between your custom model [**BLEU score**](#bleu-score) and our standard model **Baseline BLEU**. If your models have been trained on a narrow domain, and your training data is consistent with the test data, you can expect a high BLEU score.
+Once your model is successfully trained, you can use translations to evaluate the quality of your model. In order to make an informed decision about whether to use our standard model or your custom model, you should evaluate the delta between your custom model [**BLEU score**](#bleu-score) and our standard model **Baseline BLEU**. If your model is trained within a narrow domain, and your training data is consistent with the test data, you can expect a high BLEU score.
## BLEU score
-BLEU (Bilingual Evaluation Understudy) is an algorithm for evaluating the precision or accuracy of text that has been machine translated from one language to another. Custom Translator uses the BLEU metric as one way of conveying translation accuracy.
+BLEU (Bilingual Evaluation Understudy) is an algorithm for evaluating the precision or accuracy of text that is machine translated from one language to another. Custom Translator uses the BLEU metric as one way of conveying translation accuracy.
A BLEU score is a number between zero and 100. A score of zero indicates a low-quality translation where nothing in the translation matched the reference. A score of 100 indicates a perfect translation that is identical to the reference. It's not necessary to attain a score of 100; a BLEU score between 40 and 60 indicates a high-quality translation.
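To make the 0 to 100 scale concrete, here's a minimal offline sketch that computes a corpus-level BLEU score with NLTK and reports it on a 0 to 100 scale. This isn't Custom Translator's scoring pipeline; the whitespace tokenization and smoothing choice are simplifying assumptions.

```python
# Minimal BLEU illustration using NLTK -- not Custom Translator's scoring pipeline.
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

# One list of reference translations per candidate; tokens come from a whitespace split.
references = [["the invoice total is due within thirty days".split()]]
hypotheses = ["the invoice total is due in thirty days".split()]

score = corpus_bleu(references, hypotheses,
                    smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score * 100:.1f}")  # reported on a 0-100 scale
```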
A BLEU score is a number between zero and 100. A score of zero indicates a low-q
1. Select the **Model details** blade.
-1. Select the model name. Review the training date/time, total training time, number of sentences used for training, tuning, testing, and dictionary. Check whether the system generated the test and tuning sets. You'll use the `Category ID` to make translation requests.
+1. Select the model name. Review the training date/time, total training time, number of sentences used for training, tuning, testing, and dictionary. Check whether the system generated the test and tuning sets. Use the `Category ID` to make translation requests.
-1. Evaluate the model [BLEU](../beginners-guide.md#what-is-a-bleu-score) score. Review the test set: the **BLEU score** is the custom model score and the **Baseline BLEU** is the pre-trained baseline model used for customization. A higher **BLEU score** means there's high translation quality using the custom model.
+1. Evaluate the model [BLEU](../beginners-guide.md#what-is-a-bleu-score) score. Review the test set: the **BLEU score** is the custom model score and the **Baseline BLEU** is the pretrained baseline model used for customization. A higher **BLEU score** means there's high translation quality using the custom model.
:::image type="content" source="../media/quickstart/model-details.png" alt-text="Screenshot illustrating the model detail.":::
A BLEU score is a number between zero and 100. A score of zero indicates a low-q
1. Select model **Name**.
-1. Human evaluate translation from your **Custom model** and the **Baseline model** (our pre-trained baseline used for customization) against **Reference** (target translation from the test set).
+1. Manually evaluate translations from your **Custom model** and the **Baseline model** (our pretrained baseline used for customization) against the **Reference** (target translation from the test set).
-1. If you're satisfied with the training results, place a deployment request for the trained model.
+1. If the training results are satisfactory, place a deployment request for the trained model.
## Next steps
ai-services Train Custom Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/how-to/train-custom-model.md
description: How to train a custom model
Previously updated : 07/18/2023 Last updated : 07/09/2024 # Train a custom model
-A model provides translations for a specific language pair. The outcome of a successful training is a model. To train a custom model, three mutually exclusive document types are required: training, tuning, and testing. If only training data is provided when queuing a training, Custom Translator will automatically assemble tuning and testing data. It will use a random subset of sentences from your training documents, and exclude these sentences from the training data itself. A minimum of 10,000 parallel training sentences are required to train a full model.
+A model provides translations for a specific language pair. The outcome of a successful training is a model. To train a custom model, three mutually exclusive document types are required: training, tuning, and testing. If only training data is provided when queuing a training, Custom Translator automatically assembles tuning and testing data. It uses a random subset of sentences from your training documents, and excludes these sentences from the training data itself. A minimum of 10,000 parallel training sentences is required to train a full model.
## Create model
A model provides translations for a specific language pair. The outcome of a suc
## When to select dictionary-only training
-For better results, we recommended letting the system learn from your training data. However, when you don't have enough parallel sentences to meet the 10,000 minimum requirements, or sentences and compound nouns must be rendered as-is, use dictionary-only training. Your model will typically complete training much faster than with full training. The resulting models will use the baseline models for translation along with the dictionaries you've added. You won't see BLEU scores or get a test report.
+For better results, we recommend letting the system learn from your training data. However, when you don't have enough parallel sentences to meet the 10,000 minimum requirement, or when sentences and compound nouns must be rendered as-is, use dictionary-only training. Your model typically completes training faster than with full training. The resulting models use the baseline models for translation along with the dictionaries you added. You don't see `BLEU` scores or get a test report.
> [!Note] >Custom Translator doesn't sentence-align dictionary files. Therefore, it is important that there are an equal number of source and target phrases/sentences in your dictionary documents and that they are precisely aligned. If not, the document upload will fail.
For better results, we recommended letting the system learn from your training d
1. After successful model training, select the **Model details** blade.
-1. Select the **Model Name** to review training date/time, total training time, number of sentences used for training, tuning, testing, dictionary, and whether the system generated the test and tuning sets. You'll use `Category ID` to make translation requests.
+1. Select the **Model Name** to review training date/time, total training time, number of sentences used for training, tuning, testing, dictionary, and whether the system generated the test and tuning sets. You use `Category ID` to make translation requests.
-1. Evaluate the model [BLEU score](../beginners-guide.md#what-is-a-bleu-score). Review the test set: the **BLEU score** is the custom model score and the **Baseline BLEU** is the pre-trained baseline model used for customization. A higher **BLEU score** means higher translation quality using the custom model.
+1. Evaluate the model [`BLEU` score](../beginners-guide.md#what-is-a-bleu-score). Review the test set: the **BLEU score** is the custom model score and the **Baseline BLEU** is the pretrained baseline model used for customization. A higher **BLEU score** means higher translation quality using the custom model.
:::image type="content" source="../media/quickstart/model-details.png" alt-text="Screenshot illustrating model details fields.":::
For better results, we recommended letting the system learn from your training d
1. Fill in **New model name**.
-1. Keep **Train immediately** checked if no further data will be selected or uploaded, otherwise, check **Save as draft**
+1. Keep **Train immediately** checked if no further data is selected or uploaded; otherwise, check **Save as draft**.
1. Select **Save**
For better results, we recommended letting the system learn from your training d
> > If you save the model as `Draft`, **Model details** is updated with the model name in `Draft` status. >
- > To add more documents, select on the model name and follow `Create model` section above.
+   > To add more documents, select the model name and follow the steps in the [Create model](#create-model) section.
:::image type="content" source="../media/how-to/duplicate-model.png" alt-text="Screenshot illustrating the duplicate model blade.":::
ai-services Translate With Custom Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/how-to/translate-with-custom-model.md
Title: Translate text with a custom model
-description: How to make translation requests using custom models published with the Azure AI Translator Custom Translator.
+description: How to make translation requests using custom models published with the Azure AI Translator Custom Translator.
Previously updated : 07/18/2023 Last updated : 07/10/2024
-# Translate text with a custom model
+# Translate text with a custom model
-After you publish your custom model, you can access it with the Translator API by using the `Category ID` parameter.
+After you publish your custom model, you can access it with the Translator API by using the `Category ID` parameter.
## How to translate
After you publish your custom model, you can access it with the Translator API b
More information about the Translator Text API can be found on the [Translator API Reference](../../reference/v3-0-translate.md) page.
-1. You may also want to download and install our free [DocumentTranslator app for Windows](https://github.com/MicrosoftTranslator/DocumentTranslation/releases).
+1. You can also download and install our free [DocumentTranslator app for Windows](https://github.com/MicrosoftTranslator/DocumentTranslation/releases).
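As a companion to the preceding steps, the following sketch shows a Translator Text API v3 `translate` request that routes translation through a published custom model by passing the `category` parameter. The key, region, and Category ID values are placeholders you replace with your own.

```python
# Minimal sketch of a Translator Text API v3 request using a custom category.
# Replace the placeholder key, region, and category ID with your own values.
import uuid
import requests

endpoint = "https://api.cognitive.microsofttranslator.com/translate"
params = {
    "api-version": "3.0",
    "from": "en",
    "to": "de",
    "category": "<your-category-id>",   # Category ID from the model details page
}
headers = {
    "Ocp-Apim-Subscription-Key": "<your-translator-resource-key>",
    "Ocp-Apim-Subscription-Region": "<your-resource-region>",
    "Content-Type": "application/json",
    "X-ClientTraceId": str(uuid.uuid4()),
}
body = [{"Text": "The quick brown fox jumps over the lazy dog."}]

response = requests.post(endpoint, params=params, headers=headers, json=body)
print(response.json())  # translations come back per input text item
```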
## Next steps
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/overview.md
description: Custom Translator offers similar capabilities to what Microsoft Tra
Previously updated : 07/18/2023 Last updated : 07/08/2024
Custom Translator provides different features to build custom translation system
|[Build systems that knows your business terminology](./beginners-guide.md) | Customize and build translation systems using parallel documents that understand the terminologies used in your own business and industry. | |[Use a dictionary to build your models](./how-to/train-custom-model.md#when-to-select-dictionary-only-training) | If you don't have training data set, you can train a model with only dictionary data. | |[Collaborate with others](./how-to/create-manage-workspace.md#manage-workspace-settings) | Collaborate with your team by sharing your work with different people. |
-|[Access your custom translation model](./how-to/translate-with-custom-model.md) | Your custom translation model can be accessed anytime by your existing applications/ programs via Microsoft Translator Text API V3. |
+|[Access your custom translation model](./how-to/translate-with-custom-model.md) | You can access your custom translation model anytime using your existing applications/programs via Microsoft Translator Text API V3. |
## Get better translations Microsoft Translator released [Neural Machine Translation (NMT)](https://www.microsoft.com/translator/blog/2016/11/15/microsoft-translator-launching-neural-network-based-translations-for-all-its-speech-languages/) in 2016. NMT provided major advances in translation quality over the industry-standard [Statistical Machine Translation (SMT)](https://en.wikipedia.org/wiki/Statistical_machine_translation) technology. Because NMT better captures the context of full sentences before translating them, it provides higher quality, more human-sounding, and more fluent translations. [Custom Translator](https://portal.customtranslator.azure.ai) provides NMT for your custom models, resulting in better translation quality.
-You can use previously translated documents to build a translation system. These documents include domain-specific terminology and style, better than a standard translation system. Users can upload ALIGN, PDF, LCL, HTML, HTM, XLF, TMX, XLIFF, TXT, DOCX, and XLSX documents.
+You can use previously translated documents to build a translation system. These documents include domain-specific terminology and style, better than a standard translation system. Users can upload `ALIGN`, `PDF`, `LCL`, `HTML`, `HTM`, `XLF`, `TMX`, `XLIFF`, `TXT`, `DOCX`, and `XLSX` documents.
-Custom Translator also accepts data that's parallel at the document level to make data collection and preparation more effective. If users have access to versions of the same content in multiple languages but in separate documents, Custom Translator will be able to automatically match sentences across documents.
+Custom Translator also accepts data that's parallel at the document level to make data collection and preparation more effective. If users have access to versions of the same content in multiple languages but in separate documents, Custom Translator is able to automatically match sentences across documents.
-If the appropriate type and amount of training data is supplied, it's not uncommon to see [BLEU score](concepts/bleu-score.md) gains between 5 and 10 points by using Custom Translator.
+If the appropriate type and amount of training data is supplied, it's not uncommon to see [`BLEU` score](concepts/bleu-score.md) gains between 5 and 10 points by using Custom Translator.
## Be productive and cost effective With [Custom Translator](https://portal.customtranslator.azure.ai), training and deploying a custom system doesn't require any programming skills.
-The secure [Custom Translator](https://portal.customtranslator.azure.ai) portal enables users to upload training data, train systems, test systems, and deploy them to a production environment through an intuitive user interface. The system will then be available for use at scale within a few hours (actual time depends on training data size).
+The secure [Custom Translator](https://portal.customtranslator.azure.ai) portal enables users to upload training data, train systems, test systems, and deploy them to a production environment through an intuitive user interface. The system is available for use at scale within a few hours (actual time depends on training data size).
-[Custom Translator](https://portal.customtranslator.azure.ai) can also be programmatically accessed through a dedicated API. The API allows users to manage creating or updating training through their own app or webservice.
+[Custom Translator](https://portal.customtranslator.azure.ai) can also be programmatically accessed through a dedicated API. The API allows users to manage the creating or updating of training through their own app or web service.
The cost of using a custom model to translate content is based on the user's Translator Text API pricing tier. See the Azure AI services [Translator Text API pricing webpage](https://azure.microsoft.com/pricing/details/cognitive-services/translator-text-api/) for pricing tier details.
Custom systems can be seamlessly accessed and integrated into any product or bus
## Next steps
-* Read about [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/translator-text-api/).
+* Learn more about [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/translator-text-api/).
-* With [Quickstart](./quickstart.md) learn to build a translation model in Custom Translator.
+* Try the [Quickstart](./quickstart.md) and learn to build a translation model in Custom Translator.
ai-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/quickstart.md
description: A step-by-step guide to building a translation system using the Cus
Previously updated : 07/05/2023 Last updated : 07/08/2024 # Quickstart: Build, publish, and translate with custom models
-Translator is a cloud-based neural machine translation service that is part of the Azure AI services family of REST API that can be used with any operating system. Translator powers many Microsoft products and services used by thousands of businesses worldwide to perform language translation and other language-related operations. In this quickstart, learn to build custom solutions for your applications across all [supported languages](../language-support.md).
+Translator is a cloud-based neural machine translation service that is part of the Azure AI services family of REST APIs and can be used with any operating system. Translator powers many Microsoft products and services used by thousands of businesses worldwide to perform language translation and other language-related operations. In this quickstart, learn to build custom solutions for your applications across all [supported languages](../language-support.md).
## Prerequisites
For more information, *see* [how to create a Translator resource](../create-tran
## Custom Translator portal
-Once you have the above prerequisites, sign in to the [Custom Translator](https://portal.customtranslator.azure.ai/) portal to create workspaces, build projects, upload files, train models, and publish your custom solution.
+Once you complete the prerequisites, sign in to the [Custom Translator](https://portal.customtranslator.azure.ai/) portal to create workspaces, build projects, upload files, train models, and publish your custom solution.
You can read an overview of translation and custom translation, learn some tips, and watch a getting started video in the [Azure AI technical blog](https://techcommunity.microsoft.com/t5/azure-ai/customize-a-translation-to-make-sense-in-a-specific-context/ba-p/2811956).
You can read an overview of translation and custom translation, learn some tips,
1. [**Train your model**](#train-your-model). A model is the system that provides translation for a specific language pair. The outcome of a successful training is a model. When you train a model, three mutually exclusive document types are required: training, tuning, and testing. If only training data is provided when queuing a training, Custom Translator automatically assembles tuning and testing data. It uses a random subset of sentences from your training documents, and excludes these sentences from the training data itself. A minimum of 10,000 parallel sentences is required to train a model.
-1. [**Test (human evaluate) your model**](#test-your-model). The testing set is used to compute the [BLEU](beginners-guide.md#what-is-a-bleu-score) score. This score indicates the quality of your translation system.
+1. [**Test (human evaluate) your model**](#test-your-model). The testing set is used to compute the [`BLEU`](beginners-guide.md#what-is-a-bleu-score) score. This score indicates the quality of your translation system.
1. [**Publish (deploy) your trained model**](#publish-your-model). Your custom model is made available for runtime translation requests.
You can read an overview of translation and custom translation, learn some tips,
## Create a project
-Once the workspace is created successfully, you're taken to the **Projects** page.
+Once the workspace is created successfully, you see the **Projects** page.
You create English-to-German project to train a custom model with only a [training](concepts/model-training.md#training-document-type-for-custom-translator) document type.
You create English-to-German project to train a custom model with only a [traini
In order to create a custom model, you need to upload all or a combination of [training](concepts/model-training.md#training-document-type-for-custom-translator), [tuning](concepts/model-training.md#tuning-document-type-for-custom-translator), [testing](concepts/model-training.md#testing-dataset-for-custom-translator), and [dictionary](concepts/dictionaries.md) document types.
-In this quickstart, you'll upload [training](concepts/model-training.md#training-document-type-for-custom-translator) documents for customization.
+In this quickstart, we show you how to upload [training](concepts/model-training.md#training-document-type-for-custom-translator) documents for customization.
>[!Note] > You can use our sample training, phrase and sentence dictionaries dataset, [Customer sample English-to-German datasets](https://github.com/MicrosoftTranslator/CustomTranslatorSampleDatasets), for this quickstart. However, for production, it's better to upload your own training dataset.
Now you're ready to train your English-to-German model.
1. Select the model name *en-de with sample data*. Review training date/time, total training time, number of sentences used for training, tuning, testing, and dictionary. Check whether the system generated the test and tuning sets. You use the `Category ID` to make translation requests.
-1. Evaluate the model [BLEU](beginners-guide.md#what-is-a-bleu-score) score. The test set **BLEU score** is the custom model score and **Baseline BLEU** is the pretrained baseline model used for customization. A higher **BLEU score** means higher translation quality using the custom model.
+1. Evaluate the model [`BLEU`](beginners-guide.md#what-is-a-bleu-score) score. The test set **BLEU score** is the custom model score and **Baseline BLEU** is the pretrained baseline model used for customization. A higher **BLEU score** means higher translation quality using the custom model.
>[!Note] >If you train with our shared customer sample datasets, the BLEU score will differ from the one shown in the image.
Now you're ready to train your English-to-German model.
## Test your model
-Once your training has completed successfully, inspect the test set translated sentences.
+Once the training completes successfully, inspect the test set translated sentences.
1. Select **Test model** from the left navigation menu. 2. Select "en-de with sample data".
Publishing your model makes it available for use with the Translator API. A proj
1. Select *en-de with sample data* and select **Publish**.
-1. Check the desired region(s).
+1. Check the desired regions.
1. Select **Publish**. The status should transition from _Deploying_ to _Deployed_.
ai-services Create Use Glossaries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/how-to-guides/create-use-glossaries.md
Previously updated : 07/18/2023 Last updated : 07/09/2024 # Use glossaries with Document Translation
A glossary is a list of terms with definitions that you create for the Document
To check if your file format is supported, *see* [Get supported glossary formats](../reference/get-supported-glossary-formats.md).
- The following English-source glossary contains words that can have different meanings depending upon the context in which they're used. The glossary provides the expected translation for each word in the file to help ensure accuracy.
+ The following English-source glossary contains words that can have different meanings depending upon the context. The glossary provides the expected translation for each word in the file to help ensure accuracy.
For instance, when the word `Bank` appears in a financial document, it should be translated to reflect its financial meaning. If the word `Bank` appears in a geographical document, it may refer to a shore, reflecting its topographical meaning. Similarly, the word `Crane` can refer to either a bird or a machine.
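The following minimal Python sketch shows one way to assemble such a glossary as a tab-separated file. The file name and the target translations are illustrative assumptions, not values defined by the Document Translation service.

```python
# Minimal sketch: build a small English-to-Spanish glossary as a tab-separated
# file (source term<TAB>expected translation). The translations below are
# illustrative assumptions chosen for this example.
glossary_entries = [
    ("Bank", "Banco"),   # financial meaning assumed for this glossary
    ("Crane", "Grúa"),   # machine meaning assumed for this glossary
]

with open("glossary.tsv", "w", encoding="utf-8") as glossary_file:
    for source_term, target_term in glossary_entries:
        glossary_file.write(f"{source_term}\t{target_term}\n")
```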
ai-services Firewalls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/firewalls.md
Previously updated : 07/18/2023 Last updated : 07/09/2024
If you still require IP filtering, you can get the [IP addresses details using s
> * Once you enable **Selected Networks and Private Endpoints**, you must use the **Virtual Network** endpoint to call the Translator. You can't use the standard translator endpoint (`api.cognitive.microsofttranslator.com`) and you can't authenticate with an access token. > * For more information, *see* [**Virtual Network Support**](reference/v3-0-reference.md#virtual-network-support).
-1. To grant access to an internet IP range, enter the IP address or address range (in [CIDR format](https://tools.ietf.org/html/rfc4632)) under **Firewall** > **Address Range**. Only valid public IP (`non-reserved`) addresses are accepted.
+1. To grant access to an internet IP range, enter the IP address or address range (in [`CIDR` notation](https://tools.ietf.org/html/rfc4632)) under **Firewall** > **Address Range**. Only valid public IP (`non-reserved`) addresses are accepted.
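   If `CIDR` notation is unfamiliar, the following sketch uses Python's standard `ipaddress` module to show the address range that a sample prefix covers. The `203.0.113.0/24` range is a reserved documentation example, not a value to use with your firewall.

```python
import ipaddress

# Example only: 203.0.113.0/24 is a reserved documentation range, shown here
# to illustrate CIDR notation. Substitute your own public, non-reserved range.
network = ipaddress.ip_network("203.0.113.0/24")

print(network.num_addresses)          # 256 addresses in a /24
print(network[0], "-", network[-1])   # 203.0.113.0 - 203.0.113.255
```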
Running Microsoft Translator from behind a specific IP filtered firewall is **not recommended**. The setup is likely to break in the future without notice.
ai-services Migrate To V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/migrate-to-v3.md
Previously updated : 07/18/2023 Last updated : 07/10/2024 # Translator V2 to V3 Migration > [!NOTE]
-> V2 was deprecated on April 30, 2018. Please migrate your applications to V3 in order to take advantage of new functionality available exclusively in V3. V2 was retired on May 24, 2021.
+> V2 was deprecated on April 30, 2018. Please migrate your applications to V3 in order to take advantage of new functionality available exclusively in V3. V2 was retired on May 24, 2021.
-The Microsoft Translator team has released Version 3 (V3) of the Translator. This release includes new features, deprecated methods and a new format for sending to, and receiving data from the Microsoft Translator Service. This document provides information for changing applications to use V3.
+Version 3 (V3) of the Translator service is generally available. The release includes new features, deprecated methods and a new format for sending to, and receiving data from the Microsoft Translator Service. This document provides information for changing applications to use V3.
The end of this document contains helpful links for you to learn more. ## Summary of features
-* No Trace - In V3 No-Trace applies to all pricing tiers in the Azure portal. This feature means that no text submitted to the V3 API, will be saved by Microsoft.
-* JSON - XML is replaced by JSON. All data sent to the service and received from the service is in JSON format.
-* Multiple target languages in a single request - The Translate method accepts multiple 'to' languages for translation in a single request. For example, a single request can be 'from' English and 'to' German, Spanish and Japanese, or any other group of languages.
-* Bilingual dictionary - A bilingual dictionary method has been added to the API. This method includes 'lookup' and 'examples'.
-* Transliterate - A transliterate method has been added to the API. This method will convert words and sentences in one script into another script. For example, Arabic to Latin.
-* Languages - A new 'languages' method delivers language information, in JSON format, for use with the 'translate', 'dictionary', and 'transliterate' methods.
-* New to Translate - New capabilities have been added to the 'translate' method to support some of the features that were in the V2 API as separate methods. An example is TranslateArray.
-* Speak method - Text to speech functionality is no longer supported in the Microsoft Translator. Text to speech functionality is available in [Microsoft Speech Service](../speech-service/text-to-speech.md).
+* No Trace - In V3 No-Trace applies to all pricing tiers in the Azure portal. This feature means that the service doesn't save text submitted to the V3 API.
+* JSON - JSON replaces XML. All data sent to the service and received from the service is in JSON format.
+* Multiple target languages in a single request - The Translate method accepts multiple `to` languages for translation in a single request. For example, a single request can be `from` English and `to` German, Spanish and Japanese, or any other group of languages (see the sketch after this list).
+* Bilingual dictionary - A bilingual dictionary method is added to the API. This method includes `lookup` and `examples`.
+* Transliteration - A transliterate method is added to the API. This method converts words and sentences in one script into another script. For example, Arabic to Latin.
+* Languages - A new `languages` method delivers language information, in JSON format, for use with the `translate`, `dictionary`, and `transliterate` methods.
+* New to Translate - New capabilities are added to the `translate` method to support some of the features that were in the V2 API as separate methods. An example is TranslateArray.
+* Speak method - Text to speech functionality is no longer supported in the Microsoft Translator. Text to speech functionality is available in [Microsoft Speech Service](../speech-service/text-to-speech.md).
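To illustrate the multiple-target-language capability called out in this list, here's a minimal Python sketch against the V3 `translate` endpoint. The key, region, and sample text are placeholders, not values from this article.

```python
import requests

# Placeholders: substitute your own resource key and region.
key = "<YOUR-TRANSLATOR-KEY>"
region = "<YOUR-RESOURCE-REGION>"

url = "https://api.cognitive.microsofttranslator.com/translate"
# Passing a list for "to" sends repeated to= parameters: one request, three target languages.
params = {"api-version": "3.0", "from": "en", "to": ["de", "es", "ja"]}
headers = {
    "Ocp-Apim-Subscription-Key": key,
    "Ocp-Apim-Subscription-Region": region,
    "Content-Type": "application/json",
}
body = [{"text": "Hello, what is your name?"}]

response = requests.post(url, params=params, headers=headers, json=body)
for translation in response.json()[0]["translations"]:
    print(translation["to"], ":", translation["text"])
```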
-The following list of V2 and V3 methods identifies the V3 methods and APIs that will provide the functionality that came with V2.
+The following list of V2 and V3 methods identifies the V3 methods and APIs that provide the functionality that came with V2.
| V2 API Method | V3 API Compatibility | |:-- |:-|
The following list of V2 and V3 methods identifies the V3 methods and APIs that
## Move to JSON format
-Microsoft Translator Translation V2 accepted and returned data in XML format. In V3, all data sent and received using the API is in JSON format. XML will no longer be accepted or returned in V3.
+Microsoft Translator Translation V2 accepted and returned data in XML format. In V3, all data sent and received using the API is in JSON format. XML is no longer accepted or returned in V3.
-This change will affect several aspects of an application written for the V2 Text Translation API. As an example: The Languages API returns language information for text translation, transliteration, and the two dictionary methods. You can request all language information for all methods in one call or request them individually.
+This change affects several aspects of an application written for the V2 Text Translation API. As an example: The Languages API returns language information for text translation, transliteration, and the two dictionary methods. You can request all language information for all methods in one call or request them individually.
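As a hedged illustration of the JSON responses, the following Python sketch calls the V3 `languages` method (linked below) with the same scope values. As noted later in this section, no authentication key is required for this method.

```python
import requests

# The languages method needs no authentication key; scope limits the response
# to the method groups you care about.
url = "https://api.cognitive.microsofttranslator.com/languages"
params = {
    "api-version": "3.0",
    "scope": "translation,dictionary,transliteration",
}

response = requests.get(url, params=params)
languages = response.json()

print(len(languages["translation"]), "languages available for text translation")
```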
-The languages method does not require authentication; by selecting the following link you can see all the language information for V3 in JSON:
+The languages method doesn't require authentication; by selecting the following link you can see all the language information for V3 in JSON:
[https://api.cognitive.microsofttranslator.com/languages?api-version=3.0&scope=translation,dictionary,transliteration](https://api.cognitive.microsofttranslator.com/languages?api-version=3.0&scope=translation,dictionary,transliteration) ## Authentication Key
-The authentication key you are using for V2 will be accepted for V3. You will not need to get a new subscription. You will be able to mix V2 and V3 in your apps during the yearlong migration period, making it easier for you to release new versions while you are still migrating from V2-XML to V3-JSON.
+The authentication key used for V2 is accepted for V3. You don't need to get a new subscription. You can mix V2 and V3 in your apps during the yearlong migration period, making it easier for you to release new versions while you migrate from V2-XML to V3-JSON.
## Pricing Model
Microsoft Translator V3 is priced in the same way V2 was priced; per character,
| V3 Method | Characters Counted for Billing | |:-- |:-| | `Languages` | No characters submitted, none counted, no charge. |
-| `Translate` | Count is based on how many characters are submitted for translation, and how many languages the characters are translated into. 50 characters submitted, and 5 languages requested will be 50x5. |
+| `Translate` | Count is based on how many characters are submitted for translation, and how many languages the characters are translated into. 50 characters submitted and 5 languages requested count as 50x5. |
| `Transliterate` | Number of characters submitted for transliteration are counted. | | `Dictionary lookup & example` | Number of characters submitted for Dictionary lookup and examples are counted. | | `BreakSentence` | No Charge. |
Global
## Compatibility and customization > [!NOTE]
->
-> The Microsoft Translator Hub will be retired on May 17, 2019. [View important migration information and dates](https://www.microsoft.com/translator/business/hub/).
+>
+> The Microsoft Translator Hub will be retired on May 17, 2019. [View important migration information and dates](https://www.microsoft.com/translator/business/hub/).
-Microsoft Translator V3 uses neural machine translation by default. As such, it cannot be used with the Microsoft Translator Hub. The Translator Hub only supports legacy statistical machine translation. Customization for neural translation is now available using the Custom Translator. [Learn more about customizing neural machine translation](custom-translator/overview.md)
+Microsoft Translator V3 uses neural machine translation by default. As such, it can't be used with the Microsoft Translator Hub. The Translator Hub only supports legacy statistical machine translation. Customization for neural translation is now available using the Custom Translator. [Learn more about customizing neural machine translation](custom-translator/overview.md)
-Neural translation with the V3 text API does not support the use of standard categories (SMT, speech, tech, _generalnn_).
+Neural translation with the V3 text API doesn't support the use of standard categories (`SMT`, `speech`, `tech`, `generalnn`).
| Version | Endpoint | GDPR Processor Compliance | Use Translator Hub | Use Custom Translator (Preview) | | : | :- | : | :-- | : |
Neural translation with the V3 text API does not support the use of standard cat
**Translator Version 3** * It's generally available and fully supported.
-* It's GDPR-compliant as a processor and satisfies all ISO 20001 and 20018 as well as SOC 3 certification requirements.
-* It allows you to invoke the neural network translation systems you have customized with Custom Translator (Preview), the new Translator NMT customization feature.
+* It's GDPR-compliant as a processor and satisfies all ISO 20001 and 20018 as well as SOC 3 certification requirements.
+* It allows you to invoke the neural network translation systems you customized with Custom Translator (Preview), the new Translator neural machine translation (NMT) customization feature.
* It doesn't provide access to custom translation systems created using the Microsoft Translator Hub.
-You are using Version 3 of the Translator if you are using the api.cognitive.microsofttranslator.com endpoint.
+You're using Version 3 of the Translator if you're using the api.cognitive.microsofttranslator.com endpoint.
**Translator Version 2**
-* Doesn't satisfy all ISO 20001,20018 and SOC 3 certification requirements.
-* Doesn't allow you to invoke the neural network translation systems you have customized with the Translator customization feature.
-* Provides access to custom translation systems created using the Microsoft Translator Hub.
-* You are using Version 2 of the Translator if you are using the api.microsofttranslator.com endpoint.
+* Doesn't satisfy all ISO 20001, 20018, and SOC 3 certification requirements.
+* Doesn't allow you to invoke the neural network translation systems you customized with the Translator customization feature.
+* Does provide access to custom translation systems created using the Microsoft Translator Hub.
+* Uses the api.microsofttranslator.com endpoint.
No version of the Translator creates a record of your translations. Your translations are never shared with anyone. For more information, *see* the [Translator No-Trace](https://www.aka.ms/NoTrace) webpage.
ai-services Prevent Translation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/prevent-translation.md
Previously updated : 07/18/2023 Last updated : 07/09/2024
The Translator allows you to tag content so that it isn't translated. For exampl
<span class="notranslate">This will not be translated.</span> <span>This will be translated. </span> ```
-
+ ```html <div class="notranslate">This will not be translated.</div> <div>This will be translated. </div> ```
-2. Tag your content with `translate="no"`. This only works when the input textType is set as HTML
+2. Tag your content with `translate="no"`. This tag only works when the input textType is set as HTML.
Example:
The Translator allows you to tag content so that it isn't translated. For exampl
<span translate="no">This will not be translated.</span> <span>This will be translated. </span> ```
-
+ ```html <div translate="no">This will not be translated.</div> <div>This will be translated. </div> ```
-
+ 3. Use the [dynamic dictionary](dynamic-dictionary.md) to prescribe a specific translation. 4. Don't pass the string to the Translator for translation. 5. Custom Translator: Use a [dictionary in Custom Translator](custom-translator/concepts/dictionaries.md) to prescribe the translation of a phrase with 100% probability. - ## Next steps+ > [!div class="nextstepaction"]
-> [Use the Translate operation to translate text](reference/v3-0-translate.md)
+> [Translate text reference](reference/v3-0-translate.md)
ai-services Profanity Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/profanity-filtering.md
Previously updated : 07/18/2023 Last updated : 07/10/2024
-# Add profanity filtering with the Translator
+# Add profanity filtering with Translator
Normally the Translator service retains profanity that is present in the source in the translation. The degree of profanity and the context that makes words profane differ between cultures. As a result, the degree of profanity in the target language may be amplified or reduced. If you want to avoid seeing profanity in the translation, even if profanity is present in the source text, use the profanity filtering option available in the Translate() method. This option allows you to choose whether you want the profanity deleted, marked with appropriate tags, or no action taken.
-The Translate() method takes the "options" parameter, which contains the new element "ProfanityAction." The accepted values of ProfanityAction are "NoAction," "Marked," and "Deleted." For the value of "Marked," an additional, optional element "ProfanityMarker" can take the values "Asterisk" (default) and "Tag."
-
+The Translate() method takes the "options" parameter, which contains the new element "ProfanityAction." The accepted values of ProfanityAction are "NoAction," "Marked," and "Deleted." For the value of "Marked," another optional element "ProfanityMarker" can take the values "Asterisk" (default) and "Tag."
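The following Python sketch is one possible way to exercise these options against the V3 REST `translate` endpoint, assuming the `profanityAction` and `profanityMarker` query parameters. The key, region, and sample text are placeholders, not values from this article.

```python
import requests
import uuid

# Placeholders: substitute your own resource key and region.
key = "<YOUR-TRANSLATOR-KEY>"
region = "<YOUR-RESOURCE-REGION>"

url = "https://api.cognitive.microsofttranslator.com/translate"
params = {
    "api-version": "3.0",
    "from": "es",
    "to": "en",
    "profanityAction": "Marked",   # NoAction | Marked | Deleted
    "profanityMarker": "Tag",      # Asterisk (default) | Tag
}
headers = {
    "Ocp-Apim-Subscription-Key": key,
    "Ocp-Apim-Subscription-Region": region,
    "Content-Type": "application/json",
    "X-ClientTraceId": str(uuid.uuid4()),
}
body = [{"text": "Que coche de <insert-profane-word>"}]

response = requests.post(url, params=params, headers=headers, json=body)
# With Marked + Tag, profane words in the output are wrapped in <profanity>...</profanity> tags.
print(response.json())
```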
## Accepted values and examples of ProfanityMarker and ProfanityAction+ | ProfanityAction value | ProfanityMarker value | Action | Example: Source - Spanish| Example: Target - English| |:--|:--|:--|:--|:--|
-| NoAction| | Default. Same as not setting the option. Profanity passes from source to target. | Que coche de \<insert-profane-word> | What a \<insert-profane-word> car |
-| Marked | Asterisk | Profane words are replaced by asterisks (default). | Que coche de \<insert-profane-word> | What a *** car |
-| Marked | Tag | Profane words are surrounded by XML tags \<profanity\>...\</profanity>. | Que coche de \<insert-profane-word> | What a \<profanity> \<insert-profane-word> \</profanity> car |
-| Deleted | | Profane words are removed from the output without replacement. | Que coche de \<insert-profane-word> | What a car |
+| NoAction| | Default. Same as not setting the option. Profanity passes from source to target. | `Que coche de` \<insert-profane-word> | What a \<insert-profane-word> car |
+| Marked | Asterisk | Asterisks replace profane words (default). | `Que coche de` \<insert-profane-word> | What a *** car |
+| Marked | Tag | Profane words are surrounded by XML tags \<profanity\>...\</profanity>. | `Que coche de` \<insert-profane-word> | What a \<profanity> \<insert-profane-word> \</profanity> car |
+| Deleted | | Profane words are removed from the output without replacement. | `Que coche de` \<insert-profane-word> | What a car |
In the above examples, **\<insert-profane-word>** is a placeholder for profane words. ## Next steps+ > [!div class="nextstepaction"] > [Apply profanity filtering with your Translator call](reference/v3-0-translate.md)
ai-services Quickstart Text Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/quickstart-text-sdk.md
Title: "Quickstart: Azure AI Translator SDKs"
+ Title: "Quickstart: Azure AI Translator client libraries"
description: "Learn to translate text with the Translator service SDks in a programming language of your choice: C#, Java, JavaScript, or Python." #
Previously updated : 09/06/2023 Last updated : 07/09/2024 ms.devlang: csharp # ms.devlang: csharp, java, javascript, python
zone_pivot_groups: programming-languages-set-translator-sdk
<!-- markdownlint-disable MD036 --> <!-- markdownlint-disable MD049 -->
-# Quickstart: Azure AI Translator SDKs (preview)
+# Quickstart: Azure AI Translator client libraries (preview)
> [!IMPORTANT] >
You need an active Azure subscription. If you don't have an Azure subscription,
* After your resource deploys, select **Go to resource** and retrieve your key and endpoint.
- * Get the key, endpoint, and region from the resource to connect your application to the Translator service. Paste these values into the code later in the quickstart. You can find them on the Azure portal **Keys and Endpoint** page:
+ * Get the key, endpoint, and region from the resource and connect your application to the Translator service. Paste these values into the code later in the quickstart. You can find them on the Azure portal **Keys and Endpoint** page:
:::image type="content" source="media/quickstarts/keys-and-endpoint-portal.png" alt-text="Screenshot: Azure portal keys and endpoint page.":::
ai-services Text Sdk Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/text-sdk-overview.md
Previously updated : 07/18/2023 Last updated : 07/08/2024 recommendations: false
Text Translation SDK supports the programming languages and platforms:
| Language → SDK version | Package|Client library| Supported API version| |:-:|:-|:-|:-| |[.NET/C# → 1.0.0-beta.1](https://azuresdkdocs.blob.core.windows.net/$web/dotnet/Azure.AI.Translation.Text/1.0.0-beta.1/https://docsupdatetracker.net/index.html)|[NuGet](https://www.nuget.org/packages/Azure.AI.Translation.Text/1.0.0-beta.1)|[Azure SDK for .NET](/dotnet/api/overview/azure/ai.translation.text-readme?view=azure-dotnet-preview&preserve-view=true)|Translator v3.0|
-|[Java&#x2731; → 1.0.0-beta.1](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-ai-translation-text/1.0.0-beta.1/https://docsupdatetracker.net/index.html)|[MVN repository](https://mvnrepository.com/artifact/com.azure/azure-ai-translation-text/1.0.0-beta.1)|[Azure SDK for Java](/java/api/overview/azure/ai-translation-text-readme?view=azure-java-preview&preserve-view=true)|Translator v3.0|
+|[Java&#x2731; → 1.0.0-beta.1](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-ai-translation-text/1.0.0-beta.1/https://docsupdatetracker.net/index.html)|[Maven repository](https://mvnrepository.com/artifact/com.azure/azure-ai-translation-text/1.0.0-beta.1)|[Azure SDK for Java](/java/api/overview/azure/ai-translation-text-readme?view=azure-java-preview&preserve-view=true)|Translator v3.0|
|[JavaScript → 1.0.0-beta.1](https://azuresdkdocs.blob.core.windows.net/$web/javascript/azure-cognitiveservices-translatortext/1.0.0/https://docsupdatetracker.net/index.html)|[npm](https://www.npmjs.com/package/@azure-rest/ai-translation-text/v/1.0.0-beta.1)|[Azure SDK for JavaScript](/javascript/api/overview/azure/text-translation?view=azure-node-preview&preserve-view=true) |Translator v3.0 | |**Python → 1.0.0b1**|[PyPi](https://pypi.org/project/azure-ai-translation-text/1.0.0b1/)|[Azure SDK for Python](/python/api/azure-ai-translation-text/azure.ai.translation.text?view=azure-python-preview&preserve-view=true) |Translator v3.0|
Create a client object to interact with the Text Translation SDK, and then call
## Help options
-The [Microsoft Q&A](/answers/tags/132/azure-translator) and [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-text-translation) forums are available for the developer community to ask and answer questions about Azure Text Translation and other services. Microsoft monitors the forums and replies to questions that the community has yet to answer. To make sure that we see your question, tag it with **`azure-text-translation`**.
+The [Microsoft Q & A](/answers/tags/132/azure-translator) and [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-text-translation) forums are available for the developer community to ask and answer questions about Azure Text Translation and other services. Microsoft monitors the forums and replies to questions that the community has yet to answer. To make sure that we see your question, tag it with **`azure-text-translation`**.
## Next steps
ai-services Translator Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/translator-faq.md
Previously updated : 07/18/2023 Last updated : 07/09/2024
Translator counts the following input:
* An individual letter. * Punctuation. * A space, tab, markup, or any white-space character.
-* A repeated translation, even if you have previously translated the same text. Every character submitted to the translate function is counted even when the content is unchanged or the source and target language are the same.
+* A repeated translation, even if you previously translated the same text. Every character submitted to the translate function is counted even when the content is unchanged or the source and target language are the same.
For scripts based on graphic symbols, such as written Chinese and Japanese Kanji, the Translator service counts the number of Unicode code points. One character per symbol. Exception: Unicode surrogate pairs count as two characters.
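As a rough way to estimate counted characters under this rule, the following Python sketch counts UTF-16 code units, so characters outside the Basic Multilingual Plane count as two. This is an illustrative approximation, not the service's billing code.

```python
def billable_characters(text: str) -> int:
    """Estimate counted characters: UTF-16 code units, so surrogate pairs count as two."""
    return len(text.encode("utf-16-le")) // 2

print(billable_characters("Hello"))  # 5
print(billable_characters("漢字"))    # 2 (one per symbol)
print(billable_characters("𠀋"))     # 2 (outside the BMP, encoded as a surrogate pair)
```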
The [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/)
## Is attribution required when using Translator?
-Attribution isn't required when using Translator for text and speech translation. It's recommended that you inform users that the content they're viewing is machine translated.
+Attribution isn't required when using Translator for text and speech translation. We recommend that you inform users that the content they're viewing is machine translated.
If attribution is present, it must conform to the [Translator attribution guidelines](https://www.microsoft.com/translator/business/attribution/).
If attribution is present, it must conform to the [Translator attribution guidel
No, both have their place as essential tools for communication. Use machine translation where the quantity of content, speed of creation, and budget constraints make it impossible to use human translation.
-Machine translation has been used as a first pass by several of our [language service provider (LSP)](https://www.microsoft.com/translator/business/partners/) partners, prior to using human translation and can improve productivity by up to 50 percent. For a list of LSP partners, visit the Translator partner page.
+Machine translation is used as a first pass by several of our [language service provider (LSP)](https://www.microsoft.com/translator/business/partners/) partners, before human translation, and can improve productivity by up to 50 percent. For a list of LSP partners, visit the Translator partner page.
> [!TIP]
ai-services Translator Text Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/translator-text-apis.md
Title: "Use Azure AI Translator APIs"
-description: "Learn to translate text, transliterate text, detect language and more with the Translator service. Examples are provided in C#, Java, JavaScript and Python."
+description: "Learn to translate text, transliterate text, detect language, and more with the Translator service. Examples are provided in C#, Java, JavaScript, and Python."
# Previously updated : 07/18/2023 Last updated : 07/08/2024 ms.devlang: csharp # ms.devlang: csharp, golang, java, javascript, python
keywords: translator, translator service, translate text, transliterate text, la
In this how-to guide, you learn to use the [Translator service REST APIs](reference/rest-api-guide.md). You start with basic examples and move onto some core configuration options that are commonly used during development, including:
-* [Translation](#translate-text)
-* [Transliteration](#transliterate-text)
+* [Language Translation](#translate-text)
+* [Language Transliteration](#transliterate-text)
* [Language identification/detection](#detect-language)
-* [Calculate sentence length](#get-sentence-length)
-* [Get alternate translations](#dictionary-lookup-alternate-translations) and [examples of word usage in a sentence](#dictionary-examples-translations-in-context)
+* [Sentence length calculation](#get-sentence-length)
+* [Alternate translations](#dictionary-lookup-alternate-translations) and [examples of word usage in a sentence](#dictionary-examples-translations-in-context)
## Prerequisites
To call the Translator service via the [REST API](reference/rest-api-guide.md),
:::image type="content" source="media/quickstarts/newtonsoft.png" alt-text="Screenshot of the NuGet package install window.":::
-1. Select install from the right package manager window to add the package to your project.
+1. Select install from the right package manager window and add the package to your project.
:::image type="content" source="media/how-to-guides/install-newtonsoft.png" alt-text="Screenshot of the NuGet package install button.":::
To call the Translator service via the [REST API](reference/rest-api-guide.md),
1. Delete the pre-existing code, including the line `Console.WriteLine("Hello World!")`. Copy and paste the code samples into your application's Program.cs file. For each code sample, make sure you update the key and endpoint variables with values from your Azure portal Translator instance.
-1. Once you've added a desired code sample to your application, choose the green **start button** next to formRecognizer_quickstart to build and run your program, or press **F5**.
+1. Once you add a desired code sample to your application, choose the green **start button** next to formRecognizer_quickstart to build and run your program, or press **F5**.
:::image type="content" source="media/how-to-guides/run-program-visual-studio.png" alt-text="Screenshot of the run program button in Visual Studio."::: ### [Go](#tab/go)
-You can use any text editor to write Go applications. We recommend using the latest version of [Visual Studio Code and the Go extension](/azure/developer/go/configure-visual-studio-code).
+You can use any text editor to write Go applications. We recommend using the latest version of [Visual Studio Code and the Go application extension](/azure/developer/go/configure-visual-studio-code).
> [!TIP] > > If you're new to Go, try the [Get started with Go](/training/modules/go-get-started/) Learn module.
-1. If you haven't done so already, [download and install Go](https://go.dev/doc/install).
+1. If you don't have it in your dev environment, [download and install Go](https://go.dev/doc/install).
- * Download the Go version for your operating system.
+ * Download the Go application version for your operating system.
* Once the download is complete, run the installer. * Open a command prompt and enter the following to confirm Go was installed:
You can use any text editor to write Go applications. We recommend using the lat
1. Copy and paste the code samples into your **text-translator.go** file. Make sure you update the key variable with the value from your Azure portal Translator instance.
-1. Once you've added a code sample to your application, your Go program can be executed in a command or terminal prompt. Make sure your prompt's path is set to the **translator-text-app** folder and use the following command:
+1. Once you add a code sample to your application, your Go program can be executed in a command or terminal prompt. Make sure your prompt's path is set to the **translator-text-app** folder and use the following command:
```console go run translation.go
You can use any text editor to write Go applications. We recommend using the lat
1. Copy and paste the code samples into your `TranslatorText.java` file. **Make sure you update the key with one of the key values from your Azure portal Translator instance**.
-1. Once you've added a code sample to your application, navigate back to your main project directoryΓÇö**translator-text-app**, open a console window, and enter the following commands:
+1. Once you add a code sample to your application, navigate back to your main project directory, **translator-text-app**, open a console window, and enter the following commands:
1. Build your application with the `build` command:
You can use any text editor to write Go applications. We recommend using the lat
### [Node.js](#tab/nodejs)
-1. If you haven't done so already, install the latest version of [Node.js](https://nodejs.org/en/download/). Node Package Manager (npm) is included with the Node.js installation.
+1. If it isn't installed in your dev environment, download the latest version of [Node.js](https://nodejs.org/en/download/). Node Package Manager (npm) is included with the Node.js installation.
> [!TIP] >
You can use any text editor to write Go applications. We recommend using the lat
 * The most important attributes are name, version number, and entry point. * We recommend keeping `index.js` for the entry point name. The description, test command, GitHub repository, keywords, author, and license information are optional attributes; they can be skipped for this project. * Accept the suggestions in parentheses by selecting **Return** or **Enter**.
- * After you've completed the prompts, a `package.json` file will be created in your translator-text-app directory.
+ * After you complete the prompts, a `package.json` file will be created in your translator-text-app directory.
1. Open a console window and use npm to install the `axios` HTTP library and `uuid` package:
You can use any text editor to write Go applications. We recommend using the lat
1. Copy and paste the code samples into your `index.js` file. **Make sure you update the key variable with the value from your Azure portal Translator instance**.
-1. Once you've added the code sample to your application, run your program:
+1. Once you add the code sample to your application, run your program:
1. Navigate to your application directory (translator-text-app).
You can use any text editor to write Go applications. We recommend using the lat
### [Python](#tab/python)
-1. If you haven't done so already, install the latest version of [Python 3.x](https://www.python.org/downloads/). The Python installer package (pip) is included with the Python installation.
+1. If you don't have it in your dev environment, install the latest version of [Python 3.x](https://www.python.org/downloads/). The Python installer package (pip) is included with the Python installation.
> [!TIP] >
You can use any text editor to write Go applications. We recommend using the lat
1. Add the following code sample to your `text-translator.py` file. **Make sure you update the key with one of the values from your Azure portal Translator instance**.
-1. Once you've added a desired code sample to your application, build and run your program:
+1. Once you add a desired code sample to your application, build and run your program:
1. Navigate to your **text-translator.py** file.
After a successful call, you should see the following response. Let's examine th
## Dictionary examples (translations in context)
-After you've performed a dictionary lookup, pass the source and translation text to the `dictionary/examples` endpoint, to get a list of examples that show both terms in the context of a sentence or phrase. Building on the previous example, you use the `normalizedText` and `normalizedTarget` from the dictionary lookup response as `text` and `translation` respectively. The source language (`from`) and output target (`to`) parameters are required.
+After you perform a dictionary lookup, pass the source and translation text to the `dictionary/examples` endpoint, to get a list of examples that show both terms in the context of a sentence or phrase. Building on the previous example, you use the `normalizedText` and `normalizedTarget` from the dictionary lookup response as `text` and `translation` respectively. The source language (`from`) and output target (`to`) parameters are required.
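Before the language-specific samples, here's a minimal Python sketch of the call described above. The key, region, and the `fly`/`volar` word pair are illustrative placeholders, not values from this article.

```python
import requests

# Placeholders: substitute your own resource key and region.
key = "<YOUR-TRANSLATOR-KEY>"
region = "<YOUR-RESOURCE-REGION>"

url = "https://api.cognitive.microsofttranslator.com/dictionary/examples"
params = {"api-version": "3.0", "from": "en", "to": "es"}
headers = {
    "Ocp-Apim-Subscription-Key": key,
    "Ocp-Apim-Subscription-Region": region,
    "Content-Type": "application/json",
}
# text/translation come from normalizedText/normalizedTarget in a prior
# dictionary lookup; "fly"/"volar" is an illustrative pair.
body = [{"text": "fly", "translation": "volar"}]

response = requests.post(url, params=params, headers=headers, json=body)
for example in response.json()[0]["examples"]:
    print(example["sourcePrefix"] + example["sourceTerm"] + example["sourceSuffix"])
```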
### [C#](#tab/csharp)
After a successful call, you should see the following response. For more informa
| 200 | OK | The request was successful. | | 400 | Bad Request | A required parameter is missing, empty, or null. Or, the value passed to either a required or optional parameter is invalid. A common issue is a header that is too long. | | 401 | Unauthorized | The request isn't authorized. Check to make sure your key or token is valid and in the correct region. *See also* [Authentication](reference/v3-0-reference.md#authentication).|
-| 429 | Too Many Requests | You've exceeded the quota or rate of requests allowed for your subscription. |
+| 429 | Too Many Requests | You exceeded the quota or rate of requests allowed for your subscription. |
| 502 | Bad Gateway | Network or server-side issue. May also indicate invalid headers. | ### Java users
-If you're encountering connection issues, it may be that your TLS/SSL certificate has expired. To resolve this issue, install the [DigiCertGlobalRootG2.crt](http://cacerts.digicert.com/DigiCertGlobalRootG2.crt) to your private store.
+If you're encountering connection issues, it may be that your TLS/SSL certificate is expired. To resolve this issue, install the [DigiCertGlobalRootG2.crt](http://cacerts.digicert.com/DigiCertGlobalRootG2.crt) to your private store.
## Next steps
ai-services Word Alignment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/word-alignment.md
Previously updated : 07/18/2023 Last updated : 07/10/2024
Alignment is returned as a string value of the following format for every word o
Example alignment string: "0:0-7:10 1:2-11:20 3:4-0:3 3:4-4:6 5:5-21:21".
-In other words, the colon separates start and end index, the dash separates the languages, and space separates the words. One word may align with zero, one, or multiple words in the other language, and the aligned words may be non-contiguous. When no alignment information is available, the Alignment element will be empty. The method returns no error in that case.
+In other words, the colon separates start and end index, the dash separates the languages, and space separates the words. One word may align with zero, one, or multiple words in the other language, and the aligned words may be noncontiguous. When no alignment information is available, the Alignment element is empty. The method returns no error in that case.
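A minimal Python sketch that parses the example alignment string above into index pairs, assuming the format described in this section:

```python
# Parse the alignment string format described above:
# "srcStart:srcEnd-tgtStart:tgtEnd" pairs separated by spaces.
alignment = "0:0-7:10 1:2-11:20 3:4-0:3 3:4-4:6 5:5-21:21"

pairs = []
for mapping in alignment.split():
    source_span, target_span = mapping.split("-")
    src_start, src_end = (int(i) for i in source_span.split(":"))
    tgt_start, tgt_end = (int(i) for i in target_span.split(":"))
    pairs.append(((src_start, src_end), (tgt_start, tgt_end)))

print(pairs)
# [((0, 0), (7, 10)), ((1, 2), (11, 20)), ((3, 4), (0, 3)), ((3, 4), (4, 6)), ((5, 5), (21, 21))]
```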
## Restrictions Alignment is only returned for a subset of the language pairs at this point:+ * from English to any other language; * from any other language to English except for Chinese Simplified, Chinese Traditional, and Latvian to English * from Japanese to Korean or from Korean to Japanese
-You will not receive alignment information if the sentence is a canned translation. Example of a canned translation is "This is a test", "I love you", and other high frequency sentences.
+You don't receive alignment information if the sentence is a canned translation. Examples of canned translations are `This is a test`, `I love you`, and other high-frequency sentences.
## Example
ai-studio Evaluation Metrics Built In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/evaluation-metrics-built-in.md
We support the following AI-Assisted metrics for the above task types:
## Risk and safety metrics
-The risk and safety metrics draw on insights gained from our previous Large Language Model projects such as GitHub Copilot and Bing. This ensures a comprehensive approach to evaluating generated responses for risk and safety severity scores. These metrics are generated through our safety evaluation service, which employs a set of LLMs. Each model is tasked with assessing specific risks that could be present in the response (for example, sexual content, violent content, etc.). These models are provided with risk definitions and severity scales, and they annotate generated conversations accordingly. Currently, we calculate a ΓÇ£defect rateΓÇ¥ for the risk and safety metrics below. For each of these metrics, the service measures whether these types of content were detected and at what severity level. Each of the four types has three severity levels (Very low, Low, Medium, High). Users specify a threshold of tolerance, and the defect rates are produced by our service correspond to the number of instances that were generated at and above each threshold level.
+The risk and safety metrics draw on insights gained from our previous Large Language Model projects such as GitHub Copilot and Bing. This approach ensures a comprehensive evaluation of generated responses for risk and safety severity scores. These metrics are generated through our safety evaluation service, which employs a set of LLMs. Each model is tasked with assessing specific risks that could be present in the response (for example, sexual content, violent content, etc.). These models are provided with risk definitions and severity scales, and they annotate generated conversations accordingly. Currently, we calculate a "defect rate" for the risk and safety metrics below. For each of these metrics, the service measures whether these types of content were detected and at what severity level. Each of the four types has four severity levels (Very low, Low, Medium, High). Users specify a threshold of tolerance, and the defect rates produced by our service correspond to the number of instances that were generated at and above each threshold level.
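To make the defect rate concrete, the following Python sketch shows one possible calculation from per-instance severity annotations and a tolerance threshold. The severity ordering and sample data are illustrative assumptions, not output from the safety evaluation service.

```python
# Illustrative only: severity labels and sample annotations are assumptions,
# not output from the Azure AI safety evaluation service.
SEVERITY_ORDER = {"Very low": 0, "Low": 1, "Medium": 2, "High": 3}

def defect_rate(annotations, threshold="Medium"):
    """Fraction of instances annotated at or above the tolerance threshold."""
    cutoff = SEVERITY_ORDER[threshold]
    defects = sum(1 for severity in annotations if SEVERITY_ORDER[severity] >= cutoff)
    return defects / len(annotations)

sample = ["Very low", "Low", "Medium", "Very low", "High"]
print(defect_rate(sample, threshold="Medium"))  # 0.4
```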
Types of content:
ai-studio Deploy Models Llama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-llama.md
For reference about how to invoke Llama models deployed to managed compute, see
##### More inference examples
+# [Meta Llama 3](#tab/llama-three)
+
+| **Package** | **Sample Notebook** |
+|-|-|
+| OpenAI SDK (experimental) | [openaisdk.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/meta-llama3/openaisdk.ipynb) |
+| LangChain | [langchain.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/meta-llama3/langchain.ipynb) |
+| WebRequests | [webrequests.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/meta-llama3/webrequests.ipynb) |
+| LiteLLM SDK | [litellm.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/meta-llama3/litellm.ipynb) |
+
+# [Meta Llama 2](#tab/llama-two)
+ | **Package** | **Sample Notebook** | |-|-|
-| CLI using CURL and Python web requests - Command R | [command-r.ipynb](https://aka.ms/samples/cohere-command-r/webrequests)|
-| CLI using CURL and Python web requests - Command R+ | [command-r-plus.ipynb](https://aka.ms/samples/cohere-command-r-plus/webrequests)|
-| OpenAI SDK (experimental) | [openaisdk.ipynb](https://aka.ms/samples/cohere-command/openaisdk) |
-| LangChain | [langchain.ipynb](https://aka.ms/samples/cohere/langchain) |
-| Cohere SDK | [cohere-sdk.ipynb](https://aka.ms/samples/cohere-python-sdk) |
-| LiteLLM SDK | [litellm.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/cohere/litellm.ipynb) |
+| OpenAI SDK (experimental) | [openaisdk.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/llama2/openaisdk.ipynb) |
+| LangChain | [langchain.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/llama2/langchain.ipynb) |
+| WebRequests | [webrequests.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/llama2/webrequests.ipynb) |
+| LiteLLM SDK | [litellm.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/llama2/litellm.ipynb) |
++ ## Cost and quotas
ai-studio Model Catalog Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/model-catalog-overview.md
Features | Managed compute | serverless API (pay-as-you-go)
Deployment experience and billing | Model weights are deployed to dedicated Virtual Machines with Managed Online Endpoints. The managed online endpoint, which can have one or more deployments, makes available a REST API for inference. You're billed for the Virtual Machine core hours used by the deployments. | Access to models is through a deployment that provisions an API to access the model. The API provides access to the model hosted and managed by Microsoft, for inference. This mode of access is referred to as "Models as a Service". You're billed for inputs and outputs to the APIs, typically in tokens; pricing information is provided before you deploy. | API authentication | Keys and Microsoft Entra ID authentication.| Keys only. Content safety | Use Azure Content Safety service APIs. | Azure AI Content Safety filters are available integrated with inference APIs. Azure AI Content Safety filters may be billed separately.
-Network isolation | Configure Managed Network. [Learn more.]( configure-managed-network.md) |
+Network isolation | [Configure managed networks for Azure AI Studio hubs.](configure-managed-network.md) | MaaS endpoint will follow your hub's public network access (PNA) flag setting. For more information, see the [Network isolation for models deployed via Serverless APIs](#network-isolation-for-models-deployed-via-serverless-apis) section.
Model | Managed compute | Serverless API (pay-as-you-go) --|--|--
Phi-3-mini-128k-instruct <br> Phi-3-medium-4k-instruct <br> Phi-3-medium-128k-in
<!-- docutune:enable -->
-### Content safety for models deployed via Serverless API
+### Content safety for models deployed via Serverless APIs
[!INCLUDE [Feature preview](~/reusable-content/ce-skilling/azure/includes/ai-studio/includes/feature-preview.md)] Azure AI Studio implements a default configuration of [Azure AI Content Safety](../../ai-services/content-safety/overview.md) text moderation filters for harmful content (hate, self-harm, sexual, and violence) in language models deployed with MaaS. To learn more about content filtering (preview), see [harm categories in Azure AI Content Safety](../../ai-services/content-safety/concepts/harm-categories.md). Content filtering (preview) occurs synchronously as the service processes prompts to generate content, and you may be billed separately as per [AACS pricing](https://azure.microsoft.com/pricing/details/cognitive-services/content-safety/) for such use. You can disable content filtering for individual serverless endpoints when you first deploy a language model or in the deployment details page by clicking the content filtering toggle. You may be at higher risk of exposing users to harmful content if you turn off content filters.
+### Network isolation for models deployed via Serverless APIs
+Endpoints for models deployed as Serverless APIs follow the public network access (PNA) flag setting of the AI Studio hub that contains the project in which the deployment exists. To secure your MaaS endpoint, disable the PNA flag on your AI Studio hub. You can secure inbound communication from a client to your endpoint by using a private endpoint for the hub.
-## Next steps
+To set the PNA flag for the Azure AI hub:
+
+* Go to the [Azure portal](https://ms.portal.azure.com/).
+* Search for the resource group to which the hub belongs, and select your Azure AI hub from the resources listed for this resource group.
+* On the hub Overview page, use the left navigation pane to go to **Settings** > **Networking**.
+* Under the __Public access__ tab, you can configure settings for the public network access flag.
+* Save your changes. Your changes might take up to five minutes to propagate.
+
+#### Limitations
+
+* If you have an AI Studio hub with a private endpoint created before July 11, 2024, new MaaS endpoints added to projects in this hub won't follow the networking configuration of the hub. Instead, you need to create a new private endpoint for the hub and create new serverless API deployments in the project so that the new deployments can follow the hub's networking configuration.
+* If you have an AI studio hub with MaaS deployments created before July 11, 2024, and you enable a private endpoint on this hub, the existing MaaS deployments won't follow the hub's networking configuration. For serverless API deployments in the hub to follow the hub's networking configuration, you need to create the deployments again.
+* Currently [On Your Data](#rag-with-models-deployed-as-serverless-apis) support isn't available for MaaS deployments in private hubs, since private hubs have the PNA flag disabled.
+* Any network configuration change (for example, enabling or disabling the PNA flag) might take up to five minutes to propagate.
+
+## Next step
- [Explore Azure AI foundation models in Azure AI Studio](models-foundation-azure-ai.md)
aks Advanced Container Networking Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/advanced-container-networking-services-overview.md
Advanced Container Networking Services is a suite of services built to significa
With Advanced Container Networking Services, the focus is on delivering a seamless and integrated experience that empowers you to maintain robust security postures, ensure comprehensive compliance and gain deep insights into your network traffic and application performance. This ensures that your containerized applications are not only secure and compliant but also meet or exceed your performance and reliability goals, allowing you to confidently manage and scale your infrastructure.
-> [!NOTE]
-> Advanced Container Networking Services is only available for clusters running Kubernetes 1.29 or higher.
- ## What is included in Advanced Container Networking Services? Advanced Network Observability is the inaugural feature of the Advanced Container Networking Services suite, bringing the power of Hubble's control plane to both Cilium and non-Cilium Linux data planes. While Advanced Network Observability is the foundation of the Advanced Container Networking Services suite, the feature set will evolve over time, offering even more insights and providing new and powerful ways to manage your AKS networks.
aks Api Server Vnet Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/api-server-vnet-integration.md
API Server VNet Integration is available in all global Azure regions.
## Create an AKS cluster with API Server VNet Integration using managed VNet
-You can configure your AKS clusters with API Server VNet Integration in managed VNet or bring-your-own VNet mode. You can create the as public clusters (with API server access available via a public IP) or private clusters (where the API server is only accessible via private VNet connectivity). You can also toggle between a public and private state without redeploying your cluster.
+You can configure your AKS clusters with API Server VNet Integration in managed VNet or bring-your-own VNet mode. You can create them as public clusters (with API server access available via a public IP) or private clusters (where the API server is only accessible via private VNet connectivity). You can also toggle between a public and private state without redeploying your cluster.
### Create a resource group
aks Auto Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/auto-upgrade-cluster.md
AKS also initiates auto-upgrades for unsupported clusters. When a cluster in an
If you're using cluster auto-upgrade, you can no longer upgrade the control plane first, and then upgrade the individual node pools. Cluster auto-upgrade always upgrades the control plane and the node pools together. You can't upgrade the control plane only. Running the `az aks upgrade --control-plane-only` command raises the following error: `NotAllAgentPoolOrchestratorVersionSpecifiedAndUnchanged: Using managed cluster api, all Agent pools' OrchestratorVersion must be all specified or all unspecified. If all specified, they must be stay unchanged or the same with control plane.`
-If using the `node-image` cluster auto-upgrade channel or the `NodeImage` node image auto-upgrade channel, Linux [unattended upgrades][unattended-upgrades] are disabled by default.
+If using the `node-image` cluster auto-upgrade channel (legacy, not recommended) or the `NodeImage` node image auto-upgrade channel, Linux [unattended upgrades][unattended-upgrades] are disabled by default.
## Cluster auto-upgrade channels
The following upgrade channels are available:
| `patch`| automatically upgrades the cluster to the latest supported patch version when it becomes available while keeping the minor version the same.| For example, if a cluster runs version *1.17.7*, and versions *1.17.9*, *1.18.4*, *1.18.6*, and *1.19.1* are available, the cluster upgrades to *1.17.9*.| | `stable`| automatically upgrades the cluster to the latest supported patch release on minor version *N-1*, where *N* is the latest supported minor version.| For example, if a cluster runs version *1.17.7* and versions *1.17.9*, *1.18.4*, *1.18.6*, and *1.19.1* are available, the cluster upgrades to *1.18.6*.| | `rapid`| automatically upgrades the cluster to the latest supported patch release on the latest supported minor version.| In cases where the cluster's Kubernetes version is an *N-2* minor version, where *N* is the latest supported minor version, the cluster first upgrades to the latest supported patch version on *N-1* minor version. For example, if a cluster runs version *1.17.7* and versions *1.17.9*, *1.18.4*, *1.18.6*, and *1.19.1* are available, the cluster first upgrades to *1.18.6*, then upgrades to *1.19.1*.|
-| `node-image`| automatically upgrades the node image to the latest version available.| Microsoft provides patches and new images for image nodes frequently (usually weekly), but your running nodes don't get the new images unless you do a node image upgrade. Turning on the node-image channel automatically updates your node images whenever a new version is available. If you use this channel, Linux [unattended upgrades] are disabled by default. Node image upgrades work on patch versions that are deprecated, so long as the minor Kubernetes version is still supported.|
+| `node-image` (legacy)| automatically upgrades the node image to the latest version available.| Microsoft provides patches and new node images frequently (usually weekly), but your running nodes don't get the new images unless you do a node image upgrade. Turning on the node-image channel automatically updates your node images whenever a new version is available. If you use this channel, Linux [unattended upgrades] are disabled by default. Node image upgrades work on patch versions that are deprecated, so long as the minor Kubernetes version is still supported. This channel is no longer recommended and is set to be deprecated in the future. For an option that can automatically upgrade node images, see the `NodeImage` channel in [node image auto-upgrade][node-image-auto-upgrade]. |
> [!NOTE] >
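For reference, a cluster auto-upgrade channel can be selected with the Azure CLI; a minimal sketch (resource names and channel value are examples):

```azurecli
# Sketch only: enable the stable cluster auto-upgrade channel on an existing cluster.
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --auto-upgrade-channel stable
```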
aks Auto Upgrade Node Os Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/auto-upgrade-node-os-image.md
For more information on Planned Maintenance, see [Use Planned Maintenance to sch
## Node OS auto-upgrades FAQ
-* How can I check the current nodeOsUpgradeChannel value on a cluster?
+### How can I check the current nodeOsUpgradeChannel value on a cluster?
Run the `az aks show` command and check the "autoUpgradeProfile" to determine what value the `nodeOsUpgradeChannel` is set to:
Run the `az aks show` command and check the "autoUpgradeProfile" to determine wh
az aks show --resource-group myResourceGroup --name myAKSCluster --query "autoUpgradeProfile" ```
-* How can I monitor the status of node OS auto-upgrades?
+### How can I monitor the status of node OS auto-upgrades?
To view the status of your node OS auto upgrades, look up [activity logs][monitor-aks] on your cluster. You can also look up specific upgrade-related events as mentioned in [Upgrade an AKS cluster][aks-upgrade]. AKS also emits upgrade-related Event Grid events. To learn more, see [AKS as an Event Grid source][aks-eventgrid].
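For example, one way to scan for recent upgrade-related operations is to query the activity log with the Azure CLI; a sketch (the resource group name is a placeholder):

```azurecli
# Sketch only: list the past week of activity-log entries for the cluster's
# resource group and show the operation, status, and timestamp for each entry.
az monitor activity-log list \
    --resource-group myResourceGroup \
    --offset 7d \
    --query "[].{operation:operationName.localizedValue, status:status.value, time:eventTimestamp}" \
    --output table
```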
-* Can I change the node OS auto-upgrade channel value if my cluster auto-upgrade channel is set to `node-image` ?
+### Can I change the node OS auto-upgrade channel value if my cluster auto-upgrade channel is set to `node-image`?
No. Currently, when you set the [cluster auto-upgrade channel][Autoupgrade] to `node-image`, it also automatically sets the node OS auto-upgrade channel to `NodeImage`. You can't change the node OS auto-upgrade channel value if your cluster auto-upgrade channel is `node-image`. In order to be able to change the node OS auto-upgrade channel values, make sure the [cluster auto-upgrade channel][Autoupgrade] isn't `node-image`.
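For illustration, a sketch of moving the cluster off the `node-image` channel and then setting the node OS channel independently (resource names and channel values are examples):

```azurecli
# Sketch only: switch the cluster auto-upgrade channel away from node-image,
# then set the node OS auto-upgrade channel on its own.
az aks update --resource-group myResourceGroup --name myAKSCluster --auto-upgrade-channel patch
az aks update --resource-group myResourceGroup --name myAKSCluster --node-os-upgrade-channel SecurityPatch
```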
- * Why is `SecurityPatch` recommended over `Unmanaged` channel?
+### Why is `SecurityPatch` recommended over `Unmanaged` channel?
On the `Unmanaged` channel, AKS has no control over how and when the security updates are delivered. With `SecurityPatch`, the security updates are fully tested and follow safe deployment practices. `SecurityPatch` also honors maintenance windows. For more details, see [Increased security and resiliency of Canonical workloads on Azure][Blog].
-* Does `SecurityPatch` always lead to a reimage of my nodes?
+### Does `SecurityPatch` always lead to a reimage of my nodes?
AKS limits reimages to only when absolutely necessary, such as when certain kernel packages require a reimage to be fully applied. `SecurityPatch` is designed to minimize disruptions as much as possible. If AKS decides reimaging nodes isn't necessary, it patches nodes live without draining pods, and no VHD update is performed in such cases.
- * How do I know if a `SecurityPatch` or `NodeImage` upgrade is applied on my node?
+### How do I know if a `SecurityPatch` or `NodeImage` upgrade is applied on my node?
Run the following command to obtain node labels:
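A minimal sketch of inspecting node labels (the exact label names to check, such as a node image or security patch version label, depend on the channel in use):

```bash
# Sketch only: list all nodes with their labels and inspect the image/patch
# version labels, for example kubernetes.azure.com/node-image-version.
kubectl get nodes --show-labels
```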
aks Howto Deploy Java Liberty App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-liberty-app.md
You use these values later in this article. The outputs list several other usefu
[!INCLUDE [create-azure-sql-database](includes/jakartaee/create-azure-sql-database.md)]
-Create an environment variable in your shell for the resource group name for the database:
+Then, use the following command to create an environment variable in your shell for the resource group name for the database:
### [Bash](#tab/in-bash)
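A minimal sketch of what that command might look like (the variable name is an assumption; substitute the resource group that holds your Azure SQL Database):

```bash
# Sketch only: DB_RESOURCE_GROUP_NAME is an assumed variable name; replace the
# placeholder with the resource group of the database created earlier.
export DB_RESOURCE_GROUP_NAME=<db-resource-group-name>
```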
aks Howto Deploy Java Wls App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-wls-app.md
description: Shows how to quickly stand up WebLogic Server on Azure Kubernetes S
Previously updated : 02/09/2024 Last updated : 07/10/2024
This article demonstrates how to: -- Run your Java, Java EE, or Jakarta EE on Oracle WebLogic Server (WLS).-- Stand up a WebLogic Server cluster using the Azure Marketplace offer.-- Build the application Docker image to serve as auxiliary image to provide WebLogic Deploy Tooling (WDT) models and applications.-- Deploy the containerized application to the existing WebLogic Server cluster on AKS with connection to Microsoft Azure SQL.
+- Run your Java application on Oracle WebLogic Server (WLS).
+- Stand up a WebLogic Server cluster on AKS using an Azure Marketplace offer.
+- Build an application Docker image that includes WebLogic Deploy Tooling (WDT) models.
+- Deploy the containerized application to the WebLogic Server cluster on AKS with connection to Microsoft Azure SQL.
-This article uses the Azure Marketplace offer for WebLogic Server to accelerate your journey to AKS. The offer automatically provisions several Azure resources, including the following resources:
+This article uses the [Azure Marketplace offer for WebLogic Server](https://aka.ms/wlsaks) to accelerate your journey to AKS. The offer automatically provisions several Azure resources, including the following resources:
- An Azure Container Registry instance - An AKS cluster
This article uses the Azure Marketplace offer for WebLogic Server to accelerate
- A container image including the WebLogic runtime - A WebLogic Server cluster without an application
-Then, this article introduces building an auxiliary image step by step to update an existing WebLogic Server cluster. The auxiliary image provides application and WDT models.
+Then, the article introduces building an image to update the WebLogic Server cluster. The image provides the application and WDT models.
-For full automation, you can select your application and configure datasource connection from Azure portal before the offer deployment. To see the offer, visit the [Azure portal](https://aka.ms/wlsaks).
-
-For step-by-step guidance in setting up WebLogic Server on Azure Kubernetes Service, see the official documentation from Oracle at [Azure Kubernetes Service](https://oracle.github.io/weblogic-kubernetes-operator/samples/azure-kubernetes-service/).
+If you prefer a less automated approach to deploying WebLogic on AKS, see the step-by-step guidance included in the official documentation from Oracle for [Azure Kubernetes Service](https://oracle.github.io/weblogic-kubernetes-operator/samples/azure-kubernetes-service/).
If you're interested in providing feedback or working closely on your migration scenarios with the engineering team developing WebLogic on AKS solutions, fill out this short [survey on WebLogic migration](https://aka.ms/wls-on-azure-survey) and include your contact information. The team of program managers, architects, and engineers will promptly get in touch with you to initiate close collaboration.
If you're interested in providing feedback or working closely on your migration
- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)] - Ensure the Azure identity you use to sign in and complete this article has either the [Owner](/azure/role-based-access-control/built-in-roles#owner) role in the current subscription or the [Contributor](/azure/role-based-access-control/built-in-roles#contributor) and [User Access Administrator](/azure/role-based-access-control/built-in-roles#user-access-administrator) roles in the current subscription. For an overview of Azure roles, see [What is Azure role-based access control (Azure RBAC)?](/azure/role-based-access-control/overview) For details on the specific roles required by WLS on AKS, see [Azure built-in roles](/azure/role-based-access-control/built-in-roles).
- > [!NOTE]
- > These roles must be granted at the subscription level, not the resource group level.
- Have the credentials for an Oracle single sign-on (SSO) account. To create one, see [Create Your Oracle Account](https://aka.ms/wls-aks-create-sso-account). - Accept the license terms for WebLogic Server. - Visit the [Oracle Container Registry](https://container-registry.oracle.com/) and sign in. - If you have a support entitlement, select **Middleware**, then search for and select **weblogic_cpu**. - If you don't have a support entitlement from Oracle, select **Middleware**, then search for and select **weblogic**.
- > [!NOTE]
- > Get a support entitlement from Oracle before going to production. Failure to do so results in running insecure images that are not patched for critical security flaws. For more information on Oracle's critical patch updates, see [Critical Patch Updates, Security Alerts and Bulletins](https://www.oracle.com/security-alerts/) from Oracle.
- Accept the license agreement.-- Prepare a local machine with Unix-like operating system installed (for example, Ubuntu, Azure Linux, macOS, Windows Subsystem for Linux).
+ > [!NOTE]
+ > Get a support entitlement from Oracle before going to production. Failure to do so results in running insecure images that are not patched for critical security flaws. For more information on Oracle's critical patch updates, see [Critical Patch Updates, Security Alerts and Bulletins](https://www.oracle.com/security-alerts/) from Oracle.
+- Prepare a local machine with a Unix-like operating system installed - for example, Ubuntu, Azure Linux, macOS, or Windows Subsystem for Linux.
- [Azure CLI](/cli/azure). Use `az --version` to test whether az works. This document was tested with version 2.55.1. - [Docker](https://docs.docker.com/get-docker). This document was tested with Docker version 20.10.7. Use `docker info` to test whether Docker Daemon is running. - [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl). Use `kubectl version` to test whether kubectl works. This document was tested with version v1.21.2.
- - A Java JDK compatible with the version of WebLogic Server you intend to run. The article directs you to install a version of WebLogic Server that uses JDK 11. Azure recommends [Microsoft Build of OpenJDK](/java/openjdk/download). Ensure that your `JAVA_HOME` environment variable is set correctly in the shells in which you run the commands.
+ - A Java Development Kit (JDK) compatible with the version of WebLogic Server you intend to run. The article directs you to install a version of WebLogic Server that uses JDK 11. Ensure that your `JAVA_HOME` environment variable is set correctly in the shells in which you run the commands.
- [Maven](https://maven.apache.org/download.cgi) 3.5.0 or higher. - Ensure that you have the zip/unzip utility installed. Use `zip/unzip -v` to test whether `zip/unzip` works.-- All of the steps in this article, except for those involving Docker, can also be executed in the Azure Cloud Shell. To learn more about Azure Cloud Shell, see [What is Azure Cloud Shell?](/azure/cloud-shell/overview)
+ > [!NOTE]
+ > You can perform all the steps of this article in the Azure Cloud Shell, except for those involving Docker. To learn more about Azure Cloud Shell, see [What is Azure Cloud Shell?](/azure/cloud-shell/overview)
## Deploy WebLogic Server on AKS
-The steps in this section direct you to deploy WebLogic Server on AKS in the simplest possible way. WebLogic Server on AKS offers a broad and deep selection of Azure integrations. For more information, see [What are solutions for running Oracle WebLogic Server on the Azure Kubernetes Service?](/azure/virtual-machines/workloads/oracle/weblogic-aks)
- The following steps show you how to find the WebLogic Server on AKS offer and fill out the **Basics** pane.
-1. In the search bar at the top of the Azure portal, enter *weblogic*. In the auto-suggested search results, in the **Marketplace** section, select **WebLogic Server on AKS**.
+1. In the search bar at the top of the Azure portal, enter *weblogic*. In the autosuggested search results, in the **Marketplace** section, select **WebLogic Server on AKS**.
:::image type="content" source="media/howto-deploy-java-wls-app/marketplace-search-results.png" alt-text="Screenshot of the Azure portal that shows WebLogic Server in the search results." lightbox="media/howto-deploy-java-wls-app/marketplace-search-results.png"::: You can also go directly to the [WebLogic Server on AKS](https://aka.ms/wlsaks) offer. 1. On the offer page, select **Create**.
-1. On the **Basics** pane, ensure the value shown in the **Subscription** field is the same one that you logged into in Azure. Make sure you have the roles listed in the prerequisites section.
+1. On the **Basics** pane, ensure the value shown in the **Subscription** field is the same one that you logged into in Azure. Make sure you have the roles listed in the prerequisites section for the subscription.
:::image type="content" source="media/howto-deploy-java-wls-app/portal-start-experience.png" alt-text="Screenshot of the Azure portal that shows WebLogic Server on AKS." lightbox="media/howto-deploy-java-wls-app/portal-start-experience.png":::
The following steps show you how to find the WebLogic Server on AKS offer and fi
1. Under **Instance details**, select the region for the deployment. For a list of Azure regions where AKS is available, see [AKS region availability](https://azure.microsoft.com/global-infrastructure/services/?products=kubernetes-service). 1. Under **Credentials for WebLogic**, leave the default value for **Username for WebLogic Administrator**. 1. Fill in `wlsAksCluster2022` for the **Password for WebLogic Administrator**. Use the same value for the confirmation and **Password for WebLogic Model encryption** fields.
-1. Scroll to the bottom of the **Basics** pane and notice the helpful links for documentation, community support, and how to report problems.
1. Select **Next**. The following steps show you how to start the deployment process.
The following steps show you how to start the deployment process.
:::image type="content" source="media/howto-deploy-java-wls-app/configure-single-sign-on.png" alt-text="Screenshot of the Azure portal that shows the configured SSO pane." lightbox="media/howto-deploy-java-wls-app/configure-single-sign-on.png":::
-1. Follow the steps in the info box starting with **Before moving forward, you must accept the Oracle Standard Terms and Restrictions.**
+1. Make sure you note the steps in the info box starting with **Before moving forward, you must accept the Oracle Standard Terms and Restrictions.**
-1. Depending on whether or not the Oracle SSO account has an Oracle support entitlement, select the appropriate option for **Select the type of WebLogic Server Images.**. If the account has a support entitlement, select **Patched WebLogic Server Images**. Otherwise, select **General WebLogic Server Images**.
+1. Depending on whether or not the Oracle SSO account has an Oracle support entitlement, select the appropriate option for **Select the type of WebLogic Server Images**. If the account has a support entitlement, select **Patched WebLogic Server Images**. Otherwise, select **General WebLogic Server Images**.
1. Leave the value in **Select desired combination of WebLogic Server...** at its default value. You have a broad range of choices for WebLogic Server, JDK, and OS version.
The following steps show you how to start the deployment process.
The following steps make it so the WebLogic Server admin console and the sample app are exposed to the public Internet with a built-in Application Gateway ingress add-on. For more information, see [What is Application Gateway Ingress Controller?](/azure/application-gateway/ingress-controller-overview) - 1. Select **Next** to see the **TLS/SSL** pane. 1. Select **Next** to see the **Load balancing** pane. 1. Next to **Load Balancing Options**, select **Application Gateway Ingress Controller**.+
+ :::image type="content" source="media/howto-deploy-java-wls-app/configure-load-balancing.png" alt-text="Screenshot of the Azure portal that shows the simplest possible load balancer configuration on the Create Oracle WebLogic Server on Azure Kubernetes Service page." lightbox="media/howto-deploy-java-wls-app/configure-load-balancing.png":::
+ 1. Under the **Application Gateway Ingress Controller**, you should see all fields prepopulated with the defaults for **Virtual network** and **Subnet**. Leave the default values. 1. For **Create ingress for Administration Console**, select **Yes**.
If you navigated away from the **Deployment is in progress** page, the following
1. In the navigation pane, select **Outputs**. This list shows the output values from the deployment. Useful information is included in the outputs. 1. The **adminConsoleExternalUrl** value is the fully qualified, public Internet visible link to the WebLogic Server admin console for this AKS cluster. Select the copy icon next to the field value to copy the link to your clipboard. Save this value aside for later. 1. The **clusterExternalUrl** value is the fully qualified, public Internet visible link to the sample app deployed in WebLogic Server on this AKS cluster. Select the copy icon next to the field value to copy the link to your clipboard. Save this value aside for later.
-1. The **shellCmdtoOutputWlsImageModelYaml** value is the base64 string of WDT model that built in the container image. Save this value aside for later.
-1. The **shellCmdtoOutputWlsImageProperties** value is base64 string of WDT model properties that built in the container image. Save this value aside for later.
-1. The **shellCmdtoConnectAks** value is the Azure CLI command to connect to this specific AKS cluster. This lets you use `kubectl` to administer the cluster.
+1. The **shellCmdtoOutputWlsImageModelYaml** value is the base64 string of the WDT model that is used to build the container image. Save this value aside for later.
+1. The **shellCmdtoOutputWlsImageProperties** value is the base64 string of the WDT model properties that is used to build the container image. Save this value aside for later.
+1. The **shellCmdtoConnectAks** value is the Azure CLI command to connect to this specific AKS cluster.
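For illustration, the **shellCmdtoConnectAks** value typically resembles the following sketch (names are placeholders; use the exact command from the deployment outputs):

```azurecli
# Sketch only: merge credentials for the new AKS cluster into your kubeconfig
# so that kubectl targets this cluster.
az aks get-credentials \
    --resource-group <resource-group-name> \
    --name <aks-cluster-name> \
    --overwrite-existing
```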
The other values in the outputs are beyond the scope of this article, but are explained in detail in the [WebLogic on AKS user guide](https://aka.ms/wls-aks-docs).
The other values in the outputs are beyond the scope of this article, but are ex
[!INCLUDE [create-azure-sql-database](includes/jakartaee/create-azure-sql-database.md)]
-2. Create a schema for the sample application. Follow [Query the database](/azure/azure-sql/database/single-database-create-quickstart#query-the-database) to open the **Query editor** pane. Enter and run the following query:
+Then, create a schema for the sample application by using the following steps:
+
+1. Open the **Query editor** pane by following the steps in the [Query the database](/azure/azure-sql/database/single-database-create-quickstart#query-the-database) section of [Quickstart: Create a single database - Azure SQL Database](/azure/azure-sql/database/single-database-create-quickstart).
+
+1. Enter and run the following query:
```sql CREATE TABLE COFFEE (ID NUMERIC(19) NOT NULL, NAME VARCHAR(255) NULL, PRICE FLOAT(32) NULL, PRIMARY KEY (ID));
The other values in the outputs are beyond the scope of this article, but are ex
INSERT INTO SEQUENCE VALUES ('SEQ_GEN',0); ```
- After a successful run, you should see the message **Query succeeded: Affected rows: 0**. If you don't see this message, troubleshoot and resolve the problem before proceeding.
+ After a successful run, you should see the message **Query succeeded: Affected rows: 1**. If you don't see this message, troubleshoot and resolve the problem before proceeding.
The database, tables, AKS cluster, and WebLogic Server cluster are created. If you want, you can explore the admin console by opening a browser and navigating to the address of **adminConsoleExternalUrl**. Sign in with the values you entered during the WebLogic Server on AKS deployment.
The steps in this section show you how to build an auxiliary image. This image i
- The *Model in Image* model files - Your application-- The JDBC driver archive file
+- The Java Database Connectivity (JDBC) driver archive file
- The WebLogic Deploy Tooling installation An *auxiliary image* is a Docker container image containing your app and configuration. The WebLogic Kubernetes Operator combines your auxiliary image with the `domain.spec.image` in the AKS cluster that contains the WebLogic Server, JDK, and operating system. For more information about auxiliary images, see [Auxiliary images](https://oracle.github.io/weblogic-kubernetes-operator/managing-domains/model-in-image/auxiliary-images/) in the Oracle documentation.
Use the following steps to build the image:
=> => naming to docker.io/library/model-in-image:WLS-v1 0.2s ```
-1. If you have successfully created the image, then it should now be in your local machine's Docker repository. You can verify the image creation by using the following command:
+1. If you successfully created the image, then it should now be in your local machine's Docker repository. You can verify the image creation by using the following command:
```text docker images model-in-image:WLS-v1
Use the following steps to build the image:
model-in-image WLS-v1 76abc1afdcc6 2 hours ago 8.61MB ```
- After the image is created, it should have the WDT executables in */auxiliary/weblogic-deploy*, and WDT model, property, and archive files in */auxiliary/models*. Use the following command on the Docker image to verify this result:
+ After the image is created, it should have the WDT executables in */auxiliary/weblogic-deploy*, and WDT model, property, and archive files in */auxiliary/models*. Use the following command to verify the contents of the image:
```bash docker run -it --rm model-in-image:WLS-v1 find /auxiliary -maxdepth 2 -type f -print
Use the following steps to build the image:
### Apply the auxiliary image
-In the previous steps, you created the auxiliary image including models and WDT. Before you apply the auxiliary image to the WebLogic Server cluster, use the following steps to create the secret for the datasource URL, username, and password. The secret is used as part of the placeholder in the *dbmodel.yaml*.
+In the previous steps, you created the auxiliary image including models and WDT. Before you apply the auxiliary image to the WebLogic Server cluster, use the following steps to create the secret for the datasource URL, username, and password. The secret is used as part of the placeholder in *dbmodel.yaml*.
1. Connect to the AKS cluster by copying the **shellCmdtoConnectAks** value that you saved aside previously, pasting it into the Bash window, then running the command. The command should look similar to the following example:
In the previous steps, you created the auxiliary image including models and WDT.
Merged "<name>" as current context in /Users/<username>/.kube/config ```
-1. Use the following steps to get values for the variables shown in the following table. You use these values later to create the secret for the datasource connection.
+1. Use the following steps to get values for the variables shown in the following table. You use these values to create the secret for the datasource connection.
| Variable | Description | Example | ||--|--|
In the previous steps, you created the auxiliary image including models and WDT.
1. For `DB_PASSWORD`, use the value you entered when you created the database.
-1. Use the following commands to create the [Kubernetes Secret](https://kubernetes.io/docs/concepts/configuration/secret/). This article uses the secret name `sqlserver-secret` for the secret of the datasource connection. If you use a different name, make sure the value is the same as the one in *dbmodel.yaml*.
+1. Use the following commands to create the Kubernetes Secret. This article uses the secret name `sqlserver-secret` for the secret of the datasource connection. If you use a different name, make sure the value is the same as the one in *dbmodel.yaml*.
- In the following commands, be sure to set the variables `DB_CONNECTION_STRING`, `DB_USER`, and `DB_PASSWORD` correctly by replacing the placeholder examples with the values described in the previous steps. Be sure to enclose the value of the `DB_` variables in single quotes to prevent the shell from interfering with the values.
+ In the following commands, be sure to set the variables `DB_CONNECTION_STRING`, `DB_USER`, and `DB_PASSWORD` correctly by replacing the placeholder examples with the values described in the previous steps. To prevent the shell from interfering with them, enclose the value of the `DB_` variables in single quotes.
```bash export DB_CONNECTION_STRING='<example-jdbc:sqlserver://server-name.database.windows.net:1433;database=wlsaksquickstart0125>'
In the previous steps, you created the auxiliary image including models and WDT.
1. Apply the auxiliary image by patching the domain custom resource definition (CRD) using the `kubectl patch` command.
- The auxiliary image is defined in `spec.configuration.model.auxiliaryImages`, as shown in the following example. For more information, see [auxiliary images](https://oracle.github.io/weblogic-kubernetes-operator/managing-domains/model-in-image/auxiliary-images/).
-
+ The auxiliary image is defined in `spec.configuration.model.auxiliaryImages`, as shown in the following example.
+
```yaml spec: clusters:
Use the following steps to verify the functionality of the deployment by viewing
1. In the **Domain Structure** box, select **Deployments**.
-1. In the **Deployments** table, there should be one row. The name should be the same value as the `Application` value in your *appmodel.yaml* file. Select the name.
+1. In the **Deployments** table, there should be one row. The name should be the same value as the `Application` value in your *appmodel.yaml* file. Click on the name.
-1. In the **Settings** panel, select the **Testing** tab.
+1. Select the **Testing** tab.
1. Select **weblogic-cafe**.
Use the following steps to verify the functionality of the deployment by viewing
## Clean up resources
-To avoid Azure charges, you should clean up unnecessary resources. When you no longer need the cluster, use the [az group delete](/cli/azure/group#az-group-delete) command. The following command removes the resource group, container service, container registry, and all related resources:
+To avoid Azure charges, you should clean up unnecessary resources. When you no longer need the cluster, use the [az group delete](/cli/azure/group#az-group-delete) command. The following command removes the resource group, container service, container registry, database, and all related resources:
```azurecli az group delete --name <resource-group-name> --yes --no-wait
aks Intro Aks Automatic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/intro-aks-automatic.md
The following table provides a comparison of options that are available, preconf
### Application deployment, monitoring, and observability
-Application deployment can be streamlined using [automated deployments][automated-deployments] from source control, which creates Kubernetes manifest and generates CI/CD workflows. Additionally, the cluster is configured with monitoring tools such as Managed Prometheus for metrics, Managed Grafana for visualization and Container Insights for log collection.
+Application deployment can be streamlined using [automated deployments][automated-deployments] from source control, which creates Kubernetes manifest and generates CI/CD workflows. Additionally, the cluster is configured with monitoring tools such as Managed Prometheus for metrics, Managed Grafana for visualization, and Container Insights for log collection.
| Option | AKS Automatic | AKS Standard | | | | | | Application deployment | **Optional:** <ul><li>Use [automated deployments][automated-deployments] to containerize applications from source control, create Kubernetes manifests, and continuous integration/continuous deployment (CI/CD) workflows.</li><li>Create deployment pipelines using [GitHub Actions for Kubernetes][kubernetes-action].</li><li>Bring your own CI/CD pipeline.</li></ul> | **Optional:** <ul><li>Use [automated deployments][automated-deployments] to containerize applications from source control, create Kubernetes manifests, and continuous integration/continuous deployment (CI/CD) workflows.</li><li>Create deployment pipelines using [GitHub Actions for Kubernetes][kubernetes-action].</li><li>Bring your own CI/CD pipeline.</li></ul> |
-| Monitoring, logging, and visualization | **Default:** <ul><li>[Managed Prometheus][managed-prometheus] for metric collection</li><li>[Managed Grafana][managed-grafana] for visualization</li><li>[Container insights][container-insights] for log collection</li></ul> | **Optional:** <ul><li>[Managed Prometheus][managed-prometheus] for metric collection</li><li>[Managed Grafana][managed-grafana] for visualization</li><li>[Container insights][container-insights] for log collection</li></ul> |
+| Monitoring, logging, and visualization | **Default:** <ul><li>[Managed Prometheus][managed-prometheus] for metric collection when using Azure CLI or the Azure portal. </li><li>[Managed Grafana][managed-grafana] for visualization when using Azure CLI or the Azure portal.</li><li>[Container insights][container-insights] for log collection when using Azure CLI or the Azure portal.</li></ul> | **Optional:** <ul><li>[Managed Prometheus][managed-prometheus] for metric collection.</li><li>[Managed Grafana][managed-grafana] for visualization.</li><li>[Container insights][container-insights] for log collection.</li></ul> |
### Node management, scaling, and cluster operations
aks Istio About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/istio-about.md
This service mesh add-on uses and builds on top of open-source Istio. The add-on
## Limitations
-Istio-based service mesh add-on for AKS currently has the following limitations:
+Istio-based service mesh add-on for AKS has the following limitations:
* The add-on doesn't work on AKS clusters that are using [Open Service Mesh addon for AKS][open-service-mesh-about]. * The add-on doesn't work on AKS clusters with self-managed installations of Istio. * The add-on doesn't support adding pods associated with virtual nodes to be added under the mesh. * The add-on doesn't yet support egress gateways for outbound traffic control. * The add-on doesn't yet support the sidecar-less Ambient mode. Microsoft is currently contributing to Ambient workstream under Istio open source. Product integration for Ambient mode is on the roadmap and is being continuously evaluated as the Ambient workstream evolves. * The add-on doesn't yet support multi-cluster deployments.
-* Istio doesn't support Windows Server containers.
+* The add-on doesn't yet support Windows Server containers, because Windows Server containers aren't currently supported in open source Istio. You can track this feature request in the [open source Istio issue][istio-oss-windows-issue].
* Customization of mesh through the following custom resources is blocked for now - `ProxyConfig, WorkloadEntry, WorkloadGroup, Telemetry, IstioOperator, WasmPlugin, EnvoyFilter`.
-* For `EnvoyFilter`, the add-on only supports customization of Lua filters (`type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua`). Note that this EnvoyFilter is allowed but any issue arising from the Lua script itself is not supported (to learn more about our support policy and distinction between "allowed" and "supported" configurations, see [the following section][istio-meshconfig-support]). Other `EnvoyFilter` types are currently blocked. other `EnvoyFilter` types are currently blocked.
+* For `EnvoyFilter`, the add-on currently only supports filters of the Lua type (`type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua`). While this EnvoyFilter is allowed, any issue arising from the Lua script itself isn't supported. Other `EnvoyFilter` types are currently blocked.
* Gateway API for the Istio ingress gateway or for managing mesh traffic (GAMMA) isn't yet supported with the Istio add-on. Customizations such as ingress static IP address configuration are planned as part of the Gateway API implementation for the add-on in the future.
+## Feedback and feature asks
+
+To provide feedback or request features for the Istio add-on, create [issues with the 'service-mesh' label on the AKS GitHub repository][aks-github-service-mesh-issues].
+ ## Next steps * [Deploy Istio-based service mesh add-on][istio-deploy-addon]
Istio-based service mesh add-on for AKS currently has the following limitations:
[istio-ingress]: ./istio-deploy-ingress.md [istio-troubleshooting]: /troubleshoot/azure/azure-kubernetes/extensions/istio-add-on-general-troubleshooting [istio-meshconfig-support]: ./istio-meshconfig.md#allowed-supported-and-blocked-values- [istio-deploy-addon]: istio-deploy-addon.md+
+[istio-oss-windows-issue]: https://github.com/istio/istio/issues/27893
+[aks-github-service-mesh-issues]: https://github.com/Azure/AKS/issues?q=is%3Aopen+is%3Aissue+label%3Aservice-mesh
aks Istio Meshconfig https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/istio-meshconfig.md
This article walks through how to configure Istio-based service mesh add-on for
## Prerequisites
-This guide assumes you followed the [documentation][istio-deploy-addon] to enable the Istio add-on on an AKS cluster.
+This guide assumes you followed the [documentation][istio-deploy-add-on] to enable the Istio add-on on an AKS cluster.
## Set up configuration on cluster
This guide assumes you followed the [documentation][istio-deploy-addon] to enabl
The values under `defaultConfig` are mesh-wide settings applied for Envoy sidecar proxy. > [!CAUTION]
-> A default ConfigMap (for example, `istio-asm-1-18` for revision asm-1-18) is created in `aks-istio-system` namespace on the cluster when the Istio addon is enabled. However, this default ConfigMap gets reconciled by the managed Istio addon and thus users should NOT directly edit this ConfigMap. Instead users should create a revision specific Istio shared ConfigMap (for example `istio-shared-configmap-asm-1-18` for revision asm-1-18) in the aks-istio-system namespace, and then the Istio control plane will merge this with the default ConfigMap, with the default settings taking precedence.
+> A default ConfigMap (for example, `istio-asm-1-18` for revision asm-1-18) is created in the `aks-istio-system` namespace on the cluster when the Istio add-on is enabled. However, this default ConfigMap gets reconciled by the managed Istio add-on, so users should NOT directly edit this ConfigMap. Instead, users should create a revision-specific Istio shared ConfigMap (for example, `istio-shared-configmap-asm-1-18` for revision asm-1-18) in the `aks-istio-system` namespace. The Istio control plane then merges it with the default ConfigMap, with the default settings taking precedence.
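For illustration, a revision-specific shared ConfigMap can be created with `kubectl`; a minimal sketch (the revision and the single setting shown are examples only):

```bash
# Sketch only: create the shared ConfigMap for revision asm-1-18 with one
# mesh-wide setting; the managed control plane merges it with the default ConfigMap.
kubectl create configmap istio-shared-configmap-asm-1-18 \
    --namespace aks-istio-system \
    --from-literal=mesh='accessLogFile: /dev/stdout'
```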
### Mesh configuration and upgrades
After the upgrade is completed or rolled back, you can delete the ConfigMap of t
Fields in `MeshConfig` are classified into three categories: -- **Blocked**: Disallowed fields are blocked via addon managed admission webhooks. API server immediately publishes the error message to the user that the field is disallowed.
+- **Blocked**: Disallowed fields are blocked via add-on managed admission webhooks. The API server immediately returns an error message to the user indicating that the field is disallowed.
- **Supported**: Supported fields (for example, fields related to access logging) receive support from Azure support. - **Allowed**: These fields (such as proxyListenPort or proxyInboundListenPort) are allowed but they aren't covered by Azure support.
Mesh configuration and the list of allowed/supported fields are revision specifi
### MeshConfig
-| **Field** | **Supported** | **Notes** |
+Fields present in [open source MeshConfig reference documentation][istio-meshconfig] that are not covered in the following table are blocked. For example, `configSources` is blocked.
+
+| **Field** | **Supported/Allowed** | **Notes** |
|--|--|--|
-| proxyListenPort | false | - |
-| proxyInboundListenPort | false | - |
-| proxyHttpPort | false | - |
-| connectTimeout | false | Configurable in [DestinationRule](https://istio.io/latest/docs/reference/config/networking/destination-rule/#ConnectionPoolSettings-TCPSettings) |
-| tcpKeepAlive | false | Configurable in [DestinationRule](https://istio.io/latest/docs/reference/config/networking/destination-rule/#ConnectionPoolSettings-TCPSettings) |
-| defaultConfig | true | Used to configure [ProxyConfig](https://istio.io/latest/docs/reference/config/istio.mesh.v1alpha1/#ProxyConfig) |
-| outboundTrafficPolicy | true | Also configurable in [Sidecar CR](https://istio.io/latest/docs/reference/config/networking/sidecar/#OutboundTrafficPolicy) |
-| extensionProviders | false | - |
-| defaultProviders | false | - |
-| accessLogFile | true | - |
-| accessLogFormat | true | - |
-| accessLogEncoding | true | - |
-| enableTracing | true | - |
-| enableEnvoyAccessLogService | true | - |
-| disableEnvoyListenerLog | true | - |
-| trustDomain | false | - |
-| trustDomainAliases | false | - |
-| caCertificates | false | Configurable in [DestinationRule](https://istio.io/latest/docs/reference/config/networking/destination-rule/#ClientTLSSettings) |
-| defaultServiceExportTo | false | Configurable in [ServiceEntry](https://istio.io/latest/docs/reference/config/networking/service-entry/#ServiceEntry) |
-| defaultVirtualServiceExportTo | false | Configurable in [VirtualService](https://istio.io/latest/docs/reference/config/networking/virtual-service/#VirtualService) |
-| defaultDestinationRuleExportTo | false | Configurable in [DestinationRule](https://istio.io/latest/docs/reference/config/networking/destination-rule/#DestinationRule) |
-| localityLbSetting | false | Configurable in [DestinationRule](https://istio.io/latest/docs/reference/config/networking/destination-rule/#LoadBalancerSettings) |
-| dnsRefreshRate | false | - |
-| h2UpgradePolicy | false | Configurable in [DestinationRule](https://istio.io/latest/docs/reference/config/networking/destination-rule/#ConnectionPoolSettings-HTTPSettings) |
-| enablePrometheusMerge | true | - |
-| discoverySelectors | true | - |
-| pathNormalization | false | - |
-| defaultHttpRetryPolicy | false | Configurable in [VirtualService](https://istio.io/latest/docs/reference/config/networking/virtual-service/#HTTPRetry) |
-| serviceSettings | false | - |
-| meshMTLS | false | - |
-| tlsDefaults | false | - |
+| proxyListenPort | Allowed | - |
+| proxyInboundListenPort | Allowed | - |
+| proxyHttpPort | Allowed | - |
+| connectTimeout | Allowed | Configurable in [DestinationRule](https://istio.io/latest/docs/reference/config/networking/destination-rule/#ConnectionPoolSettings-TCPSettings) |
+| tcpKeepAlive | Allowed | Configurable in [DestinationRule](https://istio.io/latest/docs/reference/config/networking/destination-rule/#ConnectionPoolSettings-TCPSettings) |
+| defaultConfig | Supported | Used to configure [ProxyConfig](https://istio.io/latest/docs/reference/config/istio.mesh.v1alpha1/#ProxyConfig) |
+| outboundTrafficPolicy | Supported | Also configurable in [Sidecar CR](https://istio.io/latest/docs/reference/config/networking/sidecar/#OutboundTrafficPolicy) |
+| extensionProviders | Allowed | - |
+| defaultProviders | Allowed | - |
+| accessLogFile | Supported | This field addresses the generation of the access logs. For a managed experience on collection and querying of logs, refer to [Azure Monitor Container Insights on AKS][container-insights-docs] |
+| accessLogFormat | Supported | This field addresses the generation of the access logs. For a managed experience on collection and querying of logs, refer to [Azure Monitor Container Insights on AKS][container-insights-docs] |
+| accessLogEncoding | Supported | This field addresses the generation of the access logs. For a managed experience on collection and querying of logs, refer to [Azure Monitor Container Insights on AKS][container-insights-docs] |
+| enableTracing | Allowed | |
+| enableEnvoyAccessLogService | Supported | This field addresses the generation of the access logs. For a managed experience on collection and querying of logs, refer to [Azure Monitor Container Insights on AKS][container-insights-docs] |
+| disableEnvoyListenerLog | Supported | This field addresses the generation of the access logs. For a managed experience on collection and querying of logs, refer to [Azure Monitor Container Insights on AKS][container-insights-docs] |
+| trustDomain | Allowed | - |
+| trustDomainAliases | Allowed | - |
+| caCertificates | Allowed | Configurable in [DestinationRule](https://istio.io/latest/docs/reference/config/networking/destination-rule/#ClientTLSSettings) |
+| defaultServiceExportTo | Allowed | Configurable in [ServiceEntry](https://istio.io/latest/docs/reference/config/networking/service-entry/#ServiceEntry) |
+| defaultVirtualServiceExportTo | Allowed | Configurable in [VirtualService](https://istio.io/latest/docs/reference/config/networking/virtual-service/#VirtualService) |
+| defaultDestinationRuleExportTo | Allowed | Configurable in [DestinationRule](https://istio.io/latest/docs/reference/config/networking/destination-rule/#DestinationRule) |
+| localityLbSetting | Allowed | Configurable in [DestinationRule](https://istio.io/latest/docs/reference/config/networking/destination-rule/#LoadBalancerSettings) |
+| dnsRefreshRate | Allowed | - |
+| h2UpgradePolicy | Allowed | Configurable in [DestinationRule](https://istio.io/latest/docs/reference/config/networking/destination-rule/#ConnectionPoolSettings-HTTPSettings) |
+| enablePrometheusMerge | Allowed | - |
+| discoverySelectors | Supported | - |
+| pathNormalization | Allowed | - |
+| defaultHttpRetryPolicy | Allowed | Configurable in [VirtualService](https://istio.io/latest/docs/reference/config/networking/virtual-service/#HTTPRetry) |
+| serviceSettings | Allowed | - |
+| meshMTLS | Allowed | - |
+| tlsDefaults | Allowed | - |
### ProxyConfig (meshConfig.defaultConfig)
-| **Field** | **Supported** |
+Fields present in [open source MeshConfig reference documentation](https://istio.io/latest/docs/reference/config/istio.mesh.v1alpha1/#ProxyConfig) that are not covered in the following table are blocked.
+
+| **Field** | **Supported/Allowed** |
|--|--|
-| tracingServiceName | true |
-| drainDuration | true |
-| statsUdpAddress | false |
-| proxyAdminPort | false |
-| tracing | true |
-| concurrency | true |
-| envoyAccessLogService | true |
-| envoyMetricsService | true |
-| proxyMetadata | false |
-| statusPort | false |
-| extraStatTags | false |
-| proxyStatsMatcher | false |
-| terminationDrainDuration | true |
-| meshId | false |
-| holdApplicationUntilProxyStarts | true |
-| caCertificatesPem | false |
-| privateKeyProvider | false |
-
-Fields present in [open source MeshConfig reference documentation][istio-meshconfig] but not in the above table are blocked. For example, `configSources` is blocked.
+| tracingServiceName | Allowed |
+| drainDuration | Supported |
+| statsUdpAddress | Allowed |
+| proxyAdminPort | Allowed |
+| tracing | Allowed |
+| concurrency | Supported |
+| envoyAccessLogService | Allowed |
+| envoyMetricsService | Allowed |
+| proxyMetadata | Allowed |
+| statusPort | Allowed |
+| extraStatTags | Allowed |
+| proxyStatsMatcher | Allowed |
+| terminationDrainDuration | Supported |
+| meshId | Allowed |
+| holdApplicationUntilProxyStarts | Supported |
+| caCertificatesPem | Allowed |
+| privateKeyProvider | Allowed |
> [!CAUTION]
-> **Support scope of configurations:** Mesh configuration allows for extension providers such as self-managed instances of Zipkin or Apache Skywalking to be configured with the Istio addon. However, these extension providers are outside the support scope of the Istio addon. Any issues associated with extension tools are outside the support boundary of the Istio addon.
+> **Support scope of configurations:** Mesh configuration allows for extension providers such as self-managed instances of Zipkin or Apache Skywalking to be configured with the Istio add-on. However, these extension providers are outside the support scope of the Istio add-on. Any issues associated with extension tools are outside the support boundary of the Istio add-on.
## Common errors and troubleshooting tips
Fields present in [open source MeshConfig reference documentation][istio-meshcon
[istio-meshconfig]: https://istio.io/latest/docs/reference/config/istio.mesh.v1alpha1/ [istio-sidecar-race-condition]: https://istio.io/latest/docs/ops/common-problems/injection/#pod-or-containers-start-with-network-issues-if-istio-proxy-is-not-ready
-[istio-deploy-addon]: istio-deploy-addon.md
+[istio-deploy-add-on]: istio-deploy-addon.md
+[container-insights-docs]: ../azure-monitor/containers/container-insights-overview.md
aks Planned Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/planned-maintenance.md
The `default` option is meant exclusively for AKS weekly releases. You can switc
Planned maintenance windows are specified in Coordinated Universal Time (UTC).
-A `default` maintenance window has the following properties:
+A `default` maintenance window has the following legacy properties (no longer recommended):
|Name|Description|Default value| |--|--|--|
A `default` maintenance window has the following properties:
|`timeInWeek.hourSlots`|A list of hour-long time slots to perform maintenance on a particular day in a `default` configuration.|Not applicable| |`notAllowedTime`|A range of dates that maintenance can't run, determined by `start` and `end` child properties. This property is applicable only when you're creating the maintenance window by using a configuration file.|Not applicable|
-An `aksManagedAutoUpgradeSchedule` or `aksManagedNodeOSUpgradeSchedule` maintenance window has the following properties:
+> [!NOTE]
+> From the 2023-05-01 API version onwards, use the following properties for the `default` configuration.
+
+An `aksManagedAutoUpgradeSchedule` or `aksManagedNodeOSUpgradeSchedule` maintenance window, and a `default` configuration from the 2023-05-01 API version onwards, has the following properties:
|Name|Description|Default value| |--|--|--|
az aks maintenanceconfiguration delete --resource-group myResourceGroup --cluste
* I configured a maintenance window, but the upgrade didn't happen. Why?
- AKS auto-upgrade needs a certain amount of time to take the maintenance window into consideration. We recommend at least 24 hours between the creation or update of a maintenance configuration and the scheduled start time.
+ AKS auto-upgrade needs a certain amount of time, usually not more than 15 minutes, to take the maintenance window into consideration. We recommend at least 15 minutes between the creation or update of a maintenance configuration and the scheduled start time.
Also, ensure that your cluster is started when the planned maintenance window starts. If the cluster is stopped, its control plane is deallocated and no operations can be performed.
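For reference, an auto-upgrade maintenance window can be created ahead of time with the Azure CLI; a sketch (names, schedule, and times are examples only):

```azurecli
# Sketch only: create a weekly maintenance window for cluster auto-upgrades,
# starting Fridays at 00:00 UTC and lasting four hours.
az aks maintenanceconfiguration add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name aksManagedAutoUpgradeSchedule \
    --schedule-type Weekly \
    --day-of-week Friday \
    --interval-weeks 1 \
    --start-time 00:00 \
    --duration 4 \
    --utc-offset +00:00
```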
api-management Self Hosted Gateway Settings Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-settings-reference.md
This guidance helps you provide the required information to define how to authen
| Name | Description | Required | Default | Availability | | - | - | - | -| -|
-| Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Backend.Protocols.Tls13 | Indication whether or not SSL 3.0 is allowed towards the backend. Similar to [managing protocol ciphers in managed gateway](api-management-howto-manage-protocols-ciphers.md). | No | `true` | v2.0+ |
+| Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Backend.Protocols.Tls13 | Indication whether or not TLS 1.3 is allowed towards the backend. Similar to [managing protocol ciphers in managed gateway](api-management-howto-manage-protocols-ciphers.md). | No | `true` | v2.0+ |
| Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Backend.Protocols.Tls12 | Indication whether or not TLS 1.2 is allowed towards the backend. Similar to [managing protocol ciphers in managed gateway](api-management-howto-manage-protocols-ciphers.md). | No | `true` | v2.0+ | | Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Backend.Protocols.Tls11 | Indication whether or not TLS 1.1 is allowed towards the backend. Similar to [managing protocol ciphers in managed gateway](api-management-howto-manage-protocols-ciphers.md). | No | `false` | v2.0+ | | Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Backend.Protocols.Tls10 | Indication whether or not TLS 1.0 is allowed towards the backend. Similar to [managing protocol ciphers in managed gateway](api-management-howto-manage-protocols-ciphers.md). | No | `false` | v2.0+ |
app-service Configure Authentication Api Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-api-version.md
Title: Manage AuthN/AuthZ API versions description: Upgrade your App Service authentication API to V2 or pin it to a specific version, if needed. Previously updated : 02/17/2023 Last updated : 07/09/2023
There are two versions of the management API for App Service authentication. The
> [!WARNING] > Migration to V2 will disable management of the App Service Authentication/Authorization feature for your application through some clients, such as its existing experience in the Azure portal, Azure CLI, and Azure PowerShell. This cannot be reversed.
-The V2 API doesn't support creation or editing of Microsoft Account as a distinct provider as was done in V1. Rather, it uses the converged [Microsoft identity platform](../active-directory/develop/v2-overview.md) to sign-in users with both Microsoft Entra ID and personal Microsoft accounts. When switching to the V2 API, the V1 Microsoft Entra configuration is used to configure the Microsoft identity platform provider. The V1 Microsoft Account provider will be carried forward in the migration process and continue to operate as normal, but you should move to the newer Microsoft identity platform model. See [Support for Microsoft Account provider registrations](#support-for-microsoft-account-provider-registrations) to learn more.
+The V2 API doesn't support creation or editing of Microsoft Account as a distinct provider as was done in V1. Rather, it uses the converged [Microsoft identity platform](../active-directory/develop/v2-overview.md) to sign in users with both Microsoft Entra and personal Microsoft accounts. When switching to the V2 API, the V1 Microsoft Entra configuration is used to configure the Microsoft identity platform provider. The V1 Microsoft Account provider will be carried forward in the migration process and continue to operate as normal, but you should move to the newer Microsoft identity platform model. See [Support for Microsoft Account provider registrations](#support-for-microsoft-account-provider-registrations) to learn more.
The automated migration process will move provider secrets into application settings and then convert the rest of the configuration into the new format. To use the automatic migration:
The following steps will allow you to manually migrate the application to the V2
In the resulting JSON payload, make note of the secret value used for each provider you've configured:
- * Microsoft Entra ID: `clientSecret`
+ * Microsoft Entra: `clientSecret`
* Google: `googleClientSecret` * Facebook: `facebookAppSecret` * Twitter: `twitterConsumerSecret`
The following steps will allow you to manually migrate the application to the V2
1. Add a property to `authsettings.json` that points to the application setting name you created earlier for each provider:
- * Microsoft Entra ID: `clientSecretSettingName`
+ * Microsoft Entra: `clientSecretSettingName`
* Google: `googleClientSecretSettingName` * Facebook: `facebookAppSecretSettingName` * Twitter: `twitterConsumerSecretSettingName`
If your existing configuration contains a Microsoft Account provider and doesn't
1. Add a new URI that matches the one you just copied, except instead have it end in `/.auth/login/aad/callback`. This will allow the registration to be used by the App Service Authentication / Authorization configuration. 1. Navigate to the App Service Authentication / Authorization configuration for your app. 1. Collect the configuration for the Microsoft Account provider.
-1. Configure the Microsoft Entra provider using the "Advanced" management mode, supplying the client ID and client secret values you collected in the previous step. For the Issuer URL, use `<authentication-endpoint>/<tenant-id>/v2.0`, and replace *\<authentication-endpoint>* with the [authentication endpoint for your cloud environment](../active-directory/develop/authentication-national-cloud.md#azure-ad-authentication-endpoints) (e.g., "https://login.microsoftonline.com" for global Azure), also replacing *\<tenant-id>* with your **Directory (tenant) ID**.
+1. Configure the Microsoft Entra provider using the "Advanced" management mode, supplying the client ID and client secret values you collected in the previous step. For the Issuer URL, use `<authentication-endpoint>/<tenant-id>/v2.0`, and replace *\<authentication-endpoint>* with the [authentication endpoint for your cloud environment](../active-directory/develop/authentication-national-cloud.md#azure-ad-authentication-endpoints) (e.g., "https://login.microsoftonline.com" for global Microsoft Entra ID), also replacing *\<tenant-id>* with your **Directory (tenant) ID**.
1. Once you've saved the configuration, test the login flow by navigating in your browser to the `/.auth/login/aad` endpoint on your site and complete the sign-in flow. 1. At this point, you've successfully copied the configuration over, but the existing Microsoft Account provider configuration remains. Before you remove it, make sure that all parts of your app reference the Microsoft Entra provider through login links, etc. Verify that all parts of your app work as expected. 1. Once you've validated that things work against the Microsoft Entra provider, you may remove the Microsoft Account provider configuration.
app-service Side By Side Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/side-by-side-migrate.md
description: Learn how to migrate your App Service Environment v2 to App Service
Previously updated : 6/28/2024 Last updated : 7/11/2024 # Migration to App Service Environment v3 using the side-by-side migration feature
Side-by-side migration requires a three to six hour service window for App Servi
When this step completes, your application traffic is still going to your old App Service Environment v2 front ends and the inbound IP that was assigned to it. However, your apps are actually running on workers in your new App Service Environment v3.
+> [!NOTE]
+> Due to a known bug, web jobs might not start during the hybrid deployment step. If you use web jobs, this bug might cause app issues/downtime. Open a support case if you have any questions or concerns about this issue.
+>
+ ### Get the inbound IP address for your new App Service Environment v3 and update dependent resources The new inbound IP address is given so that you can set up new endpoints with services like [Traffic Manager](../../traffic-manager/traffic-manager-overview.md) or [Azure Front Door](../../frontdoor/front-door-overview.md) and update any of your private DNS zones. Don't move on to the next step until you make these changes. There's downtime if you don't update dependent resources with the new inbound IP. **It's your responsibility to update any and all resources that are impacted by the IP address change associated with the new App Service Environment v3. Don't move on to the next step until you've made all required updates.**
az rest --method get --uri "${ASE_ID}/configurations/networking?api-version=2022
### 5. Update dependent resources with new outbound IPs
-By using the new outbound IPs, update any of your resources or networking components to ensure that your new environment functions as intended after migration is started. It's your responsibility to make any necessary updates. The new outbound IPs are used once the App Service Environment v3 is created during the migration step.
+Using the new outbound IPs, update any of your resources or networking components to ensure that your new environment functions as intended after the migration starts. It's your responsibility to make any necessary updates. The new outbound IPs are used once the App Service Environment v3 is created during the migration step. For example, if you have a custom domain suffix with an Azure Key Vault and manage access restrictions with the key vault's firewall, you need to update that firewall to allow either just the new outbound IPs or the entire new subnet.
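As a hedged illustration of the key vault case, the following sketch allows one of the new outbound IPs through an existing key vault firewall by using the Azure CLI; the vault name and IP address are placeholders.

```azurecli
# Minimal sketch: allow a new outbound IP through an existing key vault firewall.
# Repeat for each new outbound IP, or allow the new App Service Environment v3
# subnet instead by using --vnet-name/--subnet with the same command group.
az keyvault network-rule add \
  --name <key-vault-name> \
  --ip-address <new-outbound-ip>
```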
### 6. Delegate your App Service Environment subnet
If you're using a system assigned managed identity for your custom domain suffix
After you complete all of the preceding steps, you can start the migration. Make sure that you understand the [implications of migration](#migrate-to-app-service-environment-v3).
-This step takes three to six hours complete. During that time, there's no application downtime. Scaling, deployments, and modifications to your existing App Service Environment are blocked during this step.
+This step takes three to six hours to complete. During that time, there's no application downtime if you followed the previous steps. Scaling, deployments, and modifications to your existing App Service Environment are blocked during this step.
+
+> [!NOTE]
+> Due to a known bug, web jobs might not start during the hybrid deployment step. If you use web jobs, this bug might cause application issues or downtime. Open a support case if you have any questions or concerns about this issue.
+>
Run the following command to start the migration:
app-service Overview Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-tls.md
Transport Layer Security (TLS) is a widely adopted security protocol designed to secure connections and communications between servers and clients. App Service allows customers to use TLS/SSL certificates to secure incoming requests to their web apps. App Service currently supports different sets of TLS features for customers to secure their web apps.
-## What TLS options are available in App Service?
+## Supported TLS versions in App Service
-For incoming requests to your web app, App Service supports TLS versions 1.0, 1.1, and 1.2. [In the next few months, App Service will begin supporting TLS version 1.3](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/upcoming-tls-1-3-on-azure-app-service-for-web-apps-functions-and/ba-p/3974138).
+For incoming requests to your web app, App Service supports TLS versions 1.0, 1.1, 1.2, and 1.3.
### Minimum TLS Version and SCM Minimum TLS Version App Service also allows you to set the minimum TLS version for incoming requests to your web app and to the SCM site. By default, the minimum TLS version for incoming requests to your web app and to the SCM site is 1.2 in both the portal and the API.
-## TLS 1.0 and 1.1
+### TLS 1.0 and 1.1
-TLS 1.0 and 1.1 are considered legacy protocols and are no longer considered secure. It's generally recommended for customers to use TLS 1.2 as the minimum TLS version, which is also the default.
+TLS 1.0 and 1.1 are legacy protocols and are no longer considered secure. It's generally recommended to use TLS 1.2 or above as the minimum TLS version. When you create a web app, the default minimum TLS version is TLS 1.2.
To ensure backward compatibility for TLS 1.0 and TLS 1.1, App Service continues to support TLS 1.0 and 1.1 for incoming requests to your web app. However, because the default minimum TLS version is TLS 1.2, you need to change the minimum TLS version configuration on your web app to TLS 1.0 or 1.1 so that those requests aren't rejected.
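One way to make that change is with the Azure CLI; the following is a minimal sketch, where the app name and resource group are placeholders and the same setting can also be changed in the portal.

```azurecli
# Minimal sketch: lower the minimum inbound TLS version so TLS 1.0/1.1 clients aren't rejected.
az webapp config set \
  --name <app-name> \
  --resource-group <resource-group> \
  --min-tls-version 1.0
```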
To ensure backward compatibility for TLS 1.0 and TLS 1.1, App Service will conti
> ## Next steps
-* [Secure a custom DNS name with a TLS/SSL binding](configure-ssl-bindings.md)
+* [Secure a custom DNS name with a TLS/SSL binding](configure-ssl-bindings.md)
application-gateway Understanding Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/understanding-pricing.md
The parameter with the highest utilization among these three parameters is used
#### Capacity Unit related to Instance Count <h4 id="instance-count"></h4>+ You can also pre-provision resources by specifying the **Instance Count**. Each instance guarantees a minimum of 10 capacity units in terms of processing capability. The same instance could potentially support more than 10 capacity units for different traffic patterns depending upon the capacity unit parameters. Manually defined scale and limits set for autoscaling (minimum or maximum) are set in terms of instance count. The manually set scale for instance count and the minimum instance count in autoscale config reserves 10 capacity units/instance. These reserved capacity units are billed as long as the application gateway is active regardless of the actual resource consumption. If actual consumption crosses the 10 capacity units/instance threshold, additional capacity units are billed under the variable component.
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/overview.md
Title: Overview of the Azure Connected System Center Virtual Machine Manager description: This article provides a detailed overview of the Azure Arc-enabled System Center Virtual Machine Manager. Previously updated : 06/27/2024 Last updated : 07/11/2024 ms.
Azure Arc-enabled SCVMM is currently supported in the following regions:
- Southeast Asia - Australia East
-### Resource bridge networking requirements
-
-The following firewall URL exceptions are needed for the Azure Arc resource bridge VM:
--
-In addition, SCVMM requires the following exception:
-
-| **Service** | **Port** | **URL** | **Direction** | **Notes**|
-| | | | | |
-| SCVMM Management Server | 443 | URL of the SCVMM management server. | Appliance VM IP and control plane endpoint need outbound connection. | Used by the SCVMM server to communicate with the Appliance VM and the control plane. |
-| WinRM | WinRM Port numbers (Default: 5985 and 5986). | URL of the WinRM service. | IPs in the IP Pool used by the Appliance VM and control plane need connection with the VMM server. | Used by the SCVMM server to communicate with the Appliance VM. |
--
-For a complete list of network requirements for Azure Arc features and Azure Arc-enabled services, see [Azure Arc network requirements (Consolidated)](../network-requirements-consolidated.md).
- ## Data Residency Azure Arc-enabled SCVMM doesn't store/process customer data outside the region the customer deploys the service instance in.
azure-arc Support Matrix For System Center Virtual Machine Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/support-matrix-for-system-center-virtual-machine-manager.md
Previously updated : 07/10/2024 Last updated : 07/11/2024 keywords: "VMM, Arc, Azure" # Customer intent: As a VI admin, I want to understand the support matrix for System Center Virtual Machine Manager.
In addition, SCVMM requires the following exception:
| SCVMM Management Server | 443 | URL of the SCVMM management server. | Appliance VM IP and control plane endpoint need outbound connection. | Used by the SCVMM server to communicate with the Appliance VM and the control plane. | | WinRM | WinRM Port numbers (Default: 5985 and 5986). | URL of the WinRM service. | IPs in the IP Pool used by the Appliance VM and control plane need connection with the VMM server. | Used by the SCVMM server to communicate with the Appliance VM. |
-Generally, connectivity requirements include these principles:
-- All connections are TCP unless otherwise specified. -- All HTTP connections use HTTPS and SSL/TLS with officially signed and verifiable certificates. -- All connections are outbound unless otherwise specified. -
-To use a proxy, verify that the agents and the machine performing the onboarding process meet the network requirements in this article. For a complete list of network requirements for Azure Arc features and Azure Arc-enabled services, see [Azure Arc network requirements (Consolidated)](../network-requirements-consolidated.md).
+For a complete list of network requirements for Azure Arc features and Azure Arc-enabled services, see [Azure Arc network requirements (Consolidated)](../network-requirements-consolidated.md).
### Azure role/permission requirements
azure-functions Functions Dotnet Class Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-dotnet-class-library.md
Support for .NET 8 still uses version 4.x of the Functions runtime, and no chang
To update your local project, first make sure you are using the latest versions of local tools. Then ensure that the project references [version 4.4.0 or later of Microsoft.NET.Sdk.Functions](https://www.nuget.org/packages/Microsoft.NET.Sdk.Functions/4.4.0). You can then change your `TargetFramework` to "net8.0". You must also update `local.settings.json` to include both `FUNCTIONS_WORKER_RUNTIME` set to "dotnet" and `FUNCTIONS_INPROC_NET8_ENABLED` set to "1".
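Before the project file example below, here's a minimal sketch of a `local.settings.json` that contains both settings; the `IsEncrypted`/`Values` layout is the standard shape of the file, and you should merge these values into your existing file rather than replace it.

```json
{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "FUNCTIONS_INPROC_NET8_ENABLED": "1"
  }
}
```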
-The following is an example of a minimal `local.settings.json` file with these changes:
+The following is an example of a minimal project file with these changes:
```xml <Project Sdk="Microsoft.NET.Sdk">
azure-functions Storage Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/storage-considerations.md
Title: Storage considerations for Azure Functions description: Learn about the storage requirements of Azure Functions and about encrypting stored data. Previously updated : 06/03/2024 Last updated : 07/10/2024 # Storage considerations for Azure Functions
Azure Functions requires an Azure Storage account when you create a function app
|Storage service | Functions usage | |||
-| [Azure Blob storage](../storage/blobs/storage-blobs-introduction.md) | Maintain bindings state and function keys<sup>1</sup>. <br/>Used by default for [task hubs in Durable Functions](durable/durable-functions-task-hubs.md). <br/>Can be used to store function app code for [Linux Consumption remote build](functions-deployment-technologies.md#remote-build) or as part of [external package URL deployments](functions-deployment-technologies.md#external-package-url). |
+| [Azure Blob storage](../storage/blobs/storage-blobs-introduction.md) | Maintain bindings state and function keys<sup>1</sup>.<br/>Deployment source for apps that run in a [Flex Consumption plan](flex-consumption-plan.md).<br/>Used by default for [task hubs in Durable Functions](durable/durable-functions-task-hubs.md). <br/>Can be used to store function app code for [Linux Consumption remote build](functions-deployment-technologies.md#remote-build) or as part of [external package URL deployments](functions-deployment-technologies.md#external-package-url). |
| [Azure Files](../storage/files/storage-files-introduction.md)<sup>2</sup> | File share used to store and run your function app code in a [Consumption Plan](consumption-plan.md) and [Premium Plan](functions-premium-plan.md). <br/> | | [Azure Queue storage](../storage/queues/storage-queues-introduction.md) | Used by default for [task hubs in Durable Functions](durable/durable-functions-task-hubs.md). Used for failure and retry handling in [specific Azure Functions triggers](./functions-bindings-storage-blob-trigger.md). Used for object tracking by the [Blob storage trigger](functions-bindings-storage-blob-trigger.md). | | [Azure Table storage](../storage/tables/table-storage-overview.md) | Used by default for [task hubs in Durable Functions](durable/durable-functions-task-hubs.md). |
A key scenario for Functions is file processing of files in a blob container, su
### Trigger on a blob container >[!NOTE]
->The Flex Consumption plan supports only the event-based Blob storage trigger.
+>The [Flex Consumption plan](flex-consumption-plan.md) supports only the event-based Blob storage trigger.
There are several ways to execute your function code based on changes to blobs in a storage container. Use the following table to determine which function trigger best fits your needs:
azure-monitor Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups.md
When an email address is rate limited, a notification is sent to communicate tha
When you use Azure Resource Manager for email notifications, you can send email to the members of a subscription's role. Email is sent to Microsoft Entra ID **user** or **group** members of the role. This includes support for roles assigned through Azure Lighthouse. > [!NOTE]
-> Action Groups only supports emailing the following roles: Owner, Contirbutor, Reader, Monitoring Contributor, Monitoring Reader.
+> Action Groups only supports emailing the following roles: Owner, Contributor, Reader, Monitoring Contributor, Monitoring Reader.
If your primary email doesn't receive notifications, configure the email address for the Email Azure Resource Manager role:
azure-monitor Opentelemetry Add Modify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-add-modify.md
Telemetry emitted by these Azure SDKs is automatically collected by default:
[//]: # "console.log(str)"
-#### [Java Native](#tab/java-native)
+#### [Java native](#tab/java-native)
Requests for Spring Boot native applications * Spring Web
Metrics
Logs for Spring Boot native applications * Logback
-For Quartz native applications, please look at the [Quarkus documentation](https://quarkus.io/guides/opentelemetry).
+For Quarkus native applications, please look at the [Quarkus documentation](https://quarkus.io/guides/opentelemetry).
#### [Node.js](#tab/nodejs)
var metricsProvider = Sdk.CreateMeterProviderBuilder()
### [Java](#tab/java) You can't extend the Java Distro with community instrumentation libraries. To request that we include another instrumentation library, open an issue on our GitHub page. You can find a link to our GitHub page in [Next Steps](#next-steps).
-### [Java Native](#tab/java-native)
+### [Java native](#tab/java-native)
You can't use community instrumentation libraries with GraalVM Java native applications.
public class Program {
} ```
-#### [Java Native](#tab/java-native)
+#### [Java native](#tab/java-native)
1. Inject `OpenTelemetry`
public class Program {
} } ```
-#### [Java Native](#tab/java-native)
+#### [Java native](#tab/java-native)
1. Inject `OpenTelemetry`
public class Program {
} } ```
-#### [Java Native](#tab/java-native)
+#### [Java native](#tab/java-native)
1. Inject `OpenTelemetry`
You can use `opentelemetry-api` to update the status of a span and record except
span.recordException(e); ```
-#### [Java Native](#tab/java-native)
+#### [Java native](#tab/java-native)
Set status to `error` and record an exception in your code:
you can add your spans by using the OpenTelemetry API.
} ```
-#### [Java Native](#tab/java-native)
+#### [Java native](#tab/java-native)
1. Inject `OpenTelemetry`
You can use `opentelemetry-api` to create span events, which populate the `trace
Span.current().addEvent("eventName"); ```
-#### [Java Native](#tab/java-native)
+#### [Java native](#tab/java-native)
You can use OpenTelemetry API to create span events, which populate the `traces` table in Application Insights. The string passed in to `addEvent()` is saved to the `message` field within the trace.
telemetryClient.TrackEvent("testEvent");
}
-#### [Java Native](#tab/java-native)
+#### [Java native](#tab/java-native)
-It's not possible to send custom telemetry using the Application Insights Classic API in Java Native.
+It's not possible to send custom telemetry using the Application Insights Classic API in Java native.
#### [Node.js](#tab/nodejs)
Adding one or more span attributes populates the `customDimensions` field in the
Span.current().setAttribute(attributeKey, "myvalue1"); ```
-##### [Java Native](#tab/java-native)
+##### [Java native](#tab/java-native)
Add custom dimensions in your code:
activity.SetTag("http.client_ip", "<IP Address>");
Java automatically populates this field.
-##### [Java Native](#tab/java-native)
+##### [Java native](#tab/java-native)
This field is automatically populated.
Populate the `user ID` field in the `requests`, `dependencies`, or `exceptions`
Span.current().setAttribute("enduser.id", "myuser"); ```
-##### [Java Native](#tab/java-native)
+##### [Java native](#tab/java-native)
Populate the `user ID` field in the `requests`, `dependencies`, or `exceptions` table.
Logback, Log4j, and java.util.logging are [autoinstrumented](#logs). Attaching c
* [Log4j 1.2 MDC](https://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/MDC.html)
-#### [Java Native](#tab/java-native)
+#### [Java native](#tab/java-native)
For Spring Boot native applications, Logback is instrumented out of the box.
You might use the following ways to filter out telemetry before it leaves your a
See [sampling overrides](java-standalone-config.md#sampling-overrides) and [telemetry processors](java-standalone-telemetry-processors.md).
-### [Java Native](#tab/java-native)
+### [Java native](#tab/java-native)
-It's not possible to filter telemetry in Java Native.
+It's not possible to filter telemetry in Java native.
### [Node.js](#tab/nodejs)
You can use `opentelemetry-api` to get the trace ID or span ID.
String spanId = span.getSpanContext().getSpanId(); ```
-### [Java Native](#tab/java-native)
+### [Java native](#tab/java-native)
Get the request trace ID and the span ID in your code:
span_id = trace.get_current_span().get_span_context().span_id
- To enable usage experiences, see [Enable web or browser user monitoring](javascript.md). - See the [release notes](https://github.com/microsoft/ApplicationInsights-Java/releases) on GitHub.
-### [Java Native](#tab/java-native)
+### [Java native](#tab/java-native)
+ - For details on adding and modifying Azure Monitor OpenTelemetry, see [Add and modify Azure Monitor OpenTelemetry](opentelemetry-add-modify.md). - To review the source code, see [Azure Monitor OpenTelemetry Distro in Spring Boot native image Java application](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/spring/spring-cloud-azure-starter-monitor) and [Quarkus OpenTelemetry Exporter for Azure](https://github.com/quarkiverse/quarkus-opentelemetry-exporter/tree/main/quarkus-opentelemetry-exporter-azure).
azure-monitor Opentelemetry Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-configuration.md
Use one of the following two ways to configure the connection string:
To set the connection string, see [Connection string](java-standalone-config.md#connection-string).
-### [Java Native](#tab/java-native)
+### [Java native](#tab/java-native)
Use one of the following two ways to configure the connection string:
To set the cloud role name, see [cloud role name](java-standalone-config.md#clou
To set the cloud role instance, see [cloud role instance](java-standalone-config.md#cloud-role-instance).
-### [Java Native](#tab/java-native)
+### [Java native](#tab/java-native)
To set the cloud role name: * Use the `spring.application.name` for Spring Boot native image applications
You might want to enable sampling to reduce your data ingestion volume, which re
> [!NOTE] > Metrics and Logs are unaffected by sampling.
-#### [ASP.NET Core](#tab/aspnetcore)
+### [ASP.NET Core](#tab/aspnetcore)
The sampler expects a sample rate of between 0 and 1 inclusive. A rate of 0.1 means approximately 10% of your traces are sent.
var app = builder.Build();
app.Run(); ```
-#### [.NET](#tab/net)
+### [.NET](#tab/net)
The sampler expects a sample rate of between 0 and 1 inclusive. A rate of 0.1 means approximately 10% of your traces are sent.
var tracerProvider = Sdk.CreateTracerProviderBuilder()
}); ```
-#### [Java](#tab/java)
+### [Java](#tab/java)
Starting from 3.4.0, rate-limited sampling is available and is now the default. For more information about sampling, see [Java sampling]( java-standalone-config.md#sampling).
-#### [Java Native](#tab/java-native)
+### [Java native](#tab/java-native)
For Spring Boot native applications, the [sampling configurations of the OpenTelemetry Java SDK are applicable](https://opentelemetry.io/docs/languages/java/configuration/#sampler). For Quarkus native applications, please look at the [Quarkus OpenTelemetry documentation](https://quarkus.io/guides/opentelemetry#sampler).
-#### [Node.js](#tab/nodejs)
+### [Node.js](#tab/nodejs)
The sampler expects a sample rate of between 0 and 1 inclusive. A rate of 0.1 means approximately 10% of your traces are sent.
const options: AzureMonitorOpenTelemetryOptions = {
useAzureMonitor(options); ```
-#### [Python](#tab/python)
+### [Python](#tab/python)
The `configure_azure_monitor()` function automatically utilizes ApplicationInsightsSampler for compatibility with Application Insights SDKs and
export OTEL_TRACES_SAMPLER_ARG=0.1
[Live metrics](live-stream.md) provides a real-time analytics dashboard for insight into application activity and performance.
-#### [ASP.NET Core](#tab/aspnetcore)
+### [ASP.NET Core](#tab/aspnetcore)
> [!IMPORTANT] > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
builder.Services.AddOpenTelemetry().UseAzureMonitor(options => {
}); ```
-#### [.NET](#tab/net)
+### [.NET](#tab/net)
This feature isn't available in the Azure Monitor .NET Exporter.
-#### [Java](#tab/java)
+### [Java](#tab/java)
The Live Metrics experience is enabled by default. For more information on Java configuration, see [Configuration options: Azure Monitor Application Insights for Java](java-standalone-config.md#configuration-options-azure-monitor-application-insights-for-java).
-#### [Java Native](#tab/java-native)
+### [Java native](#tab/java-native)
Live Metrics isn't currently available for GraalVM native applications.
-#### [Node.js](#tab/nodejs)
+### [Node.js](#tab/nodejs)
> [!IMPORTANT] > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
Configuration sample
-->
-#### [Python](#tab/python)
+### [Python](#tab/python)
> [!IMPORTANT] > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
configure_azure_monitor(
You might want to enable Microsoft Entra authentication for a more secure connection to Azure, which prevents unauthorized telemetry from being ingested into your subscription.
-#### [ASP.NET Core](#tab/aspnetcore)
+### [ASP.NET Core](#tab/aspnetcore)
We support the credential classes provided by [Azure Identity](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/identity/Azure.Identity#credential-classes).
We support the credential classes provided by [Azure Identity](https://github.co
app.Run(); ```
-#### [.NET](#tab/net)
+### [.NET](#tab/net)
We support the credential classes provided by [Azure Identity](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/identity/Azure.Identity#credential-classes).
We support the credential classes provided by [Azure Identity](https://github.co
}); ```
-#### [Java](#tab/java)
+### [Java](#tab/java)
For more information about Java, see the [Java supplemental documentation](java-standalone-config.md).
-#### [Java Native](#tab/java-native)
+### [Java native](#tab/java-native)
Microsoft Entra ID authentication isn't available for GraalVM native applications.
-#### [Node.js](#tab/nodejs)
+### [Node.js](#tab/nodejs)
We support the credential classes provided by [Azure Identity](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/identity/identity#credential-classes).
const options: AzureMonitorOpenTelemetryOptions = {
useAzureMonitor(options); ```
-#### [Python](#tab/python)
+### [Python](#tab/python)
```python # Import the `ManagedIdentityCredential` class from the `azure.identity` package.
Configuring Offline Storage and Automatic Retries isn't available in Java.
For a full list of available configurations, see [Configuration options](./java-standalone-config.md).
-### [Java Native](#tab/java-native)
+### [Java native](#tab/java-native)
Configuring Offline Storage and Automatic Retries isn't available in Java native image applications.
You might want to enable the OpenTelemetry Protocol (OTLP) Exporter alongside th
> [!NOTE] > The OTLP Exporter is shown for convenience only. We don't officially support the OTLP Exporter or any components or third-party experiences downstream of it.
-#### [ASP.NET Core](#tab/aspnetcore)
+### [ASP.NET Core](#tab/aspnetcore)
1. Install the [OpenTelemetry.Exporter.OpenTelemetryProtocol](https://www.nuget.org/packages/OpenTelemetry.Exporter.OpenTelemetryProtocol/) package in your project.
You might want to enable the OpenTelemetry Protocol (OTLP) Exporter alongside th
app.Run(); ```
-#### [.NET](#tab/net)
+### [.NET](#tab/net)
1. Install the [OpenTelemetry.Exporter.OpenTelemetryProtocol](https://www.nuget.org/packages/OpenTelemetry.Exporter.OpenTelemetryProtocol/) package in your project.
You might want to enable the OpenTelemetry Protocol (OTLP) Exporter alongside th
.AddOtlpExporter(); ```
-#### [Java](#tab/java)
+### [Java](#tab/java)
For more information about Java, see the [Java supplemental documentation](java-standalone-config.md).
-#### [Java Native](#tab/java-native)
+### [Java native](#tab/java-native)
You can't enable the OpenTelemetry Protocol (OTLP) Exporter alongside the Azure Monitor Exporter to send your telemetry to two locations.
-#### [Node.js](#tab/nodejs)
+### [Node.js](#tab/nodejs)
1. Install the [OpenTelemetry Collector Trace Exporter](https://www.npmjs.com/package/@opentelemetry/exporter-trace-otlp-http) and other OpenTelemetry packages in your project.
You can't enable the OpenTelemetry Protocol (OTLP) Exporter alongside the Azure
npm install @opentelemetry/sdk-trace-node ```
-2. Add the following code snippet. This example assumes you have an OpenTelemetry Collector with an OTLP receiver running. For details, see the [example on GitHub](https://github.com/open-telemetry/opentelemetry-js/tree/main/examples/otlp-exporter-node).
+1. Add the following code snippet. This example assumes you have an OpenTelemetry Collector with an OTLP receiver running. For details, see the [example on GitHub](https://github.com/open-telemetry/opentelemetry-js/tree/main/examples/otlp-exporter-node).
```typescript // Import the useAzureMonitor function, the AzureMonitorOpenTelemetryOptions class, the trace module, the ProxyTracerProvider class, the BatchSpanProcessor class, the NodeTracerProvider class, and the OTLPTraceExporter class from the @azure/monitor-opentelemetry, @opentelemetry/api, @opentelemetry/sdk-trace-base, @opentelemetry/sdk-trace-node, and @opentelemetry/exporter-trace-otlp-http packages, respectively.
You can't enable the OpenTelemetry Protocol (OTLP) Exporter alongside the Azure
useAzureMonitor(options); ```
-#### [Python](#tab/python)
+### [Python](#tab/python)
1. Install the [opentelemetry-exporter-otlp](https://pypi.org/project/opentelemetry-exporter-otlp/) package.
The following OpenTelemetry configurations can be accessed through environment v
For more information about Java, see the [Java supplemental documentation](java-standalone-config.md).
-### [Java Native](#tab/java-native)
+### [Java native](#tab/java-native)
| Environment variable | Description | | -- | -- |
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
Follow the steps in this section to instrument your application with OpenTelemet
<!NOTE TO CONTRIBUTORS: PLEASE DO NOT SEPARATE OUT JAVASCRIPT AND TYPESCRIPT INTO DIFFERENT TABS.>
-### [ASP.NET Core](#tab/aspnetcore)
+#### [ASP.NET Core](#tab/aspnetcore)
- [ASP.NET Core Application](/aspnet/core/introduction-to-aspnet-core) using an officially supported version of [.NET](https://dotnet.microsoft.com/download/dotnet)
-### [.NET](#tab/net)
+#### [.NET](#tab/net)
- Application using an officially supported version of [.NET Core](https://dotnet.microsoft.com/download/dotnet) or [.NET Framework](https://dotnet.microsoft.com/download/dotnet-framework) that's at least .NET Framework 4.6.2
-### [Java](#tab/java)
+#### [Java](#tab/java)
- A Java application using Java 8+
-### [Java Native](#tab/java-native)
+#### [Java native](#tab/java-native)
- A Java application using GraalVM 17+
-### [Node.js](#tab/nodejs)
+#### [Node.js](#tab/nodejs)
> [!NOTE] > If you rely on any properties in the [not-supported table](https://github.com/microsoft/ApplicationInsights-node.js/blob/bet#ApplicationInsights-Shim-Unsupported-Properties), use the distro, and we'll provide a migration guide soon. If not, the App Insights shim is your easiest path forward when it's out of beta.
Follow the steps in this section to instrument your application with OpenTelemet
- [OpenTelemetry supported runtimes](https://github.com/open-telemetry/opentelemetry-js#supported-runtimes) - [Azure Monitor OpenTelemetry Exporter supported runtimes](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/monitor/monitor-opentelemetry-exporter#currently-supported-environments)
-### [Python](#tab/python)
+#### [Python](#tab/python)
- Python Application using Python 3.8+
Install the latest `Azure.Monitor.OpenTelemetry.AspNetCore` [NuGet package](http
dotnet add package Azure.Monitor.OpenTelemetry.AspNetCore ```
-### [.NET](#tab/net)
+#### [.NET](#tab/net)
Install the latest `Azure.Monitor.OpenTelemetry.Exporter` [NuGet package](https://www.nuget.org/packages/Azure.Monitor.OpenTelemetry.Exporter):
Download the [applicationinsights-agent-3.5.3.jar](https://github.com/microsoft/
> [3.2.0](https://github.com/microsoft/ApplicationInsights-Java/releases/tag/3.2.0), and > [3.1.0](https://github.com/microsoft/ApplicationInsights-Java/releases/tag/3.1.0) -
-#### [Java Native](#tab/java-native)
+#### [Java native](#tab/java-native)
For Spring Boot native applications: * [Import the OpenTelemetry Bills of Materials (BOM)](https://opentelemetry.io/docs/zero-code/java/spring-boot-starter/getting-started/).
Point the Java virtual machine (JVM) to the jar file by adding `-javaagent:"path
> [!TIP] > If you develop a Spring Boot application, you can optionally replace the JVM argument by a programmatic configuration. For more information, see [Using Azure Monitor Application Insights with Spring Boot](./java-spring-boot.md). -
-##### [Java Native](#tab/java-native)
+##### [Java native](#tab/java-native)
Several automatic instrumentations are enabled through configuration changes; no code changes are required
As part of using Application Insights instrumentation, we collect and send diagn
Azure Monitor OpenTelemetry sample applications are available for all supported languages.
-#### [ASP.NET Core](#tab/aspnetcore)
+### [ASP.NET Core](#tab/aspnetcore)
- [ASP.NET Core sample app](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/monitor/Azure.Monitor.OpenTelemetry.AspNetCore/tests/Azure.Monitor.OpenTelemetry.AspNetCore.Demo)
-##### [.NET](#tab/net)
+### [.NET](#tab/net)
- [NET sample app](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/monitor/Azure.Monitor.OpenTelemetry.Exporter/tests/Azure.Monitor.OpenTelemetry.Exporter.Demo)
-##### [Java](#tab/java)
+### [Java](#tab/java)
- [Java sample apps](https://github.com/Azure-Samples/ApplicationInsights-Java-Samples)
-##### [Java Native](#tab/java-native)
+### [Java native](#tab/java-native)
- [Java GraalVM native sample apps](https://github.com/Azure-Samples/java-native-telemetry)
-##### [Node.js](#tab/nodejs)
+### [Node.js](#tab/nodejs)
- [Node.js sample app](https://github.com/Azure-Samples/azure-monitor-opentelemetry-node.js)
-##### [Python](#tab/python)
+### [Python](#tab/python)
- [Python sample apps](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/monitor/azure-monitor-opentelemetry/samples)
Azure Monitor OpenTelemetry sample applications are available for all supported
- To learn more about OpenTelemetry and its community, see the [OpenTelemetry .NET GitHub repository](https://github.com/open-telemetry/opentelemetry-dotnet). - To enable usage experiences, [enable web or browser user monitoring](javascript.md).
-#### [.NET](#tab/net)
+### [.NET](#tab/net)
- For details on adding and modifying Azure Monitor OpenTelemetry, see [Add and modify Azure Monitor OpenTelemetry](opentelemetry-add-modify.md). - To further configure the OpenTelemetry distro, see [Azure Monitor OpenTelemetry configuration](opentelemetry-configuration.md).
Azure Monitor OpenTelemetry sample applications are available for all supported
- Enable usage experiences by seeing [Enable web or browser user monitoring](javascript.md). - Review the [release notes](https://github.com/microsoft/ApplicationInsights-Java/releases) on GitHub.
-### [Java Native](#tab/java-native)
+### [Java native](#tab/java-native)
- See [Add and modify Azure Monitor OpenTelemetry](opentelemetry-add-modify.md) for details on adding and modifying Azure Monitor OpenTelemetry. - Review the source code in the [Azure Monitor OpenTelemetry Distro in Spring Boot native image Java application](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/spring/spring-cloud-azure-starter-monitor) and [Quarkus OpenTelemetry Exporter for Azure](https://github.com/quarkiverse/quarkus-opentelemetry-exporter/tree/main/quarkus-opentelemetry-exporter-azure). - Learn more about OpenTelemetry and its community in the [OpenTelemetry Java GitHub repository](https://github.com/open-telemetry/opentelemetry-java-instrumentation).
azure-monitor Prometheus Remote Write Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-remote-write-active-directory.md
Last updated 4/18/2024
# Send Prometheus data to Azure Monitor by using Microsoft Entra authentication
-This article describes how to set up [remote write](prometheus-remote-write.md) to send data from a self-managed Prometheus server running in your Azure Kubernetes Service (AKS) cluster or Azure Arc-enabled Kubernetes cluster by using Microsoft Entra authentication.
+This article describes how to set up [remote write](prometheus-remote-write.md) to send data from a self-managed Prometheus server running in your Azure Kubernetes Service (AKS) cluster or Azure Arc-enabled Kubernetes cluster by using Microsoft Entra authentication and a side car container that Azure Monitor provides. You can also configure remote write directly in the Prometheus configuration instead of using the side car container.
+
+> [!NOTE]
+> We recommend that you directly configure Prometheus running on your Kubernetes cluster to remote-write to your Azure Monitor workspace. See [Send Prometheus data to Azure Monitor using Microsoft Entra ID authentication](../essentials/prometheus-remote-write-virtual-machines.md#set-up-authentication-for-remote-write) to learn more. The steps below use the Azure Monitor side car container.
## Cluster configurations
azure-monitor Prometheus Remote Write Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-remote-write-managed-identity.md
Last updated 4/18/2024
# Send Prometheus data to Azure Monitor by using managed identity authentication
-This article describes how to set up [remote write](prometheus-remote-write.md) to send data from a self-managed Prometheus server running in your Azure Kubernetes Service (AKS) cluster or Azure Arc-enabled Kubernetes cluster by using managed identity authentication. You can either use an existing identity that's created by AKS or [create your own](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md). Both options are described here.
+This article describes how to set up [remote write](prometheus-remote-write.md) to send data from a self-managed Prometheus server running in your Azure Kubernetes Service (AKS) cluster or Azure Arc-enabled Kubernetes cluster by using managed identity authentication and a side car container provided by Azure Monitor. You can either use an existing identity that's created by AKS or [create your own](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md). Both options are described here.
+
+> [!NOTE]
+> If you're using a user-assigned managed identity, we recommend that you directly configure Prometheus running on your Kubernetes cluster to remote-write to your Azure Monitor workspace. See [Send Prometheus data to Azure Monitor using user-assigned managed identity](../essentials/prometheus-remote-write-virtual-machines.md#set-up-authentication-for-remote-write) to learn more. The steps below use the Azure Monitor side car container.
## Cluster configurations
azure-monitor Prometheus Remote Write https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-remote-write.md
Last updated 4/18/2024
# Azure Monitor managed service for Prometheus remote write
-Azure Monitor managed service for Prometheus is intended to be a replacement for self managed Prometheus so you don't need to manage a Prometheus server in your Kubernetes clusters. You may also choose to use the managed service to centralize data from self-managed Prometheus clusters for long term data retention and to create a centralized view across your clusters. In this case, you can use [remote_write](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage) to send data from your self-managed Prometheus into the Azure managed service.
+Azure Monitor managed service for Prometheus is intended to be a replacement for self-managed Prometheus so you don't need to manage a Prometheus server in your Kubernetes clusters. You may also choose to use the managed service to centralize data from self-managed Prometheus clusters for long-term data retention and to create a centralized view across your clusters. In this case, you can use [remote_write](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write) to send data from your self-managed Prometheus into the Azure managed service.
## Architecture
-Azure Monitor provides a reverse proxy container (Azure Monitor [side car container](/azure/architecture/patterns/sidecar)) that provides an abstraction for ingesting Prometheus remote write metrics and helps in authenticating packets. The Azure Monitor side car container currently supports User Assigned Identity and Microsoft Entra ID based authentication to ingest Prometheus remote write metrics to Azure Monitor workspace.
+
+You can configure Prometheus running on your Kubernetes cluster to remote-write to an Azure Monitor workspace. Currently, user-assigned managed identity and Microsoft Entra ID application are the supported authentication types for ingesting metrics into an Azure Monitor workspace through the Prometheus remote-write configuration.
+
+Azure Monitor also provides a reverse proxy container (the Azure Monitor [side car container](/azure/architecture/patterns/sidecar)) that offers an abstraction for ingesting Prometheus remote write metrics and handles authentication of the incoming packets.
+
+We recommend configuring remote write directly in the configuration of the self-managed Prometheus server running in your environment. Use the Azure Monitor side car container only if your preferred authentication type isn't supported through direct configuration. We plan to add those authentication options to the direct configuration and deprecate the side car container.
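Whichever path you choose, the identity that Prometheus uses must be allowed to publish metrics into the workspace. As a hedged sketch of that prerequisite, assuming a user-assigned managed identity and the data collection rule (DCR) associated with your Azure Monitor workspace, the role assignment might look like the following; the client ID and DCR resource ID are placeholders.

```azurecli
# Minimal sketch (assumption): grant the identity used for remote write the
# Monitoring Metrics Publisher role on the data collection rule associated with
# your Azure Monitor workspace. Replace the placeholders with your own values.
az role assignment create \
  --assignee <identity-client-id> \
  --role "Monitoring Metrics Publisher" \
  --scope <data-collection-rule-resource-id>
```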
## Supported versions
Azure Monitor provides a reverse proxy container (Azure Monitor [side car contai
Configuring remote write depends on your cluster configuration and the type of authentication that you use. - Managed identity is recommended for Azure Kubernetes service (AKS) and Azure Arc-enabled Kubernetes cluster. -- Microsoft Entra ID can be used for Azure Kubernetes service (AKS) and Azure Arc-enabled Kubernetes cluster and is required for Kubernetes cluster running in another cloud or on-premises.
+- Microsoft Entra ID can be used for Azure Kubernetes Service (AKS) and Azure Arc-enabled Kubernetes clusters, and is required for Kubernetes clusters running in another cloud or on-premises.
See the following articles for more information on how to configure remote write for Kubernetes clusters: -- [Microsoft Entra ID authorization proxy](/azure/azure-monitor/containers/prometheus-authorization-proxy?tabs=remote-write-example)-- [Send Prometheus data from AKS to Azure Monitor by using managed identity authentication](/azure/azure-monitor/containers/prometheus-remote-write-managed-identity)-- [Send Prometheus data from AKS to Azure Monitor by using Microsoft Entra ID authentication](/azure/azure-monitor/containers/prometheus-remote-write-active-directory)-- [Send Prometheus data to Azure Monitor by using Microsoft Entra ID pod-managed identity (preview) authentication](/azure/azure-monitor/containers/prometheus-remote-write-azure-ad-pod-identity)-- [Send Prometheus data to Azure Monitor by using Microsoft Entra ID Workload ID (preview) authentication](/azure/azure-monitor/containers/prometheus-remote-write-azure-workload-identity)
+- (**Recommended**) [Send Prometheus data to Azure Monitor by directly configuring Prometheus remote-write](../essentials/prometheus-remote-write-virtual-machines.md#set-up-authentication-for-remote-write). This option can be used for self-managed Prometheus running in any environment. The supported authentication options are user-assigned managed identity and Microsoft Entra ID application.
+- [Send Prometheus data from AKS to Azure Monitor using side car container with managed identity authentication](/azure/azure-monitor/containers/prometheus-remote-write-managed-identity)
+- [Send Prometheus data from AKS to Azure Monitor using side car container with Microsoft Entra ID authentication](/azure/azure-monitor/containers/prometheus-remote-write-active-directory)
+- [Send Prometheus data to Azure Monitor using side car container with Microsoft Entra ID pod-managed identity (preview) authentication](/azure/azure-monitor/containers/prometheus-remote-write-azure-ad-pod-identity)
+- [Send Prometheus data to Azure Monitor using side car container with Microsoft Entra ID Workload ID (preview) authentication](/azure/azure-monitor/containers/prometheus-remote-write-azure-workload-identity)
## Remote write from Virtual Machines and Virtual Machine Scale sets
azure-resource-manager Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/best-practices.md
Previously updated : 06/23/2023 Last updated : 07/11/2024 # Best practices for Bicep
For more information about Bicep variables, see [Variables in Bicep](variables.m
* It's a good practice to use template expressions to create resource names, like in this example:
- :::code language="bicep" source="~/azure-docs-bicep-samples/samples/best-practices/resource-name-expressions.bicep" highlight="3":::
-
+ ```bicep
+ param shortAppName string = 'toy'
+ param shortEnvironmentName string = 'prod'
+ param appServiceAppName string = '${shortAppName}-${shortEnvironmentName}-${uniqueString(resourceGroup().id)}'
+ ```
+
Using template expressions to create resource names gives you several benefits: * Strings generated by `uniqueString()` aren't meaningful. It's helpful to use a template expression to create a name that includes meaningful information, such as a short descriptor of the project or environment name, as well as a random component to make the name more likely to be unique.
For more information about Bicep variables, see [Variables in Bicep](variables.m
* Avoid using `name` in a symbolic name. The symbolic name represents the resource, not the resource's name. For example, instead of the following syntax: ```bicep
- resource cosmosDBAccountName 'Microsoft.DocumentDB/databaseAccounts@2023-04-15' = {
+ resource cosmosDBAccountName 'Microsoft.DocumentDB/databaseAccounts@2023-11-15' = {
``` Use: ```bicep
- resource cosmosDBAccount 'Microsoft.DocumentDB/databaseAccounts@2023-04-15' = {
+ resource cosmosDBAccount 'Microsoft.DocumentDB/databaseAccounts@2023-11-15' = {
``` * Avoid distinguishing variables and parameters by the use of suffixes.
azure-resource-manager Bicep Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-cli.md
Title: Bicep CLI commands and overview
description: Describes the commands that you can use in the Bicep CLI. These commands include building Azure Resource Manager templates from Bicep. Previously updated : 01/02/2024 Last updated : 07/11/2024 # Bicep CLI commands
The command returns an array of available versions.
```azurecli [
+ "v0.28.1",
+ "v0.27.1",
+ "v0.26.170",
+ "v0.26.54",
+ "v0.25.53",
+ "v0.25.3",
+ "v0.24.24",
+ "v0.23.1",
+ "v0.22.6",
+ "v0.21.1",
"v0.20.4", "v0.19.5", "v0.18.4",
The command returns an array of available versions.
"v0.9.1", "v0.8.9", "v0.8.2",
- "v0.7.4",
- "v0.6.18",
- "v0.6.11",
- "v0.6.1",
- "v0.5.6",
- "v0.4.1318",
- "v0.4.1272",
- "v0.4.1124",
- "v0.4.1008",
- "v0.4.613",
- "v0.4.451"
+ "v0.7.4"
] ```
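For quick reference, a list like the one above can be retrieved locally; this is a minimal sketch and assumes you're using the Azure CLI's `az bicep` command group, which bundles the Bicep tooling.

```azurecli
# Minimal sketch: list the Bicep versions available for installation through the Azure CLI.
az bicep list-versions
```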
azure-resource-manager Bicep Extensibility Kubernetes Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-extensibility-kubernetes-provider.md
Title: Bicep extensibility Kubernetes provider
description: Learn how to use the Bicep Kubernetes provider to deploy .NET applications to Azure Kubernetes Service clusters. Previously updated : 03/20/2024 Last updated : 07/11/2024 # Bicep extensibility Kubernetes provider (Preview)
The Kubernetes provider allows you to create Kubernetes resources directly with
> Kubernetes provider is not currently supported for private clusters: > > ```bicep
-> resource AKS 'Microsoft.ContainerService/managedClusters@2023-01-02-preview' = {
+> resource AKS 'Microsoft.ContainerService/managedClusters@2024-02-01' = {
> ... > properties: { > apiServerAccessProfile: {
import 'kubernetes@1.0.0' with {
The following sample shows how to pass `kubeConfig` value from a parent Bicep file: ```bicep
-resource aks 'Microsoft.ContainerService/managedClusters@2022-05-02-preview' existing = {
+resource aks 'Microsoft.ContainerService/managedClusters@2024-02-01' existing = {
name: 'demoAKSCluster' }
azure-resource-manager Bicep Functions Array https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-array.md
Title: Bicep functions - arrays
description: Describes the functions to use in a Bicep file for working with arrays. Previously updated : 01/11/2024 Last updated : 07/11/2024 # Array functions for Bicep
param dnsServers array = []
...
-resource vnet 'Microsoft.Network/virtualNetworks@2021-02-01' = {
+resource vnet 'Microsoft.Network/virtualNetworks@2023-11-01' = {
name: vnetName location: location properties: {
param availabilityZones array = [
'2' ]
-resource exampleApim 'Microsoft.ApiManagement/service@2021-08-01' = {
+resource exampleApim 'Microsoft.ApiManagement/service@2023-05-01-preview' = {
name: apiManagementName location: location sku: {
The following example is extracted from a quickstart template, [Two VMs in VNET
... var numberOfInstances = 2
-resource networkInterface 'Microsoft.Network/networkInterfaces@2021-05-01' = [for i in range(0, numberOfInstances): {
+resource networkInterface 'Microsoft.Network/networkInterfaces@2023-11-01' = [for i in range(0, numberOfInstances): {
name: '${networkInterfaceName}${i}' location: location properties: {
resource networkInterface 'Microsoft.Network/networkInterfaces@2021-05-01' = [fo
} }]
-resource vm 'Microsoft.Compute/virtualMachines@2021-11-01' = [for i in range(0, numberOfInstances): {
+resource vm 'Microsoft.Compute/virtualMachines@2024-03-01' = [for i in range(0, numberOfInstances): {
name: '${vmNamePrefix}${i}' location: location properties: {
azure-resource-manager Bicep Functions Date https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-date.md
Title: Bicep functions - date
description: Describes the functions to use in a Bicep file to work with dates. Previously updated : 01/17/2024 Last updated : 07/11/2024 # Date functions for Bicep
var startTime = dateTimeAdd(baseTime, 'PT1H')
...
-resource scheduler 'Microsoft.Automation/automationAccounts/schedules@2022-08-08' = {
+resource scheduler 'Microsoft.Automation/automationAccounts/schedules@2023-11-01' = {
name: concat(omsAutomationAccountName, '/', scheduleName) properties: { description: 'Demo Scheduler'
The next example shows how to use a value from the function when setting a tag v
param utcShort string = utcNow('d') param rgName string
-resource myRg 'Microsoft.Resources/resourceGroups@2022-09-01' = {
+resource myRg 'Microsoft.Resources/resourceGroups@2024-03-01' = {
name: rgName location: 'westeurope' tags: {
azure-resource-manager Bicep Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-resource.md
param allowedLocations array = [
'australiacentral' ]
-resource policyDefinition 'Microsoft.Authorization/policyDefinitions@2021-06-01' = {
+resource policyDefinition 'Microsoft.Authorization/policyDefinitions@2023-04-01' = {
name: 'locationRestriction' properties: { policyType: 'Custom'
resource policyDefinition 'Microsoft.Authorization/policyDefinitions@2021-06-01'
} }
-resource policyAssignment 'Microsoft.Authorization/policyAssignments@2022-06-01' = {
+resource policyAssignment 'Microsoft.Authorization/policyAssignments@2024-04-01' = {
name: 'locationAssignment' properties: { policyDefinitionId: policyDefinition.id
param adminLogin string
@secure() param adminPassword string
-resource sqlServer 'Microsoft.Sql/servers@2022-08-01-preview' = {
+resource sqlServer 'Microsoft.Sql/servers@2023-08-01-preview' = {
... } ```
param subscriptionId string
param kvResourceGroup string param kvName string
-resource keyVault 'Microsoft.KeyVault/vaults@2023-02-01' existing = {
+resource keyVault 'Microsoft.KeyVault/vaults@2023-07-01' existing = {
name: kvName scope: resourceGroup(subscriptionId, kvResourceGroup ) }
Other `list` functions have different return formats. To see the format of a fun
The following example deploys a storage account and then calls `listKeys` on that storage account. The key is used when setting a value for [deployment scripts](../templates/deployment-script-template.md). ```bicep
-resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = {
+resource storageAccount 'Microsoft.Storage/storageAccounts@2023-04-01' = {
name: 'dscript${uniqueString(resourceGroup().id)}' location: location kind: 'StorageV2'
resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = {
} }
-resource dScript 'Microsoft.Resources/deploymentScripts@2020-10-01' = {
+resource dScript 'Microsoft.Resources/deploymentScripts@2023-08-01' = {
name: 'scriptWithStorage' location: location ...
param allowedLocations array = [
var mgScope = tenantResourceId('Microsoft.Management/managementGroups', targetMG) var policyDefinitionName = 'LocationRestriction'
-resource policyDefinition 'Microsoft.Authorization/policyDefinitions@2021-06-01' = {
+resource policyDefinition 'Microsoft.Authorization/policyDefinitions@2023-04-01' = {
name: policyDefinitionName properties: { policyType: 'Custom'
resource policyDefinition 'Microsoft.Authorization/policyDefinitions@2021-06-01'
} }
-resource location_lock 'Microsoft.Authorization/policyAssignments@2021-06-01' = {
+resource location_lock 'Microsoft.Authorization/policyAssignments@2024-04-01' = {
name: 'location-lock' properties: { scope: mgScope
The following example deploys a storage account. The first two outputs give you
param storageAccountName string = uniqueString(resourceGroup().id) param location string = resourceGroup().location
-resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = {
+resource storageAccount 'Microsoft.Storage/storageAccounts@2023-04-01' = {
name: storageAccountName location: location kind: 'Storage'
To get a property from an existing resource that isn't deployed in the template,
```bicep param storageAccountName string
-resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' existing = {
+resource storageAccount 'Microsoft.Storage/storageAccounts@2023-04-01' existing = {
name: storageAccountName }
For example:
param storageAccountName string param location string = resourceGroup().location
-resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = {
+resource storageAccount 'Microsoft.Storage/storageAccounts@2023-04-01' = {
name: storageAccountName location: location kind: 'Storage'
To get the resource ID for a resource that isn't deployed in the Bicep file, use
```bicep param storageAccountName string
-resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' existing = {
- name: storageAccountName
+resource storageAccount 'Microsoft.Storage/storageAccounts@2023-04-01' existing = {
+ name: storageAccountName
} output storageID string = storageAccount.id
param policyDefinitionID string = '0a914e76-4921-4c19-b460-a2d36003525a'
@description('Specifies the name of the policy assignment, can be used defined or an idempotent name as the defaultValue provides.') param policyAssignmentName string = guid(policyDefinitionID, resourceGroup().name)
-resource policyAssignment 'Microsoft.Authorization/policyAssignments@2022-06-01' = {
+resource policyAssignment 'Microsoft.Authorization/policyAssignments@2024-04-01' = {
name: policyAssignmentName properties: { scope: subscriptionResourceId('Microsoft.Resources/resourceGroups', resourceGroup().name)
azure-resource-manager Bicep Functions Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-scope.md
Title: Bicep functions - scopes
description: Describes the functions to use in a Bicep file to retrieve values about deployment scopes. Previously updated : 03/20/2024 Last updated : 07/11/2024 # Scope functions for Bicep
targetScope = 'managementGroup'
param mgName string = 'mg-${uniqueString(newGuid())}'
-resource newMG 'Microsoft.Management/managementGroups@2020-05-01' = {
+resource newMG 'Microsoft.Management/managementGroups@2023-04-01' = {
scope: tenant() name: mgName properties: {
It returns:
Some resources require setting the tenant ID for a property. Rather than passing the tenant ID as a parameter, you can retrieve it with the tenant function. ```bicep
-resource kv 'Microsoft.KeyVault/vaults@2021-06-01-preview' = {
+resource kv 'Microsoft.KeyVault/vaults@2023-07-01' = {
name: 'examplekeyvault' location: 'westus' properties: {
azure-resource-manager Bicep Functions String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-string.md
Title: Bicep functions - string
description: Describes the functions to use in a Bicep file to work with strings. Previously updated : 07/02/2024 Last updated : 07/11/2024 # String functions for Bicep
param guidValue string = newGuid()
var storageName = 'storage${uniqueString(guidValue)}'
-resource myStorage 'Microsoft.Storage/storageAccounts@2018-07-01' = {
+resource myStorage 'Microsoft.Storage/storageAccounts@2023-04-01' = {
name: storageName location: 'West US' sku: {
uniqueString(resourceGroup().id, deployment().name)
The following example shows how to create a unique name for a storage account based on your resource group. Inside the resource group, the name isn't unique if constructed the same way. ```bicep
-resource mystorage 'Microsoft.Storage/storageAccounts@2018-07-01' = {
+resource mystorage 'Microsoft.Storage/storageAccounts@2023-04-01' = {
name: 'storage${uniqueString(resourceGroup().id)}' ... }
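As a complete illustration of the `uniqueString` pattern above (SKU and kind values are illustrative):

```bicep
param location string = resourceGroup().location

// uniqueString returns a deterministic hash, so redeploying to the same
// resource group produces the same storage account name.
resource mystorage 'Microsoft.Storage/storageAccounts@2023-04-01' = {
  name: 'storage${uniqueString(resourceGroup().id)}'
  location: location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}
```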
azure-resource-manager Compare Template Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/compare-template-syntax.md
Title: Compare syntax for Azure Resource Manager templates in JSON and Bicep
description: Compares Azure Resource Manager templates developed with JSON and Bicep, and shows how to convert between the languages. Previously updated : 06/23/2023 Last updated : 07/11/2024 # Comparing JSON and Bicep for templates
targetScope = 'subscription'
To declare a resource: ```bicep
-resource virtualMachine 'Microsoft.Compute/virtualMachines@2023-03-01' = {
+resource virtualMachine 'Microsoft.Compute/virtualMachines@2024-03-01' = {
... } ```
resource virtualMachine 'Microsoft.Compute/virtualMachines@2023-03-01' = {
"resources": [ { "type": "Microsoft.Compute/virtualMachines",
- "apiVersion": "2020-06-01",
+ "apiVersion": "2024-03-01",
... } ]
resource virtualMachine 'Microsoft.Compute/virtualMachines@2023-03-01' = {
To conditionally deploy a resource: ```bicep
-resource virtualMachine 'Microsoft.Compute/virtualMachines@2023-03-01' = if(deployVM) {
+resource virtualMachine 'Microsoft.Compute/virtualMachines@2024-03-01' = if(deployVM) {
... } ```
resource virtualMachine 'Microsoft.Compute/virtualMachines@2023-03-01' = if(depl
{ "condition": "[parameters('deployVM')]", "type": "Microsoft.Compute/virtualMachines",
- "apiVersion": "2023-03-01",
+ "apiVersion": "2024-03-01",
... } ]
For Bicep, you can set an explicit dependency but this approach isn't recommende
The following shows a network interface with an implicit dependency on a network security group. It references the network security group with `netSecurityGroup.id`. ```bicep
-resource netSecurityGroup 'Microsoft.Network/networkSecurityGroups@2022-11-01' = {
+resource netSecurityGroup 'Microsoft.Network/networkSecurityGroups@2023-11-01' = {
... }
-resource nic1 'Microsoft.Network/networkInterfaces@2022-11-01' = {
+resource nic1 'Microsoft.Network/networkInterfaces@2023-11-01' = {
name: nic1Name location: location properties: {
storageAccount.properties.primaryEndpoints.blob
To get a property from an existing resource that isn't deployed in the template: ```bicep
-resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' existing = {
+resource storageAccount 'Microsoft.Storage/storageAccounts@2023-04-01' existing = {
name: storageAccountName }
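For reference, a minimal sketch of the implicit-dependency pattern described above; the `subnetId` parameter is a placeholder for an existing subnet resource ID and isn't part of the original snippet:

```bicep
param location string = resourceGroup().location
param nic1Name string = 'nic-1'
param subnetId string

resource netSecurityGroup 'Microsoft.Network/networkSecurityGroups@2023-11-01' = {
  name: 'example-nsg'
  location: location
}

resource nic1 'Microsoft.Network/networkInterfaces@2023-11-01' = {
  name: nic1Name
  location: location
  properties: {
    ipConfigurations: [
      {
        name: 'ipconfig1'
        properties: {
          privateIPAllocationMethod: 'Dynamic'
          subnet: {
            id: subnetId
          }
        }
      }
    ]
    networkSecurityGroup: {
      // Referencing the symbolic name creates an implicit dependency; no dependsOn entry is needed.
      id: netSecurityGroup.id
    }
  }
}
```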
azure-resource-manager Conditional Resource Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/conditional-resource-deployment.md
Title: Conditional deployment with Bicep
description: Describes how to conditionally deploy a resource in Bicep. Previously updated : 03/20/2024 Last updated : 07/11/2024 # Conditional deployments in Bicep with the if expression
In Bicep, you can conditionally deploy a resource by passing in a parameter that
```bicep param deployZone bool
-resource dnsZone 'Microsoft.Network/dnszones@2018-05-01' = if (deployZone) {
+resource dnsZone 'Microsoft.Network/dnsZones@2023-07-01-preview' = if (deployZone) {
name: 'myZone' location: 'global' }
param location string = resourceGroup().location
]) param newOrExisting string = 'new'
-resource saNew 'Microsoft.Storage/storageAccounts@2022-09-01' = if (newOrExisting == 'new') {
+resource saNew 'Microsoft.Storage/storageAccounts@2023-04-01' = if (newOrExisting == 'new') {
name: storageAccountName location: location sku: {
resource saNew 'Microsoft.Storage/storageAccounts@2022-09-01' = if (newOrExistin
kind: 'StorageV2' }
-resource saExisting 'Microsoft.Storage/storageAccounts@2022-09-01' existing = if (newOrExisting == 'existing') {
+resource saExisting 'Microsoft.Storage/storageAccounts@2023-04-01' existing = if (newOrExisting == 'existing') {
name: storageAccountName }
param vmName string
param location string param logAnalytics string = ''
-resource vmName_omsOnboarding 'Microsoft.Compute/virtualMachines/extensions@2023-03-01' = if (!empty(logAnalytics)) {
+resource vmName_omsOnboarding 'Microsoft.Compute/virtualMachines/extensions@2024-03-01' = if (!empty(logAnalytics)) {
name: '${vmName}/omsOnboarding' location: location properties: {
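Putting the truncated `new`/`existing` fragments above together, a minimal end-to-end sketch might look like this (the output name is illustrative):

```bicep
param storageAccountName string = 'storage${uniqueString(resourceGroup().id)}'
param location string = resourceGroup().location

@allowed([
  'new'
  'existing'
])
param newOrExisting string = 'new'

resource saNew 'Microsoft.Storage/storageAccounts@2023-04-01' = if (newOrExisting == 'new') {
  name: storageAccountName
  location: location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}

resource saExisting 'Microsoft.Storage/storageAccounts@2023-04-01' existing = if (newOrExisting == 'existing') {
  name: storageAccountName
}

// Read the endpoint from whichever branch applies.
output blobEndpoint string = ((newOrExisting == 'new') ? saNew.properties.primaryEndpoints.blob : saExisting.properties.primaryEndpoints.blob)
```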
azure-resource-manager Create Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/create-resource-group.md
targetScope='subscription'
param resourceGroupName string param resourceGroupLocation string
-resource newRG 'Microsoft.Resources/resourceGroups@2022-09-01' = {
+resource newRG 'Microsoft.Resources/resourceGroups@2024-03-01' = {
name: resourceGroupName location: resourceGroupLocation }
param resourceGroupLocation string
param storageName string param storageLocation string
-resource newRG 'Microsoft.Resources/resourceGroups@2022-09-01' = {
+resource newRG 'Microsoft.Resources/resourceGroups@2024-03-01' = {
name: resourceGroupName location: resourceGroupLocation }
The module uses a Bicep file named **storage.bicep** with the following contents
param storageLocation string param storageName string
-resource storageAcct 'Microsoft.Storage/storageAccounts@2022-09-01' = {
+resource storageAcct 'Microsoft.Storage/storageAccounts@2023-04-01' = {
name: storageName location: storageLocation sku: {
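A minimal sketch of how the pieces above fit together, assuming the `storage.bicep` module mentioned in the article sits next to the main file (the module deployment name is illustrative):

```bicep
targetScope = 'subscription'

param resourceGroupName string
param resourceGroupLocation string
param storageName string
param storageLocation string

resource newRG 'Microsoft.Resources/resourceGroups@2024-03-01' = {
  name: resourceGroupName
  location: resourceGroupLocation
}

// Deploy the storage module into the resource group created above.
module storageAcct 'storage.bicep' = {
  name: 'storageModule'
  scope: newRG
  params: {
    storageLocation: storageLocation
    storageName: storageName
  }
}
```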
azure-resource-manager Decompile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/decompile.md
Suppose you have the following ARM template:
"resources": [ { "type": "Microsoft.Storage/storageAccounts",
- "apiVersion": "2019-06-01",
+ "apiVersion": "2023-04-01",
"name": "[variables('storageAccountName')]", "location": "[parameters('location')]", "sku": {
param storageAccountType string = 'Standard_LRS'
@description('Location for all resources.') param location string = resourceGroup().location
-var storageAccountName_var = 'store${uniqueString(resourceGroup().id)}'
+var storageAccountName = 'store${uniqueString(resourceGroup().id)}'
-resource storageAccountName 'Microsoft.Storage/storageAccounts@2019-06-01' = {
- name: storageAccountName_var
+resource storageAccount 'Microsoft.Storage/storageAccounts@2023-04-01' = {
+ name: storageAccountName
location: location sku: { name: storageAccountType
resource storageAccountName 'Microsoft.Storage/storageAccounts@2019-06-01' = {
properties: {} }
-output storageAccountName string = storageAccountName_var
+output storageAccountName string = storageAccountName
``` The decompiled file works, but it has some names that you might want to change. The variable `var storageAccountName_var` has an unusual naming convention. Let's change it to:
To rename across the file, right-click the name, and then select **Rename symbol
The resource has a symbolic name that you might want to change. Instead of `storageAccountName` for the symbolic name, use `exampleStorage`. ```bicep
-resource exampleStorage 'Microsoft.Storage/storageAccounts@2019-06-01' = {
+resource exampleStorage 'Microsoft.Storage/storageAccounts@2023-04-01' = {
``` The complete file is:
param location string = resourceGroup().location
var uniqueStorageName = 'store${uniqueString(resourceGroup().id)}'
-resource exampleStorage 'Microsoft.Storage/storageAccounts@2019-06-01' = {
+resource exampleStorage 'Microsoft.Storage/storageAccounts@2023-04-01' = {
name: uniqueStorageName location: location sku: {
azure-resource-manager Deploy To Management Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-to-management-group.md
To deploy resources to the target management group, add those resources with the
targetScope = 'managementGroup' // policy definition created in the management group
-resource policyDefinition 'Microsoft.Authorization/policyDefinitions@2021-06-01' = {
+resource policyDefinition 'Microsoft.Authorization/policyDefinitions@2023-04-01' = {
... } ```
targetScope = 'managementGroup'
param mgName string = 'mg-${uniqueString(newGuid())}'
-resource newMG 'Microsoft.Management/managementGroups@2021-04-01' = {
+resource newMG 'Microsoft.Management/managementGroups@2023-04-01' = {
scope: tenant() name: mgName properties: {}
targetScope = 'managementGroup'
param mgName string = 'mg-${uniqueString(newGuid())}'
-resource newMG 'Microsoft.Management/managementGroups@2021-04-01' = {
+resource newMG 'Microsoft.Management/managementGroups@2023-04-01' = {
scope: tenant() name: mgName properties: {
param allowedLocations array = [
'australiacentral' ]
-resource policyDefinition 'Microsoft.Authorization/policyDefinitions@2021-06-01' = {
+resource policyDefinition 'Microsoft.Authorization/policyDefinitions@2023-04-01' = {
name: 'locationRestriction' properties: { policyType: 'Custom'
resource policyDefinition 'Microsoft.Authorization/policyDefinitions@2021-06-01'
} }
-resource policyAssignment 'Microsoft.Authorization/policyAssignments@2022-06-01' = {
+resource policyAssignment 'Microsoft.Authorization/policyAssignments@2024-04-01' = {
name: 'locationAssignment' properties: { policyDefinitionId: policyDefinition.id
azure-resource-manager Deploy To Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-to-resource-group.md
Title: Use Bicep to deploy resources to resource groups
description: Describes how to deploy resources in a Bicep file. It shows how to target more than one resource group. Previously updated : 03/20/2024 Last updated : 07/11/2024 # Resource group deployments with Bicep files
To deploy resources to the target resource group, add those resources to the Bic
```bicep // resource deployed to target resource group
-resource exampleResource 'Microsoft.Storage/storageAccounts@2019-06-01' = {
+resource exampleResource 'Microsoft.Storage/storageAccounts@2023-04-01' = {
... } ```
Instead of using a module, you can set the scope to `tenant()` for some resource
param mgName string = 'mg-${uniqueString(newGuid())}' // ManagementGroup deployed at tenant
-resource managementGroup 'Microsoft.Management/managementGroups@2020-05-01' = {
+resource managementGroup 'Microsoft.Management/managementGroups@2023-04-01' = {
scope: tenant() name: mgName properties: {}
Both modules use the same Bicep file named **storage.bicep**.
param storageLocation string param storageName string
-resource storageAcct 'Microsoft.Storage/storageAccounts@2019-06-01' = {
+resource storageAcct 'Microsoft.Storage/storageAccounts@2023-04-01' = {
name: storageName location: storageLocation sku: {
azure-resource-manager Deploy To Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-to-subscription.md
To deploy resources to the target subscription, add those resources with the `re
targetScope = 'subscription' // resource group created in target subscription
-resource exampleResource 'Microsoft.Resources/resourceGroups@2022-09-01' = {
+resource exampleResource 'Microsoft.Resources/resourceGroups@2024-03-01' = {
... } ```
targetScope = 'subscription'
param mgName string = 'mg-${uniqueString(newGuid())}' // management group created at tenant
-resource managementGroup 'Microsoft.Management/managementGroups@2021-04-01' = {
+resource managementGroup 'Microsoft.Management/managementGroups@2023-04-01' = {
scope: tenant() name: mgName properties: {}
param policyDefinitionID string
param policyName string param policyParameters object = {}
-resource policyAssign 'Microsoft.Authorization/policyAssignments@2022-06-01' = {
+resource policyAssign 'Microsoft.Authorization/policyAssignments@2024-04-01' = {
name: policyName properties: { policyDefinitionId: policyDefinitionID
You can [define](../../governance/policy/concepts/definition-structure.md) and a
```bicep targetScope = 'subscription'
-resource locationPolicy 'Microsoft.Authorization/policyDefinitions@2021-06-01' = {
+resource locationPolicy 'Microsoft.Authorization/policyDefinitions@2023-04-01' = {
name: 'locationpolicy' properties: { policyType: 'Custom'
resource locationPolicy 'Microsoft.Authorization/policyDefinitions@2021-06-01' =
} }
-resource locationRestrict 'Microsoft.Authorization/policyAssignments@2022-06-01' = {
+resource locationRestrict 'Microsoft.Authorization/policyAssignments@2024-04-01' = {
name: 'allowedLocation' properties: { policyDefinitionId: locationPolicy.id
param roleAssignmentName string = guid(principalId, roleDefinitionId, resourceGr
var roleID = '/subscriptions/${subscription().subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/${roleDefinitionId}'
-resource newResourceGroup 'Microsoft.Resources/resourceGroups@2022-09-01' = {
+resource newResourceGroup 'Microsoft.Resources/resourceGroups@2024-03-01' = {
name: resourceGroupName location: resourceGroupLocation properties: {}
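For reference, a compact subscription-scope sketch that combines a custom policy definition with its assignment, in the spirit of the truncated snippets above; the policy rule shown here is illustrative:

```bicep
targetScope = 'subscription'

resource locationPolicy 'Microsoft.Authorization/policyDefinitions@2023-04-01' = {
  name: 'locationpolicy'
  properties: {
    policyType: 'Custom'
    policyRule: {
      if: {
        field: 'location'
        equals: 'westus'
      }
      then: {
        effect: 'deny'
      }
    }
  }
}

resource locationRestrict 'Microsoft.Authorization/policyAssignments@2024-04-01' = {
  name: 'allowedLocation'
  properties: {
    // Referencing the symbolic name links the assignment to the definition above.
    policyDefinitionId: locationPolicy.id
  }
}
```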
azure-resource-manager Deploy To Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-to-tenant.md
Title: Use Bicep to deploy resources to tenant
description: Describes how to deploy resources at the tenant scope in a Bicep file. Previously updated : 06/23/2023 Last updated : 07/11/2024 # Tenant deployments with Bicep file
Resources defined within the Bicep file are applied to the tenant.
targetScope = 'tenant' // create resource at tenant
-resource mgName_resource 'Microsoft.Management/managementGroups@2021-04-01' = {
+resource mgName_resource 'Microsoft.Management/managementGroups@2023-04-01' = {
... } ```
The following template creates a management group.
targetScope = 'tenant' param mgName string = 'mg-${uniqueString(newGuid())}'
-resource mgName_resource 'Microsoft.Management/managementGroups@2021-04-01' = {
+resource mgName_resource 'Microsoft.Management/managementGroups@2023-04-01' = {
name: mgName properties: {} }
azure-resource-manager Deployment Script Bicep Configure Dev https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deployment-script-bicep-configure-dev.md
Title: Configure development environment for deployment scripts in Bicep | Microsoft Docs description: Configure development environment for deployment scripts in Bicep. Previously updated : 12/13/2023 Last updated : 07/11/2024 ms.devlang: azurecli
var fileShareName = '${projectName}share'
var containerGroupName = '${projectName}cg' var containerName = '${projectName}container'
-resource storageAccount 'Microsoft.Storage/storageAccounts@2023-01-01' = {
+resource storageAccount 'Microsoft.Storage/storageAccounts@2023-04-01' = {
name: storageAccountName location: location sku: {
resource storageAccount 'Microsoft.Storage/storageAccounts@2023-01-01' = {
} }
-resource fileshare 'Microsoft.Storage/storageAccounts/fileServices/shares@2023-01-01' = {
+resource fileshare 'Microsoft.Storage/storageAccounts/fileServices/shares@2023-04-01' = {
name: '${storageAccountName}/default/${fileShareName}' dependsOn: [ storageAccount
azure-resource-manager Existing Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/existing-resource.md
Title: Reference existing resource in Bicep
description: Describes how to reference a resource that already exists. Previously updated : 06/23/2023 Last updated : 07/11/2024 # Existing resources in Bicep
The resource isn't redeployed when referenced with the `existing` keyword.
The following example gets an existing storage account in the same resource group as the current deployment. Notice that you provide only the name of the existing resource. The properties are available through the symbolic name. ```bicep
-resource stg 'Microsoft.Storage/storageAccounts@2022-09-01' existing = {
+resource stg 'Microsoft.Storage/storageAccounts@2023-04-01' existing = {
name: 'examplestorage' }
output blobEndpoint string = stg.properties.primaryEndpoints.blob
Set the `scope` property to access a resource in a different scope. The following example references an existing storage account in a different resource group. ```bicep
-resource stg 'Microsoft.Storage/storageAccounts@2022-09-01' existing = {
+resource stg 'Microsoft.Storage/storageAccounts@2023-04-01' existing = {
name: 'examplestorage' scope: resourceGroup(exampleRG) }
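A minimal sketch combining both patterns, assuming `exampleRG` names a resource group you have permission to read:

```bicep
param exampleRG string = 'otherResourceGroup'

resource stg 'Microsoft.Storage/storageAccounts@2023-04-01' existing = {
  name: 'examplestorage'
  // scope points the reference at a different resource group; nothing is redeployed.
  scope: resourceGroup(exampleRG)
}

output blobEndpoint string = stg.properties.primaryEndpoints.blob
```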
azure-resource-manager File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/file.md
Title: Bicep file structure and syntax
description: Describes the structure and properties of a Bicep file using declarative syntax. Previously updated : 06/03/2024 Last updated : 07/11/2024 # Understand the structure and syntax of Bicep files
param location string = resourceGroup().location
var uniqueStorageName = '${storagePrefix}${uniqueString(resourceGroup().id)}'
-resource stg 'Microsoft.Storage/storageAccounts@2022-09-01' = {
+resource stg 'Microsoft.Storage/storageAccounts@2023-04-01' = {
name: uniqueStorageName location: location sku: {
param storageAccountConfig storageAccountConfigType = {
sku: 'Standard_LRS' }
-resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = {
+resource storageAccount 'Microsoft.Storage/storageAccounts@2023-04-01' = {
name: storageAccountConfig.name location: location sku: {
var uniqueStorageName = '${storagePrefix}${uniqueString(resourceGroup().id)}'
Apply this variable wherever you need the complex expression. ```bicep
-resource stg 'Microsoft.Storage/storageAccounts@2019-04-01' = {
+resource stg 'Microsoft.Storage/storageAccounts@2023-04-01' = {
name: uniqueStorageName ```
Use the `resource` keyword to define a resource to deploy. Your resource declara
The resource declaration includes the resource type and API version. Within the body of the resource declaration, include properties that are specific to the resource type. ```bicep
-resource stg 'Microsoft.Storage/storageAccounts@2019-06-01' = {
+resource stg 'Microsoft.Storage/storageAccounts@2023-04-01' = {
name: uniqueStorageName location: location sku: {
By default, resources are deployed in parallel. When you add the `batchSize(int)
```bicep @batchSize(3)
-resource storageAccountResources 'Microsoft.Storage/storageAccounts@2019-06-01' = [for storageName in storageAccounts: {
+resource storageAccountResources 'Microsoft.Storage/storageAccounts@2023-04-01' = [for storageName in storageAccounts: {
... }] ```
You can add a resource or module to your Bicep file that is conditionally deploy
```bicep param deployZone bool
-resource dnsZone 'Microsoft.Network/dnszones@2018-05-01' = if (deployZone) {
+resource dnsZone 'Microsoft.Network/dnsZones@2023-07-01-preview' = if (deployZone) {
name: 'myZone' location: 'global' }
Spaces and tabs are ignored when authoring Bicep files.
Bicep is newline sensitive. For example: ```bicep
-resource sa 'Microsoft.Storage/storageAccounts@2019-06-01' = if (newOrExisting == 'new') {
+resource sa 'Microsoft.Storage/storageAccounts@2023-04-01' = if (newOrExisting == 'new') {
... } ```
resource sa 'Microsoft.Storage/storageAccounts@2019-06-01' = if (newOrExisting =
Can't be written as: ```bicep
-resource sa 'Microsoft.Storage/storageAccounts@2019-06-01' =
+resource sa 'Microsoft.Storage/storageAccounts@2023-04-01' =
if (newOrExisting == 'new') { ... }
The following example shows a single-line comment.
```bicep // This is your primary NIC.
-resource nic1 'Microsoft.Network/networkInterfaces@2020-06-01' = {
- ...
+resource nic1 'Microsoft.Network/networkInterfaces@2023-11-01' = {
+ ...
} ```
azure-resource-manager Linter Rule Explicit Values For Loc Params https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-explicit-values-for-loc-params.md
Title: Linter rule - use explicit values for module location parameters
description: Linter rule - use explicit values for module location parameters. Previously updated : 03/20/2024 Last updated : 07/11/2024 # Linter rule - use explicit values for module location parameters
module m1 'module1.bicep' = {
name: 'm1' }
-resource storageaccount 'Microsoft.Storage/storageAccounts@2022-09-01' = {
+resource storageaccount 'Microsoft.Storage/storageAccounts@2024-03-01' = {
name: 'storageaccount' location: location kind: 'StorageV2'
resource storageaccount 'Microsoft.Storage/storageAccounts@2022-09-01' = {
```bicep param location string = resourceGroup().location
-resource stg 'Microsoft.Storage/storageAccounts@2022-09-01' = {
+resource stg 'Microsoft.Storage/storageAccounts@2024-03-01' = {
name: 'stg' location: location kind: 'StorageV2'
module m1 'module1.bicep' = {
} }
-resource storageaccount 'Microsoft.Storage/storageAccounts@2022-09-01' = {
+resource storageaccount 'Microsoft.Storage/storageAccounts@2024-03-01' = {
name: 'storageaccount' location: location kind: 'StorageV2'
azure-resource-manager Linter Rule Nested Deployment Template Scoping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-nested-deployment-template-scoping.md
Title: Linter rule - nested deployment template scoping
description: Linter rule - nested deployment template scoping Previously updated : 03/20/2024 Last updated : 07/11/2024 # Linter rule - nested deployment template scoping
The following example fails this test because `fizz` is defined in the parent te
```bicep var fizz = 'buzz'
-resource nested 'Microsoft.Resources/deployments@2020-10-01' = {
+resource nested 'Microsoft.Resources/deployments@2024-03-01' = {
name: 'name' properties: { mode: 'Incremental'
resource nested 'Microsoft.Resources/deployments@2020-10-01' = {
contentVersion: '1.0.0.0' resources: [ {
- apiVersion: '2022-09-01'
+ apiVersion: '2024-03-01'
type: 'Microsoft.Resources/tags' name: 'default' properties: {
azure-resource-manager Linter Rule No Deployments Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-no-deployments-resources.md
Title: Linter rule - no deployments resources
description: Linter rule - no deployments resources Previously updated : 03/20/2024 Last updated : 07/11/2024 # Linter rule - no deployments resources
In ARM templates, you can reuse or modularize a template through nesting or link
"resources": [ { "type": "Microsoft.Storage/storageAccounts",
- "apiVersion": "2023-01-01",
+ "apiVersion": "2023-04-01",
"name": "[parameters('storageAccountName')]", "location": "[parameters('location')]", "sku": {
In Bicep, you can still use the `Microsoft.Resources/deployments` resource for n
param storageAccountName string = 'store${uniqueString(resourceGroup().id)}' param location string = resourceGroup().location
-resource nestedTemplate1 'Microsoft.Resources/deployments@2023-07-01' = {
+resource nestedTemplate1 'Microsoft.Resources/deployments@2024-03-01' = {
name: 'nestedTemplate1' properties:{ mode: 'Incremental'
resource nestedTemplate1 'Microsoft.Resources/deployments@2023-07-01' = {
resources: [ { type: 'Microsoft.Storage/storageAccounts'
- apiVersion: '2023-01-01'
+ apiVersion: '2023-04-01'
name: storageAccountName location: location sku: {
_nested_nestedTemplate1.bicep_:
param storageAccountName string param location string
-resource storageAccount 'Microsoft.Storage/storageAccounts@2023-01-01' = {
+resource storageAccount 'Microsoft.Storage/storageAccounts@2023-04-01' = {
name: storageAccountName location: location sku: {
_createStorage.json_:
"resources": [ { "type": "Microsoft.Storage/storageAccounts",
- "apiVersion": "2023-01-01",
+ "apiVersion": "2023-04-01",
"name": "[parameters('storageAccountName')]", "location": "[parameters('location')]", "sku": {
azure-resource-manager Linter Rule No Hardcoded Environment Urls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-no-hardcoded-environment-urls.md
Title: Linter rule - no hardcoded environment URL
description: Linter rule - no hardcoded environment URL Previously updated : 03/20/2024 Last updated : 07/11/2024 # Linter rule - no hardcoded environment URL
In some cases, you can fix it by getting a property from a resource you've deplo
param storageAccountName string param location string = resourceGroup().location
-resource sa 'Microsoft.Storage/storageAccounts@2022-09-01' = {
+resource sa 'Microsoft.Storage/storageAccounts@2023-04-01' = {
name: storageAccountName location: location sku: {
azure-resource-manager Linter Rule No Hardcoded Location https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-no-hardcoded-location.md
Title: Linter rule - no hardcoded locations
description: Linter rule - no hardcoded locations Previously updated : 03/20/2024 Last updated : 07/11/2024 # Linter rule - no hardcoded locations
Rather than using a hardcoded string or variable value, use a parameter, the str
The following example fails this test because the resource's `location` property uses a string literal: ```bicep
- resource stg 'Microsoft.Storage/storageAccounts@2021-02-01' = {
+ resource stg 'Microsoft.Storage/storageAccounts@2023-04-01' = {
location: 'westus' } ```
You can fix it by creating a new `location` string parameter (which may optional
```bicep param location string = resourceGroup().location
- resource stg 'Microsoft.Storage/storageAccounts@2021-02-01' = {
+ resource stg 'Microsoft.Storage/storageAccounts@2023-04-01' = {
location: location } ```
The following example fails this test because the resource's `location` property
```bicep var location = 'westus'
- resource stg 'Microsoft.Storage/storageAccounts@2021-02-01' = {
+ resource stg 'Microsoft.Storage/storageAccounts@2023-04-01' = {
location: location } ```
You can fix it by turning the variable into a parameter:
```bicep param location string = 'westus'
- resource stg 'Microsoft.Storage/storageAccounts@2021-02-01' = {
+ resource stg 'Microsoft.Storage/storageAccounts@2023-04-01' = {
location: location } ```
where module1.bicep is:
```bicep param location string
-resource storageaccount 'Microsoft.Storage/storageAccounts@2021-02-01' = {
+resource storageaccount 'Microsoft.Storage/storageAccounts@2023-04-01' = {
name: 'storageaccount' location: location kind: 'StorageV2'
azure-resource-manager Linter Rule No Loc Expr Outside Params https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-no-loc-expr-outside-params.md
Title: Linter rule - no location expressions outside of parameter default values
description: Linter rule - no location expressions outside of parameter default values Previously updated : 03/20/2024 Last updated : 07/11/2024 # Linter rule - no location expressions outside of parameter default values
Template users may have limited access to regions where they can create resource
Best practice suggests that to set your resources' locations, your template should have a string parameter named `location`. If you default the `location` parameter to `resourceGroup().location` or `deployment().location` instead of using these functions elsewhere in the template, users of the template can use the default value when convenient but also specify a different location when needed. ```bicep
-resource storageaccount 'Microsoft.Storage/storageAccounts@2021-02-01' = {
+resource storageaccount 'Microsoft.Storage/storageAccounts@2023-04-01' = {
location: resourceGroup().location } ```
You can fix the failure by creating a `location` property that defaults to `reso
```bicep param location string = resourceGroup().location
-resource storageaccount 'Microsoft.Storage/storageAccounts@2021-02-01' = {
+resource storageaccount 'Microsoft.Storage/storageAccounts@2023-04-01' = {
location: location } ```
azure-resource-manager Linter Rule No Unnecessary Dependson https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-no-unnecessary-dependson.md
Title: Linter rule - no unnecessary dependsOn entries
description: Linter rule - no unnecessary dependsOn entries Previously updated : 03/20/2024 Last updated : 07/11/2024 # Linter rule - no unnecessary dependsOn entries
The following example fails this test because the dependsOn entry `appServicePla
```bicep param location string = resourceGroup().location
-resource appServicePlan 'Microsoft.Web/serverfarms@2022-03-01' = {
+resource appServicePlan 'Microsoft.Web/serverfarms@2023-12-01' = {
name: 'name' location: location sku: {
resource appServicePlan 'Microsoft.Web/serverfarms@2022-03-01' = {
} }
-resource webApplication 'Microsoft.Web/sites@2022-03-01' = {
+resource webApplication 'Microsoft.Web/sites@2023-12-01' = {
name: 'name' location: location properties: {
You can fix it by removing the unnecessary dependsOn entry.
```bicep param location string = resourceGroup().location
-resource appServicePlan 'Microsoft.Web/serverfarms@2022-03-01' = {
+resource appServicePlan 'Microsoft.Web/serverfarms@2023-12-01' = {
name: 'name' location: location sku: {
resource appServicePlan 'Microsoft.Web/serverfarms@2022-03-01' = {
} }
-resource webApplication 'Microsoft.Web/sites@2022-03-01' = {
+resource webApplication 'Microsoft.Web/sites@2023-12-01' = {
name: 'name' location: location properties: {
azure-resource-manager Linter Rule No Unused Existing Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-no-unused-existing-resources.md
Title: Linter rule - no unused existing resources
description: Linter rule - no unused existing resources Previously updated : 03/20/2024 Last updated : 07/11/2024 # Linter rule - no unused existing resources
To reduce confusion in your template, delete any [existing resources](./existing
The following example fails this test because the existing resource **stg** is declared but never used: ```bicep
-resource stg 'Microsoft.Storage/storageAccounts@2022-09-01' existing = {
+resource stg 'Microsoft.Storage/storageAccounts@2023-04-01' existing = {
name: 'examplestorage' } ```
azure-resource-manager Linter Rule Outputs Should Not Contain Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-outputs-should-not-contain-secrets.md
Title: Linter rule - outputs should not contain secrets
description: Linter rule - outputs should not contain secrets Previously updated : 03/20/2024 Last updated : 07/11/2024 # Linter rule - outputs should not contain secrets
The following example fails because it uses a [`list*`](./bicep-functions-resour
```bicep param storageName string
-resource stg 'Microsoft.Storage/storageAccounts@2021-04-01' existing = {
+resource stg 'Microsoft.Storage/storageAccounts@2023-04-01' existing = {
name: storageName }
azure-resource-manager Linter Rule Protect Commandtoexecute Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-protect-commandtoexecute-secrets.md
Title: Linter rule - use protectedSettings for commandToExecute secrets
description: Linter rule - use protectedSettings for commandToExecute secrets Previously updated : 03/20/2024 Last updated : 07/11/2024 # Linter rule - use protectedSettings for commandToExecute secrets
param location string
param fileUris string param storageAccountName string
-resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' existing = {
+resource storageAccount 'Microsoft.Storage/storageAccounts@2023-04-01' existing = {
name: storageAccountName }
-resource customScriptExtension 'Microsoft.HybridCompute/machines/extensions@2023-03-15-preview' = {
+resource customScriptExtension 'Microsoft.HybridCompute/machines/extensions@2023-10-03-preview' = {
name: '${vmName}/CustomScriptExtension' location: location properties: {
param location string
param fileUris string param storageAccountName string
-resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' existing = {
+resource storageAccount 'Microsoft.Storage/storageAccounts@2023-04-01' existing = {
name: storageAccountName }
-resource customScriptExtension 'Microsoft.HybridCompute/machines/extensions@2023-03-15-preview' = {
+resource customScriptExtension 'Microsoft.HybridCompute/machines/extensions@2023-10-03-preview' = {
name: '${vmName}/CustomScriptExtension' location: location properties: {
azure-resource-manager Linter Rule Secure Params In Nested Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-secure-params-in-nested-deploy.md
Title: Linter rule - secure params in nested deploy
description: Linter rule - secure params in nested deploy Previously updated : 03/20/2024 Last updated : 07/11/2024 # Linter rule - secure params in nested deploy
The following example fails this test because a secure parameter is referenced i
@secure() param secureValue string
-resource nested 'Microsoft.Resources/deployments@2021-04-01' = {
+resource nested 'Microsoft.Resources/deployments@2024-03-01' = {
name: 'nested' properties: { mode: 'Incremental'
resource nested 'Microsoft.Resources/deployments@2021-04-01' = {
{ name: 'outerImplicit' type: 'Microsoft.Network/networkSecurityGroups'
- apiVersion: '2019-11-01'
+ apiVersion: '2023-11-01'
location: '[resourceGroup().location]' properties: { securityRules: [
You can fix it by setting the deployment's properties.expressionEvaluationOption
@secure() param secureValue string
-resource nested 'Microsoft.Resources/deployments@2021-04-01' = {
+resource nested 'Microsoft.Resources/deployments@2024-03-01' = {
name: 'nested' properties: { mode: 'Incremental'
resource nested 'Microsoft.Resources/deployments@2021-04-01' = {
{ name: 'outerImplicit' type: 'Microsoft.Network/networkSecurityGroups'
- apiVersion: '2019-11-01'
+ apiVersion: '2023-11-01'
location: '[resourceGroup().location]' properties: { securityRules: [
resource nested 'Microsoft.Resources/deployments@2021-04-01' = {
} } }- ``` ## Next steps
azure-resource-manager Linter Rule Simplify Interpolation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-simplify-interpolation.md
Title: Linter rule - simplify interpolation
description: Linter rule - simplify interpolation Previously updated : 03/20/2024 Last updated : 07/11/2024 # Linter rule - simplify interpolation
The following example fails this test because it just references a parameter.
```bicep param AutomationAccountName string
-resource AutomationAccount 'Microsoft.Automation/automationAccounts@2022-08-08' = {
+resource AutomationAccount 'Microsoft.Automation/automationAccounts@2023-11-01' = {
name: '${AutomationAccountName}' ... }
You can fix it by removing the string interpolation syntax.
```bicep param AutomationAccountName string
-resource AutomationAccount 'Microsoft.Automation/automationAccounts@2022-08-08' = {
+resource AutomationAccount 'Microsoft.Automation/automationAccounts@2023-11-01' = {
name: AutomationAccountName ... }
azure-resource-manager Linter Rule Simplify Json Null https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-simplify-json-null.md
Title: Linter rule - simplify JSON null
description: Linter rule - simplify JSON null Previously updated : 03/20/2024 Last updated : 07/11/2024 # Linter rule - simplify JSON null
param availabilityZones array = [
'3' ]
-resource apiManagementService 'Microsoft.ApiManagement/service@2022-08-01' = {
+resource apiManagementService 'Microsoft.ApiManagement/service@2023-05-01-preview' = {
name: apiManagementServiceName location: location zones: ((length(availabilityZones) == 0) ? json('null') : availabilityZones)
param availabilityZones array = [
'3' ]
-resource apiManagementService 'Microsoft.ApiManagement/service@2022-08-01' = {
+resource apiManagementService 'Microsoft.ApiManagement/service@2023-05-01-preview' = {
name: apiManagementServiceName location: location zones: ((length(availabilityZones) == 0) ? null : availabilityZones)
azure-resource-manager Linter Rule Use Parent Property https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-use-parent-property.md
Title: Linter rule - use parent property
description: Linter rule - use parent property Previously updated : 03/20/2024 Last updated : 07/11/2024 # Linter rule - use parent property
The following example fails this test because of the name values for `service` a
```bicep param location string = resourceGroup().location
-resource storage 'Microsoft.Storage/storageAccounts@2021-02-01' = {
+resource storage 'Microsoft.Storage/storageAccounts@2023-04-01' = {
name: 'examplestorage' location: location kind: 'StorageV2'
resource storage 'Microsoft.Storage/storageAccounts@2021-02-01' = {
} }
-resource service 'Microsoft.Storage/storageAccounts/fileServices@2021-02-01' = {
+resource service 'Microsoft.Storage/storageAccounts/fileServices@2023-04-01' = {
name: 'examplestorage/default' dependsOn: [ storage ] }
-resource share 'Microsoft.Storage/storageAccounts/fileServices/shares@2021-02-01' = {
+resource share 'Microsoft.Storage/storageAccounts/fileServices/shares@2023-04-01' = {
name: 'examplestorage/default/exampleshare' dependsOn: [ service
You can fix the problem by using the `parent` property:
```bicep param location string = resourceGroup().location
-resource storage 'Microsoft.Storage/storageAccounts@2021-02-01' = {
+resource storage 'Microsoft.Storage/storageAccounts@2023-04-01' = {
name: 'examplestorage' location: location kind: 'StorageV2'
resource storage 'Microsoft.Storage/storageAccounts@2021-02-01' = {
} }
-resource service 'Microsoft.Storage/storageAccounts/fileServices@2021-02-01' = {
+resource service 'Microsoft.Storage/storageAccounts/fileServices@2023-04-01' = {
parent: storage name: 'default' }
-resource share 'Microsoft.Storage/storageAccounts/fileServices/shares@2021-02-01' = {
+resource share 'Microsoft.Storage/storageAccounts/fileServices/shares@2023-04-01' = {
parent: service name: 'exampleshare' }
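The corrected pattern, written out in full as a sketch (the resource names are the article's examples; the SKU is illustrative):

```bicep
param location string = resourceGroup().location

resource storage 'Microsoft.Storage/storageAccounts@2023-04-01' = {
  name: 'examplestorage'
  location: location
  kind: 'StorageV2'
  sku: {
    name: 'Standard_LRS'
  }
}

// With the parent property, the child names no longer repeat the parent segments
// and no explicit dependsOn entries are needed.
resource service 'Microsoft.Storage/storageAccounts/fileServices@2023-04-01' = {
  parent: storage
  name: 'default'
}

resource share 'Microsoft.Storage/storageAccounts/fileServices/shares@2023-04-01' = {
  parent: service
  name: 'exampleshare'
}
```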
azure-resource-manager Linter Rule Use Resource Symbol Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-use-resource-symbol-reference.md
Title: Linter rule - use resource symbol reference
description: Linter rule - use resource symbol reference Previously updated : 03/20/2024 Last updated : 07/11/2024 # Linter rule - use resource symbol reference
param location string = resourceGroup().location
param storageAccountName string = uniqueString(resourceGroup().id)
-resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' existing = {
+resource storageAccount 'Microsoft.Storage/storageAccounts@2023-04-01' existing = {
name: storageAccountName }
-resource cluster 'Microsoft.HDInsight/clusters@2021-06-01' = {
+resource cluster 'Microsoft.HDInsight/clusters@2023-08-15-preview' = {
name: clusterName location: location properties: {
resource cluster 'Microsoft.HDInsight/clusters@2021-06-01' = {
storageProfile: { storageaccounts: [ {
- name: replace(replace(reference(storageAccount.id, '2022-09-01').primaryEndpoints.blob, 'https://', ''), '/', '')
+ name: replace(replace(reference(storageAccount.id, '2023-04-01').primaryEndpoints.blob, 'https://', ''), '/', '')
isDefault: true container: clusterName
- key: listKeys(storageAccount.id, '2022-09-01').keys[0].value
+ key: listKeys(storageAccount.id, '2023-04-01').keys[0].value
} ] }
param location string = resourceGroup().location
param storageAccountName string = uniqueString(resourceGroup().id)
-resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' existing = {
+resource storageAccount 'Microsoft.Storage/storageAccounts@2023-04-01' existing = {
name: storageAccountName }
-resource cluster 'Microsoft.HDInsight/clusters@2021-06-01' = {
+resource cluster 'Microsoft.HDInsight/clusters@2023-08-15-preview' = {
name: clusterName location: location properties: {
azure-resource-manager Linter Rule Use Secure Value For Secure Inputs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-use-secure-value-for-secure-inputs.md
Title: Linter rule - adminPassword should be assigned a secure value
description: Linter rule - adminPassword should be assigned a secure value. Previously updated : 05/06/2024 Last updated : 07/11/2024 # Linter rule - adminPassword should be assigned a secure value.
Assign a secure value to the property with the property path `properties.osProfi
The following examples fail this test because the `adminPassword` is not a secure value. ```bicep
-resource ubuntuVM 'Microsoft.Compute/virtualMachineScaleSets@2023-09-01' = {
+resource ubuntuVM 'Microsoft.Compute/virtualMachineScaleSets@2024-03-01' = {
name: 'name' location: 'West US' properties: {
resource ubuntuVM 'Microsoft.Compute/virtualMachineScaleSets@2023-09-01' = {
``` ```bicep
-resource ubuntuVM 'Microsoft.Compute/virtualMachines@2023-09-01' = {
+resource ubuntuVM 'Microsoft.Compute/virtualMachines@2024-03-01' = {
name: 'name' location: 'West US' properties: {
resource ubuntuVM 'Microsoft.Compute/virtualMachines@2023-09-01' = {
```bicep param adminPassword string
-resource ubuntuVM 'Microsoft.Compute/virtualMachines@2023-09-01' = {
+resource ubuntuVM 'Microsoft.Compute/virtualMachines@2024-03-01' = {
name: 'name' location: 'West US' properties: {
param adminPassword string
param adminUsername string param location string = resourceGroup().location
-resource ubuntuVM 'Microsoft.Compute/virtualMachines@2023-09-01' = {
+resource ubuntuVM 'Microsoft.Compute/virtualMachines@2024-03-01' = {
name: 'name' location: location properties: {
azure-resource-manager Linter Rule Use Stable Resource Identifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-use-stable-resource-identifier.md
Title: Linter rule - use stable resource identifier
description: Linter rule - use stable resource identifier Previously updated : 03/20/2024 Last updated : 07/11/2024 # Linter rule - use stable resource identifier
The following example fails this test because `utcNow()` is used in the resource
param location string = resourceGroup().location param time string = utcNow()
-resource sa 'Microsoft.Storage/storageAccounts@2021-09-01' = {
+resource sa 'Microsoft.Storage/storageAccounts@2023-04-01' = {
name: 'store${toLower(time)}' location: location sku: {
You can fix it by removing the `utcNow()` function from the example.
```bicep param location string = resourceGroup().location
-resource sa 'Microsoft.Storage/storageAccounts@2021-09-01' = {
+resource sa 'Microsoft.Storage/storageAccounts@2023-04-01' = {
name: 'store${uniqueString(resourceGroup().id)}' location: location sku: {
azure-resource-manager Linter Rule Use Stable Vm Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-use-stable-vm-image.md
Title: Linter rule - use stable VM image
description: Linter rule - use stable VM image Previously updated : 03/20/2024 Last updated : 07/11/2024 # Linter rule - use stable VM image
The following example fails this test.
```bicep param location string = resourceGroup().location
-resource vm 'Microsoft.Compute/virtualMachines@2023-03-01' = {
+resource vm 'Microsoft.Compute/virtualMachines@2024-03-01' = {
name: 'virtualMachineName' location: location properties: {
You can fix it by using an image that does not contain the string `preview` in t
```bicep param location string = resourceGroup().location
-resource vm 'Microsoft.Compute/virtualMachines@2023-03-01' = {
+resource vm 'Microsoft.Compute/virtualMachines@2024-03-01' = {
name: 'virtualMachineName' location: location properties: {
azure-resource-manager Loops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/loops.md
Title: Iterative loops in Bicep
description: Use loops to iterate over collections in Bicep Previously updated : 11/03/2023 Last updated : 07/11/2024 # Iterative loops in Bicep
The next example creates the number of storage accounts specified in the `storag
param location string = resourceGroup().location param storageCount int = 2
-resource storageAcct 'Microsoft.Storage/storageAccounts@2022-09-01' = [for i in range(0, storageCount): {
+resource storageAcct 'Microsoft.Storage/storageAccounts@2023-04-01' = [for i in range(0, storageCount): {
name: '${i}storage${uniqueString(resourceGroup().id)}' location: location sku: {
param storageNames array = [
'coho' ]
-resource storageAcct 'Microsoft.Storage/storageAccounts@2022-09-01' = [for name in storageNames: {
+resource storageAcct 'Microsoft.Storage/storageAccounts@2023-04-01' = [for name in storageNames: {
name: '${name}${uniqueString(resourceGroup().id)}' location: location sku: {
var storageConfigurations = [
} ]
-resource storageAccountResources 'Microsoft.Storage/storageAccounts@2022-09-01' = [for (config, i) in storageConfigurations: {
+resource storageAccountResources 'Microsoft.Storage/storageAccounts@2023-04-01' = [for (config, i) in storageConfigurations: {
name: '${storageAccountNamePrefix}${config.suffix}${i}' location: resourceGroup().location sku: {
param orgNames array = [
'Coho' ]
-resource nsg 'Microsoft.Network/networkSecurityGroups@2020-06-01' = [for name in orgNames: {
+resource nsg 'Microsoft.Network/networkSecurityGroups@2023-11-01' = [for name in orgNames: {
name: 'nsg-${name}' location: location }]
param nsgValues object = {
} }
-resource nsg 'Microsoft.Network/networkSecurityGroups@2020-06-01' = [for nsg in items(nsgValues): {
+resource nsg 'Microsoft.Network/networkSecurityGroups@2023-11-01' = [for nsg in items(nsgValues): {
name: nsg.value.name location: nsg.value.location }]
module stgModule './storageAccount.bicep' = [for i in range(0, storageCount): if
The next example shows how to apply a condition that is specific to the current element in the array. ```bicep
-resource parentResources 'Microsoft.Example/examples@2020-06-06' = [for parent in parents: if(parent.enabled) {
+resource parentResources 'Microsoft.Example/examples@2024-06-06' = [for parent in parents: if(parent.enabled) {
name: parent.name properties: { children: [for child in parent.children: {
To serially deploy instances of a resource, add the [batchSize decorator](./file
param location string = resourceGroup().location @batchSize(2)
-resource storageAcct 'Microsoft.Storage/storageAccounts@2022-09-01' = [for i in range(0, 4): {
+resource storageAcct 'Microsoft.Storage/storageAccounts@2023-04-01' = [for i in range(0, 4): {
name: '${i}storage${uniqueString(resourceGroup().id)}' location: location sku: {
You can't use a loop for a nested child resource. To create more than one instan
For example, suppose you typically define a file service and file share as nested resources for a storage account. ```bicep
-resource stg 'Microsoft.Storage/storageAccounts@2022-09-01' = {
+resource stg 'Microsoft.Storage/storageAccounts@2023-04-01' = {
name: 'examplestorage' location: resourceGroup().location kind: 'StorageV2'
To create more than one file share, move it outside of the storage account. You
The following example shows how to create a storage account, file service, and more than one file share: ```bicep
-resource stg 'Microsoft.Storage/storageAccounts@2022-09-01' = {
+resource stg 'Microsoft.Storage/storageAccounts@2023-04-01' = {
name: 'examplestorage' location: resourceGroup().location kind: 'StorageV2'
resource stg 'Microsoft.Storage/storageAccounts@2022-09-01' = {
} }
-resource service 'Microsoft.Storage/storageAccounts/fileServices@2021-06-01' = {
+resource service 'Microsoft.Storage/storageAccounts/fileServices@2023-04-01' = {
name: 'default' parent: stg }
-resource share 'Microsoft.Storage/storageAccounts/fileServices/shares@2021-06-01' = [for i in range(0, 3): {
+resource share 'Microsoft.Storage/storageAccounts/fileServices/shares@2023-04-01' = [for i in range(0, 3): {
name: 'exampleshare${i}' parent: service }]
The outputs of the two samples in [Integer index](#integer-index) can be written
param location string = resourceGroup().location param storageCount int = 2
-resource storageAcct 'Microsoft.Storage/storageAccounts@2022-09-01' = [for i in range(0, storageCount): {
+resource storageAcct 'Microsoft.Storage/storageAccounts@2023-04-01' = [for i in range(0, storageCount): {
name: '${i}storage${uniqueString(resourceGroup().id)}' location: location sku: {
This Bicep file is transpiled into the following ARM JSON template that utilizes
"count": "[length(range(0, parameters('storageCount')))]" }, "type": "Microsoft.Storage/storageAccounts",
- "apiVersion": "2022-09-01",
+ "apiVersion": "2023-04-01",
"name": "[format('{0}storage{1}', range(0, parameters('storageCount'))[copyIndex()], uniqueString(resourceGroup().id))]", "location": "[parameters('location')]", "sku": {
azure-resource-manager Msbuild Bicep File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/msbuild-bicep-file.md
Title: Use MSBuild to convert Bicep to JSON description: Use MSBuild to convert a Bicep file to Azure Resource Manager template (ARM template) JSON. Previously updated : 01/31/2024 Last updated : 07/11/2024
You need a Bicep file and a BicepParam file to be converted to JSON.
var storageAccountName = 'storage${uniqueString(resourceGroup().id)}'
- resource storageAccount 'Microsoft.Storage/storageAccounts@2023-01-01' = {
+ resource storageAccount 'Microsoft.Storage/storageAccounts@2023-04-01' = {
name: storageAccountName location: location sku: {
azure-resource-manager Operator Safe Dereference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/operator-safe-dereference.md
Title: Bicep safe-dereference operator
description: Describes Bicep safe-dereference operator. Previously updated : 03/20/2024 Last updated : 07/11/2024 # Bicep safe-dereference operator
param storageAccountSettings array = []
param storageCount int param location string = resourceGroup().location
-resource storage 'Microsoft.Storage/storageAccounts@2022-09-01' = [for i in range(0, storageCount): {
+resource storage 'Microsoft.Storage/storageAccounts@2023-04-01' = [for i in range(0, storageCount): {
name: storageAccountSettings[?i].?name ?? 'defaultname' location: storageAccountSettings[?i].?location ?? location kind: storageAccountSettings[?i].?kind ?? 'StorageV2'
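Expanded into a runnable sketch with illustrative fallback values; `[?i]` and `.?` return null instead of failing when the index or property is missing, so `??` can supply a default:

```bicep
param storageAccountSettings array = []
param storageCount int = 1
param location string = resourceGroup().location

resource storage 'Microsoft.Storage/storageAccounts@2023-04-01' = [for i in range(0, storageCount): {
  // Fallbacks apply whenever the settings array is shorter than storageCount
  // or an individual setting omits a property.
  name: storageAccountSettings[?i].?name ?? 'stg${i}${uniqueString(resourceGroup().id)}'
  location: storageAccountSettings[?i].?location ?? location
  kind: storageAccountSettings[?i].?kind ?? 'StorageV2'
  sku: {
    name: storageAccountSettings[?i].?sku ?? 'Standard_LRS'
  }
}]
```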
azure-resource-manager Operators Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/operators-access.md
Title: Bicep accessor operators
description: Describes Bicep resource access operator and property access operator. Previously updated : 06/23/2023 Last updated : 07/11/2024 # Bicep accessor operators
Two functions - [getSecret](bicep-functions-resource.md#getsecret) and [list*](b
The following example references an existing key vault, then uses `getSecret` to pass a secret to a module. ```bicep
-resource kv 'Microsoft.KeyVault/vaults@2023-02-01' existing = {
+resource kv 'Microsoft.KeyVault/vaults@2023-07-01' existing = {
name: kvName scope: resourceGroup(subscriptionId, kvResourceGroup ) }
Within the parent resource, you reference the nested resource with just the symb
The following example shows how to reference a nested resource from within the parent resource and from outside of the parent resource. ```bicep
-resource demoParent 'demo.Rp/parentType@2023-01-01' = {
+resource demoParent 'demo.Rp/parentType@2024-01-01' = {
name: 'demoParent' location: 'West US'
Output from the example:
Typically, you use the property accessor with a resource deployed in the Bicep file. The following example creates a public IP address and uses property accessors to return a value from the deployed resource. ```bicep
-resource publicIp 'Microsoft.Network/publicIPAddresses@2022-11-01' = {
+resource publicIp 'Microsoft.Network/publicIPAddresses@2023-11-01' = {
name: publicIpResourceName location: location properties: {
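A minimal sketch of the property-accessor pattern, with an illustrative SKU and output:

```bicep
param publicIpResourceName string = 'examplePublicIp'
param location string = resourceGroup().location

resource publicIp 'Microsoft.Network/publicIPAddresses@2023-11-01' = {
  name: publicIpResourceName
  location: location
  sku: {
    name: 'Standard'
  }
  properties: {
    publicIPAllocationMethod: 'Static'
  }
}

// The property accessor (.) reads values from the deployed resource.
output allocationMethod string = publicIp.properties.publicIPAllocationMethod
```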
azure-resource-manager Outputs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/outputs.md
param deployStorage bool = true
param storageName string param location string = resourceGroup().location
-resource myStorageAccount 'Microsoft.Storage/storageAccounts@2019-06-01' = if (deployStorage) {
+resource myStorageAccount 'Microsoft.Storage/storageAccounts@2023-04-01' = if (deployStorage) {
name: storageName location: location kind: 'StorageV2'
param orgNames array = [
'Coho' ]
-resource nsg 'Microsoft.Network/networkSecurityGroups@2020-06-01' = [for name in orgNames: {
+resource nsg 'Microsoft.Network/networkSecurityGroups@2023-11-01' = [for name in orgNames: {
name: 'nsg-${name}' location: nsgLocation }]
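A compact sketch of the conditional-output pattern implied by the truncated snippet above; the ternary guard and output name are illustrative:

```bicep
param deployStorage bool = true
param storageName string
param location string = resourceGroup().location

resource myStorageAccount 'Microsoft.Storage/storageAccounts@2023-04-01' = if (deployStorage) {
  name: storageName
  location: location
  kind: 'StorageV2'
  sku: {
    name: 'Standard_LRS'
  }
}

// Guard the output so it never dereferences a resource that was skipped.
output endpoint string = deployStorage ? myStorageAccount.properties.primaryEndpoints.blob : ''
```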
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/overview.md
Bicep provides the following advantages:
param location string = resourceGroup().location param storageAccountName string = 'toylaunch${uniqueString(resourceGroup().id)}'
- resource storageAccount 'Microsoft.Storage/storageAccounts@2021-06-01' = {
+ resource storageAccount 'Microsoft.Storage/storageAccounts@2023-04-01' = {
name: storageAccountName location: location sku: {
Bicep provides the following advantages:
"resources": [ { "type": "Microsoft.Storage/storageAccounts",
- "apiVersion": "2021-06-01",
+ "apiVersion": "2023-04-01",
"name": "[parameters('storageAccountName')]", "location": "[parameters('location')]", "sku": {
azure-vmware Extended Security Updates Windows Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/extended-security-updates-windows-sql-server.md
For machines that run SQL Server where guest management is enabled, the Azure Ex
- Use Azure Resource Graph queries:
- - You can use the query [VM ESU subscription status](/sql/sql-server/end-of-support/sql-server-extended-security-updates?#view-esu-subscriptions) as an example to show that you can view eligible SQL Server ESU instances and their ESU subscription status.
-
+ - You can use the query [List Arc-enabled SQL Server instances subscribed to ESU](https://learn.microsoft.com/sql/sql-server/azure-arc/manage-configuration?view=sql-server-ver16&tabs=azure&branch=main#list-arc-enabled-sql-server-instances-subscribed-to-esu) as an example to show how you can view eligible SQL Server ESU instances and their ESU subscription status.
+
### Windows Server To enable ESUs for Windows Server environments that run in VMs in Azure VMware Solution, contact [Microsoft Support] for configuration assistance.
chaos-studio Chaos Studio Fault Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-fault-library.md
Agent-based faults are injected into **Azure Virtual Machines** or **Virtual Mac
| Windows | [Network Disconnect (Via Firewall)](#network-disconnect-via-firewall) | Network disruption | | Windows, Linux | [Physical Memory Pressure](#physical-memory-pressure) | Memory capacity loss, resource pressure | | Windows, Linux | [Stop Service](#stop-service) | Service disruption/restart |
-| Windows, Linux | [Time Change](#time-change) | Time synchronization issues |
-| Windows, Linux | [Virtual Memory Pressure](#virtual-memory-pressure) | Memory capacity loss, resource pressure |
+| Windows | [Time Change](#time-change) | Time synchronization issues |
+| Windows | [Virtual Memory Pressure](#virtual-memory-pressure) | Memory capacity loss, resource pressure |
| Linux | [Arbitrary Stress-ng Stressor](#arbitrary-stress-ng-stressor) | General system stress testing | | Linux | [Linux DiskIO Pressure](#linux-disk-io-pressure) | Disk I/O performance degradation | | Windows | [DiskIO Pressure](#disk-io-pressure) | Disk I/O performance degradation |
communication-services File Sharing Tutorial Interop Chat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/file-sharing-tutorial-interop-chat.md
# Enable file sharing using UI Library in Teams Interoperability Chat -
-In a Teams Interoperability Chat ("Interop Chat"), we can enable file sharing between Azure Communication Services end users and Teams users. Note, Interop Chat is different from the Azure Communication Services Chat. If you want to enable file sharing in an Azure Communication Services Chat, refer to [Add file sharing with UI Library in Azure Communication Services Chat](./file-sharing-tutorial-acs-chat.md). Currently, the Azure Communication Services end user is only able to receive file attachments from the Teams user. Please refer to [UI Library Use Cases](../concepts/ui-library/ui-library-use-cases.md) to learn more.
+In a Teams Interoperability Chat ("Interop Chat"), we can enable file sharing between Azure Communication Services end users and Teams users. Note, Interop Chat is different from the Azure Communication Services Chat. If you want to enable file sharing in an Azure Communication Services Chat, refer to [Add file sharing with UI Library in Azure Communication Services Chat](./file-sharing-tutorial-acs-chat.md). Currently, the Azure Communication Services end user is only able to receive file attachments from the Teams user. Refer to [UI Library Use Cases](../concepts/ui-library/ui-library-use-cases.md) to learn more.
>[!IMPORTANT] >
Access the code for this tutorial on [GitHub](https://github.com/Azure-Samples/c
- [Visual Studio Code](https://code.visualstudio.com/) on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms). - [Node.js](https://nodejs.org/), Active LTS and Maintenance LTS versions. Use the `node --version` command to check your version. - An active Communication Services resource and connection string. [Create a Communication Services resource](../quickstarts/create-communication-resource.md).-- Using the UI library version [1.7.0-beta.1](https://www.npmjs.com/package/@azure/communication-react/v/1.7.0-beta.1) or the latest.
+- Using the UI library version [1.17.0](https://www.npmjs.com/package/@azure/communication-react/v/1.17.0) or the latest.
- Have a Teams meeting created and the meeting link ready. - Be familiar with how [ChatWithChat Composite](https://azure.github.io/communication-ui-library/?path=/docs/composites-call-with-chat-basicexample--basic-example) works.
export type CallWithChatExampleProps = {
token: string; displayName: string; endpointUrl: string;
- locator: TeamsMeetingLinkLocator | CallAndChatLocator;
+ locator: TeamsMeetingLinkLocator | TeamsMeetingIdLocator | CallAndChatLocator;
// Props to customize the CallWithChatComposite experience fluentTheme?: PartialTheme | Theme;
export type CallWithChatExampleProps = {
```
-To be able to start the Composite for meeting chat, we need to pass `TeamsMeetingLinkLocator`, which looks like this:
+To be able to start the Composite for meeting chat, we need to pass a `TeamsMeetingLinkLocator` or a `TeamsMeetingIdLocator`, which look like this:
```js { "meetingLink": "<TEAMS_MEETING_LINK>" } ```
+Or
+
+```js
+{ "meetingId": "<TEAMS_MEETING_ID>", "passcode": "<TEAMS_MEETING_PASSCODE>"}
+```
+ This is all you need - and there's no other setup needed to enable the Azure Communication Services end user to receive file attachments from the Teams user! ## Permissions
cosmos-db Migration Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/migration-options.md
Last updated 11/17/2023
This document describes the various options to lift and shift your MongoDB workloads to Azure Cosmos DB for MongoDB vCore offering.
-## Azure Data Studio (Offline)
+## Azure Data Studio (Online)
The [MongoDB migration extension for Azure Data Studio](/azure-data-studio/extensions/database-migration-for-mongo-extension) is the preferred tool for migrating your MongoDB workloads to the API for MongoDB vCore.
cosmos-db Multi Region Writes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/multi-region-writes.md
One of the primary differences in a multi-region-write account is the presence o
| Timestamp | Meaning | When exposed | | | - | - | | `_ts` | The server epoch time at which the entity was written. | Always exposed by all read and query APIs. |
-| `crts` | The epoch time at which the Multi-Write conflict was resolved, or the absence of a conflict was confirmed. For Multi-Write region configuration, this timestamp defines the order of changes for Continuous backup and Change Feed:<br><br><ul><li>Used to find start time for Change Feed requests</li><li>Used as sort order for in Change Feed response.</li><li>Used to order the writes for Continuous Backup</li><li>The log backup only captures confirmed or conflict resolved writes and hence restore result of a Continuous backup only returns confirmed writes.</li></ul> | Exposed in response to Change Feed requests and only when "New Wire Model" is enabled by the request. This is the default for [All versions and deletes](change-feed.md#all-versions-and-deletes-mode-preview) Change Feed mode. |
+| `crts` | The epoch time at which the Multi-Write conflict was resolved, or the absence of a conflict was confirmed. For a Multi-Write region configuration, this timestamp defines the order of changes for Change Feed:<br><br><ul><li>Used to find the start time for Change Feed requests.</li><li>Used as the sort order in the Change Feed response.</li></ul> | Exposed in response to Change Feed requests, and only when "New Wire Model" is enabled by the request. This is the default for [All versions and deletes](change-feed.md#all-versions-and-deletes-mode-preview) Change Feed mode. |
defender-for-cloud Defender For Storage Malware Scan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-malware-scan.md
Malware Scanning doesn't automatically block access or change permissions to the
- Unsupported storage accounts: Legacy v1 storage accounts aren't supported by malware scanning. - Unsupported service: Azure Files isn't supported by malware scanning.
+- Unsupported clients: Blobs uploaded with the Network File System (NFS) 3.0 protocol aren't scanned for malware upon upload.
+ - Unsupported regions: Jio India West, Korea South, South Africa West. - Regions that are supported by Defender for Storage but not by malware scanning. Learn more about [availability for Defender for Storage.](defender-for-storage-introduction.md)
defender-for-cloud Management Groups Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/management-groups-roles.md
- Title: Organize subscriptions into management groups and assign roles to users
-description: Learn how to organize your Azure subscriptions into management groups in Microsoft Defender for Cloud and assign roles to users in your organization.
- Previously updated : 03/12/2024---
-# Organize subscriptions into management groups and assign roles to users
-
-Manage your organization's security posture at scale by applying security policies to all Azure subscriptions linked to your Microsoft Entra tenant.
-
-For visibility into the security posture of all subscriptions linked to a Microsoft Entra tenant, you'll need an Azure role with sufficient read permissions assigned on the root management group.
-
-## Organize your subscriptions into management groups
-
-### Overview of management groups
-
-Use management groups to efficiently manage access, policies, and reporting on groups of subscriptions, and effectively manage the entire Azure estate by performing actions on the root management group. You can organize subscriptions into management groups and apply your governance policies to the management groups. All subscriptions within a management group automatically inherit the policies applied to the management group.
-
-Each Microsoft Entra tenant is given a single top-level management group called the root management group. This root management group is built into the hierarchy to have all management groups and subscriptions fold up to it. This group allows global policies and Azure role assignments to be applied at the directory level.
-
-The root management group is created automatically when you do any of the following actions:
--- In the [Azure portal](https://portal.azure.com), select **Management Groups**.-- Create a management group with an API call.-- Create a management group with PowerShell. For PowerShell instructions, see [Create management groups for resource and organization management](../governance/management-groups/create-management-group-portal.md).-
-Management groups aren't required to onboard Defender for Cloud, but we recommend creating at least one so that the root management group gets created. After the group is created, all subscriptions under your Microsoft Entra tenant will be linked to it.
-
-For a detailed overview of management groups, see the [Organize your resources with Azure management groups](../governance/management-groups/overview.md) article.
-
-### View and create management groups in the Azure portal
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. Search for and select **Management Groups**.
-
-1. To create a management group, select **Create**, enter the relevant details, and select **Submit**.
-
- :::image type="content" source="media/management-groups-roles/add-management-group.png" alt-text="Adding a management group to Azure." lightbox="media/management-groups-roles/add-management-group.png":::
-
- - The **Management Group ID** is the directory unique identifier that is used to submit commands on this management group. This identifier isn't editable after creation as it is used throughout the Azure system to identify this group.
-
- - The display name field is the name that is displayed within the Azure portal. A separate display name is an optional field when creating the management group and can be changed at any time.
-
-### Add subscriptions to a management group
-
-You can add subscriptions to the management group that you created.
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. Search for and select **Management Groups**.
-
-1. Select the management group for your subscription.
-
-1. When the group's page opens, select **Subscriptions**.
-
-1. From the subscriptions page, select **Add**, then select your subscriptions and select **Save**. Repeat until you've added all the subscriptions in the scope.
-
- :::image type="content" source="./media/management-groups-roles/management-group-add-subscriptions.png" alt-text="Adding a subscription to a management group." lightbox="media/management-groups-roles/management-group-add-subscriptions.png":::
-
- > [!IMPORTANT]
- > Management groups can contain both subscriptions and child management groups. When you assign a user an Azure role to the parent management group, the access is inherited by the child management group's subscriptions. Policies set at the parent management group are also inherited by the children.
-
-## Assign Azure roles to other users
-
-### Assign Azure roles to users through the Azure portal
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. Search for and select **Management Groups**.
-
-1. Select the relevant management group.
-
-1. Select **Access control (IAM)**, open the **Role assignments** tab and select **Add** > **Add role assignment**.
-
- :::image type="content" source="./media/management-groups-roles/add-user.png" alt-text="Adding a user to a management group." lightbox="media/management-groups-roles/add-user.png":::
-
-1. From the **Add role assignment** page, select the relevant role.
-
- :::image type="content" source="./media/management-groups-roles/add-role-assignment-page.png" alt-text="Add role assignment page." lightbox="media/management-groups-roles/add-role-assignment-page.png":::
-
-1. From the **Members** tab, select **+ Select members** and assign the role to the relevant members.
-
-1. On the **Review + assign** tab, select **Review + assign** to assign the role.
-
-### Assign Azure roles to users with PowerShell
-
-1. Install [Azure PowerShell](/powershell/azure/install-azure-powershell).
-1. Run the following commands:
-
- ```azurepowershell
- # Login to Azure as a Global Administrator user
- Connect-AzAccount
- ```
-
-1. When prompted, sign in with global admin credentials.
-
- ![Sign in prompt screenshot.](./media/management-groups-roles/azurerm-sign-in.PNG)
-
-1. Grant reader role permissions by running the following command:
-
- ```azurepowershell
- # Add Reader role to the required user on the Root Management Group
    - # Replace "user@domain.com" with the user to grant access to
- New-AzRoleAssignment -SignInName "user@domain.com" -RoleDefinitionName "Reader" -Scope "/"
- ```
-
-1. To remove the role, use the following command:
-
- ```azurepowershell
- Remove-AzRoleAssignment -SignInName "user@domain.com" -RoleDefinitionName "Reader" -Scope "/"
- ```
-
-## Remove elevated access
-
-Once the Azure roles are assigned to the users, the tenant administrator should remove itself from the user access administrator role.
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. In the navigation list, select **Microsoft Entra ID** and then select **Properties**.
-
-1. Under **Access management for Azure resources**, set the switch to **No**.
-
-1. To save your setting, select **Save**.
-
-## Next steps
-
-On this page, you learned how to organize subscriptions into management groups and assign roles to users. For related information, see:
--- [Permissions in Microsoft Defender for Cloud](permissions.md)
defender-for-cloud Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/permissions.md
Title: User roles and permissions description: Learn how Microsoft Defender for Cloud uses role-based access control to assign permissions to users and identify the permitted actions for each role. Previously updated : 05/12/2024 Last updated : 07/11/2024 # User roles and permissions
To allow the Security Admin role to automatically provision agents and extension
| Defender for Containers provisioning Azure Policy for Kubernetes | ΓÇó Kubernetes Extension Contributor<br>ΓÇó Contributor<br>ΓÇó Azure Kubernetes Service Contributor | | Defender for Containers provisioning Policy extension for Arc-enabled Kubernetes | ΓÇó Azure Kubernetes Service Contributor<br>ΓÇó Kubernetes Extension Contributor<br>ΓÇó Contributor |
+## Permissions on AWS
+
+When you onboard an Amazon Web Services (AWS) connector, Defender for Cloud creates roles and assigns permissions on your AWS account. The following table shows the roles and permissions that each plan assigns on your AWS account.
+
+| Defender for Cloud plan | Role created | Permission assigned on AWS account |
+|--|--|--|
+| Defender CSPM | CspmMonitorAws | Permissions to discover AWS resources: read all resources except:<br> "consolidatedbilling:*"<br> "freetier:*"<br> "invoicing:*"<br> "payments:*"<br> "billing:*"<br> "tax:*"<br> "cur:*" |
+| Defender CSPM <br><br> Defender for Servers | DefenderForCloud-AgentlessScanner | To create and clean up disk snapshots (scoped by the tag "CreatedBy": "Microsoft Defender for Cloud"). Permissions:<br> "ec2:DeleteSnapshot"<br> "ec2:ModifySnapshotAttribute"<br> "ec2:DeleteTags"<br> "ec2:CreateTags"<br> "ec2:CreateSnapshots"<br> "ec2:CopySnapshot"<br> "ec2:CreateSnapshot"<br> "ec2:DescribeSnapshots"<br> "ec2:DescribeInstanceStatus"<br> Permission for encryption key creation:<br> "kms:CreateKey"<br> "kms:ListKeys"<br> Permissions for encryption key management:<br> "kms:TagResource"<br> "kms:GetKeyRotationStatus"<br> "kms:PutKeyPolicy"<br> "kms:GetKeyPolicy"<br> "kms:CreateAlias"<br> "kms:TagResource"<br> "kms:ListResourceTags"<br> "kms:GenerateDataKeyWithoutPlaintext"<br> "kms:DescribeKey"<br> "kms:RetireGrant"<br> "kms:CreateGrant"<br> "kms:ReEncryptFrom" |
+| Defender CSPM <br><br> Defender for Storage | SensitiveDataDiscovery | Permissions to discover S3 buckets in the AWS account and permission for the Defender for Cloud scanner to access data in the S3 buckets:<br> S3 read only; KMS decrypt "kms:Decrypt" |
+| CIEM | DefenderForCloud-Ciem <br> DefenderForCloud-OidcCiem | Permissions for CIEM discovery:<br> "sts:AssumeRole"<br> "sts:AssumeRoleWithSAML"<br> "sts:GetAccessKeyInfo"<br> "sts:GetCallerIdentity"<br> "sts:GetFederationToken"<br> "sts:GetServiceBearerToken"<br> "sts:GetSessionToken"<br> "sts:TagSession" |
+| Defender for Servers | DefenderForCloud-DefenderForServers | Permissions to configure JIT Network Access: <br>"ec2:RevokeSecurityGroupIngress"<br> "ec2:AuthorizeSecurityGroupIngress"<br> "ec2:DescribeInstances"<br> "ec2:DescribeSecurityGroupRules"<br> "ec2:DescribeVpcs"<br> "ec2:CreateSecurityGroup"<br> "ec2:DeleteSecurityGroup"<br> "ec2:ModifyNetworkInterfaceAttribute"<br> "ec2:ModifySecurityGroupRules"<br> "ec2:ModifyInstanceAttribute"<br> "ec2:DescribeSubnets"<br> "ec2:DescribeSecurityGroups" |
+| Defender for Containers | DefenderForCloud-Containers-K8s | Permissions to List EKS clusters and Collect Data from EKS clusters. <br>"eks:UpdateClusterConfig"<br> "eks:DescribeCluster" |
+| Defender for Containers | DefenderForCloud-DataCollection | Permissions to the CloudWatch log group created by Defender for Cloud:<br> "logs:PutSubscriptionFilter"<br> "logs:DescribeSubscriptionFilters"<br> "logs:DescribeLogGroups"<br> "logs:PutRetentionPolicy"<br><br> Permissions to use the SQS queue created by Defender for Cloud:<br> "sqs:ReceiveMessage"<br> "sqs:DeleteMessage" |
+| Defender for Containers | DefenderForCloud-Containers-K8s-cloudwatch-to-kinesis | Permissions to access Kinesis Data Firehose delivery stream created by Defender for Cloud<br> "firehose:*" |
+| Defender for Containers | DefenderForCloud-Containers-K8s-kinesis-to-s3 | Permissions to access S3 bucket created by Defender for Cloud <br> "s3:GetObject"<br> "s3:GetBucketLocation"<br> "s3:AbortMultipartUpload"<br> "s3:GetBucketLocation"<br> "s3:GetObject"<br> "s3:ListBucket"<br> "s3:ListBucketMultipartUploads"<br> "s3:PutObject" |
+| Defender for Containers <br><br> Defender CSPM | MDCContainersAgentlessDiscoveryK8sRole | Permissions to collect data from EKS clusters, update EKS clusters to support IP restriction, and create iamidentitymapping for EKS clusters:<br> "eks:DescribeCluster"<br> "eks:UpdateClusterConfig*" |
+| Defender for Containers <br><br> Defender CSPM | MDCContainersImageAssessmentRole | Permissions to scan images from ECR and ECR Public:<br> AmazonEC2ContainerRegistryReadOnly<br> AmazonElasticContainerRegistryPublicReadOnly<br> AmazonEC2ContainerRegistryPowerUser<br> AmazonElasticContainerRegistryPublicPowerUser |
+| Defender for Servers | DefenderForCloud-ArcAutoProvisioning | Permissions to install Azure Arc on all EC2 instances using SSM <br>"ssm:CancelCommand"<br> "ssm:DescribeInstanceInformation"<br> "ssm:GetCommandInvocation"<br> "ssm:UpdateServiceSetting"<br> "ssm:GetServiceSetting"<br> "ssm:GetAutomationExecution"<br> "ec2:DescribeIamInstanceProfileAssociations"<br> "ec2:DisassociateIamInstanceProfile"<br> "ec2:DescribeInstances"<br> "ssm:StartAutomationExecution"<br> "iam:GetInstanceProfile"<br> "iam:ListInstanceProfilesForRole"<br> "ssm:GetAutomationExecution"<br> "ec2:DescribeIamInstanceProfileAssociations"<br> "ec2:DisassociateIamInstanceProfile"<br> "ec2:DescribeInstances"<br> "ssm:StartAutomationExecution"<br> "iam:GetInstanceProfile"<br> "iam:ListInstanceProfilesForRole" |
+| Defender CSPM | DefenderForCloud-DataSecurityPostureDB | Permissions to discover RDS instances in the AWS account and create RDS instance snapshots: <br> - List all RDS DBs/clusters <br> - List all DB/cluster snapshots <br> - Copy all DB/cluster snapshots <br> - Delete/update DB/cluster snapshots with prefix *defenderfordatabases* <br> - List all KMS keys <br> - Use all KMS keys only for RDS on the source account <br> - List KMS keys with tag prefix *DefenderForDatabases* <br> - Create alias for KMS keys <br><br> Permissions required to discover RDS instances:<br> "rds:DescribeDBInstances"<br> "rds:DescribeDBClusters"<br> "rds:DescribeDBClusterSnapshots"<br> "rds:DescribeDBSnapshots"<br> "rds:CopyDBSnapshot"<br> "rds:CopyDBClusterSnapshot"<br> "rds:DeleteDBSnapshot"<br> "rds:DeleteDBClusterSnapshot"<br> "rds:ModifyDBSnapshotAttribute"<br> "rds:ModifyDBClusterSnapshotAttribute"<br> "rds:DescribeDBClusterParameters"<br> "rds:DescribeDBParameters"<br> "rds:DescribeOptionGroups"<br> "kms:CreateGrant"<br> "kms:ListAliases"<br> "kms:CreateKey"<br> "kms:TagResource"<br> "kms:ListGrants"<br> "kms:DescribeKey"<br> "kms:PutKeyPolicy"<br> "kms:Encrypt"<br> "kms:CreateGrant"<br> "kms:EnableKey"<br> "kms:CancelKeyDeletion"<br> "kms:DisableKey"<br> "kms:ScheduleKeyDeletion"<br> "kms:UpdateAlias"<br> "kms:UpdateKeyDescription" |
+
+## Permissions on GCP
+
+When you onboard a Google Cloud Platform (GCP) connector, Defender for Cloud creates roles and assigns permissions on your GCP project. The following table shows the roles and permissions that each plan assigns on your GCP project.
+
+| Defender for Cloud plan | Role created | Permission assigned on GCP project |
+|--|--|--|
+| Defender CSPM | MDCCspmCustomRole | Permissions to discover GCP resources:<br> resourcemanager.folders.getIamPolicy<br> resourcemanager.folders.list<br> resourcemanager.organizations.get<br> resourcemanager.organizations.getIamPolicy<br> storage.buckets.getIamPolicy<br> resourcemanager.folders.get<br> resourcemanager.projects.get<br> resourcemanager.projects.list<br> serviceusage.services.enable<br> iam.roles.create<br> iam.roles.list<br> iam.serviceAccounts.actAs<br> compute.projects.get<br> compute.projects.setCommonInstanceMetadata |
+| Defender for Servers | microsoft-defender-for-servers <br> azure-arc-for-servers-onboard | Read-only access to get and list Compute Engine resources:<br> roles/compute.viewer<br> roles/iam.serviceAccountTokenCreator<br> roles/osconfig.osPolicyAssignmentAdmin<br> roles/osconfig.osPolicyAssignmentReportViewer |
+| Defender for Database | defender-for-databases-arc-ap | Permissions for Defender for Databases Arc auto provisioning:<br> roles/compute.viewer <br> roles/iam.workloadIdentityUser <br> roles/iam.serviceAccountTokenCreator<br> roles/osconfig.osPolicyAssignmentAdmin<br> roles/osconfig.osPolicyAssignmentReportViewer |
+| Defender CSPM <br><br> Defender for Storage | data-security-posture-storage | Permission for the Defender for Cloud scanner to discover GCP storage buckets and to access data in them:<br> storage.objects.list<br> storage.objects.get<br> storage.buckets.get |
+| Defender CSPM | microsoft-defender-ciem | Permissions to get details about the organization resource.<br> resourcemanager.folders.getIamPolicy<br> resourcemanager.folders.list<br> resourcemanager.organizations.get<br> resourcemanager.organizations.getIamPolicy<br> storage.buckets.getIamPolicy |
+| Defender CSPM <br><br> Defender for Servers | MDCAgentlessScanningRole | Permissions for agentless disk scanning:<br> compute.disks.createSnapshot<br> compute.instances.get |
+| Defender CSPM <br><br> Defender for servers | cloudkms.cryptoKeyEncrypterDecrypter | Permissions to an existing GCP KMS role are granted to support scanning disks that are encrypted with CMEK |
+| Defender CSPM <br><br> Defender for Containers | mdc-containers-artifact-assess | Permissions to scan images from GAR and GCR:<br> roles/artifactregistry.reader<br> roles/storage.objectViewer |
+| Defender for Containers | mdc-containers-k8s-operator | Permissions to collect data from GKE clusters and update GKE clusters to support IP restriction:<br> roles/container.viewer<br> MDCGkeClusterWriteRole container.clusters.update* |
+| Defender for Containers | microsoft-defender-containers | Permissions to create and manage log sink to route logs to a Cloud Pub/Sub topic. <br> logging.sinks.list<br> logging.sinks.get<br> logging.sinks.create<br> logging.sinks.update<br> logging.sinks.delete<br> resourcemanager.projects.getIamPolicy<br> resourcemanager.organizations.getIamPolicy<br> iam.serviceAccounts.get <br>iam.workloadIdentityPoolProviders.get |
+| Defender for Containers | ms-defender-containers-stream | Permissions to allow logging to send logs to pub sub:<br> pubsub.subscriptions.consume <br> pubsub.subscriptions.get |
+ ## Next steps This article explained how Defender for Cloud uses Azure RBAC to assign permissions to users and identified the allowed actions for each role. Now that you're familiar with the role assignments needed to monitor the security state of your subscription, edit security policies, and apply recommendations, learn how to:
event-grid Get Access Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/get-access-keys.md
Access keys are used to authenticate an application publishing events to Azure E
This article describes how to get access keys for an Event Grid resource (topic or domain) using Azure portal, PowerShell, or CLI. > [!IMPORTANT]
-> From August 5, 2024 to August 15, 2024, Azure Event Grid will rollout a security improvement extending the Shared Access Signature (SAS) key length from 44 to 84 characters. This change is being made to strengthen the security of your data in Event Grid resources. The change doesn't impact any application or service that currently publishes events to Event Grid with the old SAS key but it may impact only if you regenerate the SAS key of your Event Grid topics, domains, namespaces, and partner topics, after the update.
+> From August 5, 2024 to August 15, 2024, Azure Event Grid will roll out a security improvement that increases the SAS key size from 44 to 84 characters. This change is being made to strengthen the security of your data in Event Grid resources. The change doesn't impact any application or service that currently publishes events to Event Grid with the old SAS key; it affects you only if you regenerate the SAS key of your Event Grid topics, domains, namespaces, or partner topics after the update.
> > We recommend that you regenerate your SAS key on or after August 15, 2024. After regenerating the key, update any event publishing applications or services that use the old key to use the enhanced SAS key.
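For example, a minimal sketch of regenerating and then reading a topic key with the Az.EventGrid PowerShell module might look like the following; the resource group and topic names are placeholders, and parameter names can differ between module versions.

```azurepowershell
# Minimal sketch, assuming the Az.EventGrid module; names below are placeholders.
# Check Get-Help for the exact parameter set in your module version.
# Regenerate key1 for a custom topic, then retrieve the new (longer) key values.
New-AzEventGridTopicKey -ResourceGroupName 'myResourceGroup' -TopicName 'myTopic' -KeyName 'key1'
Get-AzEventGridTopicKey -ResourceGroupName 'myResourceGroup' -TopicName 'myTopic'
```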
external-attack-surface-management Data Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/data-connections.md
On the leftmost pane in your Defender EASM resource pane, under **Manage**, sele
To successfully create a data connection, you must first ensure that you've completed the required steps to grant Defender EASM permission to the tool of your choice. This process enables the application to ingest your exported data. It also provides the authentication credentials needed to configure the connection. > [!NOTE]
-> Defender EASM data connections do not support Log Analytics workspaces that have private link(s) configured.
+> Defender EASM data connections do not support private links or networks.
## Configure Log Analytics permissions
firewall Firewall Copilot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-copilot.md
Last updated 05/20/2024
ms.localizationpriority: high-+ # Azure Firewall integration in Microsoft Copilot for Security (preview)
frontdoor How To Configure Https Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-configure-https-custom-domain.md
Register the service principal for Azure Front Door as an app in your Microsoft
Azure public cloud: ```azurepowershell-interactive
- New-MgServicePrincipal -ApplicationId '205478c0-bd83-4e1b-a9d6-db63a3e1e1c8'
+ New-MgServicePrincipal -AppId '205478c0-bd83-4e1b-a9d6-db63a3e1e1c8'
``` Azure government cloud: ```azurepowershell-interactive
- New-MgServicePrincipal -ApplicationId 'd4631ece-daab-479b-be77-ccb713491fc0'
+ New-MgServicePrincipal -AppId 'd4631ece-daab-479b-be77-ccb713491fc0'
``` # [Azure CLI](#tab/cli)
governance Effect Modify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/effect-modify.md
The `modify` effect is used to add, update, or remove properties or tags on a su
The `modify` effect supports the following operations: -- Add, replace, or remove resource tags. For tags, a Modify policy should have [mode](./definition-structure.md#resource-manager-modes) set to `indexed` unless the target resource is a resource group.-- Add or replace the value of managed identity type (`identity.type`) of virtual machines and Virtual Machine Scale Sets. You can only modify the `identity.type` for virtual machines or Virtual Machine Scale Sets.-- Add or replace the values of certain aliases.
+- _Add_, _replace_, or _remove_ resource tags. Only tags can be removed. For tags, a Modify policy should have [mode](./definition-structure.md#resource-manager-modes) set to `indexed` unless the target resource is a resource group.
+- _Add_ or _replace_ the value of managed identity type (`identity.type`) of virtual machines and Virtual Machine Scale Sets. You can only modify the `identity.type` for virtual machines or Virtual Machine Scale Sets.
+- _Add_ or _replace_ the values of certain aliases.
- Use `Get-AzPolicyAlias | Select-Object -ExpandProperty 'Aliases' | Where-Object { $_.DefaultMetadata.Attributes -eq 'Modifiable' }` in Azure PowerShell **4.6.0** or higher to get a list of aliases that can be used with `modify`. > [!IMPORTANT]
The `modify` effect supports the following operations:
## Modify evaluation
-Modify evaluates before the request gets processed by a Resource Provider during the creation or updating of a resource. The `modify` operations are applied to the request content when the `if` condition of the policy rule is met. Each `modify` operation can specify a condition that determines when it's applied. Operations with _false_ condition evaluations are skipped.
+Modify evaluates before the request gets processed by a Resource Provider during the creation or updating of a resource. The `modify` operations are applied to the request content when the `if` condition of the policy rule is met. Each `modify` operation can specify a condition that determines when it's applied.
-When an alias is specified, the more checks are performed to ensure that the `modify` operation doesn't change the request content in a way that causes the resource provider to reject it:
+When an alias is specified, more checks are performed to ensure that the `modify` operation doesn't change the request content in a way that causes the resource provider to reject it:
- The property the alias maps to is marked as **Modifiable** in the request's API version. - The token type in the `modify` operation matches the expected token type for the property in the request's API version.
If either of these checks fail, the policy evaluation falls back to the specifie
> same alias behaves differently between API versions, conditional modify operations can be used to > determine the `modify` operation used for each API version.
+There are some cases when modify operations are skipped during evaluation:
+- When the condition of an operation in the `operations` array is evaluated to _false_, that particular operation is skipped.
+- If an alias specified for an operation isn't modifiable in the request's API version, then evaluation uses the conflict effect. If the conflict effect is set to _deny_, the request is blocked. If the conflict effect is set to _audit_, the request is allowed through but the modify operation is skipped.
+- In some cases, modifiable properties are nested within other properties and have an alias like `Microsoft.Storage/storageAccounts/blobServices/deleteRetentionPolicy.enabled`. If the "parent" property, in this case `deleteRetentionPolicy`, isn't present in the request, modification is skipped because that property is assumed to be omitted intentionally.
+- When a modify operation attempts to add or replace the `identity.type` field on a resource other than a Virtual Machine or Virtual Machine Scale Set, policy evaluation is skipped altogether so the modification isn't performed. In this case, the resource is considered not [applicable](../concepts/policy-applicability.md) to the policy.
+ When a policy definition using the `modify` effect is run as part of an evaluation cycle, it doesn't make changes to resources that already exist. Instead, it marks any resource that meets the `if` condition as non-compliant. ## Modify properties
The `details` property of the `modify` effect has all the subproperties that def
- An array of all tag operations to be completed on matching resources. - Properties: - `operation` (required)
- - Defines what action to take on a matching resource. Options are: _addOrReplace_, _Add_, _Remove_. _Add_ behaves similar to the [append](./effect-append.md) effect.
+ - Defines what action to take on a matching resource. Options are: `addOrReplace`, `Add`, and `Remove`.
+ - `Add` behaves similar to the [append](./effect-append.md) effect.
+ - `Remove` is only supported for resource tags.
- `field` (required) - The tag to add, replace, or remove. Tag names must adhere to the same naming convention for other [fields](./definition-structure-policy-rule.md#fields). - `value` (optional)
The `operation` property has the following options:
|-|-| | `addOrReplace` | Adds the defined property or tag and value to the resource, even if the property or tag already exists with a different value. | | `add` | Adds the defined property or tag and value to the resource. |
-| `remove` | Removes the defined property or tag from the resource. |
+| `remove` | Removes the defined tag from the resource. Only supported for tags. |
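As a small illustration of the alias requirement, the following sketch (assuming Az PowerShell 4.6.0 or later, per the note earlier in this section) narrows the modifiable-alias query to a single namespace; `Microsoft.Storage` is only an example.

```azurepowershell
# Minimal sketch: list aliases marked Modifiable for one namespace (Microsoft.Storage is an example).
# These are the aliases a modify operation's `field` can target.
Get-AzPolicyAlias -NamespaceMatch 'Microsoft.Storage' |
    Select-Object -ExpandProperty 'Aliases' |
    Where-Object { $_.DefaultMetadata.Attributes -eq 'Modifiable' } |
    Select-Object -ExpandProperty 'Name'
```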
## Modify examples
Example 3: Ensure that a storage account doesn't allow blob public access, the `
- Understand how to [programmatically create policies](../how-to/programmatically-create.md). - Learn how to [get compliance data](../how-to/get-compliance-data.md). - Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).-- Review [Azure management groups](../../management-groups/overview.md).
+- Review [Azure management groups](../../management-groups/overview.md).
hdinsight-aks Create Cluster Error Dictionary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/create-cluster-error-dictionary.md
Last updated 08/31/2023
# Cluster creation errors on Azure HDInsight on AKS + This article describes how to troubleshoot and resolve errors that could occur when you create Azure HDInsight on AKS clusters. |Sr. No|Error message|Cause|Resolution|
hdinsight-aks Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/faq.md
Last updated 08/29/2023
This article addresses some common questions about Azure HDInsight on AKS. + ## General * What is HDInsight on AKS?
This article addresses some common questions about Azure HDInsight on AKS.
* What is state backend management and how it's done in HDInsight on AKS?
- Backends determine where state is stored. When checkpointing is activated, state is persisted upon checkpoints to guard against data loss and recover consistently. How the state is represented internally, and how and where it's persisted upon checkpoints depends on the chosen State Backend. For more information,see [Flink overview](./flink/flink-overview.md)
+ Backends determine where state is stored. When checkpointing is activated, state is persisted upon checkpoints to guard against data loss and recover consistently. How the state is represented internally, and how and where it's persisted upon checkpoints depends on the chosen State Backend. For more information, see [Flink overview](./flink/flink-overview.md)
### Apache Spark
hdinsight-aks Use Flink Delta Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/use-flink-delta-connector.md
Connector's version Flink's version
0.7.0 X >= 1.16.1 We use this in Flink 1.17.0 ```
-For more information, see [Flink/Delta Connector](https://github.com/delta-io/connectors/blob/master/flink/README.md).
## Prerequisites
import org.apache.hadoop.conf.Configuration;
} ```
-For other continuous model example, see [Data Source Modes](https://github.com/delta-io/connectors/blob/master/flink/README.md#modes).
## Writing to Delta sink
public static DataStream<RowData> createDeltaSink(
return stream; } ```
-For other Sink creation example, see [Data Sink Metrics](https://github.com/delta-io/connectors/blob/master/flink/README.md#modes).
+ ## Full code
Once the data is in delta sink, you can run the query in Power BI desktop and cr
:::image type="content" source="./media/use-flink-delta-connector/adls-gen-2-details.png" alt-text="Screenshot shows ADLS Gen2-details.":::
-1. Create M-query for the source and invoke the function, which queries the data from storage account. Refer [Delta Power BI connectors](https://github.com/delta-io/connectors/tree/master/powerbi).
+2. Create M-query for the source and invoke the function, which queries the data from storage account.
-1. Once the data is readily available, you can create reports.
+3. Once the data is readily available, you can create reports.
:::image type="content" source="./media/use-flink-delta-connector/create-reports.png" alt-text="Screenshot shows how to create reports."::: ## References
-* [Delta connectors](https://github.com/delta-io/connectors/tree/master/flink).
-* [Delta Power BI connectors](https://github.com/delta-io/connectors/tree/master/powerbi).
* Apache, Apache Flink, Flink, and associated open source project names are [trademarks](../trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
hdinsight-aks Hdinsight On Aks Autoscale Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/hdinsight-on-aks-autoscale-clusters.md
Last updated 02/06/2024
# Auto Scale HDInsight on AKS Clusters Sizing any cluster ahead of time to meet job performance targets and manage costs is always tricky and hard to get right. One of the key benefits of building a data lakehouse in the cloud is its elasticity, which means you can use the autoscale feature to maximize the utilization of the resources at hand. Auto scale with Kubernetes is one key to establishing a cost-optimized ecosystem. With varied usage patterns in any enterprise, there can be variations in cluster loads over time that could lead to clusters being under-provisioned (poor performance) or overprovisioned (unnecessary costs due to idle resources).
hdinsight-aks How To Azure Monitor Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/how-to-azure-monitor-integration.md
Last updated 08/29/2023
# How to integrate with Log Analytics + This article describes how to enable Log Analytics to monitor & collect logs for cluster pool and cluster operations on HDInsight on AKS. You can enable the integration during cluster pool creation or post the creation. Once the integration at cluster pool is enabled, it isn't possible to disable the integration. However, you can disable the log analytics for individual clusters, which are part of the same pool.
hdinsight-aks In Place Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/in-place-upgrade.md
Last updated 03/22/2024
# Upgrade your HDInsight on AKS clusters and cluster pools + Learn how to update your HDInsight on AKS clusters and cluster pools to the latest AKS patches, security updates, cluster patches, and cluster hotfixes with in-place upgrade. ## Why to upgrade
hdinsight-aks Manage Cluster Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/manage-cluster-pool.md
Last updated 08/29/2023
# Manage cluster pools + Cluster pools are a logical grouping of clusters and maintain a set of clusters in the same pool. It helps in building robust interoperability across multiple cluster types and allow enterprises to have the clusters in the same virtual network. One cluster pool corresponds to one cluster in AKS infrastructure. This article describes how to manage a cluster pool.
hdinsight-aks Manage Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/manage-cluster.md
Last updated 08/29/2023
# Manage clusters + Clusters are individual compute workloads such as Apache Spark, Apache Flink, and Trino, which can be created rapidly in few minutes with preset configurations and few clicks. This article describes how to manage a cluster using Azure portal.
hdinsight-aks Manage Script Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/manage-script-actions.md
Last updated 08/29/2023
# Script actions during cluster creation + Azure HDInsight on AKS provides a mechanism called **Script Actions** that invoke custom scripts to customize the cluster. These scripts are used to install additional components and change configuration settings. Script actions can be provisioned only during cluster creation as of now. Post cluster creation, Script Actions are part of the roadmap. This article explains how you can provision script actions when you create an HDInsight on AKS cluster.
hdinsight-aks Manual Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/manual-scale.md
Last updated 02/06/2024
# Manual scale + HDInsight on AKS provides elasticity with options to scale up and scale down the number of cluster nodes. This elasticity works to help increase resource utilization and improve cost efficiency. ## Utility to scale clusters
hdinsight-aks Powershell Cluster Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/powershell-cluster-create.md
Last updated 12/11/2023
# Manage HDInsight on AKS clusters using PowerShell + Azure PowerShell is a powerful scripting environment that you can use to control and automate the deployment and management of your workloads in Microsoft Azure. This document provides information about how to create a HDInsight on AKS cluster by using Azure PowerShell. It also includes an example script.
hdinsight-aks Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/preview.md
This article describes the Azure HDInsight on AKS preview state, and provides di
Your use of the Microsoft Azure HDInsight on AKS Cluster Pool or Microsoft Azure HDInsight on AKS Clusters preview experiences and features is governed by the preview online service terms and conditions of the agreement(s) under which you obtained the services and the [supplemental preview terms](https://go.microsoft.com/fwlink/?linkid=2240967).
-Previews are provided ΓÇ£as-is,ΓÇ¥ ΓÇ£with all faults,ΓÇ¥ and ΓÇ£as available,ΓÇ¥ and are excluded from the service level agreements and limited warranty. Customer support may not cover previews. We may change or discontinue Previews at any time without notice. We also may choose not to release a Preview into ΓÇ£General AvailabilityΓÇ¥.
+Previews are provided **as-is**, **with all faults**, and **as available**, and are excluded from the service level agreements and limited warranty. Customer support may not cover previews. We may change or discontinue Previews at any time without notice. We also may choose not to release a Preview into **General Availability**.
Previews may be subject to reduced or different security, compliance and privacy commitments, as further explained in the [Microsoft Privacy Statement](https://go.microsoft.com/fwlink/?LinkId=521839), [Microsoft Trust Center](https://go.microsoft.com/fwlink/?linkid=2179910), the [Product Terms](https://go.microsoft.com/fwlink/?linkid=2173816), the [Microsoft Products and Services Data Protection Addendum ("DPA")](https://go.microsoft.com/fwlink/?linkid=2153219), and any extra notices provided with the Preview.
hdinsight-aks Quickstart Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/quickstart-create-cli.md
Last updated 06/18/2024
# Quickstart: Create an HDInsight on AKS cluster pool using Azure CLI + HDInsight on AKS introduces the concept of cluster pools and clusters, which allow you to realize the complete value of data lakehouse. - **Cluster pools** are a logical grouping of clusters and maintain a set of clusters in the same pool, which helps in building robust interoperability across multiple cluster types. It can be created within an existing virtual network or outside a virtual network.
hdinsight-aks Quickstart Create Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/quickstart-create-cluster.md
Last updated 06/18/2024
# Quickstart: Create an HDInsight on AKS cluster pool using Azure portal + HDInsight on AKS introduces the concept of cluster pools and clusters, which allow you to realize the complete value of data lakehouse. - **Cluster pools** are a logical grouping of clusters and maintain a set of clusters in the same pool, which helps in building robust interoperability across multiple cluster types. It can be created within an existing virtual network or outside a virtual network.
hdinsight-aks Quickstart Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/quickstart-create-powershell.md
Last updated 06/19/2024
# Quickstart: Create an HDInsight on AKS cluster pool using Azure PowerShell + HDInsight on AKS introduces the concept of cluster pools and clusters, which allow you to realize the complete value of data lakehouse. - **Cluster pools** are a logical grouping of clusters and maintain a set of clusters in the same pool, which helps in building robust interoperability across multiple cluster types. It can be created within an existing virtual network or outside a virtual network.
hdinsight-aks Quickstart Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/quickstart-get-started.md
Last updated 08/29/2023
# Get started with one-click deployment + One-click deployments are designed for users to experience zero touch creation of HDInsight on AKS. It eliminates the need to manually perform certain steps. This article describes how to use readily available ARM templates to create a cluster pool and cluster in few clicks.
hdinsight-aks Quickstart Prerequisites Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/quickstart-prerequisites-resources.md
Last updated 04/08/2024
# Resource prerequisites + This article details the resources required for getting started with HDInsight on AKS. It covers the necessary and the optional resources and how to create them. ## Necessary resources
hdinsight-aks Quickstart Prerequisites Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/quickstart-prerequisites-subscription.md
Last updated 05/06/2024
# Subscription prerequisites + If you're using Azure subscription first time for HDInsight on AKS, the following features might need to be enabled.
hdinsight-aks Rest Api Cluster Creation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/rest-api-cluster-creation.md
Last updated 11/26/2023
# Manage HDInsight on AKS clusters using Azure REST API + Learn how to create an HDInsight cluster using an Azure Resource Manager template and the Azure REST API. The Azure REST API allows you to perform management operations on services hosted in the Azure platform, including the creation of new resources such as HDInsight clusters.
hdinsight-aks Sdk Cluster Creation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/sdk-cluster-creation.md
Last updated 11/23/2023
# Manage HDInsight on AKS clusters using .NET SDK + This article describes how you can create and manage cluster in Azure HDInsight on AKS using .NET SDK. The HDInsight .NET SDK provides .NET client libraries, so that it's easier to work with HDInsight clusters from .NET.
hdinsight-aks Secure Traffic By Firewall Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/secure-traffic-by-firewall-azure-portal.md
Last updated 08/3/2023
# Use firewall to restrict outbound traffic using Azure portal + When an enterprise wants to use their own virtual network for the cluster deployments, securing the traffic of the virtual network becomes important. This article provides the steps to secure outbound traffic from your HDInsight on AKS cluster via Azure Firewall using Azure portal.
hdinsight-aks Secure Traffic By Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/secure-traffic-by-firewall.md
Last updated 02/19/2024
# Use firewall to restrict outbound traffic using Azure CLI + When an enterprise wants to use their own virtual network for the cluster deployments, securing the traffic of the virtual network becomes important. This article provides the steps to secure outbound traffic from your HDInsight on AKS cluster via Azure Firewall using [Azure CLI](/azure/cloud-shell/quickstart?tabs=azurecli).
hdinsight-aks Secure Traffic By Nsg https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/secure-traffic-by-nsg.md
Last updated 08/3/2023
# Use NSG to restrict traffic to HDInsight on AKS + HDInsight on AKS relies on AKS outbound dependencies and they're entirely defined with FQDNs, which don't have static addresses behind them. The lack of static IP addresses means one can't use Network Security Groups (NSGs) to lock down the outbound traffic from the cluster using IPs. If you still prefer to use NSG to secure your traffic, then you need to configure the following rules in the NSG to do a coarse-grained control.
hdinsight-aks Subscribe To Release Notes Repo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/subscribe-to-release-notes-repo.md
Last updated 11/20/2023
# Subscribe to HDInsight on AKS release notes GitHub repo + Learn how to subscribe to HDInsight on AKS release notes GitHub repo to get email notifications. ## Prerequisites
healthcare-apis Fhir Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-faq.md
In the managed service, you can't access the underlying data. This is to ensure
### What identity provider do you support?
-We support Microsoft Entra ID as the identity provider.
+We support Microsoft Entra ID and third-party identity providers that support OpenID Connect.
### Can I use Azure AD B2C with the FHIR service?
For more information, see [Supported FHIR features](fhir-features-supported.md).
### What is the difference between Azure API for FHIR and the FHIR service in the Azure Health Data Services?
-Azure API for FHIR was our initial generally available product and is being retired as of September 30, 2026. The Azure Health Data Services FHIR service supports additional capabilities such as:
+Azure API for FHIR was our initial generally available product and is being retired as of September 30, 2026. The following table shows the differences between Azure API for FHIR and the Azure Health Data Services FHIR service.
-- [Transaction bundles](https://www.hl7.org/fhir/http.html#transaction).-- [Incremental Import](configure-import-data.md)-- [Autoscaling](fhir-service-autoscale.md) enabled by default
+|Capabilities|Azure API for FHIR|Azure Health Data Services|
+|||--|
+|**Data ingress**|Tools available in OSS|$import operation. For information visit [Import operation](configure-import-data.md)|
+|**Autoscaling**|Supported on request and incurs charge|[Autoscaling](fhir-service-autoscale.md) enabled by default at no extra charge|
+|**Search parameters**|Bundle type supported: Batch <br> • Include and revinclude: iterate modifier not supported <br> • Sorting supported by first name, last name, birthdate, and clinical date|Bundle types supported: Batch and transaction <br> • [Selectable search parameters](selectable-search-parameters.md) <br> • Include, revinclude, and the iterate modifier are supported <br> • Sorting supported by string and dateTime fields|
+|**Events**|Not Supported|Supported|
+|**Convert-data**|Supports enabling "Allow trusted services" in Azure Container Registry.|There's a known issue: enabling private link with Azure Container Registry might result in access issues when attempting to use the container registry from the FHIR service.|
+|**Business continuity**|Supported:<br> • Cross-region DR (disaster recovery)|Supported:<br> • PITR (point-in-time recovery)<br> • Availability zone support|
By default, each Azure Health Data Services FHIR instance is limited to a storage capacity of 4 TB. To provision a FHIR instance with storage capacity beyond 4 TB, create a support request with the issue type 'Service and Subscription limit (quotas)'.
We currently support posting [batch bundles](https://www.hl7.org/fhir/valueset-b
We support the [$patient-everything operation](patient-everything.md) which will get you all data related to a single patient.
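As an illustrative sketch, one way to call this operation from PowerShell is shown below; the FHIR service URL and patient ID are placeholders, and it assumes a Microsoft Entra ID access token obtained for the service.

```azurepowershell
# Minimal sketch with placeholder values; assumes you're signed in with Connect-AzAccount
# and that the signed-in identity has a FHIR data reader role on the service.
$fhirUrl = 'https://<workspace>-<fhirservice>.fhir.azurehealthcareapis.com'
$token   = (Get-AzAccessToken -ResourceUrl $fhirUrl).Token

# $patient-everything returns a bundle with all data related to the given patient.
Invoke-RestMethod -Method Get `
    -Uri "$fhirUrl/Patient/<patient-id>/`$everything" `
    -Headers @{ Authorization = "Bearer $token" }
```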
-### What is the default sort when searching for resources in the FHIR service?
-
-We support sorting by string and dateTime fields in the FHIR service. For more information about other supported search parameters, see [Overview of FHIR search](overview-of-search.md).
- ### Does the FHIR service support any terminology operations? No, the FHIR service doesn't support terminology operations today.
-### What are the differences between delete types in the FHIR service?
-
-There are two basic Delete types supported within the FHIR service. They are [Delete and Conditional Delete](rest-api-capabilities.md#delete-and-conditional-delete).
-
-* With Delete, you can choose to do a soft delete (most common type) and still be able to recover historic versions of your record.
-* With Conditional Delete, you can pass search criteria to delete a resource one item at a time or several at a time.
-* If you passed the `hardDelete` parameter with either Delete or Conditional Delete, all the records and history are deleted and unrecoverable.
## Using the FHIR service
iot-edge Troubleshoot Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/troubleshoot-common-errors.md
description: Resolve common issues in Azure IoT Edge solutions. Learn how to tro
Previously updated : 06/06/2024 Last updated : 07/10/2024
You don't need to disable socket activation on a distribution where socket activ
1. Change the iotedge config to use `/var/lib/iotedge/*.sock` in both `connect` and `listen` sections 1. If you already have modules, they have the old `/var/run/iotedge/*.sock` mounts, so `docker rm -f` them.
+### Message queue clean up is slow
+
+#### Symptoms
+
+The message queue isn't being cleaned up after messages are processed. The message queue grows over time and eventually causes the IoT Edge runtime to run out of memory.
+
+#### Cause
+
+The message cleanup interval is controlled by the client message TTL (time to live) and the *EdgeHub* *MessageCleanupIntervalSecs* environment variable. The default message TTL value is two hours and the default *MessageCleanupIntervalSecs* value is 30 minutes. If your application uses a TTL value that is shorter than the default and you don't adjust the *MessageCleanupIntervalSecs* value, expired messages won't be cleaned up until the next cleanup interval.
+
+#### Solution
+
+If your application uses a TTL value that is shorter than the default, also adjust the *MessageCleanupIntervalSecs* value. The *MessageCleanupIntervalSecs* value should be significantly smaller than the smallest TTL value that the client uses. For example, if the client application defines a TTL of five minutes in the message header, set the *MessageCleanupIntervalSecs* value to one minute. These settings ensure that messages are cleaned up within six (5 + 1) minutes.
+
+To configure the *MessageCleanupIntervalSecs* value, set the environment variable in the deployment manifest for the IoT Edge hub module. For more information about setting runtime environment variables, see [Edge Agent and Edge Hub Environment Variables](https://github.com/Azure/iotedge/blob/main/doc/EnvironmentVariables.md).
+ ## Networking ### IoT Edge security daemon fails with an invalid hostname
load-balancer Tutorial Cross Region Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-cross-region-portal.md
In this section, you create a
7. Select **IPv4** or **IPv6** for **IP version**.
-8. In **Public IP address**, select **Create new**. Enter **myPublicIP-cr** in **Name**. Select **OK**.
+8. In **Public IP address**, select **Create new**. Enter **myPublicIP-cr** in **Name**. Select **Save** in the Add Public IP Address dialog.
-9. Select **Add**.
+9. Select **Save**.
10. Select **Next: Backend pools** at the bottom of the page.
logic-apps Export From Consumption To Standard Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/export-from-consumption-to-standard-logic-app.md
Consider the following recommendations when you select logic apps for export:
1. On the Visual Studio Code Activity Bar, select **Azure** to open the **Azure** window (Shift + Alt + A).
- :::image type="content" source="media/export-from-consumption-to-standard-logic-app/select-azure-view.png" alt-text="Screenshot showing Visual Studio Code Activity Bar with Azure icon selected.":::
+ :::image type="content" source="media/export-from-consumption-to-standard-logic-app/select-azure-view.png" alt-text="Screenshot showing Visual Studio Code Activity Bar with Azure icon selected." lightbox="media/export-from-consumption-to-standard-logic-app/select-azure-view.png":::
1. On the **Workspace** section toolbar, from the **Azure Logic Apps** menu, select **Export logic app**.
- :::image type="content" source="media/export-from-consumption-to-standard-logic-app/select-export-logic-app.png" alt-text="Screenshot showing Azure window, Workspace section toolbar, and Export Logic App selected.":::
+ :::image type="content" source="media/export-from-consumption-to-standard-logic-app/select-export-logic-app.png" alt-text="Screenshot showing Azure window, Workspace section toolbar, and Export Logic App selected." lightbox="media/export-from-consumption-to-standard-logic-app/select-export-logic-app.png":::
1. After the **Export** tab opens, select your Azure subscription and region, and then select **Next**.
- :::image type="content" source="media/export-from-consumption-to-standard-logic-app/select-subscription-consumption.png" alt-text="Screenshot showing Export tab with Azure subscription and region selected.":::
+ :::image type="content" source="media/export-from-consumption-to-standard-logic-app/select-subscription-consumption.png" alt-text="Screenshot showing Export tab with Azure subscription and region selected." lightbox="media/export-from-consumption-to-standard-logic-app/select-subscription-consumption.png":::
1. Select the logic apps to export. Each selected logic app appears on the **Selected logic apps** list to the side.
Consider the following recommendations when you select logic apps for export:
> > You can also search for logic apps and filter on resource group.
- :::image type="content" source="media/export-from-consumption-to-standard-logic-app/select-logic-apps.png" alt-text="Screenshot showing 'Select logic apps to export' section with logic apps selected for export.":::
+ :::image type="content" source="media/export-from-consumption-to-standard-logic-app/select-logic-apps.png" alt-text="Screenshot showing 'Select logic apps to export' section with logic apps selected for export." lightbox="media/export-from-consumption-to-standard-logic-app/select-logic-apps.png":::
The export tool starts to validate whether your selected logic apps are eligible for export.
Consider the following recommendations when you select logic apps for export:
For example, **SourceLogicApp2** has an error and can't be exported until fixed:
- :::image type="content" source="media/export-from-consumption-to-standard-logic-app/select-back-button-remove-app.png" alt-text="Screenshot showing 'Review export status' section and validation status for logic app workflow with error.":::
+ :::image type="content" source="media/export-from-consumption-to-standard-logic-app/select-back-button-remove-app.png" alt-text="Screenshot showing 'Review export status' section and validation status for logic app workflow with error." lightbox="media/export-from-consumption-to-standard-logic-app/select-back-button-remove-app.png":::
- Logic apps that pass validation with or without warnings are still eligible for export. To continue, select **Export** if all apps validate successfully, or select **Export with warnings** if apps have warnings. For example, **SourceLogicApp3** has a warning, but you can still continue to export:
- :::image type="content" source="media/export-from-consumption-to-standard-logic-app/select-export-with-warnings.png" alt-text="Screenshot showing 'Review export status' section and validation status for logic app workflow with warning.":::
+ :::image type="content" source="media/export-from-consumption-to-standard-logic-app/select-export-with-warnings.png" alt-text="Screenshot showing 'Review export status' section and validation status for logic app workflow with warning." lightbox="media/export-from-consumption-to-standard-logic-app/select-export-with-warnings.png":::
The following table provides more information about each validation icon and status:
Consider the following recommendations when you select logic apps for export:
1. After the **Finish export** section appears, for **Export location**, browse and select a local folder for your new Standard logic app project.
- :::image type="content" source="media/export-from-consumption-to-standard-logic-app/select-local-folder.png" alt-text="Screenshot showing 'Finish export' section and 'Export location' property with selected local export project folder.":::
+ :::image type="content" source="media/export-from-consumption-to-standard-logic-app/select-local-folder.png" alt-text="Screenshot showing 'Finish export' section and 'Export location' property with selected local export project folder." lightbox="media/export-from-consumption-to-standard-logic-app/select-local-folder.png":::
1. If your workflow has *managed* connections that you want to deploy, which is only recommended for non-production environments, select **Deploy managed connections**, which shows existing resource groups in your Azure subscription. Select the resource group where you want to deploy the managed connections.
- :::image type="content" source="media/export-from-consumption-to-standard-logic-app/select-deploy-managed-connections-resource-group.png" alt-text="Screenshot showing 'Finish export' section with selected local export folder, 'Deploy managed connections' selected, and target resource group selected.":::
+ :::image type="content" source="media/export-from-consumption-to-standard-logic-app/select-deploy-managed-connections-resource-group.png" alt-text="Screenshot showing 'Finish export' section with selected local export folder, 'Deploy managed connections' selected, and target resource group selected." lightbox="media/export-from-consumption-to-standard-logic-app/select-deploy-managed-connections-resource-group.png":::
1. Under **After export steps**, review any required post-export steps, for example:
- :::image type="content" source="media/export-from-consumption-to-standard-logic-app/review-post-export-steps.png" alt-text="Screenshot showing 'After export steps' section and required post-export steps, if any.":::
+ :::image type="content" source="media/export-from-consumption-to-standard-logic-app/review-post-export-steps.png" alt-text="Screenshot showing 'After export steps' section and required post-export steps, if any." lightbox="media/export-from-consumption-to-standard-logic-app/review-post-export-steps.png":::
1. Based on your scenario, select **Export and finish** or **Export with warnings and finish**. The export tool downloads your project to your selected folder location, expands the project in Visual Studio Code, and deploys any managed connections, if you selected that option.
- :::image type="content" source="media/export-from-consumption-to-standard-logic-app/export-status.png" alt-text="Screenshot showing the 'Export status' section with export progress.":::
+ :::image type="content" source="media/export-from-consumption-to-standard-logic-app/export-status.png" alt-text="Screenshot showing the 'Export status' section with export progress." lightbox="media/export-from-consumption-to-standard-logic-app/export-status.png":::
1. After this process completes, Visual Studio Code opens a new workspace. You can now safely close the export window. 1. From your Standard logic app project, open and review the README.md file for the required post-export steps.
- :::image type="content" source="medi file opened.":::
+ :::image type="content" source="medi file opened." lightbox="media/export-from-consumption-to-standard-logic-app/open-readme.png":::
## Post-export steps
machine-learning Concept Manage Ml Pitfalls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-manage-ml-pitfalls.md
Title: Avoid overfitting & imbalanced data with Automated machine learning
+ Title: Prevent overfitting and imbalanced data with Automated ML
-description: Identify and manage common pitfalls of ML models with Azure Machine Learning's Automated ML solutions.
+description: Identify and manage common pitfalls of machine learning models by using Automated ML solutions in Azure Machine Learning.
-+ Previously updated : 06/15/2023 Last updated : 07/11/2024+
+#customer intent: As a developer, I want to use Automated ML solutions in Azure Machine Learning, so I can find and address common issues like overfitting and imbalanced data.
# Prevent overfitting and imbalanced data with Automated ML
-Overfitting and imbalanced data are common pitfalls when you build machine learning models. By default, Azure Machine Learning's Automated ML provides charts and metrics to help you identify these risks, and implements best practices to help mitigate them.
+Overfitting and imbalanced data are common pitfalls when you build machine learning models. By default, the Automated ML feature in Azure Machine Learning provides charts and metrics to help you identify these risks. This article describes how you can implement best practices in Automated ML to help mitigate common issues.
## Identify overfitting
-Overfitting in machine learning occurs when a model fits the training data too well, and as a result can't accurately predict on unseen test data. In other words, the model has memorized specific patterns and noise in the training data, but is not flexible enough to make predictions on real data.
+Overfitting in machine learning occurs when a model fits the training data too well. As a result, the model can't make accurate predictions on unseen test data. The model memorized specific patterns and noise in the training data, and it's not flexible enough to make predictions on real data.
-Consider the following trained models and their corresponding train and test accuracies.
+Consider the following trained models and their corresponding train and test accuracies:
| Model | Train accuracy | Test accuracy |
-|-|-||
+| :---: | :---: | :---: |
| A | 99.9% | 95% |
-| B | 87% | 87% |
+| B | 87% | 87% |
| C | 99.9% | 45% |
-Consider model **A**, there is a common misconception that if test accuracy on unseen data is lower than training accuracy, the model is overfitted. However, test accuracy should always be less than training accuracy, and the distinction for overfit vs. appropriately fit comes down to *how much* less accurate.
+- Model **A**: The test for this model produces slightly less accuracy than the model training. There's a common misconception that if test accuracy on unseen data is lower than training accuracy, the model is overfitted. However, test accuracy should always be less than training accuracy. The distinction between overfitting and appropriately fitting data comes down to _how much_ less accurate the test results are.
-Compare models **A** and **B**, model **A** is a better model because it has higher test accuracy, and although the test accuracy is slightly lower at 95%, it is not a significant difference that suggests overfitting is present. You wouldn't choose model **B** because the train and test accuracies are closer together.
+- Model **A** versus model **B**: Model **A** is the better model because it has higher test accuracy. Although its test accuracy is slightly lower than its training accuracy at 95%, that difference isn't significant enough to suggest overfitting. You wouldn't choose model **B** just because its train and test accuracies are closer together.
-Model **C** represents a clear case of overfitting; the training accuracy is high but the test accuracy isn't anywhere near as high. This distinction is subjective, but comes from knowledge of your problem and data, and what magnitudes of error are acceptable.
+- Model **C**: This model represents a clear case of overfitting. The training accuracy is high and the test accuracy is low. This distinction is subjective, but comes from knowledge of your problem and data, and what magnitudes of error are acceptable.
## Prevent overfitting
-In the most egregious cases, an overfitted model assumes that the feature value combinations seen during training always results in the exact same output for the target.
+In the most egregious cases, an overfitted model assumes that the feature value combinations seen during training always result in the exact same output for the target. To avoid overfitting your data, follow machine learning best practices. There are several methods you can configure in your model implementation, and Automated ML also provides other options by default to help prevent overfitting.
-The best way to prevent overfitting is to follow ML best practices including:
+The following table summarizes common best practices:
-* Using more training data, and eliminating statistical bias
-* Preventing target leakage
-* Using fewer features
-* **Regularization and hyperparameter optimization**
-* **Model complexity limitations**
-* **Cross-validation**
+| Best practice | You implement | Automated ML implements |
+| --- | :---: | :---: |
+| Use more training data, and eliminate statistical bias | X | |
+| Prevent target leakage | X | |
+| Incorporate fewer features | X | |
+| Support regularization and hyperparameter optimization | | X |
+| Apply model complexity limitations | | X |
+| Use cross-validation | | X |
-In the context of Automated ML, the first three ways lists best practices you implement. The last three bolded items are **best practices Automated ML implements** by default to protect against overfitting. In settings other than Automated ML, all six best practices are worth following to avoid overfitting models.
+## Apply best practices to prevent overfitting
-## Best practices you implement
+The following sections describe best practices you can use in your machine learning model implementation to prevent overfitting.
### Use more data
-Using more data is the simplest and best possible way to prevent overfitting, and as an added bonus typically increases accuracy. When you use more data, it becomes harder for the model to memorize exact patterns, and it is forced to reach solutions that are more flexible to accommodate more conditions. It's also important to recognize statistical bias, to ensure your training data doesn't include isolated patterns that don't exist in live-prediction data. This scenario can be difficult to solve, because there could be overfitting present when compared to live test data.
+Using more data is the simplest and best possible way to prevent overfitting, and this approach typically increases accuracy. When you use more data, it becomes harder for the model to memorize exact patterns. The model is forced to reach solutions that are more flexible to accommodate more conditions. It's also important to recognize statistical bias, to ensure your training data doesn't include isolated patterns that don't exist in live-prediction data. This scenario can be difficult to solve because there can be overfitting present when compared to live test data.
### Prevent target leakage
-Target leakage is a similar issue, where you may not see overfitting between train/test sets, but rather it appears at prediction-time. Target leakage occurs when your model "cheats" during training by having access to data that it shouldn't normally have at prediction-time. For example, to predict on Monday what a commodity price will be on Friday, if your features accidentally included data from Thursdays, that would be data the model won't have at prediction-time since it can't see into the future. Target leakage is an easy mistake to miss, but is often characterized by abnormally high accuracy for your problem. If you're attempting to predict stock price and trained a model at 95% accuracy, there's likely target leakage somewhere in your features.
+Target leakage is a similar issue. You might not see overfitting between the train and test sets, but the leakage issue appears at prediction-time. Target leakage occurs when your model "cheats" during training by accessing data that it shouldn't normally have at prediction-time. An example is for the model to predict on Monday what the commodity price is for Friday. If your features accidentally include data from Thursdays, the model has access to data not available at prediction-time because it can't see into the future. Target leakage is an easy mistake to miss. It's often visible where you have abnormally high accuracy for your problem. If you're attempting to predict stock price and trained a model at 95% accuracy, there's likely target leakage somewhere in your features.
-### Use fewer features
+### Incorporate fewer features
-Removing features can also help with overfitting by preventing the model from having too many fields to use to memorize specific patterns, thus causing it to be more flexible. It can be difficult to measure quantitatively, but if you can remove features and retain the same accuracy, you have likely made the model more flexible and have reduced the risk of overfitting.
+Removing features can also help with overfitting by preventing the model from having too many fields to use to memorize specific patterns, thus causing it to be more flexible. This effect can be difficult to measure quantitatively, but if you can remove features and retain the same accuracy, you've likely made the model more flexible and reduced the risk of overfitting.
-## Best practices Automated ML implements
+## Review Automated ML features to prevent overfitting
-### Regularization and hyperparameter tuning
+The following sections describe best practices provided by default in Automated ML to help prevent overfitting.
-**Regularization** is the process of minimizing a cost function to penalize complex and overfitted models. There's different types of regularization functions, but in general they all penalize model coefficient size, variance, and complexity. Automated ML uses L1 (Lasso), L2 (Ridge), and ElasticNet (L1 and L2 simultaneously) in different combinations with different model hyperparameter settings that control overfitting. Automated ML varies how much a model is regulated and choose the best result.
+### Support regularization and hyperparameter tuning
-### Model complexity limitations
+**Regularization** is the process of minimizing a cost function to penalize complex and overfitted models. There are different types of regularization functions. In general, all functions penalize model coefficient size, variance, and complexity. Automated ML uses L1 (Lasso), L2 (Ridge), and ElasticNet (L1 and L2 simultaneously) in different combinations with different model hyperparameter settings that control overfitting. Automated ML varies how much a model is regularized and chooses the best result.
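Outside of Automated ML, the same idea can be sketched with scikit-learn's linear models. The alpha and l1_ratio values below are arbitrary examples of the regularization strengths that Automated ML searches over for you, and the synthetic data is a stand-in for your own training set.

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge, Lasso, ElasticNet
from sklearn.model_selection import cross_val_score

# Synthetic regression data stands in for your training set.
X, y = make_regression(n_samples=500, n_features=20, noise=10.0, random_state=0)

# L2 (Ridge), L1 (Lasso), and combined (ElasticNet) regularization with example
# strengths. Automated ML varies settings like these and keeps the best result.
for model in (Ridge(alpha=1.0), Lasso(alpha=0.1), ElasticNet(alpha=0.1, l1_ratio=0.5)):
    score = cross_val_score(model, X, y, cv=5).mean()
    print(type(model).__name__, round(score, 3))
```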
-Automated ML also implements explicit model complexity limitations to prevent overfitting. In most cases, this implementation is specifically for decision tree or forest algorithms, where individual tree max-depth is limited, and the total number of trees used in forest or ensemble techniques are limited.
+### Apply model complexity limitations
-### Cross-validation
+Automated ML also implements explicit model complexity limitations to prevent overfitting. In most cases, this implementation is specifically for decision tree or forest algorithms. Individual tree max-depth is limited, and the total number of trees used in forest or ensemble techniques are limited.
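As an illustration of the same kind of limits outside Automated ML, a forest model can be constrained explicitly. The specific values below are assumptions for the sketch, not the limits that Automated ML applies internally.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=500, n_features=20, noise=10.0, random_state=0)

# Explicit complexity limits: cap individual tree depth and the total number of
# trees in the ensemble. The values here are illustrative only.
model = RandomForestRegressor(max_depth=8, n_estimators=100, random_state=0)
model.fit(X, y)
print(round(model.score(X, y), 3))
```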
-Cross-validation (CV) is the process of taking many subsets of your full training data and training a model on each subset. The idea is that a model could get "lucky" and have great accuracy with one subset, but by using many subsets the model won't achieve this high accuracy every time. When doing CV, you provide a validation holdout dataset, specify your CV folds (number of subsets) and Automated ML trains your model and tune hyperparameters to minimize error on your validation set. One CV fold could be overfitted, but by using many of them it reduces the probability that your final model is overfitted. The tradeoff is that CV results in longer training times and greater cost, because you train a model once for each *n* in the CV subsets.
+### Use cross-validation
-> [!NOTE]
-> Cross-validation isn't enabled by default; it must be configured in Automated machine learning settings. However, after cross-validation is configured and a validation data set has been provided, the process is automated for you.
+Cross-validation (CV) is the process of taking many subsets of your full training data and training a model on each subset. The idea is that a model might get "lucky" and have great accuracy with one subset, but by using many subsets, the model can't achieve high accuracy every time. When you do CV, you provide a validation holdout dataset, specify your CV folds (number of subsets), and Automated ML trains your model and tunes hyperparameters to minimize error on your validation set. One CV fold might be overfitted, but by using many of them, the process reduces the probability that your final model is overfitted. The tradeoff is that CV results in longer training times and greater cost, because you train a model one time for each *n* in the CV subsets.
-<a name="imbalance"></a>
+> [!NOTE]
+> Cross-validation isn't enabled by default. This feature must be configured in Automated machine learning settings. However, after cross-validation is configured and a validation data set is provided, the process is automated for you.
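As a minimal sketch of that configuration, assuming the SDK (v1) `AutoMLConfig` class used in the regression tutorial later in this document, the number of cross-validation folds can be set as follows. The tiny dataframe and column names are placeholders for your own data.

```python
import pandas as pd
from azureml.train.automl import AutoMLConfig

# Placeholder training data; "target" is the placeholder label column.
training_data = pd.DataFrame({"feature": [1.0, 2.0, 3.0, 4.0, 5.0],
                              "target": [1.1, 1.9, 3.2, 3.9, 5.1]})

# Five-fold cross-validation is used because no separate validation set is given.
automl_config = AutoMLConfig(
    task="regression",
    training_data=training_data,
    label_column_name="target",
    n_cross_validations=5,
    primary_metric="normalized_root_mean_squared_error",
)
```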
## Identify models with imbalanced data
Imbalanced data is commonly found in data for machine learning classification sc
In addition, Automated ML jobs generate the following charts automatically. These charts help you understand the correctness of the classifications of your model, and identify models potentially impacted by imbalanced data.
-Chart| Description
-|
-[Confusion Matrix](how-to-understand-automated-ml.md#confusion-matrix)| Evaluates the correctly classified labels against the actual labels of the data.
-[Precision-recall](how-to-understand-automated-ml.md#precision-recall-curve)| Evaluates the ratio of correct labels against the ratio of found label instances of the data
-[ROC Curves](how-to-understand-automated-ml.md#roc-curve)| Evaluates the ratio of correct labels against the ratio of false-positive labels.
+| Chart | Description |
+| --- | --- |
+| [Confusion matrix](how-to-understand-automated-ml.md#confusion-matrix) | Evaluates the correctly classified labels against the actual labels of the data. |
+| [Precision-recall](how-to-understand-automated-ml.md#precision-recall-curve) | Evaluates the ratio of correct labels against the ratio of found label instances of the data. |
+| [ROC curves](how-to-understand-automated-ml.md#roc-curve) | Evaluates the ratio of correct labels against the ratio of false-positive labels. |
## Handle imbalanced data
-As part of its goal of simplifying the machine learning workflow, Automated ML has built in capabilities to help deal with imbalanced data such as,
--- A weight column: Automated ML creates a column of weights as input to cause rows in the data to be weighted up or down, which can be used to make a class more or less "important."--- The algorithms used by Automated ML detect imbalance when the number of samples in the minority class is equal to or fewer than 20% of the number of samples in the majority class, where minority class refers to the one with fewest samples and majority class refers to the one with most samples. Subsequently, automated machine learning will run an experiment with subsampled data to check if using class weights would remedy this problem and improve performance. If it ascertains a better performance through this experiment, then this remedy is applied.
+As part of the goal to simplify the machine learning workflow, Automated ML offers built-in capabilities to help deal with imbalanced data:
-- Use a performance metric that deals better with imbalanced data. For example, the AUC_weighted is a primary metric that calculates the contribution of every class based on the relative number of samples representing that class, hence is more robust against imbalance.
+- Automated ML creates a **column of weights** as input to cause rows in the data to be weighted up or down, which can be used to make a class more or less "important."
-The following techniques are additional options to handle imbalanced data outside of Automated ML.
+- The algorithms used by Automated ML detect imbalance when the number of samples in the minority class is equal to or fewer than 20% of the number of samples in the majority class. The minority class refers to the one with fewest samples and the majority class refers to the one with most samples. Later, automated machine learning runs an experiment with subsampled data to check if using class weights can remedy this problem and improve performance. If it ascertains a better performance through this experiment, it applies the remedy.
-- Resampling to even the class imbalance, either by up-sampling the smaller classes or down-sampling the larger classes. These methods require expertise to process and analyze.
+- Use a performance metric that deals better with imbalanced data. For example, the AUC_weighted is a primary metric that calculates the contribution of every class based on the relative number of samples representing that class. This metric is more robust against imbalance.
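For example, a classification experiment can optimize that metric directly. The sketch below assumes the SDK (v1) `AutoMLConfig` class, and the toy imbalanced dataframe and column names are placeholders.

```python
import pandas as pd
from azureml.train.automl import AutoMLConfig

# Toy imbalanced dataset: 18 samples of class 0 and 2 samples of class 1.
training_data = pd.DataFrame({"feature": range(20),
                              "label": [0] * 18 + [1] * 2})

# AUC_weighted weights each class by its relative sample count, which makes the
# optimization target more robust to class imbalance.
automl_config = AutoMLConfig(
    task="classification",
    training_data=training_data,
    label_column_name="label",
    primary_metric="AUC_weighted",
    n_cross_validations=5,
)
```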
-- Review performance metrics for imbalanced data. For example, the F1 score is the harmonic mean of precision and recall. Precision measures a classifier's exactness, where higher precision indicates fewer false positives, while recall measures a classifier's completeness, where higher recall indicates fewer false negatives.
+The following techniques are other options to handle imbalanced data outside of Automated ML:
-## Next steps
+- Resample to even the class imbalance. You can up-sample the smaller classes or down-sample the larger classes, as shown in the sketch after this list. These methods require expertise to process and analyze.
-See examples and learn how to build models using Automated ML:
+- Review performance metrics for imbalanced data. For example, the F1 score is the harmonic mean of precision and recall. Precision measures a classifier's exactness, where higher precision indicates fewer false positives. Recall measures a classifier's completeness, where higher recall indicates fewer false negatives.
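The following sketch illustrates both of these options with scikit-learn: it up-samples the minority class in a toy dataset and then reviews precision, recall, and F1 on a held-out test set. The data, model, and threshold are placeholders, not a recommendation for production use.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.utils import resample

# Toy imbalanced dataset: roughly 10% of samples belong to class 1.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + rng.normal(scale=0.5, size=1000) > 1.3).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0, stratify=y)

# Up-sample the minority class in the training set so both classes are balanced.
minority, majority = X_train[y_train == 1], X_train[y_train == 0]
minority_upsampled = resample(minority, replace=True, n_samples=len(majority), random_state=0)
X_balanced = np.vstack([majority, minority_upsampled])
y_balanced = np.array([0] * len(majority) + [1] * len(minority_upsampled))

model = LogisticRegression().fit(X_balanced, y_balanced)
predictions = model.predict(X_test)

# Review metrics that reflect imbalance better than raw accuracy.
print("precision:", round(precision_score(y_test, predictions), 3))
print("recall:   ", round(recall_score(y_test, predictions), 3))
print("f1:       ", round(f1_score(y_test, predictions), 3))
```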
-+ Follow the [Tutorial: Train an object detection model with automated machine learning and Python](tutorial-auto-train-image-models.md).
+## Next step
-+ Configure the settings for automatic training experiment:
- + In Azure Machine Learning studio, [use these steps](how-to-use-automated-ml-for-ml-models.md).
- + With the Python SDK, [use these steps](how-to-configure-auto-train.md).
+> [!div class="nextstepaction"]
+> [Train an object detection model with automated machine learning and Python](tutorial-auto-train-image-models.md)
machine-learning Concept Model Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-model-catalog.md
For language models deployed to MaaS, Azure Machine Learning implements a defaul
Content filtering (preview) occurs synchronously as the service processes prompts to generate content, and you might be billed separately as per [AACS pricing](https://azure.microsoft.com/pricing/details/cognitive-services/content-safety/) for such use. You can disable content filtering (preview) for individual serverless endpoints either at the time when you first deploy a language model or in the deployment details page by selecting the content filtering toggle. If you use a model in MaaS via an API other than the [Azure AI Model Inference API](../ai-studio/reference/reference-model-inference-api.md), content filtering isn't enabled unless you implement it separately by using [Azure AI Content Safety](../ai-services/content-safety/quickstart-text.md). If you use a model in MaaS without content filtering, you run a higher risk of exposing users to harmful content.
+### Network isolation for models deployed via Serverless APIs
+
+Endpoints for models deployed as Serverless APIs follow the public network access (PNA) flag setting of the workspace in which the deployment exists. To secure your MaaS endpoint, disable the PNA flag on your workspace. You can secure inbound communication from a client to your endpoint by using a private endpoint for the workspace.
+
+To set the PNA flag for the workspace:
+
+* Go to the [Azure portal](https://portal.azure.com/).
+* Search for _Azure Machine Learning_, and select your workspace from the list of workspaces.
+* On the Overview page, use the left navigation pane to go to **Settings** > **Networking**.
+* Under the **Public access** tab, you can configure settings for the public network access flag.
+* Save your changes. Your changes might take up to five minutes to propagate.
+
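If you prefer to make the same change from code rather than the portal, the following sketch uses the Azure Machine Learning Python SDK (`azure-ai-ml`). Treat the property name and accepted values as assumptions to verify against the SDK reference for your version; the subscription, resource group, and workspace names are placeholders.

```python
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

# Placeholders: substitute your own subscription, resource group, and workspace.
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# Disable the public network access (PNA) flag on the workspace. As noted above,
# the change can take a few minutes to propagate.
workspace = ml_client.workspaces.get(name="<workspace-name>")
workspace.public_network_access = "Disabled"
ml_client.workspaces.begin_update(workspace).result()
```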
+#### Limitations
+
+* If you have a workspace with a private endpoint created before July 11, 2024, new MaaS endpoints added to this workspace won't follow its networking configuration. Instead, you need to create a new private endpoint for the workspace and create new serverless API deployments in the workspace so that the new deployments can follow the workspace's networking configuration.
+* If you have a workspace with MaaS deployments created before July 11, 2024, and you enable a private endpoint on this workspace, the existing MaaS deployments won't follow the workspace's networking configuration. For serverless API deployments in the workspace to follow the workspace's configuration, you need to create the deployments again.
+* Currently [On Your Data](#rag-with-models-deployed-through-maas) support isn't available for MaaS deployments in private workspaces, since private workspaces have the PNA flag disabled.
+* Any network configuration change (for example, enabling or disabling the PNA flag) might take up to five minutes to propagate.
+ ## Learn more * Learn [how to use foundation Models in Azure Machine Learning](./how-to-use-foundation-models.md) for fine-tuning, evaluation, and deployment using Azure Machine Learning studio UI or code based methods.
machine-learning How To Deploy Models Llama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-models-llama.md
For reference about how to invoke Meta Llama 3 models deployed to real-time endp
#### Additional inference examples
+# [Meta Llama 3](#tab/llama-three)
+ | **Package** | **Sample Notebook** | |-|-|
-| CLI using CURL and Python web requests | [cohere-embed.ipynb](https://aka.ms/samples/embed-v3/webrequests)|
-| OpenAI SDK (experimental) | [openaisdk.ipynb](https://aka.ms/samples/cohere-embed/openaisdk) |
-| LangChain | [langchain.ipynb](https://aka.ms/samples/cohere-embed/langchain) |
-| Cohere SDK | [cohere-sdk.ipynb](https://aka.ms/samples/cohere-embed/cohere-python-sdk) |
-| LiteLLM SDK | [litellm.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/cohere/litellm.ipynb) |
+| OpenAI SDK (experimental) | [openaisdk.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/meta-llama3/openaisdk.ipynb) |
+| LangChain | [langchain.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/meta-llama3/langchain.ipynb) |
+| WebRequests | [webrequests.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/meta-llama3/webrequests.ipynb) |
+| LiteLLM SDK | [litellm.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/meta-llama3/litellm.ipynb) |
+
+# [Meta Llama 2](#tab/llama-two)
+
+| **Package** | **Sample Notebook** |
+|-|-|
+| OpenAI SDK (experimental) | [openaisdk.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/llama2/openaisdk.ipynb) |
+| LangChain | [langchain.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/llama2/langchain.ipynb) |
+| WebRequests | [webrequests.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/llama2/webrequests.ipynb) |
+| LiteLLM SDK | [litellm.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/llama2/litellm.ipynb) |
++ ## Cost and quotas
machine-learning How To Auto Train Models V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-auto-train-models-v1.md
Title: 'AutoML-train regression model (SDK v1)'
+ Title: Train regression model with Automated ML (SDK v1)
-description: Train a regression model to predict NYC taxi fares with the Azure Machine Learning Python SDK using Azure Machine Learning automated ML SDK (v1).
+description: Train a regression model to predict taxi fares with the Azure Machine Learning Python SDK by using the Azure Machine Learning Automated ML SDK (v1).
Previously updated : 01/25/2023 Last updated : 07/11/2024 +
+#customer intent: As a developer, I want to use Automated ML in the Azure Machine Learning Python SDK (v1), so I can train a regression model that predicts taxi fares.
-# Train a regression model with AutoML and Python (SDK v1)
+# Train regression model with Automated ML and Python (SDK v1)
[!INCLUDE [sdk v1](../includes/machine-learning-sdk-v1.md)]
-In this article, you learn how to train a regression model with the Azure Machine Learning Python SDK using Azure Machine Learning automated ML. This regression model predicts NYC taxi fares.
-
-This process accepts training data and configuration settings, and automatically iterates through combinations of different feature normalization/standardization methods, models, and hyperparameter settings to arrive at the best model.
+In this article, you learn how to train a regression model with the Azure Machine Learning Python SDK by using Azure Machine Learning Automated ML. The regression model predicts passenger fares for taxi cabs operating in New York City (NYC). You write code with the Python SDK to configure a workspace with prepared data, train the model locally with custom parameters, and explore the results.
-![Flow diagram](./media/how-to-auto-train-models/flow2.png)
+The process accepts training data and configuration settings. It automatically iterates through combinations of different feature normalization/standardization methods, models, and hyperparameter settings to arrive at the best model. The following diagram illustrates the process flow for the regression model training:
-You write code using the Python SDK in this article. You learn the following tasks:
-> [!div class="checklist"]
-> * Download, transform, and clean data using Azure Open Datasets
-> * Train an automated machine learning regression model
-> * Calculate model accuracy
+## Prerequisites
-For no-code AutoML, try the following tutorials:
+- An Azure subscription. If you don't have one, you can create a [free or paid account](https://azure.microsoft.com/free/) before you begin.
-* [Tutorial: Train no-code classification models](../tutorial-first-experiment-automated-ml.md)
+- An Azure Machine Learning workspace or compute instance. To prepare these resources, see [Quickstart: Get started with Azure Machine Learning](../quickstart-create-resources.md).
-* [Tutorial: Forecast demand with automated machine learning](../tutorial-automated-ml-forecast.md)
+- Get the prepared sample data for the tutorial exercises by loading a notebook into your workspace:
-## Prerequisites
+ 1. Go to your workspace in the Azure Machine Learning studio, select **Notebooks**, and then select the **Samples** tab.
-If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version](https://azure.microsoft.com/free/) of Azure Machine Learning today.
+ 1. In the list of notebooks, expand the **Samples** > **SDK v1** > **tutorials** > **regression-automl-nyc-taxi-data** node.
+
+ 1. Select the _regression-automated-ml.ipynb_ notebook.
-* Complete the [Quickstart: Get started with Azure Machine Learning](../quickstart-create-resources.md) if you don't already have an Azure Machine Learning workspace or a compute instance.
-* After you complete the quickstart:
- 1. Select **Notebooks** in the studio.
- 1. Select the **Samples** tab.
- 1. Open the *SDK v1/tutorials/regression-automl-nyc-taxi-data/regression-automated-ml.ipynb* notebook.
- 1. To run each cell in the tutorial, select **Clone this notebook**
+ 1. To run each notebook cell as part of this tutorial, select **Clone this file**.
-This article is also available on [GitHub](https://github.com/Azure/MachineLearningNotebooks/tree/master/tutorials) if you wish to run it in your own [local environment](how-to-configure-environment.md).
-To get the required packages,
-* [Install the full `automl` client](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment).
-* Run `pip install azureml-opendatasets azureml-widgets` to get the required packages.
+ **Alternate approach**: If you prefer, you can run the tutorial exercises in a [local environment](how-to-configure-environment.md). The tutorial is available in the [Azure Machine Learning Notebooks repository](https://github.com/Azure/MachineLearningNotebooks/tree/master/tutorials) on GitHub. For this approach, follow these steps to get the required packages:
+
+ 1. [Install the full `automl` client](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment).
+
+ 1. Run the `pip install azureml-opendatasets azureml-widgets` command on your local machine to get the required packages.
## Download and prepare data
-Import the necessary packages. The Open Datasets package contains a class representing each data source (`NycTlcGreen` for example) to easily filter date parameters before downloading.
+The Open Datasets package contains a class that represents each data source (such as `NycTlcGreen`) to easily filter date parameters before downloading.
+
+The following code imports the necessary packages:
```python from azureml.opendatasets import NycTlcGreen
from datetime import datetime
from dateutil.relativedelta import relativedelta ```
-Begin by creating a dataframe to hold the taxi data. When you work in a non-Spark environment, Open Datasets only allows downloading one month of data at a time with certain classes to avoid `MemoryError` with large datasets.
+The first step is to create a dataframe for the taxi data. When you work in a non-Spark environment, the Open Datasets package allows downloading only one month of data at a time with certain classes. This approach helps to avoid the `MemoryError` issue that can occur with large datasets.
-To download taxi data, iteratively fetch one month at a time, and before appending it to `green_taxi_df` randomly sample 2,000 records from each month to avoid bloating the dataframe. Then preview the data.
+To download the taxi data, iteratively fetch one month at a time. Before you append the next set of data to the `green_taxi_df` dataframe, randomly sample 2,000 records from each month, and then preview the data. This approach helps to avoid bloating the dataframe.
+The following code creates the dataframe, fetches the data, and loads it into the dataframe:
```python green_taxi_df = pd.DataFrame([])
start = datetime.strptime("1/1/2015","%m/%d/%Y")
end = datetime.strptime("1/31/2015","%m/%d/%Y") for sample_month in range(12):
- temp_df_green = NycTlcGreen(start + relativedelta(months=sample_month), end + relativedelta(months=sample_month)) \
- .to_pandas_dataframe()
- green_taxi_df = green_taxi_df.append(temp_df_green.sample(2000))
+ temp_df_green = NycTlcGreen(start + relativedelta(months=sample_month), end + relativedelta(months=sample_month)) \
+ .to_pandas_dataframe()
+ green_taxi_df = green_taxi_df.append(temp_df_green.sample(2000))
green_taxi_df.head(10) ```
-|vendorID| lpepPickupDatetime| lpepDropoffDatetime| passengerCount| tripDistance| puLocationId| doLocationId| pickupLongitude| pickupLatitude| dropoffLongitude |...| paymentType |fareAmount |extra| mtaTax| improvementSurcharge| tipAmount| tollsAmount| ehailFee| totalAmount| tripType|
-|-|-|-|-|-|-||--||||-|-|-|--||-|--|-|-|-|
-|131969|2|2015-01-11 05:34:44|2015-01-11 05:45:03|3|4.84|None|None|-73.88|40.84|-73.94|...|2|15.00|0.50|0.50|0.3|0.00|0.00|nan|16.30|
-|1129817|2|2015-01-20 16:26:29|2015-01-20 16:30:26|1|0.69|None|None|-73.96|40.81|-73.96|...|2|4.50|1.00|0.50|0.3|0.00|0.00|nan|6.30|
-|1278620|2|2015-01-01 05:58:10|2015-01-01 06:00:55|1|0.45|None|None|-73.92|40.76|-73.91|...|2|4.00|0.00|0.50|0.3|0.00|0.00|nan|4.80|
-|348430|2|2015-01-17 02:20:50|2015-01-17 02:41:38|1|0.00|None|None|-73.81|40.70|-73.82|...|2|12.50|0.50|0.50|0.3|0.00|0.00|nan|13.80|
-1269627|1|2015-01-01 05:04:10|2015-01-01 05:06:23|1|0.50|None|None|-73.92|40.76|-73.92|...|2|4.00|0.50|0.50|0|0.00|0.00|nan|5.00|
-|811755|1|2015-01-04 19:57:51|2015-01-04 20:05:45|2|1.10|None|None|-73.96|40.72|-73.95|...|2|6.50|0.50|0.50|0.3|0.00|0.00|nan|7.80|
-|737281|1|2015-01-03 12:27:31|2015-01-03 12:33:52|1|0.90|None|None|-73.88|40.76|-73.87|...|2|6.00|0.00|0.50|0.3|0.00|0.00|nan|6.80|
-|113951|1|2015-01-09 23:25:51|2015-01-09 23:39:52|1|3.30|None|None|-73.96|40.72|-73.91|...|2|12.50|0.50|0.50|0.3|0.00|0.00|nan|13.80|
-|150436|2|2015-01-11 17:15:14|2015-01-11 17:22:57|1|1.19|None|None|-73.94|40.71|-73.95|...|1|7.00|0.00|0.50|0.3|1.75|0.00|nan|9.55|
-|432136|2|2015-01-22 23:16:33 2015-01-22 23:20:13 1 0.65|None|None|-73.94|40.71|-73.94|...|2|5.00|0.50|0.50|0.3|0.00|0.00|nan|6.30|
-
-Remove some of the columns that you won't need for training or other feature building. Automate machine learning will automatically handle time-based features such as **lpepPickupDatetime**.
+The following table shows the many columns of values in the sample taxi data:
+
+| vendorID | lpepPickupDatetime | lpepDropoffDatetime | passengerCount | tripDistance | puLocationId | doLocationId | pickupLongitude | pickupLatitude | dropoffLongitude |...| paymentType | fareAmount | extra | mtaTax | improvementSurcharge | tipAmount | tollsAmount | ehailFee | totalAmount | tripType |
+|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
+| 2 | 2015-01-30 18:38:09 | 2015-01-30 19:01:49 | 1 | 1.88 | None | None | -73.996155 | 40.690903 | -73.964287 | ... | 1 | 15.0 | 1.0 | 0.5 | 0.3 | 4.00 | 0.0 | None | 20.80 | 1.0 |
+| 1 | 2015-01-17 23:21:39 | 2015-01-17 23:35:16 | 1 | 2.70 | None | None | -73.978508 | 40.687984 | -73.955116 | ... | 1 | 11.5 | 0.5 | 0.5 | 0.3 | 2.55 | 0.0 | None | 15.35 | 1.0 |
+| 2 | 2015-01-16 01:38:40 | 2015-01-16 01:52:55 | 1 | 3.54 | None | None | -73.957787 | 40.721779 | -73.963005 | ... | 1 | 13.5 | 0.5 | 0.5 | 0.3 | 2.80 | 0.0 | None | 17.60 | 1.0 |
+| 2 | 2015-01-04 17:09:26 | 2015-01-04 17:16:12 | 1 | 1.00 | None | None | -73.919914 | 40.826023 | -73.904839 | ... | 2 | 6.5 | 0.0 | 0.5 | 0.3 | 0.00 | 0.0 | None | 7.30 | 1.0 |
+| 1 | 2015-01-14 10:10:57 | 2015-01-14 10:33:30 | 1 | 5.10 | None | None | -73.943710 | 40.825439 | -73.982964 | ... | 1 | 18.5 | 0.0 | 0.5 | 0.3 | 3.85 | 0.0 | None | 23.15 | 1.0 |
+| 2 | 2015-01-19 18:10:41 | 2015-01-19 18:32:20 | 1 | 7.41 | None | None | -73.940918 | 40.839714 | -73.994339 | ... | 1 | 24.0 | 0.0 | 0.5 | 0.3 | 4.80 | 0.0 | None | 29.60 | 1.0 |
+| 2 | 2015-01-01 15:44:21 | 2015-01-01 15:50:16 | 1 | 1.03 | None | None | -73.985718 | 40.685646 | -73.996773 | ... | 1 | 6.5 | 0.0 | 0.5 | 0.3 | 1.30 | 0.0 | None | 8.60 | 1.0 |
+| 2 | 2015-01-12 08:01:21 | 2015-01-12 08:14:52 | 5 | 2.94 | None | None | -73.939865 | 40.789822 | -73.952957 | ... | 2 | 12.5 | 0.0 | 0.5 | 0.3 | 0.00 | 0.0 | None | 13.30 | 1.0 |
+| 1 | 2015-01-16 21:54:26 | 2015-01-16 22:12:39 | 1 | 3.00 | None | None | -73.957939 | 40.721928 | -73.926247 | ... | 1 | 14.0 | 0.5 | 0.5 | 0.3 | 2.00 | 0.0 | None | 17.30 | 1.0 |
+| 2 | 2015-01-06 06:34:53 | 2015-01-06 06:44:23 | 1 | 2.31 | None | None | -73.943825 | 40.810257 | -73.943062 | ... | 1 | 10.0 | 0.0 | 0.5 | 0.3 | 2.00 | 0.0 | None | 12.80 | 1.0 |
+
+It's helpful to remove some columns that you don't need for training or other feature building. For example, the code removes the **lpepDropoffDatetime** column but keeps **lpepPickupDatetime**, because Automated ML automatically handles time-based features.
+
+The following code removes 14 columns from the sample data:
```python columns_to_remove = ["lpepDropoffDatetime", "puLocationId", "doLocationId", "extra", "mtaTax",
- "improvementSurcharge", "tollsAmount", "ehailFee", "tripType", "rateCodeID",
- "storeAndFwdFlag", "paymentType", "fareAmount", "tipAmount"
- ]
+ "improvementSurcharge", "tollsAmount", "ehailFee", "tripType", "rateCodeID",
+ "storeAndFwdFlag", "paymentType", "fareAmount", "tipAmount"
+ ]
for col in columns_to_remove:
- green_taxi_df.pop(col)
+ green_taxi_df.pop(col)
green_taxi_df.head(5) ``` ### Cleanse data
-Run the `describe()` function on the new dataframe to see summary statistics for each field.
+The next step is to cleanse the data.
+
+The following code runs the `describe()` function on the new dataframe to produce summary statistics for each field:
```python green_taxi_df.describe() ```
-|vendorID|passengerCount|tripDistance|pickupLongitude|pickupLatitude|dropoffLongitude|dropoffLatitude| totalAmount|month_num day_of_month|day_of_week|hour_of_day
-|-|-|||-|||||||
-|count|48000.00|48000.00|48000.00|48000.00|48000.00|48000.00|48000.00|48000.00|48000.00|48000.00|
-|mean|1.78|1.37|2.87|-73.83|40.69|-73.84|40.70|14.75|6.50|15.13|
-|std|0.41|1.04|2.93|2.76|1.52|2.61|1.44|12.08|3.45|8.45|
-|min|1.00|0.00|0.00|-74.66|0.00|-74.66|0.00|-300.00|1.00|1.00|
-|25%|2.00|1.00|1.06|-73.96|40.70|-73.97|40.70|7.80|3.75|8.00|
-|50%|2.00|1.00|1.90|-73.94|40.75|-73.94|40.75|11.30|6.50|15.00|
-|75%|2.00|1.00|3.60|-73.92|40.80|-73.91|40.79|17.80|9.25|22.00|
-|max|2.00|9.00|97.57|0.00|41.93|0.00|41.94|450.00|12.00|30.00|
--
-From the summary statistics, you see that there are several fields that have outliers or values that reduce model accuracy. First filter the lat/long fields to be within the bounds of the Manhattan area. This filters out longer taxi trips or trips that are outliers in respect to their relationship with other features.
+The following table shows summary statistics for the remaining fields in the sample data:
+
+| | vendorID | passengerCount | tripDistance | pickupLongitude | pickupLatitude | dropoffLongitude | dropoffLatitude | totalAmount |
+| --- |:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
+| **count** | 24000.00 | 24000.00 | 24000.00 | 24000.00 | 24000.00 | 24000.00 | 24000.00 | 24000.00 |
+| **mean** | 1.777625 | 1.373625 | 2.893981 | -73.827403 | 40.689730 | -73.819670 | 40.684436 | 14.892744 |
+| **std** | 0.415850 | 1.046180 | 3.072343 | 2.821767 | 1.556082 | 2.901199 | 1.599776 | 12.339749 |
+| **min** | 1.00 | 0.00 | 0.00 | -74.357101 | 0.00 | -74.342766 | 0.00 | -120.80 |
+| **25%** | 2.00 | 1.00 | 1.05 | -73.959175 | 40.699127 | -73.966476 | 40.699459 | 8.00 |
+| **50%** | 2.00 | 1.00 | 1.93 | -73.945049 | 40.746754 | -73.944221 | 40.747536 | 11.30 |
+| **75%** | 2.00 | 1.00 | 3.70 | -73.917089 | 40.803060 | -73.909061 | 40.791526 | 17.80 |
+| **max** | 2.00 | 8.00 | 154.28 | 0.00 | 41.109089 | 0.00 | 40.982826 | 425.00 |
-Additionally filter the `tripDistance` field to be greater than zero but less than 31 miles (the haversine distance between the two lat/long pairs). This eliminates long outlier trips that have inconsistent trip cost.
+The summary statistics reveal several fields that have outliers, which are values that reduce model accuracy. To address this issue, filter the latitude/longitude (lat/long) fields so the values are within the bounds of the Manhattan area. This approach filters out longer taxi trips or trips that are outliers in respect to their relationship with other features.
-Lastly, the `totalAmount` field has negative values for the taxi fares, which don't make sense in the context of our model, and the `passengerCount` field has bad data with the minimum values being zero.
+Next, filter the `tripDistance` field for values that are greater than zero but less than 31 miles (the haversine distance between the two lat/long pairs). This technique eliminates long outlier trips that have inconsistent trip cost.
-Filter out these anomalies using query functions, and then remove the last few columns unnecessary for training.
+Lastly, the `totalAmount` field has negative values for the taxi fares, which don't make sense in the context of the model. The `passengerCount` field also contains bad data where the minimum value is zero.
+The following code filters out these value anomalies by using query functions. The code then removes the last few columns that aren't necessary for training:
```python final_df = green_taxi_df.query("pickupLatitude>=40.53 and pickupLatitude<=40.88")
final_df = final_df.query("passengerCount>0 and totalAmount>0")
columns_to_remove_for_training = ["pickupLongitude", "pickupLatitude", "dropoffLongitude", "dropoffLatitude"] for col in columns_to_remove_for_training:
- final_df.pop(col)
+ final_df.pop(col)
```
-Call `describe()` again on the data to ensure cleansing worked as expected. You now have a prepared and cleansed set of taxi, holiday, and weather data to use for machine learning model training.
+The last step in this sequence is to call the `describe()` function again on the data to ensure cleansing worked as expected. You now have a prepared and cleansed set of taxi data to use for machine learning model training:
```python final_df.describe()
final_df.describe()
## Configure workspace
-Create a workspace object from the existing workspace. A [Workspace](/python/api/azureml-core/azureml.core.workspace.workspace) is a class that accepts your Azure subscription and resource information. It also creates a cloud resource to monitor and track your model runs. `Workspace.from_config()` reads the file **config.json** and loads the authentication details into an object named `ws`. `ws` is used throughout the rest of the code in this article.
+Create a workspace object from the existing workspace. A [Workspace](/python/api/azureml-core/azureml.core.workspace.workspace) is a class that accepts your Azure subscription and resource information. It also creates a cloud resource to monitor and track your model runs.
+
+The following code calls the `Workspace.from_config()` function to read the _config.json_ file and load the authentication details into an object named `ws`.
```python from azureml.core.workspace import Workspace ws = Workspace.from_config() ```
-## Split the data into train and test sets
+The `ws` object is used throughout the rest of the code in this tutorial.
+
+## Split data into train and test sets
-Split the data into training and test sets by using the `train_test_split` function in the `scikit-learn` library. This function segregates the data into the x (**features**) data set for model training and the y (**values to predict**) data set for testing.
+Split the data into training and test sets by using the `train_test_split` function in the _scikit-learn_ library. This function segregates the data into the x (**features**) data set for model training and the y (**values to predict**) data set for testing.
The `test_size` parameter determines the percentage of data to allocate to testing. The `random_state` parameter sets a seed to the random generator, so that your train-test splits are deterministic.
+The following code calls the `train_test_split` function to load the x and y datasets:
+ ```python from sklearn.model_selection import train_test_split x_train, x_test = train_test_split(final_df, test_size=0.2, random_state=223) ```
-The purpose of this step is to have data points to test the finished model that haven't been used to train the model, in order to measure true accuracy.
+The purpose of this step is to set aside data points that aren't used to train the model, so you can test the finished model and measure its true accuracy. A well-trained model is one that can make accurate predictions from unseen data. You now have data prepared for autotraining a machine learning model.
-In other words, a well-trained model should be able to accurately make predictions from data it hasn't already seen. You now have data prepared for auto-training a machine learning model.
-
-## Automatically train a model
+## Automatically train model
To automatically train a model, take the following steps:+ 1. Define settings for the experiment run. Attach your training data to the configuration, and modify settings that control the training process.
-1. Submit the experiment for model tuning. After submitting the experiment, the process iterates through different machine learning algorithms and hyperparameter settings, adhering to your defined constraints. It chooses the best-fit model by optimizing an accuracy metric.
+
+1. Submit the experiment for model tuning. After you submit the experiment, the process iterates through different machine learning algorithms and hyperparameter settings, adhering to your defined constraints. It chooses the best-fit model by optimizing an accuracy metric.
### Define training settings
-Define the experiment parameter and model settings for training. View the full list of [settings](how-to-configure-auto-train.md). Submitting the experiment with these default settings take approximately 5-20 min, but if you want a shorter run time, reduce the `experiment_timeout_hours` parameter.
+Define the experiment parameter and model settings for training. View the full list of [settings](how-to-configure-auto-train.md). Submitting the experiment with these default settings takes approximately 5-20 minutes. To decrease the run time, reduce the `experiment_timeout_hours` parameter.
-|Property| Value in this article |Description|
+| Property | Value in this tutorial | Description |
|-|-||
-|**iteration_timeout_minutes**|10|Time limit in minutes for each iteration. Increase this value for larger datasets that need more time for each iteration.|
-|**experiment_timeout_hours**|0.3|Maximum amount of time in hours that all iterations combined can take before the experiment terminates.|
-|**enable_early_stopping**|True|Flag to enable early termination if the score isn't improving in the short term.|
-|**primary_metric**| spearman_correlation | Metric that you want to optimize. The best-fit model is chosen based on this metric.|
-|**featurization**| auto | By using **auto**, the experiment can preprocess the input data (handling missing data, converting text to numeric, etc.)|
-|**verbosity**| logging.INFO | Controls the level of logging.|
-|**n_cross_validations**|5|Number of cross-validation splits to perform when validation data isn't specified.|
+| `iteration_timeout_minutes` | 10 | Time limit in minutes for each iteration. Increase this value for larger datasets that need more time for each iteration. |
+| `experiment_timeout_hours` | 0.3 | Maximum amount of time in hours that all iterations combined can take before the experiment terminates. |
+| `enable_early_stopping` | True | Flag to enable early termination if the score isn't improving in the short term. |
+| `primary_metric` | spearman_correlation | Metric that you want to optimize. The best-fit model is chosen based on this metric. |
+| `featurization` | auto | The _auto_ value allows the experiment to preprocess the input data, including handling missing data, converting text to numeric, and so on. |
+| `verbosity` | logging.INFO | Controls the level of logging. |
+| `n_cross_validations` | 5 | Number of cross-validation splits to perform when validation data isn't specified. |
+
+The following code defines the training settings for the experiment:
```python import logging automl_settings = {
- "iteration_timeout_minutes": 10,
- "experiment_timeout_hours": 0.3,
- "enable_early_stopping": True,
- "primary_metric": 'spearman_correlation',
- "featurization": 'auto',
- "verbosity": logging.INFO,
- "n_cross_validations": 5
+ "iteration_timeout_minutes": 10,
+ "experiment_timeout_hours": 0.3,
+ "enable_early_stopping": True,
+ "primary_metric": 'spearman_correlation',
+ "featurization": 'auto',
+ "verbosity": logging.INFO,
+ "n_cross_validations": 5
} ```
-Use your defined training settings as a `**kwargs` parameter to an `AutoMLConfig` object. Additionally, specify your training data and the type of model, which is `regression` in this case.
+The following code uses your defined training settings as a `**kwargs` parameter to an `AutoMLConfig` object. Additionally, it specifies your training data and the type of model, which is `regression` in this case.
```python from azureml.train.automl import AutoMLConfig automl_config = AutoMLConfig(task='regression',
- debug_log='automated_ml_errors.log',
- training_data=x_train,
- label_column_name="totalAmount",
- **automl_settings)
+ debug_log='automated_ml_errors.log',
+ training_data=x_train,
+ label_column_name="totalAmount",
+ **automl_settings)
``` > [!NOTE]
-> Automated machine learning pre-processing steps (feature normalization, handling missing data,
-> converting text to numeric, etc.) become part of the underlying model. When using the model for
-> predictions, the same pre-processing steps applied during training are applied to
-> your input data automatically.
+> Automated ML pre-processing steps (feature normalization, handling missing data, converting text to numeric, and so on) become part of the underlying model. When you use the model for predictions, the same pre-processing steps applied during training are applied to your input data automatically.
-### Train the automatic regression model
+### Train automatic regression model
-Create an experiment object in your workspace. An experiment acts as a container for your individual jobs. Pass the defined `automl_config` object to the experiment, and set the output to `True` to view progress during the job.
+Create an experiment object in your workspace. An experiment acts as a container for your individual jobs. Pass the defined `automl_config` object to the experiment, and set `show_output` to _True_ to view progress during the job.
-After starting the experiment, the output shown updates live as the experiment runs. For each iteration, you see the model type, the run duration, and the training accuracy. The field `BEST` tracks the best running training score based on your metric type.
+After you start the experiment, the displayed output updates live as the experiment runs. For each iteration, you see the model type, run duration, and training accuracy. The field `BEST` tracks the best running training score based on your metric type:
```python from azureml.core.experiment import Experiment
experiment = Experiment(ws, "Tutorial-NYCTaxi")
local_run = experiment.submit(automl_config, show_output=True) ```
+Here's the output:
+ ```output Running on local machine Parent Run ID: AutoML_1766cdf7-56cf-4b28-a340-c4aeee15b12b
METRIC: The result of computing score on the fitted pipeline.
BEST: The best observed score thus far. ****************************************************************************************************
- ITERATION PIPELINE DURATION METRIC BEST
- 0 StandardScalerWrapper RandomForest 0:00:16 0.8746 0.8746
- 1 MinMaxScaler RandomForest 0:00:15 0.9468 0.9468
- 2 StandardScalerWrapper ExtremeRandomTrees 0:00:09 0.9303 0.9468
- 3 StandardScalerWrapper LightGBM 0:00:10 0.9424 0.9468
- 4 RobustScaler DecisionTree 0:00:09 0.9449 0.9468
- 5 StandardScalerWrapper LassoLars 0:00:09 0.9440 0.9468
- 6 StandardScalerWrapper LightGBM 0:00:10 0.9282 0.9468
- 7 StandardScalerWrapper RandomForest 0:00:12 0.8946 0.9468
- 8 StandardScalerWrapper LassoLars 0:00:16 0.9439 0.9468
- 9 MinMaxScaler ExtremeRandomTrees 0:00:35 0.9199 0.9468
- 10 RobustScaler ExtremeRandomTrees 0:00:19 0.9411 0.9468
- 11 StandardScalerWrapper ExtremeRandomTrees 0:00:13 0.9077 0.9468
- 12 StandardScalerWrapper LassoLars 0:00:15 0.9433 0.9468
- 13 MinMaxScaler ExtremeRandomTrees 0:00:14 0.9186 0.9468
- 14 RobustScaler RandomForest 0:00:10 0.8810 0.9468
- 15 StandardScalerWrapper LassoLars 0:00:55 0.9433 0.9468
- 16 StandardScalerWrapper ExtremeRandomTrees 0:00:13 0.9026 0.9468
- 17 StandardScalerWrapper RandomForest 0:00:13 0.9140 0.9468
- 18 VotingEnsemble 0:00:23 0.9471 0.9471
- 19 StackEnsemble 0:00:27 0.9463 0.9471
+ ITERATION PIPELINE DURATION METRIC BEST
+ 0 StandardScalerWrapper RandomForest 0:00:16 0.8746 0.8746
+ 1 MinMaxScaler RandomForest 0:00:15 0.9468 0.9468
+ 2 StandardScalerWrapper ExtremeRandomTrees 0:00:09 0.9303 0.9468
+ 3 StandardScalerWrapper LightGBM 0:00:10 0.9424 0.9468
+ 4 RobustScaler DecisionTree 0:00:09 0.9449 0.9468
+ 5 StandardScalerWrapper LassoLars 0:00:09 0.9440 0.9468
+ 6 StandardScalerWrapper LightGBM 0:00:10 0.9282 0.9468
+ 7 StandardScalerWrapper RandomForest 0:00:12 0.8946 0.9468
+ 8 StandardScalerWrapper LassoLars 0:00:16 0.9439 0.9468
+ 9 MinMaxScaler ExtremeRandomTrees 0:00:35 0.9199 0.9468
+ 10 RobustScaler ExtremeRandomTrees 0:00:19 0.9411 0.9468
+ 11 StandardScalerWrapper ExtremeRandomTrees 0:00:13 0.9077 0.9468
+ 12 StandardScalerWrapper LassoLars 0:00:15 0.9433 0.9468
+ 13 MinMaxScaler ExtremeRandomTrees 0:00:14 0.9186 0.9468
+ 14 RobustScaler RandomForest 0:00:10 0.8810 0.9468
+ 15 StandardScalerWrapper LassoLars 0:00:55 0.9433 0.9468
+ 16 StandardScalerWrapper ExtremeRandomTrees 0:00:13 0.9026 0.9468
+ 17 StandardScalerWrapper RandomForest 0:00:13 0.9140 0.9468
+ 18 VotingEnsemble 0:00:23 0.9471 0.9471
+ 19 StackEnsemble 0:00:27 0.9463 0.9471
```
-## Explore the results
+## Explore results
Explore the results of automatic training with a [Jupyter widget](/python/api/azureml-widgets/azureml.widgets). The widget allows you to see a graph and table of all individual job iterations, along with training accuracy metrics and metadata. Additionally, you can filter on different accuracy metrics than your primary metric with the dropdown selector.
+The following code produces a graph to explore the results:
+ ```python from azureml.widgets import RunDetails RunDetails(local_run).show() ```
-![Jupyter widget run details](./media/how-to-auto-train-models/automl-dash-output.png)
-![Jupyter widget plot](./media/how-to-auto-train-models/automl-chart-output.png)
+The run details for the Jupyter widget:
+
-### Retrieve the best model
+The plot chart for the Jupyter widget:
-Select the best model from your iterations. The `get_output` function returns the best run and the fitted model for the last fit invocation. By using the overloads on `get_output`, you can retrieve the best run and fitted model for any logged metric or a particular iteration.
+
+### Retrieve best model
+
+The following code lets you select the best model from your iterations. The `get_output` function returns the best run and the fitted model for the last fit invocation. By using the overloads on the `get_output` function, you can retrieve the best run and fitted model for any logged metric or a particular iteration.
```python best_run, fitted_model = local_run.get_output()
print(best_run)
print(fitted_model) ```
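For example, here's a minimal sketch of those overloads; the metric name and iteration number below are placeholders, and the calls assume the SDK v1 `AutoMLRun.get_output` signature:
```python
# Best run and fitted model for a specific logged metric (placeholder metric name):
metric_run, metric_model = local_run.get_output(metric='r2_score')

# Best run and fitted model from a particular iteration (placeholder iteration number):
iteration_run, iteration_model = local_run.get_output(iteration=3)
```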
-### Test the best model accuracy
+### Test best model accuracy
-Use the best model to run predictions on the test data set to predict taxi fares. The function `predict` uses the best model and predicts the values of y, **trip cost**, from the `x_test` data set. Print the first 10 predicted cost values from `y_predict`.
+Use the best model to run predictions on the test data set to predict taxi fares. The `predict` function uses the best model and predicts the values of y, **trip cost**, from the `x_test` data set.
+
+The following code prints the first 10 predicted cost values from the `y_predict` data set:
```python y_test = x_test.pop("totalAmount")
y_predict = fitted_model.predict(x_test)
print(y_predict[:10]) ```
-Calculate the `root mean squared error` of the results. Convert the `y_test` dataframe to a list to compare to the predicted values. The function `mean_squared_error` takes two arrays of values and calculates the average squared error between them. Taking the square root of the result gives an error in the same units as the y variable, **cost**. It indicates roughly how far the taxi fare predictions are from the actual fares.
+Calculate the `root mean squared error` of the results. Convert the `y_test` dataframe to a list and compare with the predicted values. The `mean_squared_error` function takes two arrays of values and calculates the average squared error between them. Taking the square root of the result gives an error in the same units as the y variable, **cost**. It indicates roughly how far the taxi fare predictions are from the actual fares.
```python from sklearn.metrics import mean_squared_error
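from math import sqrt

# A minimal sketch of the RMSE calculation described above, assuming y_test and
# y_predict from the previous step; adjust to your own variable names as needed.
y_actual = y_test.values.flatten().tolist()
rmse = sqrt(mean_squared_error(y_actual, y_predict))
print(rmse)
```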
Run the following code to calculate mean absolute percent error (MAPE) by using
sum_actuals = sum_errors = 0 for actual_val, predict_val in zip(y_actual, y_predict):
- abs_error = actual_val - predict_val
- if abs_error < 0:
- abs_error = abs_error * -1
+ abs_error = actual_val - predict_val
+ if abs_error < 0:
+ abs_error = abs_error * -1
- sum_errors = sum_errors + abs_error
- sum_actuals = sum_actuals + actual_val
+ sum_errors = sum_errors + abs_error
+ sum_actuals = sum_actuals + actual_val
mean_abs_percent_error = sum_errors / sum_actuals print("Model MAPE:")
print("Model Accuracy:")
print(1 - mean_abs_percent_error) ```
+Here's the output:
+ ```output Model MAPE: 0.14353867606052823
Model Accuracy:
0.8564613239394718 ``` - From the two prediction accuracy metrics, you see that the model is fairly good at predicting taxi fares from the data set's features, typically within +- $4.00, and approximately 15% error.
-The traditional machine learning model development process is highly resource-intensive, and requires significant domain knowledge and time investment to run and compare the results of dozens of models. Using automated machine learning is a great way to rapidly test many different models for your scenario.
+The traditional machine learning model development process is highly resource-intensive. It requires significant domain knowledge and time investment to run and compare the results of dozens of models. Using automated machine learning is a great way to rapidly test many different models for your scenario.
## Clean up resources
-Don't complete this section if you plan on running other Azure Machine Learning tutorials.
+If you don't plan to work on other Azure Machine Learning tutorials, complete the following steps to remove the resources you no longer need.
+
+### Stop compute
+
+If you used a compute, you can stop the virtual machine when you aren't using it and reduce your costs:
-### Stop the compute instance
+1. Go to your workspace in the Azure Machine Learning studio, and select **Compute**.
-If you used a compute instance, stop the VM when you aren't using it to reduce cost.
+1. In the list, select the compute you want to stop, and then select **Stop**.
-1. In your workspace, select **Compute**.
+When you're ready to use the compute again, you can restart the virtual machine.
-1. From the list, select the name of the compute instance.
+### Delete other resources
-1. Select **Stop**.
+If you don't plan to use the resources you created in this tutorial, you can delete them and avoid incurring further charges.
-1. When you're ready to use the server again, select **Start**.
+Follow these steps to remove the resource group and all resources:
-### Delete everything
+1. In the Azure portal, go to **Resource groups**.
-If you don't plan to use the resources you created, delete them, so you don't incur any charges.
+1. In the list, select the resource group you created in this tutorial, and then select **Delete resource group**.
-1. In the Azure portal, select **Resource groups** on the far left.
-1. From the list, select the resource group you created.
-1. Select **Delete resource group**.
-1. Enter the resource group name. Then select **Delete**.
+1. At the confirmation prompt, enter the resource group name, and then select **Delete**.
-You can also keep the resource group but delete a single workspace. Display the workspace properties and select **Delete**.
+If you want to keep the resource group, and delete a single workspace only, follow these steps:
-## Next steps
+1. In the Azure portal, go to the resource group that contains the workspace you want to remove.
-In this automated machine learning article, you did the following tasks:
+1. Select the workspace, select **Properties**, and then select **Delete**.
-> [!div class="checklist"]
-> * Configured a workspace and prepared data for an experiment.
-> * Trained by using an automated regression model locally with custom parameters.
-> * Explored and reviewed training results.
+## Next step
-[Set up AutoML to train computer vision models with Python (v1)](how-to-auto-train-image-models.md)
+> [!div class="nextstepaction"]
+> [Set up Automated ML to train computer vision models with Python (v1)](how-to-auto-train-image-models.md)
migrate Migrate V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-v1.md
Title: Work with the previous version of Azure Migrate
description: Describes how to work with the previous version of Azure Migrate. -+ Last updated 03/08/2023
The Azure readiness view in the assessment shows the readiness status of each VM
Ready for Azure | No compatibility issues. The machine can be migrated as-is to Azure, and it will boot in Azure with full Azure support. | For VMs that are ready, Azure Migrate recommends a VM size in Azure. Conditionally ready for Azure | The machine might boot in Azure, but might not have full Azure support. For example, a machine with an older version of Windows Server that isn't supported in Azure. | Azure Migrate explains the readiness issues, and provides remediation steps. Not ready for Azure | The VM won't boot in Azure. For example, if a VM has a disk that's more than 4 TB, it can't be hosted on Azure. | Azure Migrate explains the readiness issues and provides remediation steps.
-Readiness unknown | Azure Migrate can't identify Azure readiness, usually because data isn't available.
-
+Readiness unknown | Azure Migrate can't identify Azure readiness, usually because data isn't available. |
#### Azure VM properties Readiness takes into account a number of VM properties, to identify whether the VM can run in Azure.
network-watcher Traffic Analytics Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/traffic-analytics-schema.md
Previously updated : 06/07/2024 Last updated : 07/11/2024 #CustomerIntent: As an administrator, I want to learn about the traffic analytics schema so I can easily use the queries and understand their output.
The following table lists the fields in the schema and what they signify for vir
> | **FlowStartTime** | Date and time in UTC | First occurrence of the flow (which gets aggregated) in the flow log processing interval between `FlowIntervalStartTime` and `FlowIntervalEndTime`. This flow gets aggregated based on aggregation logic. | > | **FlowEndTime** | Date and time in UTC | Last occurrence of the flow (which gets aggregated) in the flow log processing interval between `FlowIntervalStartTime` and `FlowIntervalEndTime`. | > | **FlowType** | - IntraVNet <br> - InterVNet <br> - S2S <br> - P2S <br> - AzurePublic <br> - ExternalPublic <br> - MaliciousFlow <br> - Unknown Private <br> - Unknown | See [Notes](#notes) for definitions. |
-> | **SrcIP** | Source IP address | Blank in AzurePublic and ExternalPublic flows. |
-> | **DestIP** | Destination IP address | Blank in AzurePublic and ExternalPublic flows. |
+> | **SrcIp** | Source IP address | Blank in AzurePublic and ExternalPublic flows. |
+> | **DestIp** | Destination IP address | Blank in AzurePublic and ExternalPublic flows. |
> | **TargetResourceId** | ResourceGroupName/ResourceName | The ID of the resource at which flow logging and traffic analytics is enabled. | > | **TargetResourceType** | VirtualNetwork/Subnet/NetworkInterface | Type of resource at which flow logging and traffic analytics is enabled (virtual network, subnet, NIC or network security group).| > | **FlowLogResourceId** | ResourceGroupName/NetworkWatcherName/FlowLogName | The resource ID of the flow log. |
The following table lists the fields in the schema and what they signify for vir
> | **FlowDirection** | - **I** = Inbound <br> - **O** = Outbound | Direction of the flow: in or out of the target resource per flow log. | > | **FlowStatus** | - **A** = Allowed <br> - **D** = Denied | Status of flow: allowed or denied by target resource per flow log. | > | **NSGList** | \<SUBSCRIPTIONID\>/\<RESOURCEGROUP_NAME\>/\<NSG_NAME\> | Network security group associated with the flow. |
-> | **NSGRule** | NSG_RULENAME | Network security group rule that allowed or denied the flow. |
-> | **NSGRuleType** | - User Defined <br> - Default | The type of network security group rule used by the flow. |
+> | **NsgRule** | NSG_RULENAME | Network security group rule that allowed or denied the flow. |
+> | **NsgRuleType** | - User Defined <br> - Default | The type of network security group rule used by the flow. |
> | **MACAddress** | MAC Address | MAC address of the NIC at which the flow was captured. | > | **SrcSubscription** | Subscription ID | Subscription ID of virtual network / network interface / virtual machine that the source IP in the flow belongs to. | > | **DestSubscription** | Subscription ID | Subscription ID of virtual network / network interface / virtual machine that the destination IP in the flow belongs to. | > | **SrcRegion** | Azure Region | Azure region of virtual network / network interface / virtual machine to which the source IP in the flow belongs to. | > | **DestRegion** | Azure Region | Azure region of virtual network to which the destination IP in the flow belongs to. |
-> | **SrcNIC** | \<resourcegroup_Name\>/\<NetworkInterfaceName\> | NIC associated with the source IP in the flow. |
-> | **DestNIC** | \<resourcegroup_Name\>/\<NetworkInterfaceName\> | NIC associated with the destination IP in the flow. |
-> | **SrcVM** | \<resourcegroup_Name\>/\<VirtualMachineName\> | Virtual machine associated with the source IP in the flow. |
-> | **DestVM** | \<resourcegroup_Name\>/\<VirtualMachineName\> | Virtual machine associated with the destination IP in the flow. |
+> | **SrcNic** | \<resourcegroup_Name\>/\<NetworkInterfaceName\> | NIC associated with the source IP in the flow. |
+> | **DestNic** | \<resourcegroup_Name\>/\<NetworkInterfaceName\> | NIC associated with the destination IP in the flow. |
+> | **SrcVm** | \<resourcegroup_Name\>/\<VirtualMachineName\> | Virtual machine associated with the source IP in the flow. |
+> | **DestVm** | \<resourcegroup_Name\>/\<VirtualMachineName\> | Virtual machine associated with the destination IP in the flow. |
> | **SrcSubnet** | \<ResourceGroup_Name\>/\<VirtualNetwork_Name\>/\<SubnetName\> | Subnet associated with the source IP in the flow. | > | **DestSubnet** | \<ResourceGroup_Name\>/\<VirtualNetwork_Name\>/\<SubnetName\> | Subnet associated with the destination IP in the flow. | > | **SrcApplicationGateway** | \<SubscriptionID\>/\<ResourceGroupName\>/\<ApplicationGatewayName\> | Application gateway associated with the source IP in the flow. |
List of threat types:
## Notes -- In case of `AzurePublic` and `ExternalPublic` flows, customer owned Azure virtual machine IP is populated in `VMIP_s` field, while the Public IP addresses are populated in the `PublicIPs_s` field. For these two flow types, you should use `VMIP_s` and `PublicIPs_s` instead of `SrcIP_s` and `DestIP_s` fields. For AzurePublic and ExternalPublic IP addresses, we aggregate further, so that the number of records ingested to Log Analytics workspace is minimal. (This field will be deprecated. Use SrcIP_ and DestIP_s depending on whether the virtual machine was the source or the destination in the flow).
+- In case of `AzurePublic` and `ExternalPublic` flows, customer owned Azure virtual machine IP is populated in `VMIP_s` field, while the Public IP addresses are populated in the `PublicIPs_s` field. For these two flow types, you should use `VMIP_s` and `PublicIPs_s` instead of `SrcIP_s` and `DestIP_s` fields. For AzurePublic and ExternalPublic IP addresses, we aggregate further, so that the number of records ingested to Log Analytics workspace is minimal. (This field will be deprecated. Use SrcIP_s and DestIP_s depending on whether the virtual machine was the source or the destination in the flow).
- Some field names are appended with `_s` or `_d`, which don't signify source and destination but indicate the data types *string* and *decimal* respectively. - Based on the IP addresses involved in the flow, we categorize the flows into the following flow types: - `IntraVNet`: Both IP addresses in the flow reside in the same Azure virtual network.
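To make these schema fields concrete, here's a hedged sketch that queries them from a Log Analytics workspace with the `azure-monitor-query` Python package. The workspace ID is a placeholder, and the `NTANetAnalytics` table name is an assumption; check your workspace for the exact table that traffic analytics populates.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Placeholder workspace ID; NTANetAnalytics is an assumed table name.
query = """
NTANetAnalytics
| where FlowType == 'MaliciousFlow'
| project TimeGenerated, SrcIp, DestIp, FlowDirection, FlowStatus, NsgRule
| take 20
"""

response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",
    query=query,
    timespan=timedelta(days=1),
)

for table in response.tables:
    for row in table.rows:
        print(row)
```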
network-watcher Vnet Flow Logs Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/vnet-flow-logs-overview.md
Previously updated : 04/24/2024- Last updated : 07/11/2024 #CustomerIntent: As an Azure administrator, I want to learn about virtual network flow logs so that I can log my network traffic to analyze and optimize network performance.
Virtual network flow logs simplify the scope of traffic monitoring because you c
Virtual network flow logs also avoid the need to enable multiple-level flow logging, such as in [network security group flow logs](nsg-flow-logs-overview.md#best-practices). In network security group flow logs, network security groups are configured at both the subnet and the network interface (NIC).
-In addition to existing support to identify traffic that [network security group rules](../virtual-network/network-security-groups-overview.md) allow or deny, Virtual network flow logs support identification of traffic that [Azure Virtual Network Manager security admin rules](../virtual-network-manager/concept-security-admins.md) allow or deny. Virtual network flow logs also support evaluating the encryption status of your network traffic in scenarios where you're using [virtual network encryption](../virtual-network/virtual-network-encryption-overview.md).
+In addition to existing support to identify traffic that [network security group rules](../virtual-network/network-security-groups-overview.md) allow or deny, Virtual network flow logs support identification of traffic that [Azure Virtual Network Manager security admin rules](../virtual-network-manager/concept-security-admins.md) allow or deny. Virtual network flow logs also support evaluating the encryption status of your network traffic in scenarios where you're using [virtual network encryption](../virtual-network/virtual-network-encryption-overview.md?toc=/azure/network-watcher/toc.json).
> [!IMPORTANT] > We recommend disabling network security group flow logs before enabling virtual network flow logs on the same underlying workloads to avoid duplicate traffic recording and additional costs. If you enable network security group flow logs on the network security group of a subnet, and then enable virtual network flow logs on the same subnet or parent virtual network, you might get duplicate logging (both network security group flow logs and virtual network flow logs generated for all supported workloads in that particular subnet).
Virtual network flow logs have the following properties:
| `NX_LOCAL_DST` | **Destination is on the same host**. Encryption is configured, but the source and destination virtual machines are running on the same Azure host. In this case, the connection isn't encrypted by design. | | `NX_FALLBACK` | **Fall back to no encryption**. Encryption is configured with the **Allow unencrypted** policy for both source and destination endpoints. The system attempted encryption but had a problem. In this case, the connection is allowed but isn't encrypted. For example, a virtual machine initially landed on a node that supports encryption, but this support was removed later. |
-Traffic in your virtual networks is unencrypted (`NX`) by default. For encrypted traffic, see [Virtual network encryption](../virtual-network/virtual-network-encryption-overview.md).
+Traffic in your virtual networks is unencrypted (`NX`) by default. For encrypted traffic, see [Virtual network encryption](../virtual-network/virtual-network-encryption-overview.md?toc=/azure/network-watcher/toc.json).
## Sample log record
For continuation (`C`) and end (`E`) flow states, byte and packet counts are agg
- Virtual network flow logs are charged per gigabyte of ***Network flow logs collected*** and come with a free tier of 5 GB/month per subscription.
- > [!NOTE]
- > Virtual network flow logs will be billed effective June 1, 2024.
- - If traffic analytics is enabled with virtual network flow logs, traffic analytics pricing applies at per gigabyte processing rates. Traffic analytics isn't offered with a free tier of pricing. For more information, see [Network Watcher pricing](https://azure.microsoft.com/pricing/details/network-watcher/). - Storage of logs is charged separately. For more information, see [Azure Blob Storage pricing](https://azure.microsoft.com/pricing/details/storage/blobs/).
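To put the 5 GB/month free tier described above into perspective, here's a small sketch of the kind of estimate you might make; the collected volume and per-gigabyte rate are placeholders, so take the real rate from the Network Watcher pricing page.

```python
collected_gb = 120   # hypothetical volume of flow logs collected in a month
free_tier_gb = 5     # free tier per subscription per month
price_per_gb = 0.50  # placeholder rate; use the rate from the pricing page

billable_gb = max(collected_gb - free_tier_gb, 0)
print(f"Estimated collection charge: ${billable_gb * price_per_gb:.2f}")
```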
open-datasets Dataset 1000 Genomes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/open-datasets/dataset-1000-genomes.md
Title: 1000 Genomes
description: Learn how to use the 1000 Genomes dataset in Azure Open Datasets. Previously updated : 04/16/2021+ Last updated : 07/10/2024 # 1000 Genomes
-The 1000 Genomes Project ran between 2008 and 2015, creating the largest public catalog of human variation and genotype data. The final data set contains data for 2,504 individuals from 26 populations and 84 million identified variants. For more information, see the 1000 Genome Project website and the following publications:
+The 1000 Genomes Project ran between 2008 and 2015 to create the largest public catalog of human variation and genotype data. The final data set contains data for 2,504 individuals from 26 populations and 84 million identified variants. For more information, visit the 1000 Genome Project [website](https://www.internationalgenome.org/) and these publications:
-Pilot Analysis: A map of human genome variation from population-scale sequencing Nature 467, 1061-1073 (28 October 2010)
+[Pilot Analysis: A map of human genome variation from population-scale sequencing Nature 467, 1061-1073 (28 October 2010)](https://www.nature.com/articles/nature09534)
-Phase 1 Analysis: An integrated map of genetic variation from 1,092 human genomes Nature 491, 56-65 (01 November 2012)
+[Phase 1 Analysis: An integrated map of genetic variation from 1,092 human genomes Nature 491, 56-65 (01 November 2012)](https://www.nature.com/articles/nature11632)
-Phase 3 Analysis: A global reference for human genetic variation Nature 526, 68-74 (01 October 2015) and An integrated map of structural variation in 2,504 human genomes Nature 526, 75-81 (01 October 2015)
+[Phase 3 Analysis: A global reference for human genetic variation Nature 526, 68-74 (01 October 2015) and An integrated map of structural variation in 2,504 human genomes Nature 526, 75-81](https://www.nature.com/articles/nature15394)
-For details on data formats refer to http://www.internationalgenome.org/formats
+For more information about the relevant data formats, see the [1000 Genomes file formats page](http://www.internationalgenome.org/formats).
-**[NEW]** the dataset is also available in [parquet format](https://github.com/microsoft/genomicsnotebook/tree/main/vcf2parquet-conversion/1000genomes)
+**[NEW]**: The dataset is also available in [parquet format](https://github.com/microsoft/genomicsnotebook/tree/main/vcf2parquet-conversion/1000genomes).
[!INCLUDE [Open Dataset usage notice](./includes/open-datasets-usage-note.md)] ## Data source
-This dataset is a mirror of ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/
+This dataset is a mirror of the [1000 Genomes FTP site](ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/).
## Data volumes and update frequency
-This dataset contains approximately 815 TB of data and is updated daily.
-
-## Storage location
-
-This dataset is stored in the West US 2 and West Central US Azure regions. Allocating compute resources in West US 2 or West Central US is recommended for affinity.
-
-## Data Access
-
-West US 2: 'https://dataset1000genomes.blob.core.windows.net/dataset'
-
-West Central US: 'https://dataset1000genomes-secondary.blob.core.windows.net/dataset'
-
-[SAS Token](../storage/common/storage-sas-overview.md): sv=2019-10-10&si=prod&sr=c&sig=9nzcxaQn0NprMPlSh4RhFQHcXedLQIcFgbERiooHEqM%3D
-
-## Data Access: Curated 1000 genomes dataset in parquet format
-
-East US: `https://curated1000genomes.blob.core.windows.net/dataset`
-
-SAS Token: sv=2018-03-28&si=prod&sr=c&sig=BgIomQanB355O4FhxqBL9xUgKzwpcVlRZdBewO5%2FM4E%3D
+This dataset contains approximately 815 TB of data. It receives daily updates.
## Use Terms
-Following the final publications, data from the 1000 Genomes Project is publicly available without embargo to anyone for use under the terms provided by the dataset source ([http://www.internationalgenome.org/data](http://www.internationalgenome.org/data)). Use of the data should be cited per details available in the [FAQs]() from the 1000 Genome Project.
+Following the final publications, data from the 1000 Genomes Project is publicly available, without embargo, to anyone for use under the terms provided by the [dataset source](http://www.internationalgenome.org/data). Use of the data should be cited per details available in the 1000 Genome Project [FAQ resource](https://www.internationalgenome.org/faq).
## Contact
-https://www.internationalgenome.org/contact
+See the [1000 Genomes contact page](https://www.internationalgenome.org/contact) for contact information.
## Next steps
operational-excellence Relocation App Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/relocation-app-gateway.md
# Relocate Azure Application Gateway and Web Application Firewall (WAF) to another region + This article covers the recommended approach, guidelines, and practices to relocate Application Gateway and WAF between Azure regions. >[!IMPORTANT]
operational-excellence Relocation Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/relocation-automation.md
Title: Relocation guidance for Azure Automation
+ Title: Relocate Azure Automation to another region
description: Learn how to relocate Azure Automation to another region
This article covers relocation guidance for [Azure Automation](../automation/overview.md) across regions. +++ If your Azure Automation instance doesn't have any configuration and the instance itself needs to be moved alone, you can choose to redeploy the Automation account by using [Bicep, ARM template, or Terraform](/azure/templates/microsoft.automation/automationaccounts?tabs=bicep&pivots=deployment-language-bicep).
operational-excellence Relocation Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/relocation-cosmos-db.md
# Relocate an Azure Cosmos DB NoSQL account to another region + This article describes how to either:
operational-excellence Relocation Event Grid Custom Topics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/relocation-event-grid-custom-topics.md
# Relocate Azure Event Grid custom topics to another region
-This article describes how to relocate your Azure Event Grid resources to another Azure region. You might relocate these resources for a number of reasons, such as to take advantage of a new Azure region, to meet internal policy and governance requirements, or in response to capacity planning requirements.
+++
+This article describes how to relocate your Azure Event Grid resources to another Azure region.
The high-level steps are:
operational-excellence Relocation Event Grid Domains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/relocation-event-grid-domains.md
# Relocate Azure Event Grid domains to another region
- This article covers the recommended approach, guidelines, and practices to relocate Event Grid domains to another region.
+This article covers the recommended approach, guidelines, and practices to relocate Event Grid domains to another region.
+++ The high-level steps are:
operational-excellence Relocation Event Grid System Topics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/relocation-event-grid-system-topics.md
# Relocate Azure Event Grid system topics to another region
-You might want to move your resources to another region for a number of reasons. For example, to take advantage of a new Azure region, to meet internal policy and governance requirements, or in response to capacity planning requirements.
+
+This article covers the recommended approach, guidelines, and practices to relocate Event Grid system topics to another region.
+++ Here are the high-level steps covered in this article:
operational-excellence Relocation Event Hub Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/relocation-event-hub-cluster.md
This article shows you how to export an Azure Resource Manager template for an e
If you have other resources such as namespaces and event hubs in the Azure resource group that contains the Event Hubs cluster, you may want to export the template at the resource group level so that all related resources can be moved to the new region in one step. The steps in this article show you how to export an **Event Hubs cluster** to the template. The steps for exporting a **resource group** to the template are similar. ## Prerequisites+ Ensure that the dedicated cluster can be created in the target region. The easiest way to find out is to use the Azure portal to try to [create an Event Hubs dedicated cluster](../event-hubs/event-hubs-dedicated-cluster-create-portal.md). You see the list of regions that are supported at that point of time for creating the cluster.
operational-excellence Relocation Event Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/relocation-event-hub.md
Title: Relocation guidance in Azure Event Hubs
+ Title: Relocate Azure Event Hubs to another region
description: Learn how to relocate Azure Event Hubs to another region
operational-excellence Relocation Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/relocation-key-vault.md
# Relocate Azure Key Vault to another region ++ Azure Key Vault doesn't support key vault relocation to another region. Instead of relocation, you need to:
operational-excellence Relocation Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/relocation-log-analytics.md
Title: Relocation guidance for Log Analytics workspace
+ Title: Relocate Log Analytics workspace to another region
description: Learn how to relocate Log Analytics workspace to a new region.
# Relocate Azure Monitor - Log Analytics workspace to another region + A relocation plan for a Log Analytics workspace must include the relocation of any resources that log data to the workspace. Log Analytics workspaces don't natively support migrating workspace data and associated devices from one region to another. Instead, you must create a new Log Analytics workspace in the target region and reconfigure the devices and settings in the new workspace.
operational-excellence Relocation Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/relocation-managed-identity.md
# Relocate managed identities for Azure resources to another region
-There are situations in which you'd want to move your existing user-assigned managed identities from one region to another. For example, you may need to move a solution that uses user-assigned managed identities to another region. You may also want to move an existing identity to another region as part of disaster recovery planning, and testing.
Moving user-assigned managed identities across Azure regions isn't supported. You can, however, recreate a user-assigned managed identity in the target region.
operational-excellence Relocation Postgresql Flexible Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/relocation-postgresql-flexible-server.md
# Relocate Azure Database for PostgreSQL to another region ++ This article covers relocation guidance for Azure Database for PostgreSQL, Single Server, and Flexible Servers across geographies where region pairs aren't available for replication and geo-restore. ++ To learn how to relocate Azure Cosmos DB for PostgreSQL (formerly called Azure Database for PostgreSQL - Hyperscale (Citus)), see [Read replicas in Azure Cosmos DB for PostgreSQL](/azure/cosmos-db/postgresql/concepts-read-replicas). For an overview of the region pairs supported by native replication, see [cross-region replication](../postgresql/concepts-read-replicas.md#cross-region-replication).
operational-excellence Relocation Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/relocation-private-link.md
# Relocate Azure Private Link Service to another region ++ This article shows you how to relocate [Azure Private Link Service](/azure/private-link/private-link-overview) when moving your workload to another region. + To learn how to reconfigure [private endpoints](/azure/private-link/private-link-overview) for a particular service, see the [appropriate service relocation guide](overview-relocation.md).
operational-excellence Relocation Site Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/relocation-site-recovery.md
# Relocate Azure Recovery Vault and Site Recovery to another region - This article shows you how to relocate [Azure Recovery Vault and Site Recovery](../site-recovery/site-recovery-overview.md) when moving your workload to another region.
operational-excellence Relocation Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/relocation-storage-account.md
# Relocate Azure Storage Account to another region + This article shows you how to relocate an Azure Storage Account to a new region by creating a copy of your storage account into another region. You also learn how to relocate your data to that account by using AzCopy, or another tool of your choice.
operational-excellence Relocation Virtual Network Nsg https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/relocation-virtual-network-nsg.md
# Relocate Azure network security group (NSG) to another region ++ This article shows you how to relocate an NSG to a new region by creating a copy of the source configuration and security rules of the NSG to another region.
oracle Database Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/oracle/oracle-db/database-overview.md
To purchase Oracle Database@Azure, contact [Oracle's sales team](https://go.orac
Billing and payment for the service is done through Azure. Payment for Oracle Database@Azure counts toward your Microsoft Azure Consumption Commitment (MACC). Existing Oracle Database software customers can use the Bring Your Own License (BYOL) option or Unlimited License Agreements (ULAs). On your regular Microsoft Azure invoices, you can see charges for Oracle Database@Azure alongside charges for your other Azure Marketplace services.
+## Compliance
+
+Oracle Database@Azure is an Oracle Cloud database service that runs Oracle Database workloads in a customer's Azure environment. Oracle Database@Azure offers various Oracle Database services through the customer's Microsoft Azure environment. This service allows customers to monitor database metrics, audit logs, events, logging data, and telemetry natively in Azure. It runs on infrastructure managed by Oracle's Cloud Infrastructure operations team, which performs software patching, infrastructure updates, and other operations through a connection to Oracle Cloud.
+All infrastructure for Oracle Database@Azure is co-located in Azure's physical data centers and uses Azure Virtual Network for networking, managed within the Azure environment. Federated identity and access management for Oracle Database@Azure is provided by Microsoft Entra ID.
+
+For detailed information on the compliance certifications, visit the [Microsoft Services Trust Portal](https://servicetrust.microsoft.com/) and the [Oracle compliance website](https://docs.oracle.com/en-us/iaas/Content/multicloud/compliance.htm). If you have further questions about Oracle Database@Azure compliance, reach out to your account team or get information through [Oracle and Microsoft support for Oracle Database@Azure](https://docs.oracle.com/en-us/iaas/Content/multicloud/oaahelp.htm).
+ ## Available regions Oracle Database@Azure is available in the following locations. Oracle Database@Azure infrastructure resources must be provisioned in the Azure regions listed.
postgresql Concepts Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-logging.md
Title: Logs
description: Describes logging configuration, storage and analysis in Azure Database for PostgreSQL - Flexible Server. - Previously updated : 04/27/2024 Last updated : 7/11/2024
To learn how to configure parameters in Azure Database for PostgreSQL flexible s
Azure Database for PostgreSQL flexible server is integrated with Azure Monitor diagnostic settings. Diagnostic settings allows you to send your Azure Database for PostgreSQL flexible server logs in JSON format to Azure Monitor Logs for analytics and alerting, Event Hubs for streaming, and Azure Storage for archiving.
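As a hedged illustration of wiring this up programmatically, the following sketch creates a diagnostic setting with the `azure-mgmt-monitor` Python package; the subscription, resource group, server, and workspace names are placeholders, and the exact categories available depend on your server.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

# Placeholder identifiers; replace with your own resources.
subscription_id = "<subscription-id>"
server_id = (
    f"/subscriptions/{subscription_id}/resourceGroups/<resource-group>/providers/"
    "Microsoft.DBforPostgreSQL/flexibleServers/<server-name>"
)
workspace_id = (
    f"/subscriptions/{subscription_id}/resourceGroups/<resource-group>/providers/"
    "Microsoft.OperationalInsights/workspaces/<workspace-name>"
)

client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

# Route PostgreSQLLogs to the Log Analytics workspace.
client.diagnostic_settings.create_or_update(
    resource_uri=server_id,
    name="send-postgresql-logs",
    parameters={
        "workspace_id": workspace_id,
        "logs": [{"category": "PostgreSQLLogs", "enabled": True}],
    },
)
```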
+## Data retention policy and pricing
+
+If you select Event Hubs or a storage account, you can specify a retention policy. This policy deletes data that is older than a selected time period. If you specify Log Analytics, the retention policy depends on the selected pricing tier. Logs ingested into your **Log Analytics** workspace can be retained at no charge for up to the first 31 days. Logs retained beyond this no-charge period are charged per GB of data retained for a month (prorated daily). For more details, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
+ ### Log format The following table describes the fields for the **PostgreSQLLogs** type. Depending on the output endpoint you choose, the fields included and the order in which they appear may vary.
postgresql How To Manage High Availability Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-high-availability-portal.md
High availability feature provisions physically separate primary and standby rep
This page provides guidelines on how you can enable or disable high availability. This operation doesn't change your other settings, including VNET configuration, firewall settings, and backup retention. Similarly, enabling and disabling high availability is an online operation and doesn't impact your application connectivity and operations.
+> [!IMPORTANT]
+> _Billing Model Update for Azure Database for PostgreSQL Flexible Server (v5 HA):_
+In April, we implemented a billing model update for v5 SKU servers with High Availability (HA) enabled. This change correctly reflects the charges by accounting for both the primary and standby servers. Before this change, we were incorrectly charging customers for the primary server only. Customers using v5 SKU servers with HA enabled now see billing quantities multiplied by 2. This update doesn't impact v4 and v3 SKUs.
+ ## Prerequisites > [!IMPORTANT]
This page provides guidelines how you can enable or disable high availability. T
This section provides details specifically for HA-related fields. You can follow these steps to deploy high availability while creating your Azure Database for PostgreSQL flexible server instance.
-1. In the [Azure portal](https://portal.azure.com/), choose Azure Database for PostgreSQL flexible server and select create. For details on how to fill details such as **Subscription**, **Resource group**, **server name**, **region**, and other fields, see how-to documentation for the server creation.
+1. In the [Azure portal](https://portal.azure.com/), choose Azure Database for PostgreSQL flexible server and select create. For details on how to fill details such as **Subscription**, **Resource group**, **Server name**, **Region**, and other fields, see [how to create an Azure Database for PostgreSQL - Flexible Server](./quickstart-create-server-portal.md).
:::image type="content" source="./media/how-to-manage-high-availability-portal/subscription-region.png" alt-text="Screenshot of subscription and region selection.":::
postgresql Automigration Single To Flexible Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/automigration-single-to-flexible-postgresql.md
The automigration provides a highly resilient and self-healing offline migration
## Nomination Eligibility
-If you own a Single Server workload with no complex features (CMK, Microsoft Entra ID, Read Replica, Private Link) enabled, you can now nominate yourself (if not already scheduled by the service) for automigration. Submit your server details through this [form](https://forms.office.com/r/4pF55L8TxY).
+> [!NOTE]
+> The nomination process is for users who want to voluntarily fast-track their migration to Flexible server.
+
+If you own a Single Server workload, you can now nominate yourself (if not already scheduled by the service) for automigration. Submit your server details through this [form](https://forms.office.com/r/4pF55L8TxY).
## Configure migration alerts and review migration schedule
-Servers eligible for automigration are sent an advance notification by the service.
+Servers eligible for automigration are sent advance Azure health notifications by the service. The health notifications are sent **30 days, 14 days, and 7 days** before the migration date. Notifications are also sent when the migration is **in progress**, when it **completes**, and **6 days after migration**, before the legacy Single Server is dropped. You can configure the Azure portal to receive the automigration notifications via email or SMS.
The following are the ways to check and configure automigration notifications:
postgresql Best Practices Migration Service Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/migration-service/best-practices-migration-service-postgresql.md
There are special conditions that typically refer to unique circumstances, confi
### Online migration
-Online migration makes use of [pgcopydb follow](https://pgcopydb.readthedocs.io/en/latest/ref/pgcopydb_follow.html) and some of the [logical decoding restrictions](https://pgcopydb.readthedocs.io/en/latest/ref/pgcopydb_follow.html#pgcopydb-follow) apply. In addition, it's recommended to have a primary key in all the tables of a database undergoing Online migration. If primary key is absent, the deficiency may result in only insert operations being reflected during migration, excluding updates or deletes. Add a temporary primary key to the relevant tables before proceeding with the online migration.
+Online migration makes use of [pgcopydb follow](https://pgcopydb.readthedocs.io/en/latest/ref/pgcopydb_follow.html) and some of the [logical decoding restrictions](https://pgcopydb.readthedocs.io/en/latest/ref/pgcopydb_follow.html#pgcopydb-follow) apply. In addition, it's recommended to have a primary key in all the tables of a database undergoing online migration. If a primary key is absent, only insert operations are reflected during migration; updates and deletes are excluded. Add a temporary primary key to the relevant tables before proceeding with the online migration.
+
+> [!NOTE]
+> In the case of Online migration of tables without a primary key, only Insert operations are replayed on the target. This can potentially introduce inconsistency in the Database if records that are updated or deleted on the source do not reflect on the target.
An alternative is to use the `ALTER TABLE` command where the action is [REPLICA IDENTITY](https://www.postgresql.org/docs/current/sql-altertable.html#SQL-ALTERTABLE-REPLICA-IDENTITY) with the `FULL` option. The `FULL` option records the old values of all columns in the row so that, even in the absence of a primary key, all CRUD operations are reflected on the target during the online migration, as shown in the sketch that follows. If none of these options work, perform an offline migration as an alternative.
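As a minimal sketch of that alternative, the following Python snippet runs the `ALTER TABLE ... REPLICA IDENTITY FULL` statement with `psycopg2`; the connection string and the `public.orders` table are placeholders for your own source server and tables.

```python
import psycopg2

# Placeholder connection details for the *source* server.
conn = psycopg2.connect(
    "host=<source-server> dbname=<database> user=<user> password=<password> sslmode=require"
)
conn.autocommit = True
with conn.cursor() as cur:
    # Record old values of all columns so updates and deletes replicate
    # even when the table has no primary key.
    cur.execute("ALTER TABLE public.orders REPLICA IDENTITY FULL;")
conn.close()
```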
postgresql Concepts Migration Service Runtime Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/migration-service/concepts-migration-service-runtime-server.md
The migration runtime server is essential for transferring data between differen
## How do you use the Migration Runtime Server feature?
-To use the Migration Runtime Server feature within the migration service in Azure Database for PostgreSQL, follow these steps in the Azure portal:
+To use the Migration Runtime Server feature within the migration service in Azure Database for PostgreSQL, you can select the appropriate option either through the Azure portal during setup or by specifying the `migrationRuntimeResourceId` in the JSON properties file used with the migration create command in the Azure CLI. Here's how to do it with each method:
+
+### Use the Azure portal
- Sign in to the Azure portal and access the migration service (from the target server) in the Azure Database for PostgreSQL instance. - Begin a new migration workflow within the service. - When you reach the "Select runtime server" tab, use the Migration Runtime Server by selecting "Yes."
-Choose your Azure subscription and resource group and the location of the VNet-integrated Azure Database for PostgreSQLΓÇöFlexible server.
+- Choose your Azure subscription and resource group and the location of the VNet-integrated Azure Database for PostgreSQL - Flexible Server.
- Select the appropriate Azure Database for PostgreSQL Flexible Server to serve as your Migration Runtime Server. :::image type="content" source="media/concepts-migration-service-runtime-server/select-runtime-server.png" alt-text="Screenshot of selecting migration runtime server.":::
+### Use Azure CLI
+
+- Open your command-line interface.
+- Ensure you have the Azure CLI installed and you're signed in to your Azure account by using `az login`.
+- The Azure CLI version must be 2.62.0 or later to use the migration runtime server option.
+- The `az postgres flexible-server migration create` command requires a JSON file path as part of the `--properties` parameter, which contains configuration details for the migration. Provide the `migrationRuntimeResourceId` in the JSON properties file, as shown in the sketch after this list.
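Here's a hedged sketch of building such a properties file in Python; only the properties named in this article are shown, the resource IDs are placeholders, and your migration may require additional properties (for example, the source server and its credentials).

```python
import json

# Placeholder values; adjust to your own subscription, resource group, and servers.
properties = {
    "migrationRuntimeResourceId": (
        "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/"
        "Microsoft.DBforPostgreSQL/flexibleServers/<runtime-server-name>"
    ),
    "sourceType": "PostgreSQLSingleServer",
    "sslMode": "VerifyFull",
    "dbsToMigrate": ["<database-1>", "<database-2>"],
    "overwriteDBsInTarget": "true",
}

with open("migration-properties.json", "w") as f:
    json.dump(properties, f, indent=2)
```

You can then pass `migration-properties.json` to the `--properties` parameter of `az postgres flexible-server migration create`.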
+ ## Migration Runtime Server essentials - **Minimal Configuration**: Despite being created from an Azure Database for PostgreSQL Flexible Server, the migration runtime server solely facilitates migration without the need for HA, backups, version specificity, or advanced storage features.
postgresql Concepts User Roles Migration Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/migration-service/concepts-user-roles-migration-service.md
Another important consideration is the deprecation of the **pg_pltemplate** syst
pg_restore error: could not execute query <GRANT/REVOKE> <PRIVILEGES> on <affected TABLE/VIEWS> to <user>. ```
+#### Workaround
+ To resolve this error, it's necessary to undo the privileges granted to users and roles on the affected pg_catalog tables and views. You can accomplish this by taking the following steps. **Step 1: Identify Privileges**
REVOKE SELECT ON pg_shadow FROM adminuser2;
REVOKE UPDATE ON pg_shadow FROM adminuser2; ``` > [!NOTE]
-> Make sure you perform the above steps for all the databases included in the migration to avoid any permission-related issues during the migration..
+> Make sure you perform the above steps for all the databases included in the migration to avoid any permission-related issues during the migration.
After completing these steps, you can proceed to initiate a new migration from the single server to the flexible server using the migration service. You shouldn't encounter permission-related issues during this process.
postgresql How To Setup Azure Cli Commands Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/migration-service/how-to-setup-azure-cli-commands-postgresql.md
The `az postgres flexible-server migration create` command requires a JSON file
| `targetServerUserName` | The default value is the admin user created during the creation of the PostgreSQL target flexible server, and the password provided is used for authentication against this user. | | `dbsToMigrate` | Specify the list of databases that you want to migrate to Flexible Server. You can include a maximum of eight database names at a time. Providing the list of DBs in array format. | | `overwriteDBsInTarget` | When set to true (default), if the target server happens to have an existing database with the same name as the one you're trying to migrate, the migration service automatically overwrites the database |
+| `migrationRuntimeResourceId` | Required if a runtime server needs to be used for migration. The format is - `/subscriptions/<<Subscription ID>>/resourceGroups/<<Resource Group Name>>/providers/Microsoft.DBforPostgreSQL/flexibleServers/<<PostgreSQL Flexible Server name>>` |
| `sourceType` | Required parameter. Values can be - on-premises, AWS_RDS, AzureVM, PostgreSQLSingleServer | | `sslMode` | SSL modes for migration. SSL mode for PostgreSQLSingleServer is VerifyFull and Prefer/Require for other source types. |
postgresql Troubleshoot Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/migration-service/troubleshoot-error-codes.md
Migration failures often manifest through error messages that indicate connectiv
- server closed the connection unexpectedly - SSL SYSCALL error: EOF detected - unexpected EOF on client connection-- could not receive data from client: Connection reset by peer
+- couldn't receive data from client: Connection reset by peer
### Cause
In the context of migration service in Azure Database for PostgreSQL, a connecti
- `tcp_keepalives_interval=10` - `tcp_keepalives_count=60`
-These settings will help maintain the connection by sending keepalive probes to prevent timeouts due to inactivity. Importantly, modifying these TCP parameters does not require a restart of the source or target PostgreSQL instances. Changes can be applied dynamically, allowing for a seamless continuation of service without interrupting the database operations.
+These settings help maintain the connection by sending keepalive probes to prevent timeouts due to inactivity. Importantly, modifying these TCP parameters doesn't require a restart of the source or target PostgreSQL instances. Changes can be applied dynamically, allowing for a seamless continuation of service without interrupting the database operations.
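As a hedged sketch of applying the keepalive settings shown above, the following snippet uses `psycopg2` and `ALTER SYSTEM`; it assumes a source server where you're allowed to run `ALTER SYSTEM` (on managed services, set these server parameters through the service's configuration interface instead), and the connection string is a placeholder.

```python
import psycopg2

conn = psycopg2.connect("host=<source-server> dbname=postgres user=<admin> password=<password>")
conn.autocommit = True  # ALTER SYSTEM can't run inside a transaction block
with conn.cursor() as cur:
    cur.execute("ALTER SYSTEM SET tcp_keepalives_interval = 10;")
    cur.execute("ALTER SYSTEM SET tcp_keepalives_count = 60;")
    cur.execute("SELECT pg_reload_conf();")  # apply the change without a restart
conn.close()
```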
-## Premigration validation error codes
+## Migration error codes
| Error Code | Error message | Resolution | | | | |
These settings will help maintain the connection by sending keepalive probes to
| 603101 | Database exists in target. Database `{dbName}` exists on the target server. Ensure the target server doesn't have the database and retry the migration. | N/A | | 603102 | Source Database Missing. Database `{dbName}` doesn't exist on the source server. Provide a valid database and retry the migration. | N/A | | 603103 | Missing Microsoft Entra role. Microsoft Entra role `{roleNames}` is missing on the target server. Create the Entra role and retry the migration. | N/A |
-| 603104 | Missing Replication Role. User `{0}` doesn't have the replication role on server `{1}`. Grant the replication role before retrying migration. | Use `ALTER ROLE {0} WITH REPLICATION;` to grant the required permission. |
+| 603104 | Missing Replication Role. User `{0}` doesn't have the replication role on server `{1}`. Grant the replication role before retrying migration. | Use `ALTER ROLE <rolename> WITH REPLICATION;` to grant the required permission. |
| 603105 | GUC Settings Error. Insufficient replication slots on the source server for migration. Increase the `max_replication_slots` GUC parameter to `{0}` or higher. | Source server doesn't have sufficient replication slots available to perform online migration. Use this query `SELECT * FROM pg_replication_slots WHERE active = false AND slot_type = 'logical';` to get the list of inactive replication slots and drop them using `SELECT pg_drop_replication_slot('slot_name');` before initiating the migration. Alternatively, set the 'max_replication_slots' server parameter to `{0}` or higher. Ensure that the `max_wal_senders` parameter is also changed to be greater than or equal to the `max_replication_slots' parameter`. | | 603106 | GUC Settings Error. The `max_wal_senders` GUC parameter is set to `{0}`. Ensure it matches or exceeds the 'max_replication_slots' value. | N/A | | 603107 | GUC Settings Error. Source server WAL level parameter is set to `{0}`. Set GUC parameter WAL level to be 'logical'. | N/A | | 603108 | Extensions allowlist required. Extensions `{0}` couldn't be installed on the target server because they're not allowlisted. Allowlist the extensions and retry the migration. | Set the allowlist by following the steps mentioned in [PostgreSQL extensions](https://aka.ms/allowlist-extensions). | | 603109 | Shared preload libraries configuration error. Add allowlisted extensions `{0}` to 'shared_preload_libraries' on the target server and retry the migration. | Set the shared preload libraries by following the steps mentioned in [PostgreSQL extensions](https://aka.ms/allowlist-extensions). This requires a server restart. |
+| 603110 | Insufficient privileges. Migration user lacks necessary permissions for database access. Ensure that the migration user is the owner of the source databases and has both read and write privileges, and then retry the migration. | N/A |
+| 603111 | Target database cleanup failed. Unable to terminate active connections on the target database during the premigration phase. Grant the pg_signal_backend role to the migration user and retry the migration. | Add the pg_signal_backend role to the migration user by using the command `GRANT pg_signal_backend TO <migration_user>;`. |
+| 603112 | GUC settings error. Failed to set the default_transaction_read_only GUC parameter to off. Ensure that user write access is properly set and retry the migration. | Set `default_transaction_read_only` to OFF on the source server via the Azure portal or through a psql command (for example, `ALTER SYSTEM SET default_transaction_read_only = off;`). |
+| 603113 | Cutover failure. Cutover can't be initiated for database '{dbName}' because the migration is already in the Completed/Failed/Canceled state. | N/A |
+| 603114 | Cutover failure. Cutover can't be initiated for database '{dbName}' because the migration mode is offline. | N/A |
+| 603115 | Missing user privileges. Migration user '{0}' isn't a member of azure_pg_admin role. Add necessary privileges on target server and retry the migration. | N/A |
+| 603116 | Missing user privileges. Migration user '{0}' doesn't have the create role privilege. Add necessary privileges on target server and retry the migration. | Run query `ALTER ROLE <rolename> WITH CREATEROLE;` on target server. |
+| 603117 | Missing user privileges. Migration user '{0}' lacks necessary privileges to delete the '{dbName}' database on the target server. Drop the database manually from the target server and retry the migration. | N/A |
| 603400 | Unsupported source version. Migration of PostgreSQL versions below `{0}` is unsupported. | You must use another migration method. |
| 603401 | Collation mismatch. Collation `{0}` in database `{1}` isn't present on target server. | N/A |
| 603402 | Collation mismatch. Collation `{0}` for table `{1}` in column `{2}` isn't present on target server. | [Contact Microsoft support](https://support.microsoft.com/contactus) to add the necessary collations. |
| 603407 | Extension Schema Error. Extensions `{0}` located in the system schema on the source server are unsupported on the target server. Drop and recreate the extensions in a nonsystem schema, then retry the migration. | Visit [PostgreSQL extensions](../../flexible-server/concepts-extensions.md). |
| 603408 | Unsupported Extensions. Target server version 16 doesn't support `{0}` extensions. Migrate to version 15 or lower, then upgrade once the extensions are supported. | N/A |
| 603409 | User-defined casts present. Source database `{0}` contains user-defined casts that can't be migrated to the target server. | N/A |
-| 603410 | System table permission error. Users have access to system tables like pg_authid and pg_shadow that can't be migrated to the target. Revoke these permissions and retry the migration. | Validating the default permissions granted to `pg_catalog` tables/views (such as `pg_authid` and `pg_shadow`) is essential. However, these permissions can't be assigned to the target. Specifically, User `{1}` possesses `{2}` permissions, while User `{3}` holds `{4}` permissions. For a workaround, visit https://aka.ms/troubleshooting-user-roles-permission-ownerships-issues. |
+| 603410 | System table permission error. Users have access to system tables like pg_authid and pg_shadow that can't be migrated to the target. Revoke these permissions and retry the migration. | Validating the default permissions granted to `pg_catalog` tables/views (such as `pg_authid` and `pg_shadow`) is essential. However, these permissions can't be assigned to the target. Specifically, User `{1}` possesses `{2}` permissions, while User `{3}` holds `{4}` permissions. For a workaround, visit [User, Roles, and Permissions](https://aka.ms/troubleshooting-user-roles-permission-ownerships-issues). |
+| 603700 | Target database cleanup failed. Unable to terminate active connections on the target database during the pre-migration/post-migration phase. | N/A |
+| 603701 | Internal server error. Failed to create roles on the target server. | [Contact Microsoft support](https://support.microsoft.com/contactus) for further analysis. |
+| 603702 | Internal server error. Failed to dump roles from source server. | [Contact Microsoft support](https://support.microsoft.com/contactus) for further analysis. |
+| 603703 | Internal server error. Failed to edit the global role dump file. | [Contact Microsoft support](https://support.microsoft.com/contactus) for further analysis. |
+| 603704 | Internal server error. Failed to make all source roles a member of target migration user. | [Contact Microsoft support](https://support.microsoft.com/contactus) for further analysis. |
+| 603705 | Internal server error. Failed to restore grants/revokes. | [Contact Microsoft support](https://support.microsoft.com/contactus) for further analysis. |
+| 603706 | Internal server error. Failed to clean up the target server migration user. Your target migration user can be part of multiple roles. Remove all unnecessary roles from target server migration user and retry the migration. | [Contact Microsoft support](https://support.microsoft.com/contactus) for further analysis. |
+| 603707 | Internal server error. Failed to grant azure_pg_admin to the source server admin user. | [Contact Microsoft support](https://support.microsoft.com/contactus) for further analysis.|
+| 603708 | Internal server error. Failed to alter the owner of public schema to azure_pg_admin in database '{dbName}'. Change the owner of public schema to azure_pg_admin manually and retry the migration. | [Contact Microsoft support](https://support.microsoft.com/contactus) for further analysis. |
+| 603709 | Migration setup failed. | [Contact Microsoft support](https://support.microsoft.com/contactus) for further analysis. |
## Related content
reliability Cross Region Replication Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/cross-region-replication-azure.md
The table below lists Azure regions without a region pair:
| Geography | Region |
|--|-|
| Qatar | Qatar Central |
+| Mexico | Mexico Central |
| Poland | Poland Central |
| Israel | Israel Central |
| Italy | Italy North |
| Austria | Austria East (Coming soon) |
-| Spain | Spain Central (Coming soon) |
+| Spain | Spain Central|
## Next steps

- [Azure services and regions that support availability zones](availability-zones-service-support.md)
route-server Peer Route Server With Virtual Appliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/peer-route-server-with-virtual-appliance.md
+
+ Title: 'Tutorial: Configure BGP peering between Azure Route Server and NVA'
+description: This tutorial shows you how to configure an Azure Route Server and peer it with a Network Virtual Appliance (NVA) using the Azure portal.
++++ Last updated : 07/11/2024++
+# Tutorial: Configure BGP peering between Azure Route Server and network virtual appliance (NVA)
+
+This tutorial shows you how to deploy an Azure Route Server and a Windows Server network virtual appliance (NVA) into a virtual network and establish a BGP peering connection between them.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> - Create a virtual network
+> - Deploy an Azure Route Server
+> - Deploy a virtual machine
+> - Configure BGP on the virtual machine
+> - Configure BGP peering between the Route Server and the NVA
+> - Check learned routes
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Prerequisites
+
+- An active Azure subscription.
+
+## Sign in to Azure
+
+Sign in to the [Azure portal](https://portal.azure.com).
+
+## Create a virtual network
+
+Create a virtual network to host both the Route Server and the NVA. Azure Route Server must be deployed in a dedicated subnet called *RouteServerSubnet*.
+
+1. In the search box at the top of the portal, enter ***virtual network***, and select **Virtual networks** from the search results.
+
+ :::image type="content" source="./media/peer-route-server-with-virtual-appliance/portal-search.png" alt-text="Screenshot of searching for virtual networks in the Azure portal." lightbox="./media/peer-route-server-with-virtual-appliance/portal-search.png":::
+
+1. On the **Virtual networks** page, select **+ Create**.
+
+1. On the **Basics** tab of **Create virtual network**, enter or select the following information:
+
+ | Settings | Value |
+ | -- | -- |
+ | **Project details** | |
+ | Subscription | Select your Azure subscription. |
+ | Resource group | Select **Create new**. </br>In **Name** enter ***myResourceGroup***. </br>Select **OK**. |
+ | **Instance details** | |
+ | Name | Enter ***myVirtualNetwork***. |
+ | Region | Select an Azure region. This tutorial uses **East US**. |
+
+ :::image type="content" source="./media/peer-route-server-with-virtual-appliance/create-virtual-network-basics.png" alt-text="Screenshot of the Basics tab of creating a virtual network in the Azure portal." lightbox="./media/peer-route-server-with-virtual-appliance/create-virtual-network-basics.png":::
+
+1. Select the **IP Addresses** tab, or select the **Next** button twice.
+
+1. On the **IP Addresses** tab, configure **IPv4 address space** to **10.0.0.0/16**, then configure the following subnets:
+
+ | Subnet name | Subnet address range |
+ | -- | -- |
+ | mySubnet | 10.0.0.0/24 |
+ | RouteServerSubnet | 10.0.1.0/24 |
+
+ :::image type="content" source="./media/peer-route-server-with-virtual-appliance/create-virtual-network-ip-addresses.png" alt-text="Screenshot of the IP addresses tab of creating a virtual network in the Azure portal." lightbox="./media/peer-route-server-with-virtual-appliance/create-virtual-network-ip-addresses.png":::
+
+1. Select **Review + create** and then select **Create** after the validation passes.
+
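+If you prefer scripting over the portal, the following Azure PowerShell sketch creates an equivalent virtual network (an optional alternative; the resource names match the ones used in this tutorial):
+
+```azurepowershell-interactive
+# Create the resource group that holds all resources for this tutorial.
+New-AzResourceGroup -Name 'myResourceGroup' -Location 'eastus'
+
+# Define the workload subnet and the dedicated RouteServerSubnet.
+$workloadSubnet = New-AzVirtualNetworkSubnetConfig -Name 'mySubnet' -AddressPrefix '10.0.0.0/24'
+$routeServerSubnet = New-AzVirtualNetworkSubnetConfig -Name 'RouteServerSubnet' -AddressPrefix '10.0.1.0/24'
+
+# Create the virtual network with both subnets.
+New-AzVirtualNetwork -Name 'myVirtualNetwork' -ResourceGroupName 'myResourceGroup' -Location 'eastus' `
+    -AddressPrefix '10.0.0.0/16' -Subnet $workloadSubnet, $routeServerSubnet
+```
+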
+## Create an Azure Route Server
+
+In this section, you create an Azure Route Server.
+
+1. In the search box at the top of the portal, enter ***route server***, and select **Route Servers** from the search results.
+
+1. On the **Route Servers** page, select **+ Create**.
+
+1. On the **Basics** tab of the **Create a Route Server** page, enter or select the following information:
+
+ | Settings | Value |
+ | -- | -- |
+ | **Project details** | |
+ | Subscription | Select your Azure subscription that you used for the virtual network. |
+ | Resource group | Select **myResourceGroup**. |
+ | **Instance details** | |
+ | Name | Enter *myRouteServer*. |
+ | Region | Select **East US** region. |
+ | Routing Preference | Select the default **ExpressRoute** option. Other available options are: **VPN** and **ASPath**. <br>You can change your selection later from the Route Server **Configuration**. |
+ | **Configure virtual networks** | |
+ | Virtual Network | Select **myVirtualNetwork**. |
+ | Subnet | Select **RouteServerSubnet (10.0.1.0/24)**. This subnet is a dedicated Route Server subnet. |
+ | **Public IP address** | |
+ | Public IP address | Select **Create new** and accept the default name **myVirtualNetwork-ip** or enter a different one. This Standard IP address ensures connectivity to the backend service that manages the Route Server configuration. |
+
+ :::image type="content" source="./media/peer-route-server-with-virtual-appliance/create-route-server.png" alt-text="Screenshot of creating a Route Server in the Azure portal." lightbox="./media/peer-route-server-with-virtual-appliance/create-route-server.png":::
+
+1. Select **Review + create** and then select **Create** after validation passes. The Route Server takes about 15 minutes to deploy.
+
+1. Once the deployment is complete, select **Go to resource** to go to the **Overview** page of **myRouteServer**.
+
+1. Take note of the **ASN** and **Peer IPs** on the **Overview** page. You need this information to configure the NVA in the next section.
+
+ :::image type="content" source="./media/peer-route-server-with-virtual-appliance/route-server-overview.png" alt-text="Screenshot that shows the Route Server ASN and Peer IPs in the Overview page." lightbox="./media/peer-route-server-with-virtual-appliance/route-server-overview.png":::
+
+ > [!NOTE]
+ > - The ASN of Azure Route Server is always 65515.
+ > - The Peer IPs are the private IP addresses of the Route Server in the RouteServerSubnet.
+
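+You can also read these values programmatically. A small Azure PowerShell check, using the names from this tutorial:
+
+```azurepowershell-interactive
+# Retrieve the Route Server; the output includes its ASN and peer IP addresses.
+Get-AzRouteServer -ResourceGroupName 'myResourceGroup' -RouteServerName 'myRouteServer'
+```
+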
+## Create a network virtual appliance (NVA)
+
+In this section, you create a Windows Server NVA that communicates and exchanges routes with the Route Server over a BGP peering connection.
+
+### Create a virtual machine (VM)
+
+In this section, you create a Windows Server VM in the virtual network you created earlier to act as a network virtual appliance.
+
+1. In the search box at the top of the portal, enter ***virtual machine***, and select **Virtual machines** from the search results.
+
+1. Select **Create**, then select **Azure virtual machine**.
+
+1. On the **Basics** tab of **Create a virtual machine**, enter or select the following information:
+
+ | Settings | Value |
+ | -- | -- |
+ | **Project details** | |
+ | Subscription | Select your Azure subscription that you used for the virtual network. |
+ | Resource group | Select **myResourceGroup**. |
+ | **Instance details** | |
+ | Virtual machine name | Enter ***myNVA***. |
+ | Region | Select **(US) East US**. |
+ | Availability options | Select **No infrastructure required**. |
+ | Security type | Select a security type. This tutorial uses **Standard**. |
+ | Image | Select a **Windows Server** image. This tutorial uses **Windows Server 2022 Datacenter: Azure Edition - x64 Gen2** image. |
+ | Size | Choose a size or leave the default setting. |
+ | **Administrator account** | |
+ | Username | Enter a username. |
+ | Password | Enter a password. |
+ | Confirm password | Reenter the password. |
+
+ :::image type="content" source="./media/peer-route-server-with-virtual-appliance/create-virtual-machine-basics.png" alt-text="Screenshot of the Basics tab of creating a VM in the Azure portal." lightbox="./media/peer-route-server-with-virtual-appliance/create-virtual-machine-basics.png":::
+
+1. Select the **Networking** tab, or select **Next: Disks >** and then **Next: Networking >**.
+
+1. On the **Networking** tab, select the following network settings:
+
+ | Settings | Value |
+ | -- | -- |
+ | Virtual network | Select **myVirtualNetwork**. |
+ | Subnet | Select **mySubnet (10.0.0.0/24)**. |
+ | Public IP | Leave as default. |
+ | NIC network security group | Select **Basic**. |
+ | Public inbound ports | Select **Allow selected ports**. |
+ | Select inbound ports | Select **RDP (3389)**. |
+
+ > [!CAUTION]
+ > Leaving the RDP port open to the internet is not recommended. Restrict access to the RDP port to a specific IP address or range of IP addresses. For production environments, it's recommended to block internet access to the RDP port and use [Azure Bastion](../bastion/bastion-overview.md?toc=/azure/route-server/toc.json) to securely connect to your virtual machine from the Azure portal.
+
+ :::image type="content" source="./media/peer-route-server-with-virtual-appliance/create-virtual-machine-networking.png" alt-text="Screenshot of the Networking tab of creating a VM in the Azure portal." lightbox="./media/peer-route-server-with-virtual-appliance/create-virtual-machine-networking.png":::
+
+1. Select **Review + create** and then **Create** after validation passes.
+
+### Configure BGP on the virtual machine
+
+In this section, you configure BGP settings on the VM so it acts as an NVA and can exchange routes with the Route Server.
+
+1. Go to **myNVA** virtual machine and select **Connect**.
+
+ :::image type="content" source="./media/peer-route-server-with-virtual-appliance/connect-vm.png" alt-text="Screenshot that shows how to connect to a VM using RDP in the Azure portal." lightbox="./media/peer-route-server-with-virtual-appliance/connect-vm.png":::
+
+1. On the **Connect** page, select **Download RDP file** under **Native RDP**.
+
+1. Open the downloaded file.
+
+1. Select **Connect** and then enter the username and password that you created in the previous steps. Accept the certificate if prompted.
+
+1. Run PowerShell as an administrator.
+
+1. In PowerShell, execute the following cmdlets:
+
+ ```powershell
+ # Install required Windows features.
+ Install-WindowsFeature RemoteAccess
+ Install-WindowsFeature RSAT-RemoteAccess-PowerShell
+ Install-WindowsFeature Routing
+ Install-RemoteAccess -VpnType RoutingOnly
+
+ # Configure BGP & Router ID on the Windows Server
+ Add-BgpRouter -BgpIdentifier 10.0.0.4 -LocalASN 65001
+
+ # Configure Azure Route Server as a BGP Peer.
+ Add-BgpPeer -LocalIPAddress 10.0.0.4 -PeerIPAddress 10.0.1.4 -PeerASN 65515 -Name RS_IP1
+ Add-BgpPeer -LocalIPAddress 10.0.0.4 -PeerIPAddress 10.0.1.5 -PeerASN 65515 -Name RS_IP2
+
+ # Originate and announce BGP routes.
+ Add-BgpCustomRoute -network 172.16.1.0/24
+ Add-BgpCustomRoute -network 172.16.2.0/24
+ ```
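+
+To confirm the configuration you just applied, you can query the Windows Server BGP router with the RemoteAccess cmdlets (an optional check; the peer connectivity state is established only after you add the NVA as a peer on the Route Server in the next section):
+
+```powershell
+# Show the configured BGP peers and their connectivity state.
+Get-BgpPeer
+
+# Show the BGP routes known to the NVA.
+Get-BgpRouteInformation
+```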
+
+## Configure Route Server peering
+
+1. Go to the Route Server you created in the previous steps.
+
+1. Select **Peers** under **Settings**. Then, select **+ Add** to add a new peer.
+
+1. On the **Add Peer** page, enter the following information:
+
+ | Setting | Value |
+ | - | -- |
+    | Name | Enter ***myNVA***. Use this name to identify the peer. It doesn't have to be the same name as the VM that you configured as an NVA. |
+ | ASN | Enter ***65001***. This is the ASN of the NVA. You configured it in the previous section. |
+ | IPv4 Address | Enter ***10.0.0.4***. This is the private IP address of the NVA. |
+
+1. Select **Add** to save the configuration.
+
+ :::image type="content" source="./media/peer-route-server-with-virtual-appliance/add-peer.png" alt-text="Screenshot that shows how to add the NVA to the Route Server as a peer." lightbox="./media/peer-route-server-with-virtual-appliance/add-peer.png":::
+
+1. Once you add the NVA as a peer, the **Peers** page shows **myNVA** as a peer:
+
+ :::image type="content" source="./media/peer-route-server-with-virtual-appliance/route-server-peers.png" alt-text="Screenshot that shows the peers of a Route Server." lightbox="./media/peer-route-server-with-virtual-appliance/route-server-peers.png":::
+
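+Alternatively, you can add the peer with Azure PowerShell instead of the portal (a minimal sketch using the same values):
+
+```azurepowershell-interactive
+# Add the NVA as a BGP peer of the Route Server.
+Add-AzRouteServerPeer -ResourceGroupName 'myResourceGroup' -RouteServerName 'myRouteServer' `
+    -PeerName 'myNVA' -PeerAsn 65001 -PeerIp '10.0.0.4'
+```
+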
+## Check learned routes
+
+Use [Get-AzRouteServerPeerLearnedRoute](/powershell/module/az.network/get-azrouteserverpeerlearnedroute) cmdlet to check the routes learned by the Route Server.
+
+```azurepowershell-interactive
+Get-AzRouteServerPeerLearnedRoute -ResourceGroupName 'myResourceGroup' -RouteServerName 'myRouteServer' -PeerName 'myNVA'
+```
+
+The output should look like the following example, which shows the two routes learned from the NVA:
+
+```output
+LocalAddress    Network          NextHop     SourcePeer    Origin    AsPath    Weight
+------------    -------          -------     ----------    ------    ------    ------
+10.0.1.5        172.16.1.0/24    10.0.0.4    10.0.0.4      EBgp      65001     32768
+10.0.1.5        172.16.2.0/24    10.0.0.4    10.0.0.4      EBgp      65001     32768
+10.0.1.4        172.16.1.0/24    10.0.0.4    10.0.0.4      EBgp      65001     32768
+10.0.1.4        172.16.2.0/24    10.0.0.4    10.0.0.4      EBgp      65001     32768
+```
+
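+To see the routes that the Route Server advertises back to the NVA, such as the virtual network address space, you can run the `Get-AzRouteServerPeerAdvertisedRoute` cmdlet as an optional check:
+
+```azurepowershell-interactive
+# List the routes the Route Server advertises to the myNVA peer.
+Get-AzRouteServerPeerAdvertisedRoute -ResourceGroupName 'myResourceGroup' -RouteServerName 'myRouteServer' -PeerName 'myNVA'
+```
+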
+## Clean up resources
+
+When no longer needed, you can delete all resources created in this tutorial by deleting the **myResourceGroup** resource group:
+
+1. In the search box at the top of the portal, enter ***myResourceGroup***. Select **myResourceGroup** from the search results.
+
+1. Select **Delete resource group**.
+
+1. In **Delete a resource group**, select **Apply force delete for selected Virtual machines and Virtual machine scale sets**.
+
+1. Enter ***myResourceGroup***, and then select **Delete**.
+
+1. Select **Delete** to confirm the deletion of the resource group and all its resources.
+
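+You can also delete the resource group with Azure PowerShell. This removes every resource in the group, so use it only if nothing else is deployed there:
+
+```azurepowershell-interactive
+# Delete the resource group and all resources it contains without prompting for confirmation.
+Remove-AzResourceGroup -Name 'myResourceGroup' -Force
+```
+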
+## Related content
+
+In this tutorial, you learned how to create and configure an Azure Route Server with a network virtual appliance (NVA). To learn more about Route Servers, see [Azure Route Server frequently asked questions (FAQ)](route-server-faq.md).
route-server Tutorial Configure Route Server With Quagga https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/tutorial-configure-route-server-with-quagga.md
- Title: 'Tutorial: Configure peering between Azure Route Server and Network Virtual Appliance'
-description: This tutorial shows you how to configure an Azure Route Server and peer it with a Quagga network virtual appliance (NVA) using the Azure portal.
---- Previously updated : 07/10/2023---
-# Tutorial: Configure peering between Azure Route Server and Network Virtual Appliance
-
-This tutorial shows you how to deploy an Azure Route Server into a virtual network and establish a BGP peering connection with a Quagga network virtual appliance (NVA). You deploy a virtual network with four subnets. One subnet is dedicated to the Route Server and another subnet dedicated to the Quagga NVA. The Quagga NVA will be configured to exchange routes with the Route Server. Lastly, you'll test to make sure routes are properly exchanged on the Route Server and Quagga NVA.
-
-In this tutorial, you learn how to:
-
-> [!div class="checklist"]
-> * Create a virtual network with four subnets
-> * Deploy an Azure Route Server
-> * Deploy a virtual machine running Quagga
-> * Configure Route Server peering
-> * Check learned routes
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-
-## Prerequisites
-
-* An active Azure subscription
-
-## Sign in to Azure
-
-Sign in to the [Azure portal](https://portal.azure.com).
-
-## Create a virtual network
-
-You need a virtual network to deploy both the Route Server and the Quagga NVA. Azure Route Server must be deployed in a dedicated subnet called *RouteServerSubnet*.
-
-1. On the Azure portal home page, search for *virtual network*, and select **Virtual networks** from the search results.
-
- :::image type="content" source="./media/tutorial-configure-route-server-with-quagga/create-new-virtual-network.png" alt-text="Screenshot of create a new virtual network resource.":::
-
-1. On the **Virtual networks** page, select **+ Create**.
-
-1. On the **Basics** tab of **Create virtual network**, enter or select the following information:
-
- | Settings | Value |
- | -- | -- |
- | **Project details** | |
- | Subscription | Select your Azure subscription. |
- | Resource group | Select **Create new**. </br> In **Name** enter **myRouteServerRG**. </br> Select **OK**. |
- | **Instance details** | |
- | Name | Enter *myVirtualNetwork*. |
- | Region | Select **East US**. |
-
- :::image type="content" source="./media/tutorial-configure-route-server-with-quagga/virtual-network-basics-tab.png" alt-text="Screenshot of basics tab settings for the virtual network.":::
-
-1. Select **IP Addresses** tab or **Next : IP Addresses >** button.
-
-1. On the **IP Addresses** tab, configure **IPv4 address space** to **10.1.0.0/16**, then configure the following subnets:
-
- | Subnet name | Subnet address range |
- | -- | -- |
- | RouteServerSubnet | 10.1.1.0/25 |
- | subnet1 | 10.1.2.0/24 |
- | subnet2 | 10.1.3.0/24 |
- | subnet3 | 10.1.4.0/24 |
-
- :::image type="content" source="./media/tutorial-configure-route-server-with-quagga/virtual-network-ip-addresses.png" alt-text="Screenshot of IP address settings for the virtual network.":::
-
-1. Select **Review + create** and then select **Create** after the validation passes.
-
-## Create the Azure Route Server
-
-The Route Server is used to communicate with your NVA and exchange virtual network routes using a BGP peering connection.
-
-1. On the Azure portal, search for *route server*, and select **Route Servers** from the search results.
-
-1. On the **Route Servers** page, select **+ Create**.
-
-1. On the **Basics** tab of **Create a Route Server** page, enter or select the following information:
-
- | Settings | Value |
- | -- | -- |
- | **Project details** | |
- | Subscription | Select your Azure subscription that you used for the virtual network. |
- | Resource group | Select **myRouteServerRG**. |
- | **Instance details** | |
- | Name | Enter *myRouteServer*. |
- | Region | Select **East US** region. |
- | **Configure virtual networks** | |
- | Virtual Network | Select **myVirtualNetwork**. |
- | Subnet | Select **RouteServerSubnet (10.1.0.0/25)**. This subnet is a dedicated Route Server subnet. |
- | **Public IP address** | |
- | Public IP address | Select **Create new**, and then enter *myRouteServer-ip*. This Standard IP address ensures connectivity to the backend service that manages the Route Server configuration. |
-
- :::image type="content" source="./media/tutorial-configure-route-server-with-quagga/route-server-basics-tab.png" alt-text="Screenshot of basics tab for Route Server creation.":::
-
-1. Select **Review + create** and then select **Create** after validation passes. The Route Server takes about 15 minutes to deploy.
-
-## Create Quagga network virtual appliance
-
-To configure the Quagga network virtual appliance, you need to deploy a Linux virtual machine, and then configure it with this [script](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.network/route-server-quagga/scripts/quaggadeploy.sh).
-
-### Create Quagga virtual machine (VM)
-
-1. On the Azure portal, search for *virtual machine*, and select **Virtual machines** from the search results.
-
-1. Select **Create**, then select **Azure virtual machine**.
-
-1. On the **Basics** tab of **Create a virtual machine**, enter or select the following information:
-
- | Settings | Value |
- | -- | -- |
- | **Project details** | |
- | Subscription | Select your Azure subscription that you used for the virtual network. |
- | Resource group | Select **myRouteServerRG**. |
- | **Instance details** | |
- | Virtual machine name | Enter *Quagga*. |
- | Region | Select **(US) East US**. |
- | Availability options | Select **No infrastructure required**. |
- | Security type | Select **Standard**. |
- | Image | Select an **Ubuntu** image. This tutorial uses **Ubuntu 18.04 LTS - Gen 2** image. |
- | Size | Select **Standard_B2s - 2vcpus, 4GiB memory**. |
- | **Administrator account** | |
- | Authentication type | Select **Password**. |
- | Username | Enter *azureuser*. Don't use *quagga* for the username as it causes the setup to fail in a later step. |
- | Password | Enter a password of your choosing. |
- | Confirm password | Reenter the password. |
- | **Inbound port rules** | |
- | Public inbound ports | Select **Allow selected ports**. |
- | Select inbound ports | Select **SSH (22)**. |
-
- :::image type="content" source="./media/tutorial-configure-route-server-with-quagga/create-quagga-basics-tab.png" alt-text="Screenshot of basics tab for creating a new virtual machine." lightbox="./media/tutorial-configure-route-server-with-quagga/create-quagga-basics-tab-expanded.png":::
-
-1. On the **Networking** tab, select the following network settings:
-
- | Settings | Value |
- | -- | -- |
- | Virtual network | Select **myVirtualNetwork**. |
- | Subnet | Select **subnet3 (10.1.4.0/24)**. |
- | Public IP | Leave as default. |
- | NIC network security group | Select **Basic**. |
- | Public inbound ports | Select **Allow selected ports**. |
- | Select inbound ports | Select **SSH (22)**. |
-
- :::image type="content" source="./media/tutorial-configure-route-server-with-quagga/create-quagga-networking-tab.png" alt-text="Screenshot of networking tab for creating a new virtual machine." lightbox="./media/tutorial-configure-route-server-with-quagga/create-quagga-networking-tab-expanded.png":::
-
-1. Select **Review + create** and then **Create** after validation passes.
-
-1. Once the virtual machine has deployed, go to the **Networking** page of **Quagga** virtual machine and select the network interface.
-
- :::image type="content" source="./media/tutorial-configure-route-server-with-quagga/quagga-network-settings.png" alt-text="Screenshot of networking page of the Quagga VM.":::
-
-1. Select **IP configuration** under **Settings** and then select **ipconfig1**.
-
- :::image type="content" source="./media/tutorial-configure-route-server-with-quagga/quagga-ip-configuration.png" alt-text="Screenshot of IP configurations page of the Quagga VM.":::
-
-1. Under **Private IP address Settings**, change the **Assignment** from **Dynamic** to **Static**, and then change the **IP address** from **10.1.4.4** to **10.1.4.10**. The [script](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.network/route-server-quagga/scripts/quaggadeploy.sh) that you run in a later step uses **10.1.4.10**. If you want to use a different IP address, ensure to update the IP in the script.
-
-1. Take note of the public IP, and select **Save** to update the IP configurations of the virtual machine.
-
- :::image type="content" source="./media/tutorial-configure-route-server-with-quagga/change-ip-configuration.png" alt-text="Screenshot of changing IP configurations the Quagga VM.":::
-
-### Configure Quagga virtual machine
-
-1. If you are on a Mac or Linux machine, open a Bash prompt. If you are on a Windows machine, open a PowerShell prompt.
-
-1. At your prompt, open an SSH connection to the Quagga VM by executing the following command. Replace the IP address with the one you took note of in the previous step.
-
- ```console
- ssh azureuser@52.240.57.121
- ```
-
-1. When prompted, enter the password you previously created for the Quagga VM.
-
-1. Once logged in, enter `sudo su` to switch to super user to avoid errors running the script.
-
-1. Copy and paste the following commands into the SSH session. These commands download and install this [script](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.network/route-server-quagga/scripts/quaggadeploy.sh) to configure the virtual machine with Quagga along with other network settings.
-
- ```console
- wget "raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.network/route-server-quagga/scripts/quaggadeploy.sh"
-  
- chmod +x quaggadeploy.sh
-  
- ./quaggadeploy.sh
- ```
-
-## Configure Route Server peering
-
-1. Go to the Route Server you created in the previous step.
-
-1. Select **Peers** under **Settings**. Then, select **+ Add** to add a new peer.
-
- :::image type="content" source="./media/tutorial-configure-route-server-with-quagga/peers.png" alt-text="Screenshot of peers page for Route Server.":::
-
-1. On the **Add Peer** page, enter the following information, and then select **Add** to save the configuration:
-
- | Setting | Value |
- | - | -- |
- | Name | Enter *Quagga*. This name is used to identify the peer. |
- | ASN | Enter *65001*. This ASN is defined in the script for Quagga NVA. |
- | IPv4 Address | Enter *10.1.4.10*. This IPv4 is the private IP of the Quagga NVA. |
-
- :::image type="content" source="./media/tutorial-configure-route-server-with-quagga/add-peer.png" alt-text="Screenshot of add peer page.":::
-
-1. Once you add the Quagga NVA as a peer, the **Peers** page should look like this:
-
- :::image type="content" source="./media/tutorial-configure-route-server-with-quagga/peer-configured.png" alt-text="Screenshot of a configured peer.":::
-
-## Check learned routes
-
-1. To check the routes learned by the Route Server, use this command in Azure portal Cloud Shell:
-
- ```azurepowershell-interactive
- $routes = @{
- RouteServerName = 'myRouteServer'
- ResourceGroupName = 'myRouteServerRG'
- PeerName = 'Quagga'
- }
- Get-AzRouteServerPeerLearnedRoute @routes | ft
- ```
-
- The output should look like the following output:
-
- :::image type="content" source="./media/tutorial-configure-route-server-with-quagga/routes-learned.png" alt-text="Screenshot of routes learned by Route Server.":::
-
-1. To check the routes learned by the Quagga NVA, enter `vtysh` and then enter `show ip bgp` on the NVA. The output should look like the following output:
-
- ```
- root@Quagga:/home/azureuser# vtysh
-
- Hello, this is Quagga (version 1.2.4).
- Copyright 1996-2005 Kunihiro Ishiguro, et al.
-
- Quagga# show ip bgp
- BGP table version is 0, local router ID is 10.1.4.10
- Status codes: s suppressed, d damped, h history, * valid, > best, = multipath,
- i internal, r RIB-failure, S Stale, R Removed
- Origin codes: i - IGP, e - EGP, ? - incomplete
-
- Network Next Hop Metric LocPrf Weight Path
- 10.1.0.0/16 10.1.1.4 0 65515 i
- 10.1.1.5 0 65515 i
- *> 10.100.1.0/24 0.0.0.0 0 32768 i
- *> 10.100.2.0/24 0.0.0.0 0 32768 i
- *> 10.100.3.0/24 0.0.0.0 0 32768 i
- ```
-
-## Clean up resources
-
-When no longer needed, you can delete all resources created in this tutorial by following these steps:
-
-1. On the Azure portal menu, select **Resource groups**.
-
-1. Select the **myRouteServerRG** resource group.
-
-1. Select **Delete a resource group**.
-
-1. Select **Apply force delete for selected Virtual machines and Virtual machine scale sets**.
-
-1. Enter *myRouteServerRG* and select **Delete**.
-
-## Next steps
-
-In this tutorial, you learned how to create and configure an Azure Route Server with a network virtual appliance (NVA). To learn more about Route Servers, see [Azure Route Server frequently asked questions (FAQs)](route-server-faq.md).
sentinel Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/billing.md
Combining the pricing tiers offers a simplification to the overall billing and c
Billable meters are the individual components of your service that appear on your bill and are shown in Microsoft Cost Management. At the end of your billing cycle, the charges for each meter are summed. Your bill or invoice shows a section for all Microsoft Sentinel costs. There's a separate line item for each meter.
-To see your Azure bill, select **Cost Analysis** in the left navigation of **Cost Management**. On the **Cost analysis** screen, select the drop-down caret in the **View** field, and select **Invoice details**.
+To see your Azure bill, select **Cost Analysis** in the left navigation of **Cost Management**. On the **Cost analysis** screen, find and select **Invoice details** from **All views**.
The costs shown in the following image are for example purposes only. They're not intended to reflect actual costs. Starting July 1, 2023, legacy pricing tiers are prefixed with **Classic**.
sentinel Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/feature-availability.md
Previously updated : 06/17/2024 Last updated : 07/11/2024 # Microsoft Sentinel feature support for Azure commercial/other clouds
-This article describes the features available in Microsoft Sentinel across different Azure environments. Features are listed as GA (generally available), public preview, or shown as not available.
+This article describes the features available in Microsoft Sentinel across different Azure environments. Features are listed as GA (generally available), public preview, or shown as not available.
-While Microsoft Sentinel is also available in the [Microsoft Defender portal](microsoft-sentinel-defender-portal.md), this article only covers Azure environments.
+While Microsoft Sentinel is also available in the [Microsoft Defender portal](microsoft-sentinel-defender-portal.md), this article only covers Azure environments. Microsoft Sentinel within the Microsoft unified security operations platform is currently supported only in the Azure commercial cloud.
> [!NOTE] > These lists and tables do not include feature or bundle availability in the Azure Government Secret or Azure Government Top Secret clouds.
sentinel Microsoft 365 Defender Sentinel Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/microsoft-365-defender-sentinel-integration.md
description: Learn how using Microsoft Defender XDR together with Microsoft Sent
Previously updated : 07/08/2024 Last updated : 07/11/2024 appliesto: - Microsoft Sentinel in the Azure portal and the Microsoft Defender portal
Integrate Microsoft Defender XDR with Microsoft Sentinel to stream all Defender
Alternatively, onboard Microsoft Sentinel with Defender XDR to the unified security operations platform in the Defender portal. The unified security operations platform brings together the full capabilities of Microsoft Sentinel, Defender XDR, and generative AI built specifically for cybersecurity. For more information, see the following resources: -- [Unified security operations platform with Microsoft Sentinel and Defender XDR](https://aka.ms/unified-soc-announcement)
+- Blog post: [General availability of the Microsoft unified security operations platform](https://aka.ms/unified-soc-announcement)
- [Microsoft Sentinel in the Microsoft Defender portal](microsoft-sentinel-defender-portal.md) - [Microsoft Copilot in Microsoft Defender](/defender-xdr/security-copilot-in-microsoft-365-defender)
Other services whose alerts are collected by Defender XDR include:
- Microsoft Purview Data Loss Prevention ([Learn more](/microsoft-365/security/defender/investigate-dlp)) - Microsoft Entra ID Protection ([Learn more](/defender-cloud-apps/aadip-integration))
-The Defender XDR connector also brings incidents from Microsoft Defender for Cloud. To synchronize alerts and entities from these incidents as well, you must enable the Microsoft Defender for Cloud connector. Otherwise, your Microsoft Defender for Cloud incidents appear empty. For more information, see [Ingest Microsoft Defender for Cloud incidents with Microsoft Defender XDR integration](ingest-defender-for-cloud-incidents.md).
+The Defender XDR connector also brings incidents from Microsoft Defender for Cloud. To synchronize alerts and entities from these incidents as well, you must enable the Defender for Cloud connector in Microsoft Sentinel. Otherwise, your Defender for Cloud incidents appear empty. For more information, see [Ingest Microsoft Defender for Cloud incidents with Microsoft Defender XDR integration](ingest-defender-for-cloud-incidents.md).
In addition to collecting alerts from these components and other services, Defender XDR generates alerts of its own. It creates incidents from all of these alerts and sends them to Microsoft Sentinel.
The exception to this process is Microsoft Defender for Cloud. Although its inte
To avoid creating *duplicate incidents for the same alerts*, the **Microsoft incident creation rules** setting is turned off for Defender XDR-integrated products when connecting Defender XDR. Defender XDR-integrated products include Microsoft Defender for Identity, Microsoft Defender for Office 365, and more. Also, Microsoft incident creation rules aren't supported in the unified security operations platform. Defender XDR has its own incident creation rules. This change has the following potential impacts: -- Microsoft Sentinel's incident creation rules allowed you to filter the alerts that would be used to create incidents. With these rules disabled, preserve the alert filtering capability by configuring [alert tuning in the Microsoft Defender portal](/microsoft-365/security/defender/investigate-alerts), or by using [automation rules](automate-incident-handling-with-automation-rules.md#incident-suppression) to suppress or close incidents you don't want.
+- Microsoft Sentinel's incident creation rules allowed you to filter the alerts that would be used to create incidents. With these rules disabled, preserve the alert filtering capability by configuring [alert tuning in the Microsoft Defender portal](/microsoft-365/security/defender/investigate-alerts), or by using [automation rules](automate-incident-handling-with-automation-rules.md#incident-suppression) to suppress or close incidents you don't want.
- After you enable the Defender XDR connector, you can no longer predetermine the titles of incidents. The Defender XDR correlation engine presides over incident creation and automatically names the incidents it creates. This change is liable to affect any automation rules you created that use the incident name as a condition. To avoid this pitfall, use criteria other than the incident name as conditions for [triggering automation rules](automate-incident-handling-with-automation-rules.md#conditions). We recommend using *tags*. - If you use Microsoft Sentinel's incident creation rules for other Microsoft security solutions or products not integrated into Defender XDR, such as Microsoft Purview Insider Risk Management, and you plan to onboard to the unified security operations platform in the Defender portal, replace your incident creation rules with [scheduled analytic rules](create-analytics-rule-from-template.md). - ## Working with Microsoft Defender XDR incidents in Microsoft Sentinel and bi-directional sync Defender XDR incidents appear in the Microsoft Sentinel incidents queue with the product name **Microsoft Defender XDR**, and with similar details and functionality to any other Microsoft Sentinel incidents. Each incident contains a link back to the parallel incident in the Microsoft Defender portal.
The Defender XDR connector also lets you stream **advanced hunting** events&mdas
In this document, you learned the benefits of enabling the Defender XDR connector in Microsoft Sentinel. - [Connect data from Microsoft Defender XDR to Microsoft Sentinel](connect-microsoft-365-defender.md)-- To use the unified security operations platform in the Defender portal, see [Connect data from Microsoft Defender XDR to Microsoft Sentinel](connect-microsoft-365-defender.md).
+- To use the unified security operations platform in the Defender portal, see [Connect Microsoft Sentinel to Microsoft Defender XDR](/defender-xdr/microsoft-sentinel-onboard).
- Check [availability of different Microsoft Defender XDR data types](microsoft-365-defender-cloud-support.md) in the different Microsoft 365 and Azure clouds.
sentinel Microsoft Sentinel Defender Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/microsoft-sentinel-defender-portal.md
description: Learn about changes in the Microsoft Defender portal with the integ
Previously updated : 05/29/2024 Last updated : 07/11/2024 appliesto: - Microsoft Sentinel in the Microsoft Defender portal
# Microsoft Sentinel in the Microsoft Defender portal
-Microsoft Sentinel is available as part of the unified security operations platform in the Microsoft Defender portal. Microsoft Sentinel in the Defender portal is now supported for production use. For more information, see:
+This article describes the Microsoft Sentinel experience in the Microsoft Defender portal. Microsoft Sentinel is now generally available within the Microsoft unified security operations platform in the Microsoft Defender portal. For more information, see:
-- [Unified security operations platform with Microsoft Sentinel and Defender XDR](https://aka.ms/unified-soc-announcement)
+- Blog post: [General availability of the Microsoft unified security operations platform](https://aka.ms/unified-soc-announcement)
- [Connect Microsoft Sentinel to Microsoft Defender XDR](/microsoft-365/security/defender/microsoft-sentinel-onboard)
-This article describes the Microsoft Sentinel experience in the Microsoft Defender portal.
- ## New and improved capabilities The following table describes the new or improved capabilities available in the Defender portal with the integration of Microsoft Sentinel and Defender XDR.
The following table lists the changes in navigation between the Azure and Defend
## Related content
+- [Microsoft Defender XDR integration with Microsoft Sentinel](microsoft-365-defender-sentinel-integration.md)
- [Connect Microsoft Sentinel to Microsoft Defender XDR](/microsoft-365/security/defender/microsoft-sentinel-onboard) - [Microsoft Defender XDR documentation](/microsoft-365/security/defender)
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
description: Learn about the latest new features and announcement in Microsoft S
Previously updated : 05/21/2024 Last updated : 07/10/2024 # What's new in Microsoft Sentinel
The listed features were released in the last three months. For information abou
[!INCLUDE [reference-to-feature-availability](includes/reference-to-feature-availability.md)]
+## July 2024
+
+- [Microsoft unified security platform now generally available](#microsoft-unified-security-platform-now-generally-available)
+
+### Microsoft unified security platform now generally available
+
+Microsoft Sentinel is now generally available within the Microsoft unified security operations platform in the Microsoft Defender portal. The Microsoft unified security operations platform brings together the full capabilities of Microsoft Sentinel, Microsoft Defender XDR, and Microsoft Copilot for Security in Microsoft Defender. For more information, see the following resources:
+
+- Blog post: [General availability of the Microsoft unified security operations platform](https://aka.ms/unified-soc-announcement)
+- [Microsoft Sentinel in the Microsoft Defender portal](microsoft-sentinel-defender-portal.md)
+- [Connect Microsoft Sentinel to Microsoft Defender XDR](/defender-xdr/microsoft-sentinel-onboard)
+- [Microsoft Copilot for Security in Microsoft Defender](/defender-xdr/security-copilot-in-microsoft-365-defender)
+ ## June 2024 - [Codeless Connector Platform now generally available](#codeless-connector-platform-now-generally-available)
storage Storage C Plus Plus Enumeration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-c-plus-plus-enumeration.md
- Title: List Azure Storage resources with C++ client library
-description: Learn how to use the listing APIs in Microsoft Azure Storage Client Library for C++ to enumerate containers, blobs, queues, tables, and entities.
-- Previously updated : 01/23/2017------
-# List Azure Storage resources in C++
-
-Listing operations are key to many development scenarios with Azure Storage. This article describes how to most efficiently enumerate objects in Azure Storage using the listing APIs provided in the Microsoft Azure Storage Client Library for C++.
-
-> [!NOTE]
-> This guide targets the Azure Storage Client Library for C++ version 2.x, which is available via [NuGet](https://www.nuget.org/packages/wastorage) or [GitHub](https://github.com/Azure/azure-storage-cpp).
-
-The Storage Client Library provides a variety of methods to list or query objects in Azure Storage. This article addresses the following scenarios:
--- List containers in an account-- List blobs in a container or virtual blob directory-- List queues in an account-- List tables in an account-- Query entities in a table-
-Each of these methods is shown using different overloads for different scenarios.
-
-## Asynchronous versus synchronous
-
-Because the Storage Client Library for C++ is built on top of the [C++ REST library](https://github.com/Microsoft/cpprestsdk), we inherently support asynchronous operations by using [pplx::task](https://microsoft.github.io/cpprestsdk/classpplx_1_1task.html). For example:
-
-```cpp
-pplx::task<list_blob_item_segment> list_blobs_segmented_async(continuation_token& token) const;
-```
-
-Synchronous operations wrap the corresponding asynchronous operations:
-
-```cpp
-list_blob_item_segment list_blobs_segmented(const continuation_token& token) const
-{
- return list_blobs_segmented_async(token).get();
-}
-```
-
-If you are working with multiple threading applications or services, we recommend that you use the async APIs directly instead of creating a thread to call the sync APIs, which significantly impacts your performance.
-
-## Segmented listing
-
-The scale of cloud storage requires segmented listing. For example, you can have over a million blobs in an Azure blob container or over a billion entities in an Azure Table. These are not theoretical numbers, but real customer usage cases.
-
-It is therefore impractical to list all objects in a single response. Instead, you can list objects using paging. Each of the listing APIs has a *segmented* overload.
-
-The response for a segmented listing operation includes:
--- *_segment*, which contains the set of results returned for a single call to the listing API.-- *continuation_token*, which is passed to the next call in order to get the next page of results. When there are no more results to return, the continuation token is null.-
-For example, a typical call to list all blobs in a container may look like the following code snippet. The code is available in our [samples](https://github.com/Azure/azure-storage-cpp/blob/master/Microsoft.WindowsAzure.Storage/samples/BlobsGettingStarted.cpp):
-
-```cpp
-// List blobs in the blob container
-azure::storage::continuation_token token;
-do
-{
- azure::storage::list_blob_item_segment segment = container.list_blobs_segmented(token);
- for (auto it = segment.results().cbegin(); it != segment.results().cend(); ++it)
-{
- if (it->is_blob())
- {
- process_blob(it->as_blob());
- }
- else
- {
- process_directory(it->as_directory());
- }
-}
-
- token = segment.continuation_token();
-}
-while (!token.empty());
-```
-
-Note that the number of results returned in a page can be controlled by the parameter *max_results* in the overload of each API, for example:
-
-```cpp
-list_blob_item_segment list_blobs_segmented(const utility::string_t& prefix, bool use_flat_blob_listing,
- blob_listing_details::values includes, int max_results, const continuation_token& token,
- const blob_request_options& options, operation_context context)
-```
-
-If you do not specify the *max_results* parameter, the default maximum value of up to 5000 results is returned in a single page.
-
-Also note that a query against Azure Table storage may return no records, or fewer records than the value of the *max_results* parameter that you specified, even if the continuation token is not empty. One reason might be that the query could not complete in five seconds. As long as the continuation token is not empty, the query should continue, and your code should not assume the size of segment results.
-
-The recommended coding pattern for most scenarios is segmented listing, which provides explicit progress of listing or querying, and how the service responds to each request. Particularly for C++ applications or services, lower-level control of the listing progress may help control memory and performance.
-
-## Greedy listing
-
-Earlier versions of the Storage Client Library for C++ (versions 0.5.0 Preview and earlier) included non-segmented listing APIs for tables and queues, as in the following example:
-
-```cpp
-std::vector<cloud_table> list_tables(const utility::string_t& prefix) const;
-std::vector<table_entity> execute_query(const table_query& query) const;
-std::vector<cloud_queue> list_queues() const;
-```
-
-These methods were implemented as wrappers of segmented APIs. For each response of segmented listing, the code appended the results to a vector and returned all results after the full containers were scanned.
-
-This approach might work when the storage account or table contains a small number of objects. However, with an increase in the number of objects, the memory required could increase without limit, because all results remained in memory. One listing operation can take a very long time, during which the caller had no information about its progress.
-
-These greedy listing APIs in the SDK do not exist in C#, Java, or the JavaScript Node.js environment. To avoid the potential issues of using these greedy APIs, we removed them in version 0.6.0 Preview.
-
-If your code is calling these greedy APIs:
-
-```cpp
-std::vector<azure::storage::table_entity> entities = table.execute_query(query);
-for (auto it = entities.cbegin(); it != entities.cend(); ++it)
-{
- process_entity(*it);
-}
-```
-
-Then you should modify your code to use the segmented listing APIs:
-
-```cpp
-azure::storage::continuation_token token;
-do
-{
- azure::storage::table_query_segment segment = table.execute_query_segmented(query, token);
- for (auto it = segment.results().cbegin(); it != segment.results().cend(); ++it)
- {
- process_entity(*it);
- }
-
- token = segment.continuation_token();
-} while (!token.empty());
-```
-
-By specifying the *max_results* parameter of the segment, you can balance between the numbers of requests and memory usage to meet performance considerations for your application.
-
-Additionally, if you're using segmented listing APIs, but store the data in a local collection in a "greedy" style, we also strongly recommend that you refactor your code to handle storing data in a local collection carefully at scale.
-
-## Lazy listing
-
-Although greedy listing raised potential issues, it is convenient if there are not too many objects in the container.
-
-If you're also using C# or Oracle Java SDKs, you should be familiar with the Enumerable programming model, which offers a lazy-style listing, where the data at a certain offset is only fetched if it is required. In C++, the iterator-based template also provides a similar approach.
-
-A typical lazy listing API, using **list_blobs** as an example, looks like this:
-
-```cpp
-list_blob_item_iterator list_blobs() const;
-```
-
-A typical code snippet that uses the lazy listing pattern might look like this:
-
-```cpp
-// List blobs in the blob container
-azure::storage::list_blob_item_iterator end_of_results;
-for (auto it = container.list_blobs(); it != end_of_results; ++it)
-{
- if (it->is_blob())
- {
- process_blob(it->as_blob());
- }
- else
- {
- process_directory(it->as_directory());
- }
-}
-```
-
-Note that lazy listing is only available in synchronous mode.
-
-Compared with greedy listing, lazy listing fetches data only when necessary. Under the covers, it fetches data from Azure Storage only when the next iterator moves into next segment. Therefore, memory usage is controlled with a bounded size, and the operation is fast.
-
-Lazy listing APIs are included in the Storage Client Library for C++ in version 2.2.0.
-
-## Conclusion
-
-In this article, we discussed different overloads for listing APIs for various objects in the Storage Client Library for C++ . To summarize:
--- Async APIs are strongly recommended under multiple threading scenarios.-- Segmented listing is recommended for most scenarios.-- Lazy listing is provided in the library as a convenient wrapper in synchronous scenarios.-- Greedy listing is not recommended and has been removed from the library.-
-## Next steps
-
-For more information about Azure Storage and Client Library for C++, see the following resources.
--- [How to use Blob Storage from C++](../blobs/quickstart-blobs-c-plus-plus.md)-- [How to use Table Storage from C++](../../cosmos-db/table-storage-how-to-use-c-plus.md)-- [How to use Queue Storage from C++](../queues/storage-c-plus-plus-how-to-use-queues.md)-- [Azure Storage Client Library for C++ API documentation.](https://azure.github.io/azure-storage-cpp/)-- [Azure Storage Team Blog](/archive/blogs/windowsazurestorage/)-- [Azure Storage Documentation](../index.yml)
storage Queues V8 Samples Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/queues-v8-samples-java.md
- Title: Azure Queue Storage code samples using Java version 8 client libraries-
-description: View code samples that use the Azure Queue Storage client library for Java version 8.
----- Previously updated : 07/24/2023---
-# Azure Queue Storage code samples using Java version 8 client libraries
-
-This article shows code samples that use version 8 of the Azure Queue Storage client library for Java.
--
-For code samples using the latest version 12.x client library version, see [Quickstart: Azure Queue Storage client library for Java](storage-quickstart-queues-java.md).
-
-## Create a queue
-
-Add the following `import` directives:
-
-```java
-import com.microsoft.azure.storage.*;
-import com.microsoft.azure.storage.queue.*;
-```
-
-A `CloudQueueClient` object lets you get reference objects for queues. The following code creates a `CloudQueueClient` object that provides a reference to the queue you want to use. You can create the queue if it doesn't exist.
-
-> [!NOTE]
-> There are other ways to create `CloudStorageAccount` objects. For more information, see `CloudStorageAccount` in the [Azure Storage client SDK reference](https://azure.github.io/azure-sdk-for-java/storage.html).)
-
-```java
-try
-{
- // Retrieve storage account from connection-string.
- CloudStorageAccount storageAccount =
- CloudStorageAccount.parse(storageConnectionString);
-
- // Create the queue client.
- CloudQueueClient queueClient = storageAccount.createCloudQueueClient();
-
- // Retrieve a reference to a queue.
- CloudQueue queue = queueClient.getQueueReference("myqueue");
-
- // Create the queue if it doesn't already exist.
- queue.createIfNotExists();
-}
-catch (Exception e)
-{
- // Output the stack trace.
- e.printStackTrace();
-}
-```
-
-## Add a message to a queue
-
-To insert a message into an existing queue, first create a new `CloudQueueMessage`. Next, call the `addMessage` method. A `CloudQueueMessage` can be created from either a string (in UTF-8 format) or a byte array. The following code example creates a queue (if it doesn't exist) and inserts the message `Hello, World`.
-
-```java
-try
-{
- // Retrieve storage account from connection-string.
- CloudStorageAccount storageAccount =
- CloudStorageAccount.parse(storageConnectionString);
-
- // Create the queue client.
- CloudQueueClient queueClient = storageAccount.createCloudQueueClient();
-
- // Retrieve a reference to a queue.
- CloudQueue queue = queueClient.getQueueReference("myqueue");
-
- // Create the queue if it doesn't already exist.
- queue.createIfNotExists();
-
- // Create a message and add it to the queue.
- CloudQueueMessage message = new CloudQueueMessage("Hello, World");
- queue.addMessage(message);
-}
-catch (Exception e)
-{
- // Output the stack trace.
- e.printStackTrace();
-}
-```
-
-## Peek at the next message
-
-You can peek at the message in the front of a queue without removing it from the queue by calling `peekMessage`.
-
-```java
-try
-{
- // Retrieve storage account from connection-string.
- CloudStorageAccount storageAccount =
- CloudStorageAccount.parse(storageConnectionString);
-
- // Create the queue client.
- CloudQueueClient queueClient = storageAccount.createCloudQueueClient();
-
- // Retrieve a reference to a queue.
- CloudQueue queue = queueClient.getQueueReference("myqueue");
-
- // Peek at the next message.
- CloudQueueMessage peekedMessage = queue.peekMessage();
-
- // Output the message value.
- if (peekedMessage != null)
- {
- System.out.println(peekedMessage.getMessageContentAsString());
- }
-}
-catch (Exception e)
-{
- // Output the stack trace.
- e.printStackTrace();
-}
-```
-
-## Change the contents of a queued message
-
-The following code sample searches through the queue of messages, locates the first message content that matches `Hello, world`, modifies the message content, and exits.
-
-```java
-try
-{
- // Retrieve storage account from connection-string.
- CloudStorageAccount storageAccount =
- CloudStorageAccount.parse(storageConnectionString);
-
- // Create the queue client.
- CloudQueueClient queueClient = storageAccount.createCloudQueueClient();
-
- // Retrieve a reference to a queue.
- CloudQueue queue = queueClient.getQueueReference("myqueue");
-
- // The maximum number of messages that can be retrieved is 32.
- final int MAX_NUMBER_OF_MESSAGES_TO_PEEK = 32;
-
- // Loop through the messages in the queue.
- for (CloudQueueMessage message : queue.retrieveMessages(MAX_NUMBER_OF_MESSAGES_TO_PEEK,1,null,null))
- {
- // Check for a specific string.
- if (message.getMessageContentAsString().equals("Hello, World"))
- {
- // Modify the content of the first matching message.
- message.setMessageContent("Updated contents.");
- // Set it to be visible in 30 seconds.
- EnumSet<MessageUpdateFields> updateFields =
- EnumSet.of(MessageUpdateFields.CONTENT,
- MessageUpdateFields.VISIBILITY);
- // Update the message.
- queue.updateMessage(message, 30, updateFields, null, null);
- break;
- }
- }
-}
-catch (Exception e)
-{
- // Output the stack trace.
- e.printStackTrace();
-}
-```
-
-The following code sample updates just the first visible message in the queue.
-
-```java
-try
-{
- // Retrieve storage account from connection-string.
- CloudStorageAccount storageAccount =
- CloudStorageAccount.parse(storageConnectionString);
-
- // Create the queue client.
- CloudQueueClient queueClient = storageAccount.createCloudQueueClient();
-
- // Retrieve a reference to a queue.
- CloudQueue queue = queueClient.getQueueReference("myqueue");
-
- // Retrieve the first visible message in the queue.
- CloudQueueMessage message = queue.retrieveMessage();
-
- if (message != null)
- {
- // Modify the message content.
- message.setMessageContent("Updated contents.");
- // Set it to be visible in 60 seconds.
- EnumSet<MessageUpdateFields> updateFields =
- EnumSet.of(MessageUpdateFields.CONTENT,
- MessageUpdateFields.VISIBILITY);
- // Update the message.
- queue.updateMessage(message, 60, updateFields, null, null);
- }
-}
-catch (Exception e)
-{
- // Output the stack trace.
- e.printStackTrace();
-}
-```
-
-## Get the queue length
-
-The `downloadAttributes` method retrieves several values including the number of messages currently in a queue. The count is only approximate because messages can be added or removed after your request. The `getApproximateMessageCount` method returns the last value retrieved by the call to `downloadAttributes`, without calling Queue Storage.
-
-```java
-try
-{
- // Retrieve storage account from connection-string.
- CloudStorageAccount storageAccount =
- CloudStorageAccount.parse(storageConnectionString);
-
- // Create the queue client.
- CloudQueueClient queueClient = storageAccount.createCloudQueueClient();
-
- // Retrieve a reference to a queue.
- CloudQueue queue = queueClient.getQueueReference("myqueue");
-
- // Download the approximate message count from the server.
- queue.downloadAttributes();
-
- // Retrieve the newly cached approximate message count.
- long cachedMessageCount = queue.getApproximateMessageCount();
-
- // Display the queue length.
- System.out.println(String.format("Queue length: %d", cachedMessageCount));
-}
-catch (Exception e)
-{
- // Output the stack trace.
- e.printStackTrace();
-}
-```
-
-## Dequeue the next message
-
-Your code dequeues a message from a queue in two steps. When you call `retrieveMessage`, you get the next message in a queue. A message returned from `retrieveMessage` becomes invisible to any other code reading messages from this queue. By default, this message stays invisible for 30 seconds. To finish removing the message from the queue, you must also call `deleteMessage`. If your code fails to process a message, this two-step process ensures that you can get the same message and try again. Your code calls `deleteMessage` right after the message has been processed.
-
-```java
-try
-{
- // Retrieve storage account from connection-string.
- CloudStorageAccount storageAccount =
- CloudStorageAccount.parse(storageConnectionString);
-
- // Create the queue client.
- CloudQueueClient queueClient = storageAccount.createCloudQueueClient();
-
- // Retrieve a reference to a queue.
- CloudQueue queue = queueClient.getQueueReference("myqueue");
-
- // Retrieve the first visible message in the queue.
- CloudQueueMessage retrievedMessage = queue.retrieveMessage();
-
- if (retrievedMessage != null)
- {
- // Process the message in less than 30 seconds, and then delete the message.
- queue.deleteMessage(retrievedMessage);
- }
-}
-catch (Exception e)
-{
- // Output the stack trace.
- e.printStackTrace();
-}
-```
-
-## Additional options for dequeuing messages
-
-The following code example uses the `retrieveMessages` method to get 20 messages in one call. Then it processes each message using a `for` loop. It also sets the invisibility timeout to five minutes (300 seconds) for each message. The timeout starts for all messages at the same time. When five minutes have passed since the call to `retrieveMessages`, any messages not deleted become visible again.
-
-```java
-try
-{
- // Retrieve storage account from connection-string.
- CloudStorageAccount storageAccount =
- CloudStorageAccount.parse(storageConnectionString);
-
- // Create the queue client.
- CloudQueueClient queueClient = storageAccount.createCloudQueueClient();
-
- // Retrieve a reference to a queue.
- CloudQueue queue = queueClient.getQueueReference("myqueue");
-
- // Retrieve 20 messages from the queue with a visibility timeout of 300 seconds.
- for (CloudQueueMessage message : queue.retrieveMessages(20, 300, null, null)) {
- // Do processing for all messages in less than 5 minutes,
- // deleting each message after processing.
- queue.deleteMessage(message);
- }
-}
-catch (Exception e)
-{
- // Output the stack trace.
- e.printStackTrace();
-}
-```
-
-## List the queues
-
-To obtain a list of the current queues, call the `CloudQueueClient.listQueues()` method, which returns a collection of `CloudQueue` objects.
-
-```java
-try
-{
- // Retrieve storage account from connection-string.
- CloudStorageAccount storageAccount =
- CloudStorageAccount.parse(storageConnectionString);
-
- // Create the queue client.
- CloudQueueClient queueClient =
- storageAccount.createCloudQueueClient();
-
- // Loop through the collection of queues.
- for (CloudQueue queue : queueClient.listQueues())
- {
- // Output each queue name.
- System.out.println(queue.getName());
- }
-}
-catch (Exception e)
-{
- // Output the stack trace.
- e.printStackTrace();
-}
-```
-
-## Delete a queue
-
-To delete a queue and all the messages contained in it, call the `deleteIfExists` method on the `CloudQueue` object.
-
-```java
-try
-{
- // Retrieve storage account from connection-string.
- CloudStorageAccount storageAccount =
- CloudStorageAccount.parse(storageConnectionString);
-
- // Create the queue client.
- CloudQueueClient queueClient = storageAccount.createCloudQueueClient();
-
- // Retrieve a reference to a queue.
- CloudQueue queue = queueClient.getQueueReference("myqueue");
-
- // Delete the queue if it exists.
- queue.deleteIfExists();
-}
-catch (Exception e)
-{
- // Output the stack trace.
- e.printStackTrace();
-}
-```
synapse-analytics Continuous Integration Delivery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/cicd/continuous-integration-delivery.md
This article outlines how to use an Azure DevOps release pipeline and GitHub Act
## Prerequisites
-To automate the deployment of an Azure Synapse workspace to multiple environments, the following prerequisites and configurations must be in place.
+To automate the deployment of an Azure Synapse workspace to multiple environments, the following prerequisites and configurations must be in place. Note that you may choose to use **either** Azure DevOps **or** GitHub, according to your preference or existing setup.
+ ### Azure DevOps
+If you are using Azure DevOps:
- Prepare an Azure DevOps project for running the release pipeline.
- [Grant any users who will check in code Basic access at the organization level](/azure/devops/organizations/accounts/add-organization-users?view=azure-devops&tabs=preview-page&preserve-view=true), so they can see the repository.
- Grant Owner permission to the Azure Synapse repository.
To automate the deployment of an Azure Synapse workspace to multiple environment
### GitHub
+If you are using GitHub:
- Create a GitHub repository that contains the Azure Synapse workspace artifacts and the workspace template.
- Make sure that you've created a self-hosted runner or use a GitHub-hosted runner.
synapse-analytics Apache Spark 3 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-3-runtime.md
Azure Synapse Analytics supports multiple runtimes for Apache Spark. This document covers the runtime components and versions for the Azure Synapse Runtime for Apache Spark 3.1.
-> [!WARNING]
+> [!CAUTION]
> Deprecation and disablement notification for Azure Synapse Runtime for Apache Spark 3.1
> * End of Support for Azure Synapse Runtime for Apache Spark 3.1 was announced on January 26, 2023.
> * Effective January 26, 2024, Azure Synapse has stopped official support for Spark 3.1 runtimes.
synapse-analytics Apache Spark Version Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-version-support.md
Apache Spark pools in Azure Synapse use runtimes to tie together essential component versions such as Azure Synapse optimizations, packages, and connectors with a specific Apache Spark version. Each runtime is upgraded periodically to include new improvements, features, and patches. When you create a serverless Apache Spark pool, select the corresponding Apache Spark version. Based on this, the pool comes preinstalled with the associated runtime components and packages. The runtimes have the following advantages:

- Faster session startup times
- Tested compatibility with specific Apache Spark versions
- Access to popular, compatible connectors and open-source packages

## Supported Azure Synapse runtime releases
-> [!WARNING]
-> End of Support Notification for Azure Synapse Runtime for Apache Spark 2.4, Apache Spark 3.1 and Apache Spark 3.2.
-> * Effective September 29, 2023, Azure Synapse will discontinue official support for Spark 2.4 Runtimes.
-> * Effective January 26, 2024, Azure Synapse will discontinue official support for Spark 3.1 Runtimes.
-> * Effective July 8, 2024, Azure Synapse will discontinue official support for Spark 3.2 Runtimes.
-> * After these dates, we will not be addressing any support tickets related to Spark 2.4, 3.1 and 3.2. There will be no release pipeline in place for bug or security fixes for Spark 2.4, 3.1 and 3.2. **Utilizing Spark 2.4, 3.1 and 3.2 post the support cutoff dates is undertaken at one's own risk. We strongly discourage its continued use due to potential security and functionality concerns.**
- > [!TIP]
-> We strongly recommend proactively upgrading workloads to a more recent GA version of the runtime (for example, [Azure Synapse Runtime for Apache Spark 3.4 (GA)](./apache-spark-34-runtime.md)). Refer to the [Apache Spark migration guide](https://spark.apache.org/docs/latest/sql-migration-guide.html).
+> We strongly recommend proactively upgrading workloads to a more recent GA version of the runtime, which is [Azure Synapse Runtime for Apache Spark 3.4 (GA)](./apache-spark-34-runtime.md). Refer to the [Apache Spark migration guide](https://spark.apache.org/docs/latest/sql-migration-guide.html).
The following table lists the runtime name, Apache Spark version, and release date for supported Azure Synapse Runtime releases.

| Runtime name | Release date | Release stage | End of Support announcement date | End of Support effective date |
| --- | --- | --- | --- | --- |
| [Azure Synapse Runtime for Apache Spark 3.4](./apache-spark-34-runtime.md) | Nov 21, 2023 | GA (as of Apr 8, 2024) | Q2 2025 | Q1 2026 |
-| [Azure Synapse Runtime for Apache Spark 3.3](./apache-spark-33-runtime.md) | Nov 17, 2022 | GA (as of Feb 23, 2023) | Q2/Q3 2024 | Q1 2025 |
+| [Azure Synapse Runtime for Apache Spark 3.3](./apache-spark-33-runtime.md) | Nov 17, 2022 | GA (as of Feb 23, 2023) | Q2/Q3 2024 | 3/31/2025 |
| [Azure Synapse Runtime for Apache Spark 3.2](./apache-spark-32-runtime.md) | July 8, 2022 | __deprecated and soon disabled__ | July 8, 2023 | __July 8, 2024__ |
| [Azure Synapse Runtime for Apache Spark 3.1](./apache-spark-3-runtime.md) | May 26, 2021 | __deprecated and soon disabled__ | January 26, 2023 | __January 26, 2024__ |
| [Azure Synapse Runtime for Apache Spark 2.4](./apache-spark-24-runtime.md) | December 15, 2020 | __deprecated and soon disabled__ | July 29, 2022 | __September 29, 2023__ |
synapse-analytics Runtime For Apache Spark Lifecycle And Supportability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/runtime-for-apache-spark-lifecycle-and-supportability.md
Title: Synapse runtime for Apache Spark lifecycle and supportability description: Lifecycle and support policies for Synapse runtime for Apache Spark -+ Last updated 03/08/2024
Apache Spark pools in Azure Synapse use runtimes to tie together essential component versions such as Azure Synapse optimizations, packages, and connectors with a specific Apache Spark version. Each runtime is upgraded periodically to include new improvements, features, and patches. +
+> [!CAUTION]
+> Azure Synapse Runtime for Apache Spark 2.4, 3.1, and 3.2 are unsupported and deprecated. Using these runtimes after the deprecation date is at your own risk, with the understanding that existing jobs running on Apache Spark 2.4, 3.1, or 3.2 pools will eventually stop executing.
+
+## Release cadence
+
+The Apache Spark project usually releases minor versions about __every 6 months__. Once released, the Azure Synapse team aims to provide a __preview runtime within approximately 90 days__, if possible.
trusted-signing How To Sign History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/trusted-signing/how-to-sign-history.md
You can use diagnostic settings to route your Trusted Signing account platform m
Currently, you can choose from four log routing options for Trusted Signing in Azure:

- **Log Analytics workspace**: A Log Analytics workspace serves as a distinct environment for log data. Each workspace has its own data repository and configuration. It's the designated destination for your data. If you haven't already set up a workspace, create one before you proceed. For more information, see the [Log Analytics workspace overview](/azure/azure-monitor/logs/log-analytics-workspace-overview).
- **Azure Storage account**: An Azure Storage account houses all your Storage data objects, including blobs, files, queues, and tables. It offers a unique namespace for your Storage data, and it's accessible globally via HTTP or HTTPS. To set up your storage account:
update-manager Manage Pre Post Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/manage-pre-post-events.md
Title: Manage the pre and post maintenance configuration events (preview) in Azure Update Manager description: The article provides the steps to manage the pre and post maintenance events in Azure Update Manager. Previously updated : 07/08/2024 Last updated : 07/09/2024
-# Manage pre and post events (preview)
+# Manage pre and post maintenance configuration events (preview)
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers :heavy_check_mark: Azure VMs. -
-Pre and post events allows you to execute user-defined actions before and after the scheduled maintenance configuration. For more information, go through the [workings of a pre and post event in Azure Update Manager](pre-post-scripts-overview.md).
-This article describes on how to create and manage the pre and post events in Azure Update Manager.
-
-## Event Grid in schedule maintenance configurations
-
-Azure Update Manager leverages Event grid to create and manage pre and post events. For more information, go through the [overview of Event Grid](../event-grid/overview.md). To trigger an event either before or after a schedule maintenance window, you require the following:
-
-1. **Schedule maintenance configuration** - You can create Pre and post events for a schedule maintenance configuration in Azure Update Manager. For more information, see [schedule updates using maintenance configurations](scheduled-patching.md).
-1. **Actions to be performed in the pre or post event** - You can use the [Event handlers](../event-grid/event-handlers.md) (Endpoints) supported by Event Grid to define actions or tasks. Here are examples on how to create Azure Automation Runbooks via Webhooks and Azure Functions. Within these Event handlers/Endpoints, you must define the actions that should be performed as part of pre and post events.
- 1. **Webhook** - Create a PowerShell 7.2 Runbook.[Learn more](../automation/automation-runbook-types.md#powershell-runbooks) and link the Runbook to a webhook. [Learn more](../automation/automation-webhooks.md).
- 1. **Azure Function** - Create an Azure Function. [Learn more][Create your first function in the Azure portal](../azure-functions/functions-create-function-app-portal.md).
-1. **Pre and post event** - You can follow the steps shared in the following section to create a pre and post event for schedule maintenance configuration. For more information in the Basics tab of Event
--
+This article describes how to register your subscription and manage pre and post events in Azure Update Manager.
## Register your subscription for public preview
Register-AzProviderFeature -FeatureName "InGuestPatchPrePostMaintenanceActivity"
```
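
To confirm that the feature registration has completed, you can query the feature state. The following is a minimal sketch; the `Microsoft.Maintenance` provider namespace used here is an assumption, so adjust it to match the namespace you used when registering the feature.

```powershell
# Check the registration state of the pre/post events preview feature.
# Assumption: the feature is registered under the Microsoft.Maintenance resource provider.
Get-AzProviderFeature -FeatureName "InGuestPatchPrePostMaintenanceActivity" -ProviderNamespace "Microsoft.Maintenance"
```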
-## Timeline of schedules for pre and post events
+## Manage pre and post events
-**We recommend you to go through the following table to understand the timeline of the schedule for pre and post events.**
+### View pre and post events
-For example, if a maintenance schedule is set to start at **3:00 PM**, with the maintenance window of 3 hours and 55 minutes for **Guest** maintenance scope, following are the details:
+To view the pre and post events, follow these steps:
-| **Time**| **Details** |
-|-|-|
-|2:19 PM | You can edit the machines and/or dynamically scope the machines up to 40 minutes before a scheduled patch run with an associated pre event. If any changes are made to the resources attached to the schedule after this time, the resources will be included in the subsequent schedule run and not the current run. </br> **Note**</br> If you're creating a new schedule or editing an existing schedule with a pre event, you need at least 40 minutes prior to the maintenance window for the pre event to run. </br></br> In this example, if you have set a schedule at 3:00 PM, you can modify the scope 40 mins before the set time that is at, 2.19 PM. |
-|Between 2:20 to 2:30 PM | The pre event is triggered giving atleast 20 mins to complete before the patch installation begins to run. </br></br> In this example, the pre event is initiated between 2:20 to 2:30 PM.|
-|2:50 PM | The pre event has atleast 20 mins to complete before the patch installation begins to run. </br> **Note** </br> - If the pre event continues to run beyond 20 mins, the patch installation goes ahead irrespective of the pre event run status. </br> - If you choose to cancel the current run, you can cancel using the cancelation API 10 mins before the schedule. In this example, by 2:50 PM you can cancel either from your script or Azure function code. </br> If cancelation API fails to get invoked or hasn't been set up, the patch installation proceeds to run. </br> </br> In this example, the pre event should complete the tasks by 2:50 PM. If you choose to cancel the current run, the latest time that you can invoke the cancelation API is by 2:50 PM. |
-|3:00 PM | As defined in the maintenance configuration, the schedule gets triggered at the specified time. </br> In this example, the schedule is triggered at 3:00 PM. |
-|6:55 PM | The post event gets triggered after the defined maintenance window completes. If you have defined a shorter maintenance window of 2 hrs, the post maintenance event will trigger after 2 hours and if the maintenance schedule is completed before the stipulated time of 2 hours that is, in 1 hr 50 mins, the post event will start. </br></br> In this example, if the maintenance window is set to the maximum, then by 6:55 PM the patch installation process is complete and if you have a shorter maintenance window, the patch installation process is completed by 5:00 PM. |
-|7:15 PM| After the patch installation, the post event runs for 20 mins. </br>In this example, the post event is initiated at 6:55 PM and completed by 7:15 PM and if you have a shorter maintenance window, the post event is triggered at 5:00 PM and completed by 5:20 PM. |
+1. Sign in to the [Azure portal](https://portal.azure.com) and go to **Azure Update Manager**.
+1. Under **Manage**, select **Machines**, **Maintenance Configurations**.
+1. In the **Maintenance Configuration** page, select the maintenance configuration to which you want to add a pre and post event.
+1. Select **Overview** and check **Maintenance events**. You can see the count of pre and post events associated with the configuration.
+ :::image type="content" source="./media/manage-pre-post-events/view-configure-events-inline.png" alt-text="Screenshot that shows how to view and configure a pre and post event." lightbox="./media/manage-pre-post-events/view-configure-events-expanded.png":::
-We recommend that you are watchful of the following:
-+ If you're creating a new schedule or editing an existing schedule with a pre event, you need at least 40 minutes prior to the start of maintenance window (3PM in the above example) for the pre event to run otherwise it will lead to auto-cancellation of the current scheduled run.
-+ Pre event is triggered 30 minutes before the scheduled patch run giving pre event atleast 20 minutes to complete.
-+ Post event runs immediately after the patch installation completes.
-+ To cancel the current patch run, use the cancellation API atleast 10 minutes before the schedule maintenance time.
+1. Select the count of the pre and post events to view the list of events and the event types.
+ :::image type="content" source="./media/manage-pre-post-events/view-events-inline.png" alt-text="Screenshot that shows how to view the pre and post events." lightbox="./media/manage-pre-post-events/view-events-expanded.png":::
-## Configure pre and post events on existing schedule
+### Edit pre and post events
-You can configure pre and post events on an existing schedule and can add multiple pre and post events to a single schedule. To add a pre and post event, follow these steps:
+To edit the pre and post events, follow these steps:
+1. Follow the steps listed in [View pre and post events](#view-pre-and-post-events).
+1. In the selected **events** page, select the pre or post event you want to edit.
+1. In the selected **pre or post event** page, you can edit the Event handler/endpoint used or the location of the endpoint.
-1. Sign in to the [Azure portal](https://portal.azure.com) and go to **Azure Update Manager**.
-1. Under **Manage**, select **Machines**, **Maintenance Configurations**.
-1. On the **Maintenance Configuration** page, select the maintenance configuration to which you want to add a pre and post event.
-1. On the selected **Maintenance configuration** page, under **Settings**, select **Events**. Alternatively, under the **Overview**, select the card **Create a maintenance event**.
-
- :::image type="content" source="./media/manage-pre-post-events/create-maintenance-event-inline.png" alt-text="Screenshot that shows the options to select to create a maintenance event." lightbox="./media/manage-pre-post-events/create-maintenance-event-expanded.png":::
-
-1. Select **+Event Subscription** to create Pre/Post Maintenance Event.
+## Manage the execution of pre and post events and the schedule run
- :::image type="content" source="./media/manage-pre-post-events/maintenance-events-inline.png" alt-text="Screenshot that shows the maintenance events." lightbox="./media/manage-pre-post-events/maintenance-events-expanded.png":::
+To check the successful delivery of a pre or post event to an endpoint from Event Grid, follow these steps:
-1. On the **Create Event Subscription** page, enter the following details:
- - In the **Event Subscription Details** section, provide an appropriate name.
- - Keep the schema as **Event Grid Schema**.
- - In the **Topic Details** section, provide an appropriate name to the **System Topic Name**.
- - In the **Event Types** section, **Filter to Event Types**, select the event types that you want to get pushed to the endpoint or destination. You can select between **Pre Maintenance Event** and **Post Maintenance Event**.
- - In the **Endpoint details** section, select the endpoint where you want to receive the response from. It would help customers to trigger their pre or post event.
-
- :::image type="content" source="./media/manage-pre-post-events/create-event-subscription.png" alt-text="Screenshot on how to create event subscription.":::
+ 1. Sign in to the [Azure portal](https://portal.azure.com/) and go to **Azure Update Manager**.
+ 2. Under **Manage**, select **Machines**.
+ 3. Select **Maintenance Configurations** from the ribbon at the top.
+ 4. In the **Maintenance Configuration** page, select the maintenance configuration for which you want to view a pre and post event.
+ 5. On the selected **Maintenance Configuration** page, under **Settings** in the ToC, select **Events**.
+ 6. In the **Essentials** section, you can view the metrics for all the events under the selected event subscription. In the graph, the count of the Published Events metric should match the count of the Matched Events metric. Both values should also correspond to the Delivered Events count.
+ 7. To view the metrics specific to a pre or a post event, select the name of the event from the grid. Here, the count of the Matched Events metric should match the Delivered Events count.
+ 8. To view the time at which the event was triggered, hover over the line graph. [Learn more](/azure/azure-monitor/reference/supported-metrics/microsoft-eventgrid-systemtopics-metrics).
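+
+If you prefer to retrieve these metrics programmatically rather than from the portal graphs, the following is a minimal sketch that uses `Get-AzMetric` against the system topic resource. The metric names (`PublishSuccessCount`, `MatchedEventCount`, `DeliverySuccessCount`) and the resource ID are assumptions; verify them against the supported metrics reference linked above.
+
+```powershell
+# Sketch: pull delivery-related metrics for the Event Grid system topic (metric names assumed).
+$systemTopicId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.EventGrid/systemTopics/<system-topic-name>"
+
+foreach ($metricName in "PublishSuccessCount", "MatchedEventCount", "DeliverySuccessCount")
+{
+    # Total each metric over the last two hours.
+    Get-AzMetric -ResourceId $systemTopicId -MetricName $metricName -StartTime (Get-Date).AddHours(-2) -AggregationType Total
+}
+```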
-1. Select **Create** to configure the pre and post events on an existing schedule.
+ > [!Note]
+ > Azure Event Grid adheres to an at-least-once delivery paradigm. This implies that, in exceptional circumstances, there is a chance of the event handler being invoked more than once for a given event. We recommend that you ensure the event handler actions are idempotent. In other words, if the event handler is executed multiple times, it should not have any adverse effects. Implementing idempotency ensures the robustness of your application in the face of potential duplicate event invocations.
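+
+For example, if your pre-event handler starts VMs before patching (as in the runbook scenario later in this article), checking the current power state before acting keeps the handler idempotent. This is a hypothetical sketch; the resource group and VM names are placeholders.
+
+```powershell
+# Hypothetical pre-event runbook fragment: start the VM only if it isn't already running,
+# so a duplicate Event Grid delivery of the same event has no additional effect.
+$resourceGroupName = "<your-resource-group>"
+$vmName = "<your-vm-name>"
+
+$vmStatus = Get-AzVM -ResourceGroupName $resourceGroupName -Name $vmName -Status
+$powerState = ($vmStatus.Statuses | Where-Object { $_.Code -like "PowerState/*" }).DisplayStatus
+
+if ($powerState -ne "VM running")
+{
+    Start-AzVM -ResourceGroupName $resourceGroupName -Name $vmName
+}
+```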
-> [!NOTE]
-> - The pre and post event can only be created at a scheduled maintenance configuration level.
-> - System Topic gets automatically created per maintenance configuration and all event subscription are linked to the System Topic in the EventGrid.
-> - The pre and post event run falls outside of the schedule maintenance window.
-## View pre and post events
+**To check whether the endpoint was triggered and completed for the pre or post event**
-To view the pre and post events, follow these steps:
+#### [Automation Runbooks via Webhooks](#tab/runbooks)
-1. Sign in to the [Azure portal](https://portal.azure.com) and go to **Azure Update Manager**.
-1. Under **Manage**, select **Machines**, **Maintenance Configurations**.
-1. On the **Maintenance Configuration** page, select the maintenance configuration to which you want to add a pre and post event.
-1. Select **Overview** and check the **Maintenance events**.
- - Select **Configure** to set up one.
- :::image type="content" source="./media/manage-pre-post-events/view-configure-events-inline.png" alt-text="Screenshot that shows how to view and configure a pre and post event." lightbox="./media/manage-pre-post-events/view-configure-events-expanded.png":::
+1. Sign in to the [Azure portal](https://portal.azure.com/) and go to **Azure Automation account**.
+1. In your Automation account, under **Process Automation**, select **Runbooks**.
+1. Select the pre or post script that is linked to your Webhook in Event Grid.
+1. In **Overview**, you can view the status of the Runbook job. The trigger time should be approximately 30 minutes before the schedule start time. Once the job is finished, you can come back to the same section to confirm if the status is **Completed**. For example, ensure that the VM has been either powered on or off.
- - If the setup is already done, you can see the count of the pre and post events associated to the configuration in the **Events** page.
+ :::image type="content" source="./media/manage-pre-post-events/automation-runbooks-webhook.png" alt-text="Screenshot that shows how to check the status of runbook job." lightbox="./media/manage-pre-post-events/automation-runbooks-webhook.png":::
+
+ For more information on how to retrieve details from Automation account's activity log and job statuses, see [Manage runbooks in Azure Automation](../automation/automation-runbook-execution.md#jobs).
- :::image type="content" source="./media/manage-pre-post-events/view-events-inline.png" alt-text="Screenshot that shows how to view the pre and post events." lightbox="./media/manage-pre-post-events/view-events-expanded.png":::
-## Delete pre and post event
+#### [Azure Functions](#tab/functions)
-To delete pre and post events, follow these steps:
+1. Sign in to the [Azure portal](https://portal.azure.com/) and go to **your Azure Function app.**
+2. In your Azure Function app, go to the **Overview** page.
+3. Select the specific **function** from the grid in the **Overview** page.
+4. Under the **Monitor** column, select **Invocations and more.**
+5. This will take you to the **Invocations** tab, which displays the execution details and status of the function.
-1. Sign in to the [Azure portal](https://portal.azure.com) and go to **Azure Update Manager**.
-1. Under **Manage**, select **Machines**, **Maintenance Configurations**.
-1. On the **Maintenance Configuration** page, select the maintenance configuration to which you want to add a pre and post event.
-1. On the selected **Maintenance configuration** page, under **Settings**, select **Events**. Alternatively, under the **Overview**, select the card **Create a maintenance event**.
-1. Select the event **Name** you want to delete from the grid.
-1. On the selected event page, select **Delete**.
+To use Application Insights to monitor executions in Azure Functions, see [Azure Functions monitoring](/azure/azure-functions/functions-monitoring).
- :::image type="content" source="./media/manage-pre-post-events/delete-event-inline.png" alt-text="Screenshot that shows how to delete the pre and post events." lightbox="./media/manage-pre-post-events/delete-event-expanded.png":::
+
-> [!NOTE]
-> - If all the pre and post events are deleted from the maintenance configuration, System Topic gets automatically deleted from the EventGrid.
-> - We recommend that you avoid deleting the System Topic manually from the EventGrid service.
+### Cancel a schedule run before it begins
-## Cancel a schedule from a pre event
+To cancel a schedule run, the cancelation API must be invoked from your pre-event, that is, from your runbook script or Azure function code, at least 10 minutes before the scheduled maintenance configuration start time.
-To cancel the schedule, you must call the cancelation API in your pre event to set up the cancelation process that is in your Runbook script or Azure function code. Here, you must define the criteria from when the schedule must be canceled. The system won't monitor and won't automatically cancels the schedule based on the status of the pre event.
+**To cancel the scheduled maintenance run**
-There are two types of cancelations:
-- **Cancelation by user** - when you invoke the cancelation API from your script or code.-- **Cancelation by system** - when the system invokes the cancelation API due to an internal error. This is done only if the system is unable to send the pre event to the customer's end point that is 30 minutes before the scheduled patching job.
+#### [Azure portal](#tab/az-portal)
+1. Sign in to the [Azure portal](https://portal.azure.com/) and go to **Azure Update Manager**.
+1. Under **Manage** in the ToC, select **History**.
+1. Select the **By Maintenance run ID** tab, and select the maintenance run ID for which you want to view the history.
+1. Select **Cancel schedule update**. This option is enabled for 10 minutes before the start of the maintenance configuration.
-> [!NOTE]
-> If the cancelation is done by the system, the upcoming scheduled patching job will be canceled due to the failure of running the pre events by the system.
+#### [REST API](#tab/rest)
->[!IMPORTANT]
-> If the scheduled maintenance job is cancelled by the user using cancelation API or by the system due to any internal failure, post event if subscribed, will be sent to the endpoint configured by the user.
-
-### View the cancelation status
+[Apply Updates - Create Or Update Or Cancel - REST API (Azure Maintenance) | Microsoft Learn](/rest/api/maintenance/apply-updates/create-or-update-or-cancel)
-To view the cancelation status, follow these steps:
+#### [PowerShell](#tab/az-ps)
-1. In **Azure Update Manager** home page, go to **History**
-1. Select by the **Maintenance run ID** and choose the run ID for which you want to view the status.
+[New-AzApplyUpdate (Az.Maintenance) | Microsoft Learn](/powershell/module/az.maintenance/new-azapplyupdate)
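+
+If you invoke the cancelation API from PowerShell (for example, inside your pre-event runbook), a minimal sketch using `Invoke-AzRestMethod` might look like the following. It assumes you already have the ApplyUpdate resource ID (the correlation ID returned by the ARG query later in this article); the request body and `api-version` follow the cancelation API linked in the REST API tab.
+
+```powershell
+# Sketch: cancel an upcoming maintenance run by calling the cancelation API.
+# $applyUpdateResourceId is the correlation ID obtained from the ARG query, for example:
+# /subscriptions/<subscription-id>/resourcegroups/<resource-group>/providers/microsoft.maintenance/maintenanceconfigurations/<mc-name>/providers/microsoft.maintenance/applyupdates/<apply-update-id>
+$applyUpdateResourceId = "<correlation-id-from-arg-query>"
+
+Invoke-AzRestMethod `
+    -Path "$($applyUpdateResourceId)?api-version=2023-09-01-preview" `
+    -Method PUT `
+    -Payload (@{ Properties = @{ Status = "Cancel" } } | ConvertTo-Json -Depth 3)
+```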
- :::image type="content" source="./media/manage-pre-post-events/view-cancelation-status-inline.png" alt-text="Screenshot that shows how to view the cancelation status." lightbox="./media/manage-pre-post-events/view-cancelation-status-expanded.png":::
+#### [CLI](#tab/az-cli)
-You can view the cancelation status from the error message in the JSON. The JSON can be obtained from the Azure Resource Graph (ARG). The corresponding maintenance configuration would be canceled using the Cancelation API.
+[az maintenance applyupdate | Microsoft Learn](/cli/azure/maintenance/applyupdate)
++
-The following query allows you to view the list of VMs for a given schedule or a maintenance configuration:
+You can obtain the list of machines in the maintenance run from the following ARG query. You can also view the correlation ID by selecting **See details**:
```kusto
maintenanceresources
| where type =~ "microsoft.maintenance/maintenanceconfigurations/applyupdates"
-| where properties.correlationId has "/subscriptions/your-s-id/resourcegroups/your-rg-id/providers/microsoft.maintenance/maintenanceconfigurations/mc-name/providers/microsoft.maintenance/applyupdates/"
+| where properties.correlationId has "/subscriptions/your-subscription-id/resourcegroups/your-ResourceGroupName/providers/microsoft.maintenance/maintenanceconfigurations/mc-name/providers/microsoft.maintenance/applyupdates/"
| order by name desc
```
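
If you want to run this query outside the portal, the following is a minimal sketch that uses the `Search-AzGraph` cmdlet. It assumes the Az.ResourceGraph PowerShell module is installed and that you replace the placeholders with your subscription ID, resource group, and maintenance configuration name.

```powershell
# Sketch: run the Azure Resource Graph query from PowerShell (requires the Az.ResourceGraph module).
$query = @"
maintenanceresources
| where type =~ 'microsoft.maintenance/maintenanceconfigurations/applyupdates'
| where properties.correlationId has '/subscriptions/<subscription-id>/resourcegroups/<resource-group>/providers/microsoft.maintenance/maintenanceconfigurations/<mc-name>/providers/microsoft.maintenance/applyupdates/'
| order by name desc
"@

Search-AzGraph -Query $query
```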
- :::image type="content" source="./media/manage-pre-post-events/cancelation-api-user-inline.png" alt-text="Screenshot for cancelation done by the user." lightbox="./media/manage-pre-post-events/cancelation-api-user-expanded.png" :::
+>[!Note]
+>Azure Update Manager and the maintenance configuration don't monitor the pre-event and won't automatically cancel the schedule. If you don't cancel the schedule, the run proceeds to install updates during the user-defined maintenance window.
-+ `your-s-id` : Subscription ID in which maintenance configuration with Pre or post event is created
-+ `your-rg-id` : Resource Group Name in which maintenance configuration is created
-+ `mc-name` : Name of maintenance configuration in pre event is created
+## Post schedule run
-If the maintenance job is canceled by the system due to any reason, the error message in the JSON is obtained from the Azure Resource Graph for the corresponding maintenance configuration would be **Maintenance schedule canceled due to internal platform failure**.
+### View the history of pre and post events
-#### Invoke the Cancelation API
+1. Sign in to the [Azure portal](https://portal.azure.com/) and go to **Azure Update Manager**.
+2. Under **Manage**, select **History**.
+3. Select the **By Maintenance run ID** tab, select the maintenance run ID for which you want to view the history.
+4. Select the **Events** tab in this history page of the selected maintenance run ID.
+5. You can view the count of events and event names along with the Event type and endpoint details.
-```rest
- C:\ProgramData\chocolatey\bin\ARMClient.exe put https://management.azure.com/<your-c-id-obtained-from-above>?api-version=2023-09-01-preview "{\"Properties\":{\"Status\": \"Cancel\"}}" -VerboseΓÇ»
+### Debug pre and post events
+
+#### [Automation-Webhook](#tab/az-webhook)
+
+To view the job history of an event created through Webhook, follow these steps:
+
+1. Find the event name for which you want to view the job logs.
+2. Under the **Job history** column, select **View runbook history** corresponding to the event name. This takes you to the Automation account where the runbooks reside.
+3. Select the specific runbook name that is associated to the pre or post event. In the **overview** page, you can view the recent jobs of the runbook along with the execution and status details.
+
+#### [Azure Function](#tab/az-function)
+
+To view the history of an event created through Azure Function, follow these steps:
+
+1. Find the event name for which you want to view the job logs.
+2. Under the **Job history** column, select **View Azure Function history** corresponding to the event name. This takes you to the Azure function **Invocations** page.
+3. You can view the recent invocations along with the execution and status details.
+++
+### View the status of a canceled schedule run
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) and go to **Azure Update Manager**.
+2. Under **Manage**, select **History**.
+3. Select the **By Maintenance run ID** tab, and then select the maintenance run ID for which you want to view the status.
+4. Check the **Status** field. If the maintenance run was canceled, the status is displayed as **Cancelled**. Select the status to view the details.
+
+There are two types of cancelations:
+
+- **Cancelation by user**: When you invoke the cancelation API from your script or code.
+- **Cancelation by system**: When the system invokes the cancelation API due to an internal error. This happens only if the system is unable to send the pre-event to the customer's endpoint 30 minutes before the scheduled patching job. In this case, the upcoming scheduled maintenance run is canceled because the system failed to run the pre-event.
+
+To confirm whether the cancelation was done by the user or by the system, view the status of the maintenance run ID from the ARG query mentioned earlier, in **See details**. The **error message** indicates whether the schedule run was canceled by the user or by the system, and the **status** field confirms the status of the maintenance run.
+
+ :::image type="content" source="./media/manage-pre-post-events/cancelation-api-user-inline.png" alt-text="Screenshot that shows how to view the cancelation status." lightbox="./media/manage-pre-post-events/cancelation-api-user-expanded.png":::
+
+The above image shows an example of cancelation by the user, where the error message would be **Maintenance cancelled using cancellation API at YYYY-MM-DD**. If the maintenance run is canceled by the system due to any reason, the error message in the JSON would be **Maintenance cancelled due to internal platform failure at YYYY-MM-DD**.
++
+## Delete pre and post event
+
+#### [Using Azure portal](#tab/del-portal)
+
+To delete pre and post events, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and go to **Azure Update Manager**.
+1. Under **Manage**, select **Machines**, **Maintenance Configurations**.
+1. In the **Maintenance Configuration** page, select the maintenance configuration to which you want to add a pre and post event.
+1. In the selected **Maintenance configuration** page, under **Settings**, select **Events**.
+1. Select the event **Name** you want to delete from the grid.
+1. On the selected event page, select **Delete**.
+
+ :::image type="content" source="./media/manage-pre-post-events/delete-event-inline.png" alt-text="Screenshot that shows how to delete the pre and post events." lightbox="./media/manage-pre-post-events/delete-event-expanded.png":::
+
+#### [Using PowerShell](#tab/del-ps)
+
+#### Remove the event subscription
+
+```powershell-interactive
+ Remove-AzEventGridSystemTopicEventSubscription -EventSubscriptionName $EventSubscriptionName -ResourceGroupName $ResourceGroupForSystemTopic -SystemTopicName $SystemTopicName
+```
+
+#### Remove the system topic
+
+```powershell-interactive
+ Remove-AzEventGridSystemTopic -Name $SystemTopicName -ResourceGroupName $ResourceGroupForSystemTopic
+```
+
+#### [Using CLI](#tab/del-cli)
+
+#### Remove the event subscription
+
+```azurecli-interactive
+    az eventgrid system-topic event-subscription delete --name "<Event subscription name>" --resource-group $ResourceGroupName --system-topic-name $SystemTopicName
```
-> [!NOTE]
-> You must replace the **Correlation ID** received from the above ARG query and replace it in the Cancelation API.
+#### Remove the system topic
-**Example**
-```http
-ΓÇ» C:\ProgramData\chocolatey\bin\ARMClient.exe put https://management.azure.com/subscriptions/eee2cef4-bc47-4278-b4f8-cfc65f25dfd8/resourcegroups/fp02centraluseuap/providers/microsoft.maintenance/maintenanceconfigurations/prepostdemo7/providers/microsoft.maintenance/applyupdates/20230810085400?api-version=2023-09-01-preview "{\"Properties\":{\"Status\": \"Cancel\"}}" -Verbose
+```azurecli-interactive
+ az eventgrid system-topic delete --name $SystemTopicName --resource-group $ResourceGroupName
+```
+
+#### [Using REST API](#tab/del-api)
+
+#### Remove the event subscription
+
+```rest
+ DELETE /subscriptions/<subscription Id>/resourceGroups/<resource group name>/providers/Microsoft.EventGrid/systemTopics/<system topic name>/eventSubscriptions/<Event Subscription name>?api-version=2022-06-15
```
+#### Remove the system topic
+
+```rest
+    DELETE /subscriptions/<subscription Id>/resourceGroups/<resource group name>/providers/Microsoft.EventGrid/systemTopics/<system topic name>?api-version=2022-06-15
+```
++
## Next steps
-- For issues and workarounds, see [troubleshoot](troubleshoot.md)
-- For an overview on [pre and post scenarios](pre-post-scripts-overview.md)
-- Learn on the [common scenarios of pre and post events](pre-post-events-common-scenarios.md)
+- For an overview of pre and post events (preview) in Azure Update Manager, see the [pre and post events overview](pre-post-scripts-overview.md).
+- To learn how to create pre and post events, see [pre and post maintenance configuration events](pre-post-events-schedule-maintenance-configuration.md).
+- To learn how to use pre and post events to turn your VMs on and off by using Webhooks, see [this tutorial](tutorial-webhooks-using-runbooks.md).
+- To learn how to use pre and post events to turn your VMs on and off by using Azure Functions, see [this tutorial](tutorial-using-functions.md).
update-manager Pre Post Events Common Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/pre-post-events-common-scenarios.md
- Title: Common scenarios in pre and post events (preview) in your Azure Update Manager
-description: An overview of common scenarios for pre and post events (preview), including viewing the list of different endpoints, successful delivery to an endpoint, checking the script in Webhooks using runbooks triggered from Event Grid.
-- Previously updated : 02/03/2024--
-#Customer intent: As an implementer, I want answers to various questions.
--
-# Pre and Post events (preview) frequently asked questions
-
-**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers.
-
-This article presents the frequently asked questions in the lifecycle of pre and post events (preview).
-
-## How to check the configuration of pre and post event on your schedule and its count?
-
-1. Sign in to the [Azure portal](https://portal.azure.com) and go to **Azure Update Manager**.
-1. Under **Manage**, select **Machines**, **Maintenance Configurations**.
-1. On the **Maintenance Configuration** page, select the configuration.
-1. Select **Overview**, and check **Maintenance events**.
- - If there are no pre and post events that are set up, select **Configure** to set up.
- :::image type="content" source="./media/pre-post-events-common-scenarios/configure-new-event.png" alt-text="Screenshot that shows how to configure new event." lightbox="./media/pre-post-events-common-scenarios/configure-new-event.png":::
-
- - If there are pre and post events associated to the configuration, you can see the count of pre and post events associated to the configuration in the **Events** page.
-
-## How to view the list of pre and post events set up on a maintenance configuration?
-
-1. Sign in to the [Azure portal](https://portal.azure.com) and go to **Azure Update Manager**.
-1. Under **Manage**, select **Machines**, **Maintenance Configurations**.
-1. On the **Maintenance Configuration** page, select the configuration.
-1. On the selected maintenance configuration page, under **Settings**, select **Events** to view the pre and post events that you have created.
-
- The grid at the bottom of the **Events subscription** tab displays the names of both the pre and post events along with the corresponding **Event Types**.
-
- :::image type="content" source="./media/pre-post-events-common-scenarios/view-pre-post-events.png" alt-text="Screenshot that shows how to view the list of pre and post events." lightbox="./media/pre-post-events-common-scenarios/view-pre-post-events.png":::
--
-## How to view the list of different endpoints setup for pre and post events on a maintenance configuration?
-
-1. Sign in to the [Azure portal](https://portal.azure.com) and go to **Azure Update Manager**.
-1. Under **Manage**, select **Machines**, **Maintenance Configurations**.
-1. On the **Maintenance Configuration** page, select the configuration.
-1. On the selected maintenance configuration page, under **Settings**, select **Events** to view the pre and post events that you have created.
-
- In the grid at the bottom of the **Event Subscription** tab, you can view the endpoint details.
-
- :::image type="content" source="./media/pre-post-events-common-scenarios/view-endpoint.png" alt-text="Screenshot that shows how to view endpoints." lightbox="./media/pre-post-events-common-scenarios/view-endpoint.png":::
-
-## How to check the successful delivery of a pre or post event to an endpoint from Event Grid?
-
-1. Sign in to the [Azure portal](https://portal.azure.com) and go to **Azure Update Manager**.
-1. Under **Manage**, select **Machines**, **Maintenance Configurations**.
-1. On the **Maintenance Configuration** page, select the configuration.
-1. On the selected maintenance configuration page, under **Settings**, select **Events**.
-1. In the **Essentials** section, view metrics to see the metrics for all the events that are part of the event subscription. In the grid, the count of the Published Events metric should match with the count of Matched Events metric. Both of these two values should also correspond with the Delivered Events count.
-1. To view the metrics specific to a pre or a post event, select the name of the event from the grid. Here, the count of Matched Events metric should match with the Delivered Events count.
-1. To view the time at which the event was triggered, hover over the line graph. [Learn more](/azure/azure-monitor/reference/supported-metrics/microsoft-eventgrid-systemtopics-metrics).
--
-## How to check an unsuccessful delivery of a pre and post events to an endpoint from Event Grid?
-
-1. Sign in to the [Azure portal](https://portal.azure.com) and go to **Azure Update Manager**.
-1. Under **Manage**, select **Machines**, **Maintenance Configurations**.
-1. On the **Maintenance Configuration** page, select the configuration.
-1. On the selected maintenance configuration page, under **Settings**, select **Events**.
-1. In the **Essentials** section, view metrics to see the metrics for all the events that are part of the event subscription. Here, you find that the count of the metric **Delivery Failed Events** increase.
-1. For further setup, you can do either of the following:
- 1. Create Azure Monitor Alerts on this failure count to get notified of it. [Set alerts on Azure Event Grid metrics and activity logs](../event-grid/set-alerts.md). **(OR)**
- 1. Enable Diagnostic logs by linking to Storage accounts or Log Analytics workspace. [Enable diagnostic logs for Event Grid resources](../event-grid/enable-diagnostic-logs-topic.md).
- > [!NOTE]
- > You can set up logs and alerts for successful deliveries at any time.
-
-## How to check if the endpoint has been triggered in the pre or post task?
-
-#### [With webhooks using Automation Runbooks](#tab/events-runbooks)
--- The VM start operation requires the Automation Managed Identity to have *Microsoft.Compute/virtualMachines/start/action* permissions over the VMs to get started, and this permission is included in the **VM Contributor** role.-- Ensure to import the PowerShell package - **ThreadJob with the Module version:2.0.3**.-
-1. Sign in to the [Azure portal](https://portal.azure.com) and go to **Azure Automation account**.
-1. In your Automation account, under **Process Automation**, select **Runbooks**.
-1. Select the pre or post script linked to your Webhook in Event Grid.
-1. In **Overview**, you can view the status of the Runbook job. The trigger time should be approximately 30 minutes prior to the schedule start time. Once the job is finished, you can come back to the same section to confirm if the status is **Completed**.
-
- :::image type="content" source="./media/pre-post-events-common-scenarios/trigger-endpoint.png" alt-text="Screenshot that shows how to view the status of the Runbook job." lightbox="./media/pre-post-events-common-scenarios/trigger-endpoint.png":::
-
- Upon completion, you can confirm whether the prepatch installation process has been completed as planned. For instance, ensure that the VM has been either powered on or off.
-
-For more information on how to retrieve details from Automation account's activity log:
-- Learn more on how to [Manage runbooks in Azure Automation](../automation/manage-runbooks.md).-
-#### [With Azure Functions](#tab/events-functions)
--- See the [set up logs for Azure Functions to track their execution](../azure-functions/streaming-logs.md).---
-### How to check if the script in Webhooks using Runbooks is triggered from Event Grid?
-
-1. Sign in to the [Azure portal](https://portal.azure.com) and go to **Azure Automation account**.
-1. In your Automation account, under **Process Automation**, select **Runbooks**.
-1. Select the pre or post script linked to your Webhook in Event Grid.
-1. In **Overview**, you can view the status of the Runbook job. Select the **Input** tab to view the latest run of the job.
-
- :::image type="content" source="./media/pre-post-events-common-scenarios/view-input-parameter.png" alt-text="Screenshot that shows how to view the latest run of the job." lightbox="./media/pre-post-events-common-scenarios/view-input-parameter.png":::
-
-## How to check the cancelation of a schedule?
-
-#### [Azure portal](#tab/cancel-portal)
-
-1. Sign in to the [Azure portal](https://portal.azure.com) and go to **Azure Update Manager**.
-1. Under **Manage**, select **Machines**, **Maintenance Configurations**.
-1. On the **Maintenance Configuration** page, select the configuration.
-1. On the selected maintenance configuration page, under **Settings**, select **Activity Log** to view the pre and post events that you have created.
- 1. If the current maintenance schedule was canceled, the operation name would be *Write apply updates to a resource*.
-
- :::image type="content" source="./media/pre-post-events-common-scenarios/write-apply-updates.png" alt-text="Screenshot that shows how to view tif the current maintenance schedule has been canceled." lightbox="./media/pre-post-events-common-scenarios/write-apply-updates.png":::
-
- 1. Select the activity to view the details that the activity performs.
--
-#### [REST API](#tab/cancel-rest)
-
-1. The cancellation flow is honored from T-40 when the premaintenance event is triggered until T-10. [Learn more](manage-pre-post-events.md#timeline-of-schedules-for-pre-and-post-events).
-
- To invoke the cancelation API:
-
- ```rest
- C:\ProgramData\chocolatey\bin\ARMClient.exe put https://management.azure.com/<your-c-id-obtained-from-above>?api-version=2023-09-01-preview "{\"Properties\":{\"Status\": \"Cancel\"}}" -Verbose
- ```
-1. Ensure to insert the correlation ID of your maintenance job to cancel it and you see the response on the CLI/API as follows:
-
- :::image type="content" source="./media/pre-post-events-common-scenarios/cancelation-response.png" alt-text="Screenshot that shows the response for cancelation of schedule." lightbox="./media/pre-post-events-common-scenarios/write-apply-updates.png":::
---
-
-### How to confirm if the cancelation is by user or system?
-
-You can view the status of the maintenance job from the ARG query mentioned above to understand whether you or the system canceled the job. The error message confirms the status of the job.
-
-The following query allows you to view the list of VMs for a given schedule or a maintenance configuration:
-
-```kusto
-maintenanceresources
-| where type =~ "microsoft.maintenance/maintenanceconfigurations/applyupdates"
-| where properties.correlationId has "/subscriptions/<your-s-id>/resourcegroups/<your-rg-id>/providers/microsoft.maintenance/maintenanceconfigurations/<mc-name>/providers/microsoft.maintenance/applyupdates/"
-| order by name desc
-```
--
-## How to check the status of the maintenance configuration?
-
-#### [Azure portal](#tab/status-portal)
-
-1. Sign in to the [Azure portal](https://portal.azure.com) and go to **Azure Update Manager**.
-1. Under **Manage**, select **History**.
-1. Select **By Maintenance ID** tab to view the jobs by maintenance configurations. For the respective maintenance run ID, you can view the status of the job.
-1. Select the **Status** to view the details of the job.
- :::image type="content" source="./media/pre-post-events-common-scenarios/status-maintenance-configuration.png" alt-text="Screenshot that shows detailed view of the job." lightbox="./media/pre-post-events-common-scenarios/status-maintenance-configuration.png":::
-
-#### [REST API/CLI](#tab/status-rest)
-
-1. Use the following Azure Resource Graph (ARG) query to view the status of the job in ARG.
-
- ```kusto
- maintenanceresources
- | where type =~ "microsoft.maintenance/maintenanceconfigurations/applyupdates"
-    | where properties.correlationId has "/subscriptions/<your-s-id>/resourcegroups/<your-rg-id>/providers/microsoft.maintenance/maintenanceconfigurations/<mc-name>/providers/microsoft.maintenance/applyupdates/"
- | order by name desc
- ```
-
-1. Make sure to insert the subscription ID, resource group, and maintenance configuration name in the above query.
----
-## Why was the scheduled run canceled by the system?
-
-The system cancels the scheduled run if one or more of the following conditions occur:
-
-1. The maintenance configuration has at least one pre event subscribed, and the schedule time is changed within the 40-minute window before the scheduled start time.
-2. The pre event was created within the 40-minute window before the scheduled start time.
--
-## Why wasn't the post event sent by the system?
-
-If the user modifies the schedule run time after the pre-event has been triggered, the post event will not be sent because the scheduled time has been replaced with a new one.
-
-> [!NOTE]
-> Azure Event Grid adheres to an at-least-once delivery paradigm. This implies that, in exceptional circumstances, there is a chance of the event handler being invoked more than once for a given event. Customers are advised to ensure that their event handler actions are idempotent. In other words, if the event handler is executed multiple times, it should not have any adverse effects. Implementing idempotency ensures the robustness of your application in the face of potential duplicate event invocations.
-
-## Next steps
-- For an overview of [pre and post scenarios](pre-post-scripts-overview.md)
-- Manage the [pre and post maintenance configuration events](manage-pre-post-events.md)
update-manager Pre Post Events Schedule Maintenance Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/pre-post-events-schedule-maintenance-configuration.md
+
+ Title: Create the pre and post maintenance configuration events (preview) in Azure Update Manager
+description: The article provides the steps to create the pre and post maintenance events in Azure Update Manager.
+ Last updated : 07/09/2024+++
+zone_pivot_groups: create-pre-post-events-maintenance-configuration
++
+# Create pre and post events (preview)
+
+**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers :heavy_check_mark: Azure VMs.
+
+Pre and post events allow you to execute user-defined actions before and after a scheduled maintenance configuration run. For more information, see how [pre and post events work in Azure Update Manager](pre-post-scripts-overview.md).
+
+This article describes how to create pre and post events in Azure Update Manager.
+
+## Event Grid in schedule maintenance configurations
+
+Azure Update Manager uses Event Grid to create and manage pre and post events. For more information, see the [overview of Event Grid](../event-grid/overview.md). To trigger an event either before or after a scheduled maintenance window, you need the following:
+
+1. **Schedule maintenance configuration** - You can create pre and post events for a schedule maintenance configuration in Azure Update Manager. For more information, see [schedule updates using maintenance configurations](scheduled-patching.md).
+1. **Action to be performed in the pre or post event** - You can use the [Event handlers](../event-grid/event-handlers.md) (endpoints) supported by Event Grid to define actions or tasks. Here are examples of how to create Azure Automation runbooks via webhooks and Azure Functions. Within these event handlers/endpoints, you must define the actions that should be performed as part of pre and post events; a minimal runbook sketch follows this list.
+ 1. **Webhook** - [Create a PowerShell 7.2 Runbook](../automation/automation-runbook-types.md#powershell-runbooks) and [link the Runbook to a webhook](../automation/automation-webhooks.md).
+ 1. **Azure Function** - [Create an Azure Function](../azure-functions/functions-create-function-app-portal.md).
+1. **Pre and post event** - You can follow the steps shared in the following section to create a pre and post event for schedule maintenance configuration. To learn more about the terms used in the Basics tab of Event Grid, see [Event Grid](../event-grid/concepts.md) terms.
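+
+As an illustration only, the following PowerShell 7.2 runbook sketch shows the kind of action a pre event might trigger through a webhook, such as starting machines before the maintenance window. The resource group and VM names are placeholders, the sign-in assumes the Automation account has a system-assigned managed identity enabled, and the payload fields parsed below are based on the [maintenance configuration event schema](../event-grid/event-schema-maintenance-configuration.md); adapt all of them to your environment.
+
+```powershell-interactive
+param (
+    # Payload that Azure Automation passes to the runbook when the webhook is called
+    [Parameter(Mandatory = $false)]
+    [object] $WebhookData
+)
+
+# Sign in with the Automation account's system-assigned managed identity (assumed to be enabled)
+Connect-AzAccount -Identity | Out-Null
+
+# Event Grid posts an array of events as the webhook request body
+$events = $WebhookData.RequestBody | ConvertFrom-Json
+
+foreach ($event in $events) {
+    # eventType comes from the Event Grid schema; data.CorrelationId is specific to maintenance events
+    Write-Output "Received '$($event.eventType)' with correlation ID '$($event.data.CorrelationId)'"
+}
+
+# Example pre event action: make sure the machines are running before patching (placeholder names)
+Start-AzVM -ResourceGroupName "rg-updates" -Name "vm-app-01"
+```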
++
+## Create a pre and post event (preview)
++
+### Create pre and post events while creating a new schedule maintenance configuration
+
+#### [Using Azure portal](#tab/portal)
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) and go to **Azure Update Manager**.
+2. Under **Manage**, select **Machines**.
+3. Select **Schedule updates** from the ribbon at the top.
+4. In the **Create a maintenance configuration** page, select the **Events** tab.
+5. Select **+Event Subscription** to create pre/post event.
+6. On the **Add Event Subscription** page, enter the following details:
+   - In the **Event Subscription Details** section, provide an appropriate name.
+ - Keep the schema as **Event Grid Schema**.
+   - Enter the **System Topic Name** for the first event you create in this maintenance configuration. The same system topic name is auto-populated for subsequent events.
+ - In the **Event Types** section, **Filter to Event Types**, select the event types that you want to get pushed to the endpoint or destination. You can select either **Pre Maintenance Event** or **Post Maintenance Event** or both. To learn more about event types that are specific to schedule maintenance configurations, see [Azure Event Types](../event-grid/event-schema-maintenance-configuration.md).
+   - In the **Endpoint details** section, select the endpoint that should receive the events.
+7. Select **Add** to create the pre and post events for the schedule upon its creation.
+
+ :::image type="content" source="./media/manage-pre-post-events/add-event-subscription.png" alt-text="Screenshot that shows how to add an event subscription." lightbox="./media/manage-pre-post-events/add-event-subscription.png":::
+
+> [!NOTE]
+> In the above flow, Webhook and Azure Functions are the two event handlers/endpoints you can choose from. When you select **Add**, the event subscription isn't created immediately; it's added to the maintenance configuration and is created along with the schedule maintenance configuration.
+
+#### [Using PowerShell](#tab/powershell)
+
+1. Create a maintenance configuration by following the steps listed [here](../virtual-machines/maintenance-configurations-powershell.md#guest).
+
+1. ```powershell-interactive
+ # Obtain the Maintenance Configuration ID from Step 1 and assign it to MaintenanceConfigurationResourceId variable
+
+ $MaintenanceConfigurationResourceId = "/subscriptions/<subId>/resourceGroups/<Resource group>/providers/Microsoft.Maintenance/maintenanceConfigurations/<Maintenance configuration Name>"
+
+ # Use the same Resource Group that you used to create maintenance configuration in Step 1
+
+ $ResourceGroupForSystemTopic = "<Resource Group for System Topic>"
+
+ $SystemTopicName = "<System topic name>"
+
+ $TopicType = "Microsoft.Maintenance.MaintenanceConfigurations"
+
+ $SystemTopicLocation = "<System topic location>"
+
+ # System topic creation
+
+ New-AzEventGridSystemTopic -ResourceGroupName $ResourceGroupForSystemTopic -Name $SystemTopicName -Source $MaintenanceConfigurationResourceId -TopicType $TopicType -Location $SystemTopicLocation
+
+ # Event subscription creation
+
+ $IncludedEventTypes = @("Microsoft.Maintenance.PreMaintenanceEvent")
+
+ # Webhook
+
+ $EventSubscriptionName = "PreEventWebhook"
+
+ $PreEventWebhookEndpoint = "<Webhook URL>"
+
+ New-AzEventGridSystemTopicEventSubscription -ResourceGroupName $ResourceGroupForSystemTopic -SystemTopicName $SystemTopicName -EventSubscriptionName $EventSubscriptionName -Endpoint $PreEventWebhookEndpoint -IncludedEventType $IncludedEventTypes
+
+ # Azure Function
+
+ $dest = New-AzEventGridAzureFunctionEventSubscriptionDestinationObject -ResourceId "<Azure Function Resource Id>"
+
+ New-AzEventGridSystemTopicEventSubscription -ResourceGroupName $ResourceGroupForSystemTopic -SystemTopicName $SystemTopicName -EventSubscriptionName $EventSubscriptionName -Destination $dest -IncludedEventType $IncludedEventTypes
+
+ ```
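+
+    Optionally, as a quick verification sketch (assuming the Az.EventGrid module is installed), you can list the event subscriptions created on the system topic:
+
+    ```powershell-interactive
+    # Confirm that the pre/post event subscriptions exist on the system topic
+    Get-AzEventGridSystemTopicEventSubscription -ResourceGroupName $ResourceGroupForSystemTopic -SystemTopicName $SystemTopicName
+    ```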
+
+#### [Using CLI](#tab/cli)
+
+1. Create a maintenance configuration by following the steps listed [here](../virtual-machines/maintenance-configurations-cli.md#guest-vms).
+
+1. ```azurecli-interactive
+
+    SystemTopicName="<System topic name>"
+
+ # Use the same Resource Group that you used to create maintenance configuration in Step 1
+
+ ResourceGroupName="<Resource Group mentioned in Step 1>"
+
+ # Obtain the Maintenance Configuration ID from Step 1 and assign it to Source variable
+
+ Source="/subscriptions/<subId>/resourceGroups/<Resource group>/providers/Microsoft.Maintenance/maintenanceConfigurations/<Maintenance configuration Name>"
+
+ TopicType="Microsoft.Maintenance.MaintenanceConfigurations"
+
+    Location="<System topic location>"
+
+ # System topic creation
+
+ az eventgrid system-topic create --name $SystemTopicName --resource-group $ResourceGroupName --source $Source --topic-type $TopicType --location $Location
+
+ # Event subscription creation
+
+    IncludedEventTypes="Microsoft.Maintenance.PreMaintenanceEvent"
+
+ # Webhook
+
+    az eventgrid system-topic event-subscription create --name "<Event subscription name>" --resource-group $ResourceGroupName --system-topic-name $SystemTopicName --endpoint-type webhook --endpoint "<webhook URL>" --included-event-types $IncludedEventTypes
+
+ # Azure Function
+
+    az eventgrid system-topic event-subscription create --name "<Event subscription name>" --resource-group $ResourceGroupName --system-topic-name $SystemTopicName --endpoint-type azurefunction --endpoint "<Azure Function ResourceId>" --included-event-types $IncludedEventTypes
+
+ ```
+
+#### [Using API](#tab/api)
+
+1. Create a maintenance configuration by following the steps listed [here](https://learn.microsoft.com/rest/api/maintenance/maintenance-configurations/create-or-update?view=rest-maintenance-2023-09-01-preview&tabs=HTTP).
+
+1. **# System topic creation [Learn more](/rest/api/eventgrid/controlplane/system-topics/create-or-update)**
+
+ ```rest
+ PUT /subscriptions/<subscription Id>/resourceGroups/<resource group name>/providers/Microsoft.EventGrid/systemTopics/<system topic name>?api-version=2022-06-15
+ ```
+
+ Request Body:
+ ```
+ {
+ "properties": {
+ "source": "/subscriptions/<subscription Id>/resourceGroups/<resource group>/providers/Microsoft.Maintenance/maintenanceConfigurations/<maintenance configuration name> ",
+ "topicType": "Microsoft.Maintenance.MaintenanceConfigurations"
+ },
+ "location": "<location>"
+ }
+ ```
+
+ **# Event subscription creation [Learn more](/rest/api/eventgrid/controlplane/system-topic-event-subscriptions/create-or-update)**
+
+ Allowed Event types - Microsoft.Maintenance.PreMaintenanceEvent, Microsoft.Maintenance.PostMaintenanceEvent
+
+ **Webhook**
+
+ ```rest
+ PUT /subscriptions/<subscription Id>/resourceGroups/<resource group name>/providers/Microsoft.EventGrid/systemTopics/<system topic name>/eventSubscriptions/<Event Subscription name>?api-version=2022-06-15
+ ```
+
+ Request Body:
+
+ ```
+ {
+ "properties": {
+ "destination": {
+ "endpointType": "WebHook",
+ "properties": {
+ "endpointUrl": "<Webhook URL>"
+ }
+ },
+ "filter": {
+ "includedEventTypes": [
+ "Microsoft.Maintenance.PreMaintenanceEvent"
+ ]
+ }
+ }
+ }
+ ```
+ **Azure Function**
+
+ ```rest
+ PUT /subscriptions/<subscription Id>/resourceGroups/<resource group name>/providers/Microsoft.EventGrid/systemTopics/<system topic name>/eventSubscriptions/<Event Subscription name>?api-version=2022-06-15
+ ```
+
+ **Request Body**
+ ```
+    {
+      "properties": {
+        "destination": {
+          "endpointType": "AzureFunction",
+          "properties": {
+            "resourceId": "<Azure Function Resource Id>"
+          }
+        },
+        "filter": {
+          "includedEventTypes": [
+            "Microsoft.Maintenance.PostMaintenanceEvent"
+          ]
+        }
+      }
+    }
+ ```
+
+++
+### Create pre and post events on an existing schedule maintenance configuration
+
+#### [Using Azure portal](#tab/az-portal)
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and go to **Azure Update Manager**.
+1. Under **Manage**, select **Machines**, **Maintenance Configurations**.
+1. In the **Maintenance Configuration** page, select the maintenance configuration to which you want to add a pre and post event.
+1. In the selected **Maintenance configuration** page, under **Settings**, select **Events**. Alternatively, under the **Overview**, select the card **Create a maintenance event**.
+
+ :::image type="content" source="./media/manage-pre-post-events/create-maintenance-event-inline.png" alt-text="Screenshot that shows the options to select to create a maintenance event." lightbox="./media/manage-pre-post-events/create-maintenance-event-expanded.png":::
+
+1. Select **+Event Subscription** to create Pre/Post Maintenance Event.
+
+ :::image type="content" source="./media/manage-pre-post-events/maintenance-events-inline.png" alt-text="Screenshot that shows the maintenance events." lightbox="./media/manage-pre-post-events/maintenance-events-expanded.png":::
+
+1. On the **Create Event Subscription** page, enter the following details:
+ - In the **Event Subscription Details** section, provide an appropriate name.
+ - Keep the schema as **Event Grid Schema**.
+ - In the **Topic Details** section, provide an appropriate name to the **System Topic Name**.
+ - In the **Event Types** section, **Filter to Event Types**, select the event types that you want to get pushed to the endpoint or destination. You can select between **Pre Maintenance Event** and **Post Maintenance Event**. To learn more about event types that are specific to schedule maintenance configurations, see [Azure Event Types](../event-grid/event-schema-maintenance-configuration.md).
+    - In the **Endpoint details** section, select the endpoint that should receive the events.
+
+ :::image type="content" source="./media/manage-pre-post-events/create-event-subscription.png" alt-text="Screenshot on how to create event subscription.":::
+
+1. Select **Create** to configure the pre and post events on an existing schedule.
+
+#### [Using PowerShell](#tab/az-powershell)
++
+```powershell-interactive
+ $MaintenanceConfigurationResourceId = "/subscriptions/<subId>/resourceGroups/<Resource group>/providers/Microsoft.Maintenance/maintenanceConfigurations/<Maintenance configuration Name>"
+
+ $ResourceGroupForSystemTopic = "<Resource Group for System Topic>"
+
+ $SystemTopicName = "<System topic name>"
+
+ $TopicType = "Microsoft.Maintenance.MaintenanceConfigurations"
+
+ $SystemTopicLocation = "<System topic location>"
+
+ # System topic creation
+
+ New-AzEventGridSystemTopic -ResourceGroupName $ResourceGroupForSystemTopic -Name $SystemTopicName -Source $MaintenanceConfigurationResourceId -TopicType $TopicType -Location $SystemTopicLocation
+
+ # Event subscription creation
+
+ $IncludedEventTypes = @("Microsoft.Maintenance.PreMaintenanceEvent")
+
+ # Webhook
+
+ $EventSubscriptionName = "PreEventWebhook"
+
+ $PreEventWebhookEndpoint = "<Webhook URL>"
+
+ New-AzEventGridSystemTopicEventSubscription -ResourceGroupName $ResourceGroupForSystemTopic -SystemTopicName $SystemTopicName -EventSubscriptionName $EventSubscriptionName -Endpoint $PreEventWebhookEndpoint -IncludedEventType $IncludedEventTypes
+
+ # Azure Function
+
+ $dest = New-AzEventGridAzureFunctionEventSubscriptionDestinationObject -ResourceId "<Azure Function Resource Id>"
+
+ New-AzEventGridSystemTopicEventSubscription -ResourceGroupName $ResourceGroupForSystemTopic -SystemTopicName $SystemTopicName -EventSubscriptionName $EventSubscriptionName -Destination $dest -IncludedEventType $IncludedEventTypes
+```
+
+#### [Using CLI](#tab/az-cli)
+
+```azurecli-interactive
+
+    SystemTopicName="<System topic name>"
+
+ ResourceGroupName="<Resource Group for System Topic>"
+
+ Source="/subscriptions/<subId>/resourceGroups/<Resource group>/providers/Microsoft.Maintenance/maintenanceConfigurations/<Maintenance configuration Name>"
+
+ TopicType="Microsoft.Maintenance.MaintenanceConfigurations"
+
+    Location="<System topic location>"
+
+ # System topic creation
+
+ az eventgrid system-topic create --name $SystemTopicName --resource-group $ResourceGroupName --source $Source --topic-type $TopicType --location $Location
+
+ # Event subscription creation
+
+    IncludedEventTypes="Microsoft.Maintenance.PreMaintenanceEvent"
+
+ # Webhook
+
+    az eventgrid system-topic event-subscription create --name "<Event subscription name>" --resource-group $ResourceGroupName --system-topic-name $SystemTopicName --endpoint-type webhook --endpoint "<webhook URL>" --included-event-types $IncludedEventTypes
+
+ # Azure Function
+
+    az eventgrid system-topic event-subscription create --name "<Event subscription name>" --resource-group $ResourceGroupName --system-topic-name $SystemTopicName --endpoint-type azurefunction --endpoint "<Azure Function ResourceId>" --included-event-types $IncludedEventTypes
+```
+#### [Using API](#tab/az-api)
+
+**# System topic creation [Learn more](/rest/api/eventgrid/controlplane/system-topics/create-or-update)**
+
+```rest
+PUT /subscriptions/<subscription Id>/resourceGroups/<resource group name>/providers/Microsoft.EventGrid/systemTopics/<system topic name>?api-version=2022-06-15
+```
+
+Request Body:
+```
+{
+ "properties": {
+ "source": "/subscriptions/<subscription Id>/resourceGroups/<resource group>/providers/Microsoft.Maintenance/maintenanceConfigurations/<maintenance configuration name> ",
+ "topicType": "Microsoft.Maintenance.MaintenanceConfigurations"
+ },
+ "location": "<location>"
+}
+```
+
+**# Event subscription creation [Learn more](/rest/api/eventgrid/controlplane/system-topic-event-subscriptions/create-or-update)**
+
+Allowed Event types - Microsoft.Maintenance.PreMaintenanceEvent, Microsoft.Maintenance.PostMaintenanceEvent
+
+**Webhook**
+
+```rest
+PUT /subscriptions/<subscription Id>/resourceGroups/<resource group name>/providers/Microsoft.EventGrid/systemTopics/<system topic name>/eventSubscriptions/<Event Subscription name>?api-version=2022-06-15
+```
+
+Request Body:
+
+```
+{
+ "properties": {
+ "destination": {
+ "endpointType": "WebHook",
+ "properties": {
+ "endpointUrl": "<Webhook URL>"
+ }
+ },
+ "filter": {
+ "includedEventTypes": [
+ "Microsoft.Maintenance.PreMaintenanceEvent"
+ ]
+ }
+ }
+}
+```
+**Azure Function**
+
+```rest
+PUT /subscriptions/<subscription Id>/resourceGroups/<resource group name>/providers/Microsoft.EventGrid/systemTopics/<system topic name>/eventSubscriptions/<Event Subscription name>?api-version=2022-06-15
+```
+
+**Request Body**
+```
+{
+  "properties": {
+    "destination": {
+      "endpointType": "AzureFunction",
+      "properties": {
+        "resourceId": "<Azure Function Resource Id>"
+      }
+    },
+    "filter": {
+      "includedEventTypes": [
+        "Microsoft.Maintenance.PostMaintenanceEvent"
+      ]
+    }
+  }
+}
+```
++++
+## Next steps
+- For an overview of pre and post events (preview) in Azure Update Manager, refer [here](pre-post-scripts-overview.md).
+- To learn on how to manage pre and post events or to cancel a schedule run, see [pre and post maintenance configuration events](manage-pre-post-events.md).
+- To learn how to use pre and post events to turn on and off your VMs using Webhooks, refer [here](tutorial-webhooks-using-runbooks.md).
+- To learn how to use pre and post events to turn on and off your VMs using Azure Functions, refer [here](tutorial-using-functions.md).
+
virtual-machine-scale-sets Virtual Machine Scale Sets Automatic Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md
Previously updated : 06/14/2024 Last updated : 07/08/2024 # Azure Virtual Machine Scale Set automatic OS image upgrades
properties:ΓÇ»{
} ```
-## Using Application Health Probes
+## Using Application Health Extension
During an OS Upgrade, VM instances in a scale set are upgraded one batch at a time. The upgrade should continue only if the customer application is healthy on the upgraded VM instances. We recommend that the application provides health signals to the scale set OS Upgrade engine. By default, during OS Upgrades the platform considers VM power state and extension provisioning state to determine if a VM instance is healthy after an upgrade. During the OS Upgrade of a VM instance, the OS disk on a VM instance is replaced with a new disk based on latest image version. After the OS Upgrade has completed, the configured extensions are run on these VMs. The application is considered healthy only when all the extensions on the instance are successfully provisioned.
The load-balancer probe can be referenced in the *networkProfile* of the scale s
> [!NOTE] > When using Automatic OS Upgrades with Service Fabric, the new OS image is rolled out Update Domain by Update Domain to maintain high availability of the services running in Service Fabric. To utilize Automatic OS Upgrades in Service Fabric your cluster node type must be configured to use the Silver Durability Tier or higher. For Bronze Durability tier, automatic OS image upgrade is only supported for Stateless node types. For more information on the durability characteristics of Service Fabric clusters, please see [this documentation](../service-fabric/service-fabric-cluster-capacity.md#durability-characteristics-of-the-cluster).
-### Keep credentials up to date
-
-If your scale set uses any credentials to access external resources, such as a VM extension configured to use a SAS token for storage account, then ensure that the credentials are updated. If any credentials, including certificates and tokens, have expired, the upgrade will fail and the first batch of VMs will be left in a failed state.
-
-The recommended steps to recover VMs and re-enable automatic OS upgrade if there's a resource authentication failure are:
-
-* Regenerate the token (or any other credentials) passed into your extension(s).
-* Ensure that any credential used from inside the VM to talk to external entities is up to date.
-* Update extension(s) in the scale set model with any new tokens.
-* Deploy the updated scale set, which will update all VM instances including the failed ones.
- ## Using Application Health extension The Application Health extension is deployed inside a Virtual Machine Scale Set instance and reports on VM health from inside the scale set instance. You can configure the extension to probe on an application endpoint and update the status of the application on that instance. This instance status is checked by Azure to determine whether an instance is eligible for upgrade operations.
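 
One way to provide these health signals is to add the Application Health extension to the scale set model, as in the following Azure PowerShell sketch. It's illustrative only: the resource names and probe settings (protocol, port, request path) are placeholders, and Linux instances would use the `ApplicationHealthLinux` type instead.
 
```powershell-interactive
# Get the current scale set model
$vmss = Get-AzVmss -ResourceGroupName "myResourceGroup" -VMScaleSetName "myScaleSet"

# Health probe settings the extension uses to report application health (placeholder values)
$healthSettings = @{ protocol = "http"; port = 80; requestPath = "/health" }

# Add the Application Health extension to the scale set model
$vmss = Add-AzVmssExtension -VirtualMachineScaleSet $vmss `
    -Name "ApplicationHealthExtension" `
    -Publisher "Microsoft.ManagedServices" `
    -Type "ApplicationHealthWindows" `
    -TypeHandlerVersion "1.0" `
    -AutoUpgradeMinorVersion $true `
    -Setting $healthSettings

# Push the updated model to the scale set
Update-AzVmss -ResourceGroupName "myResourceGroup" -VMScaleSetName "myScaleSet" -VirtualMachineScaleSet $vmss
```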
There are multiple ways of deploying the Application Health extension to your scale set.
## Get the history of automatic OS image upgrades You can check the history of the most recent OS upgrade performed on your scale set with Azure PowerShell, Azure CLI 2.0, or the REST APIs. You can get history for the last five OS upgrade attempts within the past two months.
+### Keep credentials up to date
+
+If your scale set uses any credentials to access external resources, such as a VM extension configured to use a SAS token for storage account, then ensure that the credentials are updated. If any credentials, including certificates and tokens, have expired, the upgrade will fail and the first batch of VMs will be left in a failed state.
+
+The recommended steps to recover VMs and re-enable automatic OS upgrade if there's a resource authentication failure are as follows (an illustrative PowerShell sketch follows the list):
+
+* Regenerate the token (or any other credentials) passed into your extension(s).
+* Ensure that any credential used from inside the VM to talk to external entities is up to date.
+* Update extension(s) in the scale set model with any new tokens.
+* Deploy the updated scale set, which will update all VM instances including the failed ones.
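+
+As an illustrative PowerShell sketch only (the extension name, publisher, and settings below are placeholders for whatever extension holds the expired credential), refreshing a token in the scale set model might look like this:
+
+```powershell-interactive
+# Get the scale set model
+$vmss = Get-AzVmss -ResourceGroupName "myResourceGroup" -VMScaleSetName "myScaleSet"
+
+# Remove the stale extension definition and add it back with the regenerated SAS token
+$vmss = Remove-AzVmssExtension -VirtualMachineScaleSet $vmss -Name "CustomScriptExtension"
+$vmss = Add-AzVmssExtension -VirtualMachineScaleSet $vmss `
+    -Name "CustomScriptExtension" `
+    -Publisher "Microsoft.Compute" `
+    -Type "CustomScriptExtension" `
+    -TypeHandlerVersion "1.10" `
+    -Setting @{ fileUris = @("https://mystorage.blob.core.windows.net/scripts/config.ps1?<new-SAS-token>") } `
+    -ProtectedSetting @{ commandToExecute = "powershell -ExecutionPolicy Unrestricted -File config.ps1" }
+
+# Deploy the updated model; all VM instances, including failed ones, pick up the new credentials
+Update-AzVmss -ResourceGroupName "myResourceGroup" -VMScaleSetName "myScaleSet" -VirtualMachineScaleSet $vmss
+```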
+ ### REST API The following example uses [REST API](/rest/api/compute/virtualmachinescalesets/getosupgradehistory) to check the status for the scale set named *myScaleSet* in the resource group named *myResourceGroup*:
Use [az vmss rolling-upgrade start](/cli/azure/vmss/rolling-upgrade#az-vmss-roll
az vmss rolling-upgrade start --resource-group "myResourceGroup" --name "myScaleSet" --subscription "subscriptionId" ```
+## Leverage Activity Logs for Upgrade Notifications and Insights
+
+[Activity Log](https://learn.microsoft.com/azure/azure-monitor/essentials/activity-log?tabs=powershell) is a subscription log that provides insight into subscription-level events that have occurred in Azure. Customers are able to:
+* See events related to operations performed on their resources in the Azure portal
+* Create action groups to tune notification methods like email, SMS, webhooks, or ITSM
+* Set up alerts on different criteria by using the Azure portal, an ARM template, PowerShell, or the CLI, to be sent to action groups
+
+Customers receive three types of notifications related to the automatic OS upgrade operation (a sample activity log query follows this list):
+* Submission of upgrade request for a particular resource
+* Outcome of submission request along with any error details
+* Outcome of upgrade completion along with any error details
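+
+To inspect these events without configuring alerts first, you can query the activity log directly. The following PowerShell sketch is illustrative; the operation-name filter is an assumption, so adjust it to the operation names you actually see in your log.
+
+```powershell-interactive
+# Pull the last seven days of activity log entries for the scale set's resource group
+$entries = Get-AzActivityLog -ResourceGroupName "myResourceGroup" -StartTime (Get-Date).AddDays(-7)
+
+# Filter for upgrade-related operations and show their outcome
+$entries |
+    Where-Object { $_.OperationName.Value -like "*Upgrade*" } |
+    Select-Object EventTimestamp, OperationName, Status, SubStatus
+```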
+
+### Setting up Action Groups for Activity log alerts
+
+An [action group](https://learn.microsoft.com/azure/azure-monitor/alerts/action-groups) is a collection of notification preferences defined by the owner of an Azure subscription. Azure Monitor and Service Health alerts use action groups to notify users that an alert has been triggered.
+
+Action groups can be created and managed using:
+* [Azure Resource Manager template](https://learn.microsoft.com/azure/azure-monitor/alerts/action-groups#create-an-action-group-with-a-resource-manager-template)
+* [Portal](https://learn.microsoft.com/azure/azure-monitor/alerts/action-groups#create-an-action-group-in-the-azure-portal)
+* PowerShell:
+ * [New-AzActionGroup](https://learn.microsoft.com/powershell/module/az.monitor/new-azactiongroup?view=azps-12.0.0)
+ * [Get-AzActionGroup](https://learn.microsoft.com/powershell/module/az.monitor/get-azactiongroup?view=azps-12.0.0)
+ * [Remove-AzActionGroup](https://learn.microsoft.com/powershell/module/az.monitor/remove-azactiongroup?view=azps-12.0.0)
+* [CLI](https://learn.microsoft.com/cli/azure/monitor/action-group?view=azure-cli-latest#az-monitor-action-group-create)
+
+Customers can set up the following using action groups:
+* [SMS and/or Email notifications](https://learn.microsoft.com/azure/azure-monitor/alerts/action-groups#email-azure-resource-manager)
+* [Webhooks](https://learn.microsoft.com/azure/azure-monitor/alerts/action-groups#webhook) - Customers can attach webhooks to their automation runbooks and configure their action groups to trigger the runbooks. You can start a runbook from a [webhook](https://docs.microsoft.com/azure/automation/automation-webhooks)
+* [ITSM Connections](https://learn.microsoft.com/azure/azure-monitor/alerts/itsmc-overview)
+ ## Investigate and Resolve Auto Upgrade Errors The platform can return errors on VMs while performing Automatic Image Upgrade with Rolling Upgrade policy. The [Get Instance View](/rest/api/compute/virtual-machine-scale-sets/get-instance-view) of a VM contains the detailed error message to investigate and resolve an error. The [Rolling Upgrades - Get Latest](/rest/api/compute/virtual-machine-scale-sets/get) can provide more details on rolling upgrade configuration and status. The [Get OS Upgrade History](/rest/api/compute/virtual-machine-scale-sets/get) provides details on the last image upgrade operation on the scale set. Below are the topmost errors that can result in Rolling Upgrades.
virtual-machines Msv3 Mdsv3 Medium Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/msv3-mdsv3-medium-series.md
# ms.prod: sizes Previously updated : 08/10/2023 Last updated : 06/26/2024 # Msv3 and Mdsv3 Medium Memory Series
-The Msv3 and Mdsv3 Medium Memory(MM) series, powered by 4<sup>th</sup> generation Intel® Xeon® Scalable processors, are the next generation of memory-optimized VM sizes delivering faster performance, lower total cost of ownership and improved resilience to failures compared to previous generation Mv2 VMs. The Mv3 MM offers VM sizes of up to 4TB of memory and 4,000 MBps throughout to remote storage and provides up to 25% networking performance improvements over previous generations.
+The Msv3 and Mdsv3 Medium Memory (MM) Virtual Machine (VM) series, powered by 4<sup>th</sup> generation Intel® Xeon® Scalable processors, are the next generation of memory-optimized VM sizes delivering faster performance, lower total cost of ownership (TCO), and improved resilience to failures compared to the previous generation Mv2 VMs. The Mv3 MM offers VM sizes of up to 4 TB of memory and 4,000 MBps throughput to remote storage, and provides up to 25% networking performance improvements over previous generations.
## Msv3 Medium Memory series
The Msv3 and Mdsv3 Medium Memory(MM) series, powered by 4<sup>th</sup> generatio
[Live Migration](maintenance-and-updates.md): Restricted Support<br> [Memory Preserving Updates](maintenance-and-updates.md): Not Supported<br> [VM Generation Support](generation-2.md): Generation 2<br>
+[Write Accelerator](./how-to-enable-write-accelerator.md): Supported<br>
[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported<br> [Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported<br> [Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not Supported <br>
These virtual machines feature local SSD storage (up to 400 GiB).
[Live Migration](maintenance-and-updates.md): Not Supported<br> [Memory Preserving Updates](maintenance-and-updates.md): Not Supported<br> [VM Generation Support](generation-2.md): Generation 2<br>
+[Write Accelerator](./how-to-enable-write-accelerator.md): Supported<br>
[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported<br> [Ephemeral OS Disks](ephemeral-os-disks.md): Supported<br> [Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not Supported <br>
virtual-machines Share Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/share-gallery.md
If you share gallery resources to someone outside of your Azure tenant, they wil
1. On the page for your gallery, in the menu on the left, select **Access control (IAM)**. 1. Under **Add**, select **Add role assignment**. The **Add role assignment** page will open.
-1. Under **Role**, select **Contributor**.
+1. Under **Role**, select **Reader**.
1. Ensure that the user is selected in the Members tab. For **Assign access to**, keep the default of **User, group, or service principal**. 1. Click **Select** members and choose a user account from the page that opens on the right. 1. If the user is outside of your organization, you'll see the message **This user will be sent an email that enables them to collaborate with Microsoft.** Select the user with the email address and then click **Save**.
Use the object ID as a scope, along with an email address and [az role assignmen
```azurecli-interactive az role assignment create \
- --role "Contributor" \
+ --role "Reader" \
--assignee <email address> \ --scope <gallery ID> ```
$user = Get-AzADUser -StartsWith alinne_montes@contoso.com
# Grant access to the user for our gallery New-AzRoleAssignment ` -ObjectId $user.Id `
- -RoleDefinitionName Contributor `
+ -RoleDefinitionName Reader `
-ResourceName $gallery.Name ` -ResourceType Microsoft.Compute/galleries ` -ResourceGroupName $resourceGroup.ResourceGroupName
vpn-gateway Azure Vpn Client Optional Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/azure-vpn-client-optional-configurations.md
You can configure forced tunneling in order to direct all traffic to the VPN tun
* **Advertise custom routes:** You can advertise custom routes `0.0.0.0/1` and `128.0.0.0/1`. For more information, see [Advertise custom routes for P2S VPN clients](vpn-gateway-p2s-advertise-custom-routes.md).
-* **Profile XML:** You can modify the downloaded profile xml file and add the **\<includeroutes>\<route>\<destination>\<mask> \</destination>\</mask>\</route>\</includeroutes>** tags. Make sure to update the version number to **2**.
+* **Profile XML:** You can modify the downloaded profile xml file and add the **\<includeroutes>\<route>\<destination>\<mask> \</destination>\</mask>\</route>\</includeroutes>** tags.
```xml <azvpnprofile>
vpn-gateway Openvpn Azure Ad Tenant Multi App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/openvpn-azure-ad-tenant-multi-app.md
description: Learn how to set up a Microsoft Entra tenant for P2S OpenVPN authen
Previously updated : 05/15/2024 Last updated : 07/09/2024
-# Configure P2S for access based on users and groups - Microsoft Entra ID authentication
+# Configure P2S for access based on users and groups - Microsoft Entra ID authentication - manual app registration
-When you use Microsoft Entra ID as the authentication method for P2S, you can configure P2S to allow different access for different users and groups. If you want different sets of users to be able to connect to different VPN gateways, you can register multiple apps in AD and link them to different VPN gateways. This article helps you set up a Microsoft Entra tenant for P2S Microsoft Entra authentication and create and register multiple apps in Microsoft Entra ID for allowing different access for different users and groups. For more information about point-to-site protocols and authentication, see [About point-to-site VPN](point-to-site-about.md). These steps walk you through manually registering the Azure VPN Client App with your Microsoft Entra tenant.
+When you use Microsoft Entra ID as the authentication method for point-to-site (P2S), you can configure P2S to allow different access for different users and groups. This article helps you set up a Microsoft Entra tenant for P2S Microsoft Entra authentication and create and register multiple VPN apps in Microsoft Entra ID to allow different access for different users and groups. For more information about P2S protocols and authentication, see [About point-to-site VPN](point-to-site-about.md).
+Considerations:
+
+* You can't create this type of granular access if you have only one VPN gateway.
+* To assign different users and groups different access, register multiple apps with Microsoft Entra ID and then link them to different VPN gateways.
+* Microsoft Entra ID authentication is supported only for OpenVPN® protocol connections and requires the Azure VPN Client.
<a name='azure-ad-tenant'></a>
The steps in this article require a Microsoft Entra tenant. If you don't have a
* Global administrator account * User account
- The global administrator account will be used to grant consent to the Azure VPN app registration. The user account can be used to test OpenVPN authentication.
+ The global administrator account is used to grant consent to the Azure VPN app registration. The user account can be used to test OpenVPN authentication.
-1. Assign one of the accounts the **Global administrator** role. For steps, see [Assign administrator and non-administrator roles to users with Microsoft Entra ID](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md).
+1. Assign one of the accounts the **Global administrator** role. For steps, see [Assign user roles with Microsoft Entra ID](/entra/fundamentals/users-assign-role-azure-portal).
## Authorize the Azure VPN application
The steps in this article require a Microsoft Entra tenant. If you don't have a
## Register additional applications
-In this section, you can register additional applications for various users and groups. Repeat the steps to create as many applications that are needed for your security requirements. Each application will be associated to a VPN gateway and can have a different set of users. Only one application can be associated to a gateway.
+In this section, you can register additional applications for various users and groups. Repeat the steps to create as many applications that are needed for your security requirements.
+
+* You must have more than one VPN gateway to configure this type of granular access.
+* Each application is associated to a different VPN gateway and can have a different set of users.
### Add a scope 1. In the Azure portal, select **Microsoft Entra ID**. 1. In the left pane, select **App registrations**. 1. At the top of the **App registrations** page, select **+ New registration**.
-1. On the **Register an application** page, enter the **Name**. For example, MarketingVPN. You can always change the name later.
+1. On the **Register an application** page, enter the **Name**. For example, MarketingVPN or Group1. You can always change the name later.
* Select the desired **Supported account types**. * At the bottom of the page, click **Register**. 1. Once the new app has been registered, in the left pane, click **Expose an API**. Then click **+ Add a scope**.
When you enable authentication on the VPN gateway, you'll need the **Application
1. Go to the **Overview** page.
-1. Copy the **Application (client) ID** from the **Overview** page and save it so that you can access this value later. You'll need this information to configure your VPN gateway(s).
+1. Copy the **Application (client) ID** from the **Overview** page and save it so that you can access this value later. You'll need this information to configure your VPN gateways.
:::image type="content" source="./media/openvpn-azure-ad-tenant-multi-app/client-id.png" alt-text="Screenshot showing Client ID value." lightbox="./media/openvpn-azure-ad-tenant-multi-app/client-id.png":::
When you enable authentication on the VPN gateway, you'll need the **Application
Assign the users to your applications. If you're specifying a group, the user must be a direct member of the group. Nested groups aren't supported. 1. Go to your Microsoft Entra ID and select **Enterprise applications**.
-1. From the list, locate the application you just registered and click to open it.
+1. From the list, locate the application you registered and click to open it.
1. Click **Properties**. On the **Properties** page, verify that **Enabled for users to sign in** is set to **Yes**. If not, change the value to **Yes**. 1. For **Assignment required**, change the value to **Yes**. For more information about this setting, see [Application properties](../active-directory/manage-apps/application-properties.md#enabled-for-users-to-sign-in). 1. If you've made changes, click **Save** to save your settings.
In this step, you configure P2S Microsoft Entra authentication for the virtual n
For **Microsoft Entra ID** values, use the following guidelines for **Tenant**, **Audience**, and **Issuer** values. * **Tenant**: `https://login.microsoftonline.com/{TenantID}`
- * **Audience ID**: Use the value that you created in the previous section that corresponds to **Application (client) ID**. Don't use the application ID for "Azure VPN" Microsoft Entra Enterprise App - use application ID that you created and registered. If you use the application ID for the "Azure VPN" Microsoft Entra Enterprise App instead, this will grant all users access to the VPN gateway (which would be the default way to set up access), instead of granting only the users that you assigned to the application that you created and registered.
+ * **Audience ID**: Use the value that you created in the previous section that corresponds to **Application (client) ID**. Don't use the application ID for "Azure VPN" Microsoft Entra Enterprise App - use application ID that you created and registered. If you use the application ID for the "Azure VPN" Microsoft Entra Enterprise App instead, this grants all users access to the VPN gateway (which would be the default way to set up access), instead of granting only the users that you assigned to the application that you created and registered.
* **Issuer**: `https://sts.windows.net/{TenantID}` For the Issuer value, make sure to include a trailing **/** at the end. 1. Once you finish configuring settings, click **Save** at the top of the page.
vpn-gateway Point To Site Certificate Client Linux Azure Vpn Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-certificate-client-linux-azure-vpn-client.md
openssl x509 -req -days 365 -in "${USERNAME}Req.pem" -CA caCert.pem -CAkey caKey
When you generate a VPN client profile configuration package, all the necessary configuration settings for VPN clients are contained in a VPN client profile configuration zip file. The VPN client profile configuration files are specific to the P2S VPN gateway configuration for the virtual network. If there are any changes to the P2S VPN configuration after you generate the files, such as changes to the VPN protocol type or authentication type, you need to generate new VPN client profile configuration files and apply the new configuration to all of the VPN clients that you want to connect.
-Locate and unzip the VPN client profile configuration package you generated. For P2S **Certificate authentication** and with an **OpenVPN** tunnel type, you'll see the **AzureVPN** folder. In the AzureVPN folder, locate the **azurevpnconfig.xml** file. This file contains the settings you use to configure the VPN client profile.
+Locate and unzip the VPN client profile configuration package you generated (listed in the [Prerequisites](#prerequisites)). For P2S **Certificate authentication** and with an **OpenVPN** tunnel type, you'll see the **AzureVPN** folder. In the AzureVPN folder, locate the **azurevpnconfig.xml** file. This file contains the settings you use to configure the VPN client profile.
If you don't see the **azurevpnconfig.xml** file, verify the following items:
sudo apt remove microsoft-azurevpnclient
## Next steps
-For additional steps, return to the [P2S Azure portal](vpn-gateway-howto-point-to-site-resource-manager-portal.md) article.
+For additional steps, return to the [P2S Azure portal](vpn-gateway-howto-point-to-site-resource-manager-portal.md) article.
vpn-gateway Vpn Gateway About Vpn Gateway Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md
description: Learn about VPN Gateway resources and configuration settings.
Previously updated : 02/29/2024 Last updated : 07/11/2024 ms.devlang: azurecli
The values in this article specifically apply to VPN gateways (virtual network g
* For values that apply to -GatewayType 'ExpressRoute', see [Virtual network gateways for ExpressRoute](../expressroute/expressroute-about-virtual-network-gateways.md). * For zone-redundant gateways, see [About zone-redundant gateways](about-zone-redundant-vnet-gateways.md).
-* For active-active gateways, see [About highly available connectivity](vpn-gateway-highlyavailable.md).
* For Virtual WAN gateways, see [About Virtual WAN](../virtual-wan/virtual-wan-about.md). ## <a name="gwtype"></a>Gateways and gateway types
If you already have a policy-based gateway, you aren't required to change your g
[!INCLUDE [Route-based and policy-based table](../../includes/vpn-gateway-vpn-type-table.md)]
+## <a name="active"></a>Active-active VPN gateways
+
+You can create an Azure VPN gateway in an active-active configuration, where both instances of the gateway VMs establish S2S VPN tunnels to your on-premises VPN device.
++
+For information about using active-active gateways in a highly available connectivity scenario, see [About highly available connectivity](vpn-gateway-highlyavailable.md).
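+
+As a minimal sketch (resource names, region, and SKU are placeholders), creating an active-active gateway with Azure PowerShell takes two gateway IP configurations and the `-EnableActiveActiveFeature` switch:
+
+```powershell-interactive
+# Two Standard public IPs, one per gateway instance
+$pip1 = New-AzPublicIpAddress -Name "VNet1GWpip1" -ResourceGroupName "TestRG1" -Location "EastUS" -AllocationMethod Static -Sku Standard
+$pip2 = New-AzPublicIpAddress -Name "VNet1GWpip2" -ResourceGroupName "TestRG1" -Location "EastUS" -AllocationMethod Static -Sku Standard
+
+# Gateway IP configurations that reference the GatewaySubnet
+$vnet    = Get-AzVirtualNetwork -Name "VNet1" -ResourceGroupName "TestRG1"
+$subnet  = Get-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet
+$ipconf1 = New-AzVirtualNetworkGatewayIpConfig -Name "gwipconf1" -SubnetId $subnet.Id -PublicIpAddressId $pip1.Id
+$ipconf2 = New-AzVirtualNetworkGatewayIpConfig -Name "gwipconf2" -SubnetId $subnet.Id -PublicIpAddressId $pip2.Id
+
+# Create the gateway in active-active mode
+New-AzVirtualNetworkGateway -Name "VNet1GW" -ResourceGroupName "TestRG1" -Location "EastUS" `
+    -IpConfigurations $ipconf1, $ipconf2 -GatewayType Vpn -VpnType RouteBased `
+    -GatewaySku VpnGw2 -EnableActiveActiveFeature
+```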
+ ## <a name="connectiontype"></a>Connection types In the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md), each configuration requires a specific virtual network gateway connection type. The available Resource Manager PowerShell values for `-ConnectionType` are:
vpn-gateway Vpn Gateway Highlyavailable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-highlyavailable.md
description: Learn about highly available configuration options using Azure VPN
Previously updated : 06/23/2023 Last updated : 07/11/2024
You can create an Azure VPN gateway in an active-active configuration, where bot
:::image type="content" source="./media/vpn-gateway-highlyavailable/active-active.png" alt-text="Diagram shows an on-premises site with private I P subnets and on-premises V P N connected to two active Azure V P N gateway to connect to subnets hosted in Azure.":::
-In this configuration, each Azure gateway instance has a unique public IP address, and each will establish an IPsec/IKE S2S VPN tunnel to your on-premises VPN device specified in your local network gateway and connection. Note that both VPN tunnels are actually part of the same connection. You'll still need to configure your on-premises VPN device to accept or establish two S2S VPN tunnels to those two Azure VPN gateway public IP addresses.
-
-Because the Azure gateway instances are in active-active configuration, the traffic from your Azure virtual network to your on-premises network will be routed through both tunnels simultaneously, even if your on-premises VPN device may favor one tunnel over the other. For a single TCP or UDP flow, Azure attempts to use the same tunnel when sending packets to your on-premises network. However, your on-premises network could use a different tunnel to send packets to Azure.
-
-When a planned maintenance or unplanned event happens to one gateway instance, the IPsec tunnel from that instance to your on-premises VPN device will be disconnected. The corresponding routes on your VPN devices should be removed or withdrawn automatically so that the traffic will be switched over to the other active IPsec tunnel. On the Azure side, the switch over will happen automatically from the affected instance to the active instance.
### Dual-redundancy: active-active VPN gateways for both Azure and on-premises networks
vpn-gateway Vpn Gateway Howto Vnet Vnet Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-vnet-vnet-cli.md
Previously updated : 12/11/2023 Last updated : 07/11/2024
This article helps you connect virtual networks by using the VNet-to-VNet connec
:::image type="content" source="./media/vpn-gateway-howto-vnet-vnet-resource-manager-portal/vnet-vnet-diagram.png" alt-text="VNet to VNet diagram." lightbox="./media/vpn-gateway-howto-vnet-vnet-resource-manager-portal/vnet-vnet-diagram.png":::
-The steps in this article apply to the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md) and use Azure CLI. You can also create this configuration using a different deployment tool or deployment model by selecting a different option from the following list:
+In this exercise, you create the required virtual networks (VNets) and VPN gateways. We have steps to connect VNets within the same subscription, as well as steps and commands for the more complicated scenario to connect VNets in different subscriptions.
-> [!div class="op_single_selector"]
-> * [Azure portal](vpn-gateway-howto-vnet-vnet-resource-manager-portal.md)
-> * [PowerShell](vpn-gateway-vnet-vnet-rm-ps.md)
-> * [Azure CLI](vpn-gateway-howto-vnet-vnet-cli.md)
+The Azure CLI command to create a connection is [az network vpn-connection](/cli/azure/network/vpn-connection). If you're connecting VNets from different subscriptions, use the steps in this article or in the [PowerShell](vpn-gateway-vnet-vnet-rm-ps.md) article. If you already have VNets that you want to connect and they're in the same subscription, you might want to use the [Azure portal](vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) steps instead because the process is less complicated. Note that you can't connect VNets from different subscriptions using the Azure portal.
## <a name="about"></a>About connecting VNets
vpn-gateway Vpn Gateway Vnet Vnet Rm Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-vnet-vnet-rm-ps.md
Previously updated : 12/11/2023 Last updated : 07/11/2024 # Configure a VNet-to-VNet VPN gateway connection using PowerShell This article helps you connect virtual networks by using the VNet-to-VNet connection type. The virtual networks can be in the same or different regions, and from the same or different subscriptions. When you connect virtual networks from different subscriptions, the subscriptions don't need to be associated with the same tenant.
-The steps in this article apply to the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md) and use PowerShell. You can also create this configuration using a different deployment tool or deployment model by selecting a different option from the following list:
+In this exercise, you create the required virtual networks (VNets) and VPN gateways. We have steps to connect VNets within the same subscription, as well as steps and commands for the more complicated scenario to connect VNets in different subscriptions.
-> [!div class="op_single_selector"]
-> * [Azure portal](vpn-gateway-howto-vnet-vnet-resource-manager-portal.md)
-> * [PowerShell](vpn-gateway-vnet-vnet-rm-ps.md)
-> * [Azure CLI](vpn-gateway-howto-vnet-vnet-cli.md)
+The PowerShell cmdlet to create a connection is [New-AzVirtualNetworkGatewayConnection](/powershell/module/az.network/new-azvirtualnetworkgatewayconnection). The `-ConnectionType` is `Vnet2Vnet`. If you're connecting VNets from different subscriptions, use the steps in this article or in the [Azure CLI](vpn-gateway-howto-vnet-vnet-cli.md) article. If you already have VNets that you want to connect and they're in the same subscription, you might want to use the [Azure portal](vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) steps instead because the process is less complicated. Note that you can't connect VNets from different subscriptions using the Azure portal.
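+
+As a minimal sketch (gateway names, resource groups, regions, and the shared key are placeholders), connecting two existing gateways in the same subscription requires a connection in each direction, both using the same shared key:
+
+```powershell-interactive
+# Get the two existing virtual network gateways
+$gw1 = Get-AzVirtualNetworkGateway -Name "VNet1GW" -ResourceGroupName "TestRG1"
+$gw4 = Get-AzVirtualNetworkGateway -Name "VNet4GW" -ResourceGroupName "TestRG4"
+
+# Create the connection from VNet1 to VNet4
+New-AzVirtualNetworkGatewayConnection -Name "VNet1toVNet4" -ResourceGroupName "TestRG1" -Location "EastUS" `
+    -VirtualNetworkGateway1 $gw1 -VirtualNetworkGateway2 $gw4 -ConnectionType Vnet2Vnet -SharedKey "AzureA1b2C3"
+
+# Create the reverse connection from VNet4 to VNet1 with the same shared key
+New-AzVirtualNetworkGatewayConnection -Name "VNet4toVNet1" -ResourceGroupName "TestRG4" -Location "WestUS" `
+    -VirtualNetworkGateway1 $gw4 -VirtualNetworkGateway2 $gw1 -ConnectionType Vnet2Vnet -SharedKey "AzureA1b2C3"
+```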
:::image type="content" source="./media/vpn-gateway-howto-vnet-vnet-resource-manager-portal/vnet-vnet-diagram.png" alt-text="VNet to VNet diagram." lightbox="./media/vpn-gateway-howto-vnet-vnet-resource-manager-portal/vnet-vnet-diagram.png":::
web-application-firewall Waf Copilot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/waf-copilot.md
Last updated 05/20/2024
ms.localizationpriority: high-+ # Azure Web Application Firewall integration in Copilot for Security (preview)