Updates from: 11/01/2023 02:14:00
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-domain.md
zone_pivot_groups: b2c-policy-type
This article describes how to enable custom domains in your redirect URLs for Azure Active Directory B2C (Azure AD B2C). By using a verified custom domain, you gain benefits such as:

- It provides a more seamless user experience. From the user's perspective, they remain in your domain during the sign-in process rather than redirecting to the Azure AD B2C default domain *<tenant-name>.b2clogin.com*.
+- By staying in the same domain for your application during sign-in, you mitigate the impact of [third-party cookie blocking](/azure/active-directory/develop/reference-third-party-cookies-spas).
- You increase the number of objects (user accounts and applications) you can create in your Azure AD B2C tenant from the default 1.25 million to 5.25 million.
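For example, with a hypothetical tenant named *contoso* and a verified custom domain *login.contoso.com*, the authorization URL that users see changes from the default domain to your own:

```
# Default Azure AD B2C domain
https://contoso.b2clogin.com/contoso.onmicrosoft.com/oauth2/v2.0/authorize

# Verified custom domain
https://login.contoso.com/contoso.onmicrosoft.com/oauth2/v2.0/authorize
```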
active-directory-b2c Force Password Reset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/force-password-reset.md
Previously updated : 06/26/2023 Last updated : 10/31/2023 -+ zone_pivot_groups: b2c-policy-type
Once a password expiration policy has been set, you must also configure force password reset.
### Password expiry duration
-By default, the password is set not to expire. However, the value is configurable by using the [Set-MsolPasswordPolicy](/powershell/module/msonline/set-msolpasswordpolicy) cmdlet from the Azure AD PowerShell module. This command updates the tenant, so that all users' passwords expire after number of days you configure.
+By default, the password is set not to expire. However, the value is configurable by using the [Update-MgDomain](/powershell/module/microsoft.graph.identity.directorymanagement/update-mgdomain) cmdlet from the Microsoft Graph PowerShell module. This command updates the tenant so that all users' passwords expire after a number of days you configure. For example:
+
+```powershell
+Import-Module Microsoft.Graph.Identity.DirectoryManagement
+
+Connect-MgGraph -Scopes 'Domain.ReadWrite.All'
+
+$domainId = "contoso.com"
+$params = @{
+ passwordValidityPeriodInDays = 90
+ passwordNotificationWindowInDays = 15
+}
+
+Update-MgDomain -DomainId $domainId -BodyParameter $params
+```
+
+> [!NOTE]
+> `passwordValidityPeriodInDays` indicates the length of time in days that a password remains valid before it must be changed. `passwordNotificationWindowInDays` indicates the length of time in days before the password expiration date when users receive their first notification to indicate that their password is about to expire.
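To confirm the change, you can read the policy back. A minimal sketch using the same Microsoft Graph PowerShell module:

```powershell
# Retrieve the domain and inspect the password policy properties set above.
Get-MgDomain -DomainId "contoso.com" |
    Select-Object Id, PasswordValidityPeriodInDays, PasswordNotificationWindowInDays
```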
## Next steps

Set up a [self-service password reset](add-password-reset-policy.md).
ai-services Concept Custom Classifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-custom-classifier.md
Custom classification models can analyze single-file or multi-file documents to identify the document types they contain, for example:
* A single file containing multiple instances of the same document. For instance, a collection of scanned invoices.
-Training a custom classifier requires at least two distinct classes and a minimum of five samples per class. The model response contains the page ranges for each of the classes of documents identified.
+✔️ Training a custom classifier requires at least `two` distinct classes and a minimum of `five` samples per class. The model response contains the page ranges for each of the classes of documents identified.
+
+✔️ The maximum allowed number of classes is `500`. The maximum allowed number of samples per class is `100`.
The model classifies each page of the input document to one of the classes in the labeled dataset. Use the confidence score from the response to set the threshold for your application.
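As a sketch of how you might apply such a threshold, the snippet below filters a saved classifier response by confidence; it assumes the response JSON exposes an `analyzeResult.documents` array whose items carry `docType` and `confidence` fields (verify the exact shape for your API version):

```powershell
# Load a previously saved classification response and keep only
# confident classifications (threshold chosen for illustration).
$result    = Get-Content -Raw ".\classifyResult.json" | ConvertFrom-Json
$threshold = 0.8

$result.analyzeResult.documents |
    Where-Object { $_.confidence -ge $threshold } |
    Select-Object docType, confidence
```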
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/concepts/language-support.md
Previously updated : 03/09/2023 Last updated : 10/24/2023
Use this article to learn about the languages currently supported by different features.
-> [!NOTE]
-> Some of the languages listed below are only supported in some [model versions](../concepts/model-lifecycle.md#choose-the-model-version-used-on-your-data). See the linked feature-level language support article for details.
-
| Language | Language code | [Custom text classification](../custom-text-classification/language-support.md) | [Custom named entity recognition(NER)](../custom-named-entity-recognition/language-support.md) | [Conversational language understanding](../conversational-language-understanding/language-support.md) | [Entity linking](../entity-linking/language-support.md) | [Language detection](../language-detection/language-support.md) | [Key phrase extraction](../key-phrase-extraction/language-support.md) | [Named entity recognition(NER)](../named-entity-recognition/language-support.md) | [Orchestration workflow](../orchestration-workflow/language-support.md) | [Personally Identifiable Information (PII)](../personally-identifiable-information/language-support.md?tabs=documents) | [Conversation PII](../personally-identifiable-information/language-support.md?tabs=conversations) | [Question answering](../question-answering/language-support.md) | [Sentiment analysis](../sentiment-opinion-mining/language-support.md#sentiment-analysis-language-support) | [Opinion mining](../sentiment-opinion-mining/language-support.md#opinion-mining-language-support) | [Text Analytics for health](../text-analytics-for-health/language-support.md) | [Summarization](../summarization/language-support.md?tabs=document-summarization) | [Conversation summarization](../summarization/language-support.md?tabs=conversation-summarization) |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Afrikaans | `af` | ✓ | ✓ | ✓ | | ✓ | ✓ | | | ✓ | | | ✓ | ✓ | | | |
Use this article to learn about the languages currently supported by different features.
## See also
-See the following service-level language support articles for information on model version support for each language:
+See the following service-level language support articles for more information on language support:
* [Custom text classification](../custom-text-classification/language-support.md)
* [Custom named entity recognition(NER)](../custom-named-entity-recognition/language-support.md)
* [Conversational language understanding](../conversational-language-understanding/language-support.md)
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/entity-linking/language-support.md
Previously updated : 11/02/2021 Last updated : 10/24/2023

# Entity linking language support
-> [!NOTE]
-> Languages are added as new model versions are released for specific features. The current model version for Entity Linking is `2020-02-01`.
-
-| Language | Language code | v3 support | Starting with v3 model version: | Notes |
-|:|:-:|:-:|:--:|:--:|
-| English | `en` | ✓ | 2019-10-01 | |
-| Spanish | `es` | ✓ | 2019-10-01 | |
+| Language | Language code | Notes |
+|:|:-:|:--:|
+| English | `en` | |
+| Spanish | `es` | |
## Next steps
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/key-phrase-extraction/language-support.md
Previously updated : 09/18/2023 Last updated : 10/24/2023
Use this article to find the natural languages supported by Key Phrase Extraction.
## Supported languages
-> [!NOTE]
-> Languages are added as new [model versions](how-to/call-api.md#specify-the-key-phrase-extraction-model) are released for specific features. The current model version for Key Phrase Extraction is `2022-07-01`.
- Total supported language codes: 94
-| Language | Language code | Starting with model version | Notes |
-|--||-|--|
-| Afrikaans      |     `af`  |                2020-07-01                 |                    |
-| Albanian     |     `sq`  |                2022-10-01                 |                    |
-| Amharic     |     `am`  |                2022-10-01                 |                    |
-| Arabic    |     `ar`  |                2022-10-01                 |                    |
-| Armenian    |     `hy`  |                2022-10-01                 |                    |
-| Assamese    |     `as`  |                2022-10-01                 |                    |
-| Azerbaijani    |     `az`  |                2022-10-01                 |                    |
-| Basque    |     `eu`  |                2022-10-01                 |                    |
-| Belarusian |     `be`  |                2022-10-01                 |                    |
-| Bengali     |     `bn`  |                2022-10-01                 |                    |
-| Bosnian    |     `bs`  |                2022-10-01                 |                    |
-| Breton    |     `br`  |                2022-10-01                 |                    |
-| Bulgarian      |     `bg`  |                2020-07-01                 |                    |
-| Burmese    |     `my`  |                2022-10-01                 |                    |
-| Catalan    |     `ca`  |                2020-07-01                 |                    |
-| Chinese-Simplified    |     `zh-hans` |                2021-06-01                 |                    |
-| Chinese-Traditional |     `zh-hant` |                2022-10-01                 |                    |
-| Croatian | `hr` | 2020-07-01 | |
-| Czech    |     `cs`  |                2022-10-01                 |                    |
-| Danish | `da` | 2019-10-01 | |
-| Dutch                 |     `nl`      |                2019-10-01                 |                    |
-| English               |     `en`      |                2019-10-01                 |                    |
-| Esperanto    |     `eo`  |                2022-10-01                 |                    |
-| Estonian              |     `et`      |                2020-07-01                 |                    |
-| Filipino    |     `fil`  |                2022-10-01                 |                    |
-| Finnish               |     `fi`      |                2019-10-01                 |                    |
-| French                |     `fr`      |                2019-10-01                 |                    |
-| Galician    |     `gl`  |                2022-10-01                 |                    |
-| Georgian    |     `ka`  |                2022-10-01                 |                    |
-| German                |     `de`      |                2019-10-01                 |                    |
-| Greek    |     `el`  |                2020-07-01                 |                    |
-| Gujarati    |     `gu`  |                2022-10-01                 |                    |
-| Hausa      |     `ha`  |                2022-10-01                 |                    |
-| Hebrew    |     `he`  |                2022-10-01                 |                    |
-| Hindi      |     `hi`  |                2022-10-01                 |                    |
-| Hungarian    |     `hu`  |                2020-07-01                 |                    |
-| Indonesian            |     `id`      |                2020-07-01                 |                    |
-| Irish            |     `ga`      |                2022-10-01                 |                    |
-| Italian               |     `it`      |                2019-10-01                 |                    |
-| Japanese              |     `ja`      |                2019-10-01                 |                    |
-| Javanese            |     `jv`      |                2022-10-01                 |                    |
-| Kannada            |     `kn`      |                2022-10-01                 |                    |
-| Kazakh            |     `kk`      |                2022-10-01                 |                    |
-| Khmer            |     `km`      |                2022-10-01                 |                    |
-| Korean                |     `ko`      |                2019-10-01                 |                    |
-| Kurdish (Kurmanji)   |     `ku`      |                2022-10-01                 |                    |
-| Kyrgyz            |     `ky`      |                2022-10-01                 |                    |
-| Lao            |     `lo`      |                2022-10-01                 |                    |
-| Latin            |     `la`      |                2022-10-01                 |                    |
-| Latvian               |     `lv`      |                2020-07-01                 |                    |
-| Lithuanian            |     `lt`      |                2022-10-01                 |                    |
-| Macedonian            |     `mk`      |                2022-10-01                 |                    |
-| Malagasy            |     `mg`      |                2022-10-01                 |                    |
-| Malay            |     `ms`      |                2022-10-01                 |                    |
-| Malayalam            |     `ml`      |                2022-10-01                 |                    |
-| Marathi            |     `mr`      |                2022-10-01                 |                    |
-| Mongolian            |     `mn`      |                2022-10-01                 |                    |
-| Nepali            |     `ne`      |                2022-10-01                 |                    |
-| Norwegian (Bokmål)    |     `no`      |                2020-07-01                 | `nb` also accepted |
-| Odia            |     `or`      |                2022-10-01                 |                    |
-| Oromo            |     `om`      |                2022-10-01                 |                    |
-| Pashto            |     `ps`      |                2022-10-01                 |                    |
-| Persian       |     `fa`      |                2022-10-01                 |                    |
-| Polish                |     `pl`      |                2019-10-01                 |                    |
-| Portuguese (Brazil)   |    `pt-BR`    |                2019-10-01                 |                    |
-| Portuguese (Portugal) |    `pt-PT`    |                2019-10-01                 | `pt` also accepted |
-| Punjabi            |     `pa`      |                2022-10-01                 |                    |
-| Romanian              |     `ro`      |                2020-07-01                 |                    |
-| Russian               |     `ru`      |                2019-10-01                 |                    |
-| Sanskrit            |     `sa`      |                2022-10-01                 |                    |
-| Scottish Gaelic       |     `gd`      |                2022-10-01                 |                    |
-| Serbian            |     `sr`      |                2022-10-01                 |                    |
-| Sindhi            |     `sd`      |                2022-10-01                 |                    |
-| Sinhala            |     `si`      |                2022-10-01                 |                    |
-| Slovak                |     `sk`      |                2020-07-01                 |                    |
-| Slovenian             |     `sl`      |                2020-07-01                 |                    |
-| Somali            |     `so`      |                2022-10-01                 |                    |
-| Spanish               |     `es`      |                2019-10-01                 |                    |
-| Sudanese            |     `su`      |                2022-10-01                 |                    |
-| Swahili            |     `sw`      |                2022-10-01                 |                    |
-| Swedish               |     `sv`      |                2019-10-01                 |                    |
-| Tamil            |     `ta`      |                2022-10-01                 |                    |
-| Telugu           |     `te`      |                2022-10-01                 |                    |
-| Thai            |     `th`      |                2022-10-01                 |                    |
-| Turkish              |     `tr`      |                2020-07-01                 |                    |
-| Ukrainian           |     `uk`      |                2022-10-01                 |                    |
-| Urdu            |     `ur`      |                2022-10-01                 |                    |
-| Uyghur            |     `ug`      |                2022-10-01                 |                    |
-| Uzbek            |     `uz`      |                2022-10-01                 |                    |
-| Vietnamese            |     `vi`      |                2022-10-01                 |                    |
-| Welsh            |     `cy`      |                2022-10-01                 |                    |
-| Western Frisian       |     `fy`      |                2022-10-01                 |                    |
-| Xhosa            |     `xh`      |                2022-10-01                 |                    |
-| Yiddish            |     `yi`      |                2022-10-01                 |                    |
+| Language | Language code | Notes |
+|--||--|
+| Afrikaans      |     `af`  |                    |
+| Albanian     |     `sq`  |                    |
+| Amharic     |     `am`  |                    |
+| Arabic    |     `ar`  |                    |
+| Armenian    |     `hy`  |                    |
+| Assamese    |     `as`  |                    |
+| Azerbaijani    |     `az`  |                    |
+| Basque    |     `eu`  |                    |
+| Belarusian |     `be`  |                    |
+| Bengali     |     `bn`  |                    |
+| Bosnian    |     `bs`  |                    |
+| Breton    |     `br`  |                    |
+| Bulgarian      |     `bg`  |                    |
+| Burmese    |     `my`  |                    |
+| Catalan    |     `ca`  |                    |
+| Chinese-Simplified    |     `zh-hans` |                    |
+| Chinese-Traditional |     `zh-hant` |                    |
+| Croatian | `hr` | |
+| Czech    |     `cs`  |                    |
+| Danish | `da` | |
+| Dutch                 |     `nl`      |                    |
+| English               |     `en`      |                    |
+| Esperanto    |     `eo`  |                    |
+| Estonian              |     `et`      |                    |
+| Filipino    |     `fil`  |                    |
+| Finnish               |     `fi`      |                    |
+| French                |     `fr`      |                    |
+| Galician    |     `gl`  |                    |
+| Georgian    |     `ka`  |                    |
+| German                |     `de`      |                    |
+| Greek    |     `el`  |                    |
+| Gujarati    |     `gu`  |                    |
+| Hausa      |     `ha`  |                    |
+| Hebrew    |     `he`  |                    |
+| Hindi      |     `hi`  |                    |
+| Hungarian    |     `hu`  |                    |
+| Indonesian            |     `id`      |                    |
+| Irish            |     `ga`      |                    |
+| Italian               |     `it`      |                    |
+| Japanese              |     `ja`      |                    |
+| Javanese            |     `jv`      |                    |
+| Kannada            |     `kn`      |                    |
+| Kazakh            |     `kk`      |                    |
+| Khmer            |     `km`      |                    |
+| Korean                |     `ko`      |                    |
+| Kurdish (Kurmanji)   |     `ku`      |                    |
+| Kyrgyz            |     `ky`      |                    |
+| Lao            |     `lo`      |                    |
+| Latin            |     `la`      |                    |
+| Latvian               |     `lv`      |                    |
+| Lithuanian            |     `lt`      |                    |
+| Macedonian            |     `mk`      |                    |
+| Malagasy            |     `mg`      |                    |
+| Malay            |     `ms`      |                    |
+| Malayalam            |     `ml`      |                    |
+| Marathi            |     `mr`      |                    |
+| Mongolian            |     `mn`      |                    |
+| Nepali            |     `ne`      |                    |
+| Norwegian (Bokmål)    |     `no`      | `nb` also accepted |
+| Odia            |     `or`      |                    |
+| Oromo            |     `om`      |                    |
+| Pashto            |     `ps`      |                    |
+| Persian       |     `fa`      |                    |
+| Polish                |     `pl`      |                    |
+| Portuguese (Brazil)   |    `pt-BR`    |                    |
+| Portuguese (Portugal) |    `pt-PT`    | `pt` also accepted |
+| Punjabi            |     `pa`      |                    |
+| Romanian              |     `ro`      |                    |
+| Russian               |     `ru`      |                    |
+| Sanskrit            |     `sa`      |                    |
+| Scottish Gaelic       |     `gd`      |                    |
+| Serbian            |     `sr`      |                    |
+| Sindhi            |     `sd`      |                    |
+| Sinhala            |     `si`      |                    |
+| Slovak                |     `sk`      |                    |
+| Slovenian             |     `sl`      |                    |
+| Somali            |     `so`      |                    |
+| Spanish               |     `es`      |                    |
+| Sudanese            |     `su`      |                    |
+| Swahili            |     `sw`      |                    |
+| Swedish               |     `sv`      |                    |
+| Tamil            |     `ta`      |                    |
+| Telugu           |     `te`      |                    |
+| Thai            |     `th`      |                    |
+| Turkish              |     `tr`      |                    |
+| Ukrainian           |     `uk`      |                    |
+| Urdu            |     `ur`      |                    |
+| Uyghur            |     `ug`      |                    |
+| Uzbek            |     `uz`      |                    |
+| Vietnamese            |     `vi`      |                    |
+| Welsh            |     `cy`      |                    |
+| Western Frisian       |     `fy`      |                    |
+| Xhosa            |     `xh`      |                    |
+| Yiddish            |     `yi`      |                    |
## Next steps
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/language-detection/language-support.md
Previously updated : 11/02/2021 Last updated : 10/24/2023

# Language support for Language Detection
-Use this article to learn which natural languages are supported by Language Detection.
--
-> [!NOTE]
-> Languages are added as new [model versions](how-to/call-api.md#specify-the-language-detection-model) are released. The current model version for Language Detection is `2022-10-01`.
+Use this article to learn which natural languages language detection supports.
The Language Detection feature can detect a wide range of languages, variants, dialects, and some regional/cultural languages, and return detected languages with their name and code. The returned language code parameters conform to the [BCP-47](https://tools.ietf.org/html/bcp47) standard, with most of them conforming to [ISO-639-1](https://www.iso.org/iso-639-language-codes.html) identifiers.
-If you have content expressed in a less frequently used language, you can try Language Detection to see if it returns a code. The response for languages that cannot be detected is `unknown`.
+If you have content expressed in a less frequently used language, you can try Language Detection to see if it returns a code. The response for languages that can't be detected is `unknown`.
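As an illustration, here's a minimal PowerShell sketch that calls the Language API's `analyze-text` endpoint; the endpoint, key, and sample text are placeholders you'd replace with your own:

```powershell
$endpoint = "https://<your-language-resource>.cognitiveservices.azure.com"
$key      = "<your-key>"

# Build a LanguageDetection request for one document.
$body = @{
    kind          = "LanguageDetection"
    parameters    = @{ modelVersion = "latest" }
    analysisInput = @{
        documents = @(@{ id = "1"; text = "Dit is 'n dokument wat in Afrikaans geskryf is." })
    }
} | ConvertTo-Json -Depth 5

$response = Invoke-RestMethod -Method Post `
    -Uri "$endpoint/language/:analyze-text?api-version=2022-05-01" `
    -Headers @{ "Ocp-Apim-Subscription-Key" = $key } `
    -ContentType "application/json" `
    -Body $body

# Each document carries a detectedLanguage; the code is `unknown` when detection fails.
$response.results.documents | ForEach-Object { $_.detectedLanguage.iso6391Name }
```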
## Languages supported by Language Detection
-| Language | Language Code | Starting with model version: |
-||||
-| Afrikaans | `af` | |
-| Albanian | `sq` | |
-| Amharic | `am` | 2021-01-05 |
-| Arabic | `ar` | |
-| Armenian | `hy` | |
-| Assamese | `as` | 2021-01-05 |
-| Azerbaijani | `az` | 2021-01-05 |
-| Bashkir | `ba` | 2022-10-01 |
-| Basque | `eu` | |
-| Belarusian | `be` | |
-| Bengali | `bn` | |
-| Bosnian | `bs` | 2020-09-01 |
-| Bulgarian | `bg` | |
-| Burmese | `my` | |
-| Catalan | `ca` | |
-| Central Khmer | `km` | |
-| Chinese | `zh` | |
-| Chinese Simplified | `zh_chs` | |
-| Chinese Traditional | `zh_cht` | |
-| Chuvash | `cv` | 2022-10-01 |
-| Corsican | `co` | 2021-01-05 |
-| Croatian | `hr` | |
-| Czech | `cs` | |
-| Danish | `da` | |
-| Dari | `prs` | 2020-09-01 |
-| Divehi | `dv` | |
-| Dutch | `nl` | |
-| English | `en` | |
-| Esperanto | `eo` | |
-| Estonian | `et` | |
-| Faroese | `fo` | 2022-10-01 |
-| Fijian | `fj` | 2020-09-01 |
-| Finnish | `fi` | |
-| French | `fr` | |
-| Galician | `gl` | |
-| Georgian | `ka` | |
-| German | `de` | |
-| Greek | `el` | |
-| Gujarati | `gu` | |
-| Haitian | `ht` | |
-| Hausa | `ha` | 2021-01-05 |
-| Hebrew | `he` | |
-| Hindi | `hi` | |
-| Hmong Daw | `mww` | 2020-09-01 |
-| Hungarian | `hu` | |
-| Icelandic | `is` | |
-| Igbo | `ig` | 2021-01-05 |
-| Indonesian | `id` | |
-| Inuktitut | `iu` | |
-| Irish | `ga` | |
-| Italian | `it` | |
-| Japanese | `ja` | |
-| Javanese | `jv` | 2021-01-05 |
-| Kannada | `kn` | |
-| Kazakh | `kk` | 2020-09-01 |
-| Kinyarwanda | `rw` | 2021-01-05 |
-| Kirghiz | `ky` | 2022-10-01 |
-| Korean | `ko` | |
-| Kurdish | `ku` | |
-| Lao | `lo` | |
-| Latin | `la` | |
-| Latvian | `lv` | |
-| Lithuanian | `lt` | |
-| Luxembourgish | `lb` | 2021-01-05 |
-| Macedonian | `mk` | |
-| Malagasy | `mg` | 2020-09-01 |
-| Malay | `ms` | |
-| Malayalam | `ml` | |
-| Maltese | `mt` | |
-| Maori | `mi` | 2020-09-01 |
-| Marathi | `mr` | 2020-09-01 |
-| Mongolian | `mn` | 2021-01-05 |
-| Nepali | `ne` | 2021-01-05 |
-| Norwegian | `no` | |
-| Norwegian Nynorsk | `nn` | |
-| Odia | `or` | |
-| Pasht | `ps` | |
-| Persian | `fa` | |
-| Polish | `pl` | |
-| Portuguese | `pt` | |
-| Punjabi | `pa` | |
-| Queretaro Otomi | `otq` | 2020-09-01 |
-| Romanian | `ro` | |
-| Russian | `ru` | |
-| Samoan | `sm` | 2020-09-01 |
-| Serbian | `sr` | |
-| Shona | `sn` | 2021-01-05 |
-| Sindhi | `sd` | 2021-01-05 |
-| Sinhala | `si` | |
-| Slovak | `sk` | |
-| Slovenian | `sl` | |
-| Somali | `so` | |
-| Spanish | `es` | |
-| Sundanese | `su` | 2021-01-05 |
-| Swahili | `sw` | |
-| Swedish | `sv` | |
-| Tagalog | `tl` | |
-| Tahitian | `ty` | 2020-09-01 |
-| Tajik | `tg` | 2021-01-05 |
-| Tamil | `ta` | |
-| Tatar | `tt` | 2021-01-05 |
-| Telugu | `te` | |
-| Thai | `th` | |
-| Tibetan | `bo` | 2021-01-05 |
-| Tigrinya | `ti` | 2021-01-05 |
-| Tongan | `to` | 2020-09-01 |
-| Turkish | `tr` | 2021-01-05 |
-| Turkmen | `tk` | 2021-01-05 |
-| Upper Sorbian | `hsb` | 2022-10-01 |
-| Uyghur | `ug` | 2022-10-01 |
-| Ukrainian | `uk` | |
-| Urdu | `ur` | |
-| Uzbek | `uz` | |
-| Vietnamese | `vi` | |
-| Welsh | `cy` | |
-| Xhosa | `xh` | 2021-01-05 |
-| Yiddish | `yi` | |
-| Yoruba | `yo` | 2021-01-05 |
-| Yucatec Maya | `yua` | |
-| Zulu | `zu` | 2021-01-05 |
+| Language | Language Code |
+|||
+| Afrikaans | `af` |
+| Albanian | `sq` |
+| Amharic | `am` |
+| Arabic | `ar` |
+| Armenian | `hy` |
+| Assamese | `as` |
+| Azerbaijani | `az` |
+| Bashkir | `ba` |
+| Basque | `eu` |
+| Belarusian | `be` |
+| Bengali | `bn` |
+| Bosnian | `bs` |
+| Bulgarian | `bg` |
+| Burmese | `my` |
+| Catalan | `ca` |
+| Central Khmer | `km` |
+| Chinese | `zh` |
+| Chinese Simplified | `zh_chs` |
+| Chinese Traditional | `zh_cht` |
+| Chuvash | `cv` |
+| Corsican | `co` |
+| Croatian | `hr` |
+| Czech | `cs` |
+| Danish | `da` |
+| Dari | `prs` |
+| Divehi | `dv` |
+| Dutch | `nl` |
+| English | `en` |
+| Esperanto | `eo` |
+| Estonian | `et` |
+| Faroese | `fo` |
+| Fijian | `fj` |
+| Finnish | `fi` |
+| French | `fr` |
+| Galician | `gl` |
+| Georgian | `ka` |
+| German | `de` |
+| Greek | `el` |
+| Gujarati | `gu` |
+| Haitian | `ht` |
+| Hausa | `ha` |
+| Hebrew | `he` |
+| Hindi | `hi` |
+| Hmong Daw | `mww` |
+| Hungarian | `hu` |
+| Icelandic | `is` |
+| Igbo | `ig` |
+| Indonesian | `id` |
+| Inuktitut | `iu` |
+| Irish | `ga` |
+| Italian | `it` |
+| Japanese | `ja` |
+| Javanese | `jv` |
+| Kannada | `kn` |
+| Kazakh | `kk` |
+| Kinyarwanda | `rw` |
+| Kirghiz | `ky` |
+| Korean | `ko` |
+| Kurdish | `ku` |
+| Lao | `lo` |
+| Latin | `la` |
+| Latvian | `lv` |
+| Lithuanian | `lt` |
+| Luxembourgish | `lb` |
+| Macedonian | `mk` |
+| Malagasy | `mg` |
+| Malay | `ms` |
+| Malayalam | `ml` |
+| Maltese | `mt` |
+| Maori | `mi` |
+| Marathi | `mr` |
+| Mongolian | `mn` |
+| Nepali | `ne` |
+| Norwegian | `no` |
+| Norwegian Nynorsk | `nn` |
+| Odia | `or` |
+| Pashto | `ps` |
+| Persian | `fa` |
+| Polish | `pl` |
+| Portuguese | `pt` |
+| Punjabi | `pa` |
+| Queretaro Otomi | `otq` |
+| Romanian | `ro` |
+| Russian | `ru` |
+| Samoan | `sm` |
+| Serbian | `sr` |
+| Shona | `sn` |
+| Sindhi | `sd` |
+| Sinhala | `si` |
+| Slovak | `sk` |
+| Slovenian | `sl` |
+| Somali | `so` |
+| Spanish | `es` |
+| Sundanese | `su` |
+| Swahili | `sw` |
+| Swedish | `sv` |
+| Tagalog | `tl` |
+| Tahitian | `ty` |
+| Tajik | `tg` |
+| Tamil | `ta` |
+| Tatar | `tt` |
+| Telugu | `te` |
+| Thai | `th` |
+| Tibetan | `bo` |
+| Tigrinya | `ti` |
+| Tongan | `to` |
+| Turkish | `tr` |
+| Turkmen | `tk` |
+| Upper Sorbian | `hsb` |
+| Uyghur | `ug` |
+| Ukrainian | `uk` |
+| Urdu | `ur` |
+| Uzbek | `uz` |
+| Vietnamese | `vi` |
+| Welsh | `cy` |
+| Xhosa | `xh` |
+| Yiddish | `yi` |
+| Yoruba | `yo` |
+| Yucatec Maya | `yua` |
+| Zulu | `zu` |
## Romanized Indic Languages supported by Language Detection
-| Language | Language Code | Starting with model version: |
-||||
-| Assamese | `as` | 2022-10-01 |
-| Bengali | `bn` | 2022-10-01 |
-| Gujarati | `gu` | 2022-10-01 |
-| Hindi | `hi` | 2022-10-01 |
-| Kannada | `kn` | 2022-10-01 |
-| Malayalam | `ml` | 2022-10-01 |
-| Marathi | `mr` | 2022-10-01 |
-| Odia | `or` | 2022-10-01 |
-| Punjabi | `pa` | 2022-10-01 |
-| Tamil | `ta` | 2022-10-01 |
-| Telugu | `te` | 2022-10-01 |
-| Urdu | `ur` | 2022-10-01 |
+| Language | Language Code |
+|||
+| Assamese | `as` |
+| Bengali | `bn` |
+| Gujarati | `gu` |
+| Hindi | `hi` |
+| Kannada | `kn` |
+| Malayalam | `ml` |
+| Marathi | `mr` |
+| Odia | `or` |
+| Punjabi | `pa` |
+| Tamil | `ta` |
+| Telugu | `te` |
+| Urdu | `ur` |
## Next steps
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/named-entity-recognition/language-support.md
Previously updated : 06/27/2022 Last updated : 10/24/2023
Use this article to learn which natural languages are supported by the NER feature of Azure AI Language.

> [!NOTE]
-> * Languages are added as new [model versions](how-to-call.md#specify-the-ner-model) are released.
-> * The language support below is for model version `2023-04-15-preview` for the Generally Available API.
> * You can additionally find the language support for the Preview API in the second tab.

## NER language support
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/personally-identifiable-information/language-support.md
Previously updated : 08/02/2022 Last updated : 10/24/2023
Use this article to learn which natural languages are supported by the PII and conversation PII (preview) features of Azure AI Language.
-> [!NOTE]
-> * Languages are added as new [model versions](how-to-call.md#specify-the-pii-detection-model) are released.
-
# [PII for documents](#tab/documents)

## PII language support
-|Language |Language code|Starting with model version|Notes |
-||-|||
-|Afrikaans |`af` |2023-04-15-preview | |
-|Amharic |`am` |2023-04-15-preview | |
-|Arabic |`ar` |2023-01-01-preview | |
-|Assamese |`as` |2023-04-15-preview | |
-|Azerbaijani |`az` |2023-04-15-preview | |
-|Bulgarian |`bg` |2023-04-15-preview | |
-|Bengali |`bn` |2023-04-15-preview | |
-|Bosnian |`bs` |2023-04-15-preview | |
-|Catalan |`ca` |2023-04-15-preview | |
-|Czech |`cs` |2023-01-01-preview | |
-|Welsh |`cy` |2020-04-01 | |
-|Danish |`da` |2023-01-01-preview | |
-|German |`de` |2021-01-15 | |
-|Greek |`el` |2023-04-15-preview | |
-|English |`en` |2020-07-01 | |
-|Spanish |`es` |2020-04-01 | |
-|Estonian |`et` |2023-04-15-preview | |
-|Basque |`eu` |2023-04-15-preview | |
-|Persian |`fa` |2023-04-15-preview | |
-|Finnish |`fi` |2023-01-01-preview | |
-|French |`fr` |2021-01-15 | |
-|Irish |`ga` |2023-04-15-preview | |
-|Galician |`gl` |2023-04-15-preview | |
-|Gujarati |`gu` |2023-04-15-preview | |
-|Hebrew |`he` |2023-01-01-preview | |
-|Hindi |`hi` |2023-01-01-preview | |
-|Croatian |`hr` |2023-04-15-preview | |
-|Hungarian |`hu` |2023-01-01-preview | |
-|Armenian |`hy` |2023-04-15-preview | |
-|Italian |`it` |2021-01-15 | |
-|Indonesian |`id` |2023-04-15-preview | |
-|Japanese |`ja` |2021-01-15 | |
-|Georgian |`ka` |2023-04-15-preview | |
-|Kazakh |`kk` |2023-04-15-preview | |
-|Khmer |`km` |2023-04-15-preview | |
-|Kannada |`kn` |2023-04-15-preview | |
-|Korean |`ko` |2021-01-15 | |
-|Kurdish(Kurmanji) |`ku` |2023-04-15-preview | |
-|Kyrgyz |`ky` |2023-04-15-preview | |
-|Lao |`lo` |2023-04-15-preview | |
-|Lithuanian |`lt` |2023-04-15-preview | |
-|Latvian |`lv` |2023-04-15-preview | |
-|Malagasy |`mg` |2023-04-15-preview | |
-|Macedonian |`mk` |2023-04-15-preview | |
-|Malayalam |`ml` |2023-04-15-preview | |
-|Mongolian |`mn` |2023-04-15-preview | |
-|Marathi |`mr` |2023-04-15-preview | |
-|Malay |`ms` |2023-04-15-preview | |
-|Burmese |`my` |2023-04-15-preview | |
-|Nepali |`ne` |2023-04-15-preview | |
-|Dutch |`nl` |2023-01-01-preview | |
-|Norwegian (Bokmål) |`no` |2023-01-01-preview |`nb` also accepted|
-|Odia |`or` |2023-04-15-preview | |
-|Punjabi |`pa` |2023-04-15-preview | |
-|Polish |`pl` |2023-01-01-preview | |
-|Pashto |`ps` |2023-04-15-preview | |
-|Portuguese (Brazil) |`pt-BR` |2021-01-15 | |
-|Portuguese (Portugal)|`pt-PT` |2021-01-15 |`pt` also accepted|
-|Romanian |`ro` |2023-04-15-preview | |
-|Russian |`ru` |2023-01-01-preview | |
-|Slovak |`sk` |2023-04-15-preview | |
-|Slovenian |`sl` |2023-04-15-preview | |
-|Somali |`so` |2023-04-15-preview | |
-|Albanian |`sq` |2023-04-15-preview | |
-|Serbian |`sr` |2023-04-15-preview | |
-|Swazi |`ss` |2023-04-15-preview | |
-|Swedish |`sv` |2023-01-01-preview | |
-|Swahili |`sw` |2023-04-15-preview | |
-|Tamil |`ta` |2023-04-15-preview | |
-|Telugu |`te` |2023-04-15-preview | |
-|Thai |`th` |2023-04-15-preview | |
-|Turkish |`tr` |2023-01-01-preview | |
-|Uyghur |`ug` |2023-04-15-preview | |
-|Ukrainian |`uk` |2023-04-15-preview | |
-|Urdu |`ur` |2023-04-15-preview | |
-|Uzbek |`uz` |2023-04-15-preview | |
-|Vietnamese |`vi` |2023-04-15-preview | |
-|Chinese-Simplified |`zh-hans` |2021-01-15 |`zh` also accepted|
-|Chinese-Traditional |`zh-hant` |2023-01-01-preview | |
+|Language |Language code|Notes |
+||-||
+|Afrikaans |`af` | |
+|Amharic |`am` | |
+|Arabic |`ar` | |
+|Assamese |`as` | |
+|Azerbaijani |`az` | |
+|Bulgarian |`bg` | |
+|Bengali |`bn` | |
+|Bosnian |`bs` | |
+|Catalan |`ca` | |
+|Czech |`cs` | |
+|Welsh |`cy` | |
+|Danish |`da` | |
+|German |`de` | |
+|Greek |`el` | |
+|English |`en` | |
+|Spanish |`es` | |
+|Estonian |`et` | |
+|Basque |`eu` | |
+|Persian |`fa` | |
+|Finnish |`fi` | |
+|French |`fr` | |
+|Irish |`ga` | |
+|Galician |`gl` | |
+|Gujarati |`gu` | |
+|Hebrew |`he` | |
+|Hindi |`hi` | |
+|Croatian |`hr` | |
+|Hungarian |`hu` | |
+|Armenian |`hy` | |
+|Italian |`it` | |
+|Indonesian |`id` | |
+|Japanese |`ja` | |
+|Georgian |`ka` | |
+|Kazakh |`kk` | |
+|Khmer |`km` | |
+|Kannada |`kn` | |
+|Korean |`ko` | |
+|Kurdish(Kurmanji) |`ku` | |
+|Kyrgyz |`ky` | |
+|Lao |`lo` | |
+|Lithuanian |`lt` | |
+|Latvian |`lv` | |
+|Malagasy |`mg` | |
+|Macedonian |`mk` | |
+|Malayalam |`ml` | |
+|Mongolian |`mn` | |
+|Marathi |`mr` | |
+|Malay |`ms` | |
+|Burmese |`my` | |
+|Nepali |`ne` | |
+|Dutch |`nl` | |
+|Norwegian (Bokmål) |`no` |`nb` also accepted|
+|Odia |`or` | |
+|Punjabi |`pa` | |
+|Polish |`pl` | |
+|Pashto |`ps` | |
+|Portuguese (Brazil) |`pt-BR` | |
+|Portuguese (Portugal)|`pt-PT` |`pt` also accepted|
+|Romanian |`ro` | |
+|Russian |`ru` | |
+|Slovak |`sk` | |
+|Slovenian |`sl` | |
+|Somali |`so` | |
+|Albanian |`sq` | |
+|Serbian |`sr` | |
+|Swazi |`ss` | |
+|Swedish |`sv` | |
+|Swahili |`sw` | |
+|Tamil |`ta` | |
+|Telugu |`te` | |
+|Thai |`th` | |
+|Turkish |`tr` | |
+|Uyghur |`ug` | |
+|Ukrainian |`uk` | |
+|Urdu |`ur` | |
+|Uzbek |`uz` | |
+|Vietnamese |`vi` | |
+|Chinese-Simplified |`zh-hans` |`zh` also accepted|
+|Chinese-Traditional |`zh-hant` | |
# [PII for conversations (preview)](#tab/conversations)

## PII language support
-| Language | Language code | Starting with model version | Notes |
-|--|||--|
-|German |`de` |2023-04-15-preview | |
-|English |`en` |2022-05-15-preview | |
-|Spanish |`es` |2023-04-15-preview | |
-|French |`fr` |2023-04-15-preview | |
+| Language | Language code | Notes |
+|--||--|
+|German |`de` | |
+|English |`en` | |
+|Spanish |`es` | |
+|French |`fr` | |
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/sentiment-opinion-mining/language-support.md
Previously updated : 09/18/2023 Last updated : 10/24/2023
Use this article to learn which languages are supported by Sentiment Analysis and Opinion Mining. Both the cloud-based API and [Docker containers](./how-to/use-containers.md) support the same languages.
-> [!NOTE]
-> Languages are added as new [model versions](../concepts/model-lifecycle.md) are released.
-
## Sentiment Analysis language support

Total supported language codes: 94
-| Language | Language code | Starting with model version | Notes |
-|-|-|-|-|
-| Afrikaans | `af` | 2022-10-01 | |
-| Albanian | `sq` | 2022-10-01 | |
-| Amharic | `am` | 2022-10-01 | |
-| Arabic | `ar` | 2022-06-01 | |
-| Armenian | `hy` | 2022-10-01 | |
-| Assamese | `as` | 2022-10-01 | |
-| Azerbaijani | `az` | 2022-10-01 | |
-| Basque | `eu` | 2022-10-01 | |
-| Belarusian (new) | `be` | 2022-10-01 | |
-| Bengali | `bn` | 2022-10-01 | |
-| Bosnian | `bs` | 2022-10-01 | |
-| Breton (new) | `br` | 2022-10-01 | |
-| Bulgarian | `bg` | 2022-10-01 | |
-| Burmese | `my` | 2022-10-01 | |
-| Catalan | `ca` | 2022-10-01 | |
-| Chinese (Simplified) | `zh-hans` | 2019-10-01 | `zh` also accepted |
-| Chinese (Traditional) | `zh-hant` | 2019-10-01 | |
-| Croatian | `hr` | 2022-10-01 | |
-| Czech | `cs` | 2022-10-01 | |
-| Danish | `da` | 2022-06-01 | |
-| Dutch | `nl` | 2019-10-01 | |
-| English | `en` | 2019-10-01 | |
-| Esperanto (new) | `eo` | 2022-10-01 | |
-| Estonian | `et` | 2022-10-01 | |
-| Filipino | `fil` | 2022-10-01 | |
-| Finnish | `fi` | 2022-06-01 | |
-| French | `fr` | 2019-10-01 | |
-| Galician | `gl` | 2022-10-01 | |
-| Georgian | `ka` | 2022-10-01 | |
-| German | `de` | 2019-10-01 | |
-| Greek | `el` | 2022-06-01 | |
-| Gujarati | `gu` | 2022-10-01 | |
-| Hausa (new) | `ha` | 2022-10-01 | |
-| Hebrew | `he` | 2022-10-01 | |
-| Hindi | `hi` | 2020-04-01 | |
-| Hungarian | `hu` | 2022-10-01 | |
-| Indonesian | `id` | 2022-10-01 | |
-| Irish | `ga` | 2022-10-01 | |
-| Italian | `it` | 2019-10-01 | |
-| Japanese | `ja` | 2019-10-01 | |
-| Javanese (new) | `jv` | 2022-10-01 | |
-| Kannada | `kn` | 2022-10-01 | |
-| Kazakh | `kk` | 2022-10-01 | |
-| Khmer | `km` | 2022-10-01 | |
-| Korean | `ko` | 2019-10-01 | |
-| Kurdish (Kurmanji) | `ku` | 2022-10-01 | |
-| Kyrgyz | `ky` | 2022-10-01 | |
-| Lao | `lo` | 2022-10-01 | |
-| Latin (new) | `la` | 2022-10-01 | |
-| Latvian | `lv` | 2022-10-01 | |
-| Lithuanian | `lt` | 2022-10-01 | |
-| Macedonian | `mk` | 2022-10-01 | |
-| Malagasy | `mg` | 2022-10-01 | |
-| Malay | `ms` | 2022-10-01 | |
-| Malayalam | `ml` | 2022-10-01 | |
-| Marathi | `mr` | 2022-10-01 | |
-| Mongolian | `mn` | 2022-10-01 | |
-| Nepali | `ne` | 2022-10-01 | |
-| Norwegian | `no` | 2019-10-01 | |
-| Odia | `or` | 2022-10-01 | |
-| Oromo (new) | `om` | 2022-10-01 | |
-| Pashto | `ps` | 2022-10-01 | |
-| Persian | `fa` | 2022-10-01 | |
-| Polish | `pl` | 2022-06-01 | |
-| Portuguese (Portugal) | `pt-PT` | 2019-10-01 | `pt` also accepted |
-| Portuguese (Brazil) | `pt-BR` | 2019-10-01 | |
-| Punjabi | `pa` | 2022-10-01 | |
-| Romanian | `ro` | 2022-10-01 | |
-| Russian | `ru` | 2022-06-01 | |
-| Sanskrit (new) | `sa` | 2022-10-01 | |
-| Scottish Gaelic (new) | `gd` | 2022-10-01 | |
-| Serbian | `sr` | 2022-10-01 | |
-| Sindhi (new) | `sd` | 2022-10-01 | |
-| Sinhala (new) | `si` | 2022-10-01 | |
-| Slovak | `sk` | 2022-10-01 | |
-| Slovenian | `sl` | 2022-10-01 | |
-| Somali | `so` | 2022-10-01 | |
-| Spanish | `es` | 2019-10-01 | |
-| Sundanese (new) | `su` | 2022-10-01 | |
-| Swahili | `sw` | 2022-10-01 | |
-| Swedish | `sv` | 2022-06-01 | |
-| Tamil | `ta` | 2022-10-01 | |
-| Telugu | `te` | 2022-10-01 | |
-| Thai | `th` | 2022-10-01 | |
-| Turkish | `tr` | 2022-10-01 | |
-| Ukrainian | `uk` | 2022-10-01 | |
-| Urdu | `ur` | 2022-10-01 | |
-| Uyghur | `ug` | 2022-10-01 | |
-| Uzbek | `uz` | 2022-10-01 | |
-| Vietnamese | `vi` | 2022-10-01 | |
-| Welsh | `cy` | 2022-10-01 | |
-| Western Frisian (new) | `fy` | 2022-10-01 | |
-| Xhosa (new) | `xh` | 2022-10-01 | |
-| Yiddish (new) | `yi` | 2022-10-01 | |
+| Language | Language code | Notes |
+|-|-|-|
+| Afrikaans | `af` | |
+| Albanian | `sq` | |
+| Amharic | `am` | |
+| Arabic | `ar` | |
+| Armenian | `hy` | |
+| Assamese | `as` | |
+| Azerbaijani | `az` | |
+| Basque | `eu` | |
+| Belarusian (new) | `be` | |
+| Bengali | `bn` | |
+| Bosnian | `bs` | |
+| Breton (new) | `br` | |
+| Bulgarian | `bg` | |
+| Burmese | `my` | |
+| Catalan | `ca` | |
+| Chinese (Simplified) | `zh-hans` | `zh` also accepted |
+| Chinese (Traditional) | `zh-hant` | |
+| Croatian | `hr` | |
+| Czech | `cs` | |
+| Danish | `da` | |
+| Dutch | `nl` | |
+| English | `en` | |
+| Esperanto (new) | `eo` | |
+| Estonian | `et` | |
+| Filipino | `fil` | |
+| Finnish | `fi` | |
+| French | `fr` | |
+| Galician | `gl` | |
+| Georgian | `ka` | |
+| German | `de` | |
+| Greek | `el` | |
+| Gujarati | `gu` | |
+| Hausa (new) | `ha` | |
+| Hebrew | `he` | |
+| Hindi | `hi` | |
+| Hungarian | `hu` | |
+| Indonesian | `id` | |
+| Irish | `ga` | |
+| Italian | `it` | |
+| Japanese | `ja` | |
+| Javanese (new) | `jv` | |
+| Kannada | `kn` | |
+| Kazakh | `kk` | |
+| Khmer | `km` | |
+| Korean | `ko` | |
+| Kurdish (Kurmanji) | `ku` | |
+| Kyrgyz | `ky` | |
+| Lao | `lo` | |
+| Latin (new) | `la` | |
+| Latvian | `lv` | |
+| Lithuanian | `lt` | |
+| Macedonian | `mk` | |
+| Malagasy | `mg` | |
+| Malay | `ms` | |
+| Malayalam | `ml` | |
+| Marathi | `mr` | |
+| Mongolian | `mn` | |
+| Nepali | `ne` | |
+| Norwegian | `no` | |
+| Odia | `or` | |
+| Oromo (new) | `om` | |
+| Pashto | `ps` | |
+| Persian | `fa` | |
+| Polish | `pl` | |
+| Portuguese (Portugal) | `pt-PT` | `pt` also accepted |
+| Portuguese (Brazil) | `pt-BR` | |
+| Punjabi | `pa` | |
+| Romanian | `ro` | |
+| Russian | `ru` | |
+| Sanskrit (new) | `sa` | |
+| Scottish Gaelic (new) | `gd` | |
+| Serbian | `sr` | |
+| Sindhi (new) | `sd` | |
+| Sinhala (new) | `si` | |
+| Slovak | `sk` | |
+| Slovenian | `sl` | |
+| Somali | `so` | |
+| Spanish | `es` | |
+| Sundanese (new) | `su` | |
+| Swahili | `sw` | |
+| Swedish | `sv` | |
+| Tamil | `ta` | |
+| Telugu | `te` | |
+| Thai | `th` | |
+| Turkish | `tr` | |
+| Ukrainian | `uk` | |
+| Urdu | `ur` | |
+| Uyghur | `ug` | |
+| Uzbek | `uz` | |
+| Vietnamese | `vi` | |
+| Welsh | `cy` | |
+| Western Frisian (new) | `fy` | |
+| Xhosa (new) | `xh` | |
+| Yiddish (new) | `yi` | |
### Opinion Mining language support

Total supported language codes: 94
-| Language | Language code | Starting with model version | Notes |
-|-|-|-|-|
-| Afrikaans (new) | `af` | 2022-11-01 | |
-| Albanian (new) | `sq` | 2022-11-01 | |
-| Amharic (new) | `am` | 2022-11-01 | |
-| Arabic | `ar` | 2022-11-01 | |
-| Armenian (new) | `hy` | 2022-11-01 | |
-| Assamese (new) | `as` | 2022-11-01 | |
-| Azerbaijani (new) | `az` | 2022-11-01 | |
-| Basque (new) | `eu` | 2022-11-01 | |
-| Belarusian (new) | `be` | 2022-11-01 | |
-| Bengali | `bn` | 2022-11-01 | |
-| Bosnian (new) | `bs` | 2022-11-01 | |
-| Breton (new) | `br` | 2022-11-01 | |
-| Bulgarian (new) | `bg` | 2022-11-01 | |
-| Burmese (new) | `my` | 2022-11-01 | |
-| Catalan (new) | `ca` | 2022-11-01 | |
-| Chinese (Simplified) | `zh-hans` | 2022-11-01 | `zh` also accepted |
-| Chinese (Traditional) (new) | `zh-hant` | 2022-11-01 | |
-| Croatian (new) | `hr` | 2022-11-01 | |
-| Czech (new) | `cs` | 2022-11-01 | |
-| Danish | `da` | 2022-11-01 | |
-| Dutch | `nl` | 2022-11-01 | |
-| English | `en` | 2020-04-01 | |
-| Esperanto (new) | `eo` | 2022-11-01 | |
-| Estonian (new) | `et` | 2022-11-01 | |
-| Filipino (new) | `fil` | 2022-11-01 | |
-| Finnish | `fi` | 2022-11-01 | |
-| French | `fr` | 2021-10-01 | |
-| Galician (new) | `gl` | 2022-11-01 | |
-| Georgian (new) | `ka` | 2022-11-01 | |
-| German | `de` | 2021-10-01 | |
-| Greek | `el` | 2022-11-01 | |
-| Gujarati (new) | `gu` | 2022-11-01 | |
-| Hausa (new) | `ha` | 2022-11-01 | |
-| Hebrew (new) | `he` | 2022-11-01 | |
-| Hindi | `hi` | 2022-11-01 | |
-| Hungarian | `hu` | 2022-11-01 | |
-| Indonesian | `id` | 2022-11-01 | |
-| Irish (new) | `ga` | 2022-11-01 | |
-| Italian | `it` | 2021-10-01 | |
-| Japanese | `ja` | 2022-11-01 | |
-| Javanese (new) | `jv` | 2022-11-01 | |
-| Kannada (new) | `kn` | 2022-11-01 | |
-| Kazakh (new) | `kk` | 2022-11-01 | |
-| Khmer (new) | `km` | 2022-11-01 | |
-| Korean | `ko` | 2022-11-01 | |
-| Kurdish (Kurmanji) | `ku` | 2022-11-01 | |
-| Kyrgyz (new) | `ky` | 2022-11-01 | |
-| Lao (new) | `lo` | 2022-11-01 | |
-| Latin (new) | `la` | 2022-11-01 | |
-| Latvian (new) | `lv` | 2022-11-01 | |
-| Lithuanian (new) | `lt` | 2022-11-01 | |
-| Macedonian (new) | `mk` | 2022-11-01 | |
-| Malagasy (new) | `mg` | 2022-11-01 | |
-| Malay (new) | `ms` | 2022-11-01 | |
-| Malayalam (new) | `ml` | 2022-11-01 | |
-| Marathi | `mr` | 2022-11-01 | |
-| Mongolian (new) | `mn` | 2022-11-01 | |
-| Nepali (new) | `ne` | 2022-11-01 | |
-| Norwegian | `no` | 2022-11-01 | |
-| Odia (new) | `or` | 2022-11-01 | |
-| Oromo (new) | `om` | 2022-11-01 | |
-| Pashto (new) | `ps` | 2022-11-01 | |
-| Persian (new) | `fa` | 2022-11-01 | |
-| Polish | `pl` | 2022-11-01 | |
-| Portuguese (Portugal) | `pt-PT` | 2021-10-01 | `pt` also accepted |
-| Portuguese (Brazil) | `pt-BR` | 2021-10-01 | |
-| Punjabi (new) | `pa` | 2022-11-01 | |
-| Romanian (new) | `ro` | 2022-11-01 | |
-| Russian | `ru` | 2022-11-01 | |
-| Sanskrit (new) | `sa` | 2022-11-01 | |
-| Scottish Gaelic (new) | `gd` | 2022-11-01 | |
-| Serbian (new) | `sr` | 2022-11-01 | |
-| Sindhi (new) | `sd` | 2022-11-01 | |
-| Sinhala (new) | `si` | 2022-11-01 | |
-| Slovak (new) | `sk` | 2022-11-01 | |
-| Slovenian (new) | `sl` | 2022-11-01 | |
-| Somali (new) | `so` | 2022-11-01 | |
-| Spanish | `es` | 2021-10-01 | |
-| Sundanese (new) | `su` | 2022-11-01 | |
-| Swahili (new) | `sw` | 2022-11-01 | |
-| Swedish | `sv` | 2022-11-01 | |
-| Tamil | `ta` | 2022-11-01 | |
-| Telugu | `te` | 2022-11-01 | |
-| Thai (new) | `th` | 2022-11-01 | |
-| Turkish | `tr` | 2022-11-01 | |
-| Ukrainian (new) | `uk` | 2022-11-01 | |
-| Urdu (new) | `ur` | 2022-11-01 | |
-| Uyghur (new) | `ug` | 2022-11-01 | |
-| Uzbek (new) | `uz` | 2022-11-01 | |
-| Vietnamese (new) | `vi` | 2022-11-01 | |
-| Welsh (new) | `cy` | 2022-11-01 | |
-| Western Frisian (new) | `fy` | 2022-11-01 | |
-| Xhosa (new) | `xh` | 2022-11-01 | |
-| Yiddish (new) | `yi` | 2022-11-01 | |
+| Language | Language code | Notes |
+|-|-|-|
+| Afrikaans (new) | `af` | |
+| Albanian (new) | `sq` | |
+| Amharic (new) | `am` | |
+| Arabic | `ar` | |
+| Armenian (new) | `hy` | |
+| Assamese (new) | `as` | |
+| Azerbaijani (new) | `az` | |
+| Basque (new) | `eu` | |
+| Belarusian (new) | `be` | |
+| Bengali | `bn` | |
+| Bosnian (new) | `bs` | |
+| Breton (new) | `br` | |
+| Bulgarian (new) | `bg` | |
+| Burmese (new) | `my` | |
+| Catalan (new) | `ca` | |
+| Chinese (Simplified) | `zh-hans` | `zh` also accepted |
+| Chinese (Traditional) (new) | `zh-hant` | |
+| Croatian (new) | `hr` | |
+| Czech (new) | `cs` | |
+| Danish | `da` | |
+| Dutch | `nl` | |
+| English | `en` | |
+| Esperanto (new) | `eo` | |
+| Estonian (new) | `et` | |
+| Filipino (new) | `fil` | |
+| Finnish | `fi` | |
+| French | `fr` | |
+| Galician (new) | `gl` | |
+| Georgian (new) | `ka` | |
+| German | `de` | |
+| Greek | `el` | |
+| Gujarati (new) | `gu` | |
+| Hausa (new) | `ha` | |
+| Hebrew (new) | `he` | |
+| Hindi | `hi` | |
+| Hungarian | `hu` | |
+| Indonesian | `id` | |
+| Irish (new) | `ga` | |
+| Italian | `it` | |
+| Japanese | `ja` | |
+| Javanese (new) | `jv` | |
+| Kannada (new) | `kn` | |
+| Kazakh (new) | `kk` | |
+| Khmer (new) | `km` | |
+| Korean | `ko` | |
+| Kurdish (Kurmanji) | `ku` | |
+| Kyrgyz (new) | `ky` | |
+| Lao (new) | `lo` | |
+| Latin (new) | `la` | |
+| Latvian (new) | `lv` | |
+| Lithuanian (new) | `lt` | |
+| Macedonian (new) | `mk` | |
+| Malagasy (new) | `mg` | |
+| Malay (new) | `ms` | |
+| Malayalam (new) | `ml` | |
+| Marathi | `mr` | |
+| Mongolian (new) | `mn` | |
+| Nepali (new) | `ne` | |
+| Norwegian | `no` | |
+| Odia (new) | `or` | |
+| Oromo (new) | `om` | |
+| Pashto (new) | `ps` | |
+| Persian (new) | `fa` | |
+| Polish | `pl` | |
+| Portuguese (Portugal) | `pt-PT` | `pt` also accepted |
+| Portuguese (Brazil) | `pt-BR` | |
+| Punjabi (new) | `pa` | |
+| Romanian (new) | `ro` | |
+| Russian | `ru` | |
+| Sanskrit (new) | `sa` | |
+| Scottish Gaelic (new) | `gd` | |
+| Serbian (new) | `sr` | |
+| Sindhi (new) | `sd` | |
+| Sinhala (new) | `si` | |
+| Slovak (new) | `sk` | |
+| Slovenian (new) | `sl` | |
+| Somali (new) | `so` | |
+| Spanish | `es` | |
+| Sundanese (new) | `su` | |
+| Swahili (new) | `sw` | |
+| Swedish | `sv` | |
+| Tamil | `ta` | |
+| Telugu | `te` | |
+| Thai (new) | `th` | |
+| Turkish | `tr` | |
+| Ukrainian (new) | `uk` | |
+| Urdu (new) | `ur` | |
+| Uyghur (new) | `ug` | |
+| Uzbek (new) | `uz` | |
+| Vietnamese (new) | `vi` | |
+| Welsh (new) | `cy` | |
+| Western Frisian (new) | `fy` | |
+| Xhosa (new) | `xh` | |
+| Yiddish (new) | `yi` | |
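As a sketch, here's a request body for the Language API's `analyze-text` endpoint that runs sentiment analysis with opinion mining enabled; the document text and the `2022-05-01` API version are illustrative:

```json
{
  "kind": "SentimentAnalysis",
  "parameters": {
    "modelVersion": "latest",
    "opinionMining": true
  },
  "analysisInput": {
    "documents": [
      { "id": "1", "language": "en", "text": "The rooms were great, but the staff was unfriendly." }
    ]
  }
}
```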
## Multi-lingual option (Custom sentiment analysis only)
ai-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/text-analytics-for-health/how-to/call-api.md
There are two ways to call the service:
## Development options
---
-## Specify the Text Analytics for health model
-
-By default, Text Analytics for health will use the ("2022-03-01") model version on your text. You can also configure your API requests to use a specific model version. The model you specify will be used to perform operations provided by the Text Analytics for health. Extraction of social determinants of health entities along with their assertions and relationships (**only in English**) is supported with the latest preview model version "2023-04-15-preview".
-
-| Supported Versions | Status |
-|--|--|
-| `2023-04-15-preview` | Preview |
-| `2023-04-01` | Generally available |
-| `2023-01-01-preview` | Preview |
-| `2022-08-15-preview` | Preview |
-| `2022-03-01` | Generally available |
-
-## Specify the Text Analytics for health API version
-
-When making a Text Analytics for health API call, you must specify an API version. The latest generally available API version is "2023-04-01" which supports relationship confidence scores in the results. The latest preview API version is "2023-04-15-preview", offering the latest feature which is support for [temporal assertions](../concepts/assertion-detection.md).
-
-| Supported Versions | Status |
-|--|--|
-| `2023-04-15-preview`| Preview |
-| `2023-04-01`| Generally available |
-| `2022-10-01-preview` | Preview |
-| `2022-05-01` | Generally available |
--
-### Text Analytics for health container
-
-The [Text Analytics for health container](use-containers.md) uses separate model versioning than the REST API and client libraries. Only one model version is available per container image.
-
-| Endpoint | Container Image Tag | Model version |
-||--||
-| `/entities/health` | `3.0.59413252-onprem-amd64` (latest) | `2022-03-01` |
-| `/entities/health` | `3.0.59413252-latin-onprem-amd64` (latin) | `2022-08-15-preview` |
-| `/entities/health` | `3.0.59413252-semitic-onprem-amd64` (semitic) | `2022-08-15-preview` |
-| `/entities/health` | `3.0.016230002-onprem-amd64` | `2021-05-15` |
-| `/entities/health` | `3.0.015370001-onprem-amd64` | `2021-03-01` |
-| `/entities/health` | `1.1.013530001-amd64-preview` | `2020-09-03` |
-| `/entities/health` | `1.1.013150001-amd64-preview` | `2020-07-24` |
-| `/domains/health` | `1.1.012640001-amd64-preview` | `2020-05-08` |
-| `/domains/health` | `1.1.012420001-amd64-preview` | `2020-05-08` |
-| `/domains/health` | `1.1.012070001-amd64-preview` | `2020-04-16` |
### Input languages
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/text-analytics-for-health/language-support.md
Previously updated : 01/04/2023 Last updated : 10/24/2023
Use this article to learn which natural languages are supported by Text Analytics for health.
## Hosted API Service
-The hosted API service supports English language, model version 03-01-2022. Additional languages, English, Spanish, French, German Italian, Portuguese and Hebrew are supported with model version 2022-08-15-preview.
+The hosted API service supports the English, Spanish, French, German, Italian, Portuguese and Hebrew languages.
When structuring the API request, the relevant language tags must be added for these languages:
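A minimal sketch of the `documents` payload with explicit language tags (the surrounding request shape depends on the API version you call):

```json
{
  "documents": [
    { "id": "1", "language": "en", "text": "Patient was prescribed 100 mg of ibuprofen." },
    { "id": "2", "language": "es", "text": "Al paciente se le recetaron 100 mg de ibuprofeno." }
  ]
}
```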
## Docker container
-The docker container supports English language, model version 2022-03-01.
-Additional languages are also supported when using a docker container to deploy the API: Spanish, French, German Italian, Portuguese and Hebrew. This functionality is currently in preview, model version 2022-08-15-preview.
+The docker container supports the English, Spanish, French, German, Italian, Portuguese and Hebrew languages.
Full details for deploying the service in a container can be found [here](../text-analytics-for-health/how-to/use-containers.md). In order to download the new container images from the Microsoft public container registry, use the following [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command.
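For example, to pull the image behind the `latest` featured tag (registry path shown as commonly documented for this container; verify it against the linked how-to):

```
docker pull mcr.microsoft.com/azure-cognitive-services/textanalytics/healthcare:latest
```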
-## Details of the supported model versions for each language:
+## Details of the supported container tags:
-| Language Code | Model Version: | Featured Tag | Specific Tag |
-|:--|:-:|:-:|::|
-| `en` | 2022-03-01 | latest | 3.0.59413252-onprem-amd64 |
-| `en`, `es`, `it`, `fr`, `de`, `pt` | 2022-08-15-preview | latin | 3.0.60903415-latin-onprem-amd64 |
-| `he` | 2022-08-15-preview | semitic | 3.0.60903415-semitic-onprem-amd64 |
+| Language Code | Featured Tag | Specific Tag |
+|:--|:-:|::|
+| `en` | latest | 3.0.59413252-onprem-amd64 |
+| `en`, `es`, `it`, `fr`, `de`, `pt` | latin | 3.0.60903415-latin-onprem-amd64 |
+| `he` | semitic | 3.0.60903415-semitic-onprem-amd64 |
ai-services Content Filter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/content-filter.md
keywords:
> [!IMPORTANT]
> The content filtering system isn't applied to prompts and completions processed by the Whisper model in Azure OpenAI Service. Learn more about the [Whisper model in Azure OpenAI](models.md#whisper-preview).
-Azure OpenAI Service includes a content filtering system that works alongside core models. This system works by running both the prompt and completion through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions. Variations in API configurations and application design may affect completions and thus filtering behavior.
+Azure OpenAI Service includes a content filtering system that works alongside core models. This system works by running both the prompt and completion through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions. Variations in API configurations and application design might affect completions and thus filtering behavior.
-The content filtering models have been specifically trained and tested on the following languages: English, German, Japanese, Spanish, French, Italian, Portuguese, and Chinese. However, the service can work in many other languages, but the quality may vary. In all cases, you should do your own testing to ensure that it works for your application.
+The content filtering models have been specifically trained and tested on the following languages: English, German, Japanese, Spanish, French, Italian, Portuguese, and Chinese. However, the service can work in many other languages, but the quality might vary. In all cases, you should do your own testing to ensure that it works for your application.
-In addition to the content filtering system, the Azure OpenAI Service performs monitoring to detect content and/or behaviors that suggest use of the service in a manner that may violate applicable product terms. For more information about understanding and mitigating risks associated with your application, see the [Transparency Note for Azure OpenAI](/legal/cognitive-services/openai/transparency-note?tabs=text). For more information about how data is processed in connection with content filtering and abuse monitoring, see [Data, privacy, and security for Azure OpenAI Service](/legal/cognitive-services/openai/data-privacy?context=/azure/ai-services/openai/context/context#preventing-abuse-and-harmful-content-generation).
+In addition to the content filtering system, the Azure OpenAI Service performs monitoring to detect content and/or behaviors that suggest use of the service in a manner that might violate applicable product terms. For more information about understanding and mitigating risks associated with your application, see the [Transparency Note for Azure OpenAI](/legal/cognitive-services/openai/transparency-note?tabs=text). For more information about how data is processed in connection with content filtering and abuse monitoring, see [Data, privacy, and security for Azure OpenAI Service](/legal/cognitive-services/openai/data-privacy?context=/azure/ai-services/openai/context/context#preventing-abuse-and-harmful-content-generation).
The following sections provide information about the content filtering categories, the filtering severity levels and their configurability, and API scenarios to be considered in application design and implementation.
The content filtering system integrated in the Azure OpenAI Service contains neu
|Category|Description|
|--|--|
-|Safe | Content may be related to violence, self-harm, sexual, or hate categories but the terms are used in general, journalistic, scientific, medical, and similar professional contexts, which are appropriate for most audiences. |
+|Safe | Content might be related to violence, self-harm, sexual, or hate categories but the terms are used in general, journalistic, scientific, medical, and similar professional contexts, which are appropriate for most audiences. |
|Low | Content that expresses prejudiced, judgmental, or opinionated views, includes offensive use of language, stereotyping, use cases exploring a fictional world (for example, gaming, literature) and depictions at low intensity.|
| Medium | Content that uses offensive, insulting, mocking, intimidating, or demeaning language towards specific identity groups, includes depictions of seeking and executing harmful instructions, fantasies, glorification, promotion of harm at medium intensity. |
|High | Content that displays explicit and severe harmful instructions, actions, damage, or abuse; includes endorsement, glorification, or promotion of severe harmful acts, extreme or illegal forms of harm, radicalization, or non-consensual power exchange or abuse.|
Content filtering configurations are created within a Resource in Azure AI Studi
## Scenario details
-When the content filtering system detects harmful content, you'll receive either an error on the API call if the prompt was deemed inappropriate or the `finish_reason` on the response will be `content_filter` to signify that some of the completion was filtered. When building your application or system, you'll want to account for these scenarios where the content returned by the Completions API is filtered, which may result in content that is incomplete. How you act on this information will be application specific. The behavior can be summarized in the following points:
+When the content filtering system detects harmful content, you'll receive either an error on the API call if the prompt was deemed inappropriate or the `finish_reason` on the response will be `content_filter` to signify that some of the completion was filtered. When building your application or system, you'll want to account for these scenarios where the content returned by the Completions API is filtered, which might result in content that is incomplete. How you act on this information will be application specific. The behavior can be summarized in the following points:
- Prompts that are classified at a filtered category and severity level will return an HTTP 400 error.
- Non-streaming completions calls won't return any content when the content is filtered. The `finish_reason` value will be set to content_filter. In rare cases with longer responses, a partial result can be returned. In these cases, the `finish_reason` will be updated.
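As a rough sketch of handling both outcomes with the pre-1.0 `openai` Python package (the endpoint, key, and deployment name are placeholders, not values from this article):

```python
# Sketch: account for filtered prompts (HTTP 400) and filtered completions.
import openai

openai.api_type = "azure"
openai.api_base = "https://<your-resource>.openai.azure.com/"  # placeholder
openai.api_version = "2023-06-01-preview"
openai.api_key = "<your-key>"  # placeholder

try:
    response = openai.Completion.create(
        engine="<your-deployment-name>",  # placeholder deployment name
        prompt="Example prompt",
        max_tokens=100,
    )
    choice = response["choices"][0]
    if choice["finish_reason"] == "content_filter":
        # Part of the completion was filtered; the returned text may be incomplete.
        print("Filtered completion:", choice["text"])
    else:
        print(choice["text"])
except openai.error.InvalidRequestError as e:
    # Prompts classified at a filtered category and severity level return HTTP 400.
    print("Request rejected:", e)
```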
When annotations are enabled as shown in the code snippet below, the following i
Annotations are currently in preview for Completions and Chat Completions (GPT models); the following code snippet shows how to use annotations in preview:
+# [Python](#tab/python)
+```python
+# Note: The openai-python library support for Azure OpenAI is in preview.
+# os.getenv() for the endpoint and key assumes that you are using environment variables.
except openai.error.InvalidRequestError as e:
```
+# [JavaScript](#tab/javascript)
+
+[Azure OpenAI JavaScript SDK source code & samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/openai/openai)
+
+```javascript
+
+import { OpenAIClient, AzureKeyCredential } from "@azure/openai";
+
+// Load the .env file if it exists
+import * as dotenv from "dotenv";
+dotenv.config();
+
+// You will need to set these environment variables or edit the following values
+const endpoint = process.env["ENDPOINT"] || "<endpoint>";
+const azureApiKey = process.env["AZURE_API_KEY"] || "<api key>";
+
+const messages = [
+ { role: "system", content: "You are a helpful assistant. You will talk like a pirate." },
+ { role: "user", content: "Can you help me?" },
+ { role: "assistant", content: "Arrrr! Of course, me hearty! What can I do for ye?" },
+ { role: "user", content: "What's the best way to train a parrot?" },
+];
+
+export async function main() {
+ console.log("== Get completions Sample ==");
+
+ const client = new OpenAIClient(endpoint, new AzureKeyCredential(azureApiKey));
+ const deploymentId = "gpt-35-turbo"; //This needs to correspond to the name you chose when you deployed the model.
+ const events = await client.listChatCompletions(deploymentId, messages, { maxTokens: 128 });
+
+ for await (const event of events) {
+ for (const choice of event.choices) {
+ console.log(choice.message);
+ if (!choice.contentFilterResults) {
+ console.log("No content filter is found");
+ return;
+ }
+ if (choice.contentFilterResults.error) {
+ console.log(
+ `Content filter ran into the error ${choice.contentFilterResults.error.code}: ${choice.contentFilterResults.error.message}`
+ );
+ } else {
+ const { hate, sexual, selfHarm, violence } = choice.contentFilterResults;
+ console.log(
+ `Hate category is filtered: ${hate?.filtered} with ${hate?.severity} severity`
+ );
+ console.log(
+ `Sexual category is filtered: ${sexual?.filtered} with ${sexual?.severity} severity`
+ );
+ console.log(
+ `Self-harm category is filtered: ${selfHarm?.filtered} with ${selfHarm?.severity} severity`
+ );
+ console.log(
+ `Violence category is filtered: ${violence?.filtered} with ${violence?.severity} severity`
+ );
+ }
+ }
+ }
+}
+
+main().catch((err) => {
+ console.error("The sample encountered an error:", err);
+});
+```
++
For details on the inference REST API endpoints for Azure OpenAI and how to create Chat and Completions, please follow [Azure OpenAI Service REST API reference guidance](../reference.md). Annotations are returned for all scenarios when using `2023-06-01-preview`.

### Example scenario: An input prompt containing content that is classified at a filtered category and severity level is sent to the completions API
ai-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/reference.md
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
- `2023-06-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json) - `2023-07-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json) - `2023-08-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
+- `2023-09-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
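As a quick illustration, a hedged sketch of a raw REST call that targets one of these preview API versions; the resource name, deployment, and key are placeholders:

```python
# Sketch: call the completions endpoint with an explicit preview api-version.
import requests

endpoint = "https://<your-resource-name>.openai.azure.com"  # placeholder
deployment = "<deployment-id>"                              # placeholder
api_version = "2023-09-01-preview"

url = f"{endpoint}/openai/deployments/{deployment}/completions?api-version={api_version}"
headers = {"api-key": "<your-key>", "Content-Type": "application/json"}
body = {"prompt": "Hello", "max_tokens": 16}

response = requests.post(url, headers=headers, json=body)
print(response.status_code, response.json())
```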
**Request body**
ai-services How To Migrate To Prebuilt Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-migrate-to-prebuilt-neural-voice.md
# Migrate from prebuilt standard voice to prebuilt neural voice

> [!IMPORTANT]
-> We are retiring the standard voices from September 1, 2021 through August 31, 2024. If you used a standard voice with your Speech resource that was created prior to September 1, 2021 then you can continue to do so until August 31, 2024. All other Speech resources can only use prebuilt neural voices. You can choose from the supported [neural voice names](language-support.md?tabs=tts). After August 31, 2024 the standard voices won't be supported with any Speech resource.
+> We are retiring the standard voices from September 1, 2021 through August 31, 2024. If you used a standard voice with a Speech resource that was created prior to September 1, 2021, you can continue to do so until August 31, 2024. To use neural voices, choose voice names that include 'Neural' in their name, for example: en-US-JennyMultilingualNeural. All other Speech resources can only use prebuilt neural voices. You can choose from the supported [neural voice names](language-support.md?tabs=tts). After August 31, 2024, the standard voices won't be supported with any Speech resource.
> > The pricing for prebuilt standard voice is different from prebuilt neural voice. Go to the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) and check the pricing details in the collapsible "Deprecated" section. Prebuilt standard voice (retired) is referred to as **Standard**.
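For reference, a minimal sketch of selecting a prebuilt neural voice with the Speech SDK for Python; the key, region, and the `azure-cognitiveservices-speech` package are assumptions:

```python
# Minimal sketch: synthesize speech with a prebuilt neural voice (key/region assumed).
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")

# Choose a voice whose name includes "Neural" to use a prebuilt neural voice.
speech_config.speech_synthesis_voice_name = "en-US-JennyMultilingualNeural"

synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
result = synthesizer.speak_text_async("Hello from a neural voice.").get()
print(result.reason)
```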
aks Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-security.md
Node authorization is a special-purpose authorization mode that specifically aut
### Node deployment
-Nodes are deployed onto a private virtual network subnet, with no public IP addresses assigned. For troubleshooting and management purposes, SSH is enabled by default and only accessible using the internal IP address. Disabling SSH is during cluster and node pool creation, or for an existing cluster or node pool is in preview. See [Manage SSH access][manage-ssh-access] for more information.
+Nodes are deployed onto a private virtual network subnet, with no public IP addresses assigned. For troubleshooting and management purposes, SSH is enabled by default and only accessible using the internal IP address. Disabling SSH during cluster and node pool creation, or for an existing cluster or node pool, is in preview. See [Manage SSH access][manage-ssh-access] for more information.
### Node storage
For more information on core Kubernetes and AKS concepts, see:
[network-policy]: use-network-policies.md [microsoft-vulnerability-management-aks]: concepts-vulnerability-management.md [aks-vulnerability-management-nodes]: concepts-vulnerability-management.md#worker-nodes
-[manage-ssh-access]: manage-ssh-node-access.md
+[manage-ssh-access]: manage-ssh-node-access.md
aks Load Balancer Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/load-balancer-standard.md
description: Learn how to use a public load balancer with a Standard SKU to expo
Previously updated : 07/14/2023 Last updated : 10/30/2023 #Customer intent: As a cluster operator or developer, I want to learn how to create a service in AKS that uses an Azure Load Balancer with a Standard SKU.
spec:
The following annotations are supported for Kubernetes services with type `LoadBalancer`, and they only apply to **INBOUND** flows.
-| Annotation | Value | Description
-| -- | - |
-| `service.beta.kubernetes.io/azure-load-balancer-internal` | `true` or `false` | Specify whether the load balancer should be internal. If not set, it defaults to public.
-| `service.beta.kubernetes.io/azure-load-balancer-internal-subnet` | Name of the subnet | Specify which subnet the internal load balancer should be bound to. If not set, it defaults to the subnet configured in cloud config file.
-| `service.beta.kubernetes.io/azure-dns-label-name` | Name of the DNS label on Public IPs | Specify the DNS label name for the **public** service. If it's set to an empty string, the DNS entry in the Public IP isn't used.
-| `service.beta.kubernetes.io/azure-shared-securityrule` | `true` or `false` | Specify that the service should be exposed using an Azure security rule that might be shared with another service. Trade specificity of rules for an increase in the number of services that can be exposed. This annotation relies on the Azure [Augmented Security Rules](../virtual-network/network-security-groups-overview.md#augmented-security-rules) feature of Network Security groups.
-| `service.beta.kubernetes.io/azure-load-balancer-resource-group` | Name of the resource group | Specify the resource group of load balancer public IPs that aren't in the same resource group as the cluster infrastructure (node resource group).
-| `service.beta.kubernetes.io/azure-allowed-service-tags` | List of allowed service tags | Specify a list of allowed [service tags][service-tags] separated by commas.
-| `service.beta.kubernetes.io/azure-load-balancer-tcp-idle-timeout` | TCP idle timeouts in minutes | Specify the time in minutes for TCP connection idle timeouts to occur on the load balancer. The default and minimum value is 4. The maximum value is 30. The value must be an integer.
-| `service.beta.kubernetes.io/azure-load-balancer-ipv4` | IPv4 address | Specify the IPv4 address to assign to the load balancer.
-| `service.beta.kubernetes.io/azure-load-balancer-ipv6` | IPv6 address | Specify the IPv6 address to assign to the load balancer.
-
-> [!NOTE]
-> `service.beta.kubernetes.io/azure-load-balancer-disable-tcp-reset` was deprecated in Kubernetes 1.18 and removed in 1.20.
+| Annotation | Value | Description |
+|--|-|--|
+| `service.beta.kubernetes.io/azure-load-balancer-internal` | `true` or `false` | Specify whether the load balancer should be internal. If not set, it defaults to public. |
+| `service.beta.kubernetes.io/azure-load-balancer-internal-subnet` | Name of the subnet | Specify which subnet the internal load balancer should be bound to. If not set, it defaults to the subnet configured in cloud config file. |
+| `service.beta.kubernetes.io/azure-dns-label-name` | Name of the DNS label on Public IPs | Specify the DNS label name for the **public** service. If it's set to an empty string, the DNS entry in the Public IP isn't used. |
+| `service.beta.kubernetes.io/azure-shared-securityrule` | `true` or `false` | Specify that the service should be exposed through a potentially shared Azure security rule, trading rule specificity for a greater number of exposable services. This annotation relies on the Azure [Augmented Security Rules][augmented-security-rules] feature of Network Security groups. |
+| `service.beta.kubernetes.io/azure-load-balancer-resource-group` | Name of the resource group | Specify the resource group of load balancer public IPs that aren't in the same resource group as the cluster infrastructure (node resource group). |
+| `service.beta.kubernetes.io/azure-allowed-service-tags` | List of allowed service tags | Specify a list of allowed [service tags][service-tags] separated by commas. |
+| `service.beta.kubernetes.io/azure-load-balancer-tcp-idle-timeout` | TCP idle timeouts in minutes | Specify the time in minutes for TCP connection idle timeouts to occur on the load balancer. The default and minimum value is 4. The maximum value is 30. The value must be an integer. |
+| `service.beta.kubernetes.io/azure-load-balancer-disable-tcp-reset` | `true` or `false` | Specify whether the load balancer should disable TCP reset on idle timeout. |
+| `service.beta.kubernetes.io/azure-load-balancer-ipv4` | IPv4 address | Specify the IPv4 address to assign to the load balancer. |
+| `service.beta.kubernetes.io/azure-load-balancer-ipv6` | IPv6 address | Specify the IPv6 address to assign to the load balancer. |
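To make the mechanics concrete, here's a sketch that applies one of these annotations with the official `kubernetes` Python client; the service name, selector, and kubeconfig are assumptions, and a plain YAML manifest works just as well:

```python
# Sketch: create a LoadBalancer service with the internal load balancer annotation.
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig for the AKS cluster

service = client.V1Service(
    metadata=client.V1ObjectMeta(
        name="internal-app",  # illustrative name
        annotations={
            "service.beta.kubernetes.io/azure-load-balancer-internal": "true",
        },
    ),
    spec=client.V1ServiceSpec(
        type="LoadBalancer",
        selector={"app": "internal-app"},
        ports=[client.V1ServicePort(port=80, target_port=80)],
    ),
)

client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
```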
### Customize the load balancer health probe
-| Annotation | Value | Description |
-| - | -- | -- |
-| `service.beta.kubernetes.io/azure-load-balancer-health-probe-interval` | Health probe interval | |
-| `service.beta.kubernetes.io/azure-load-balancer-health-probe-num-of-probe` | The minimum number of unhealthy responses of health probe | |
-| `service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path` | Request path of the health probe | |
-| `service.beta.kubernetes.io/port_{port}_no_lb_rule` | true/false | {port} is the port number in the service. When it is set to true, no lb rule and health probe rule for this port will be generated. health check service should not be exposed to the public internet(e.g. istio/envoy health check service)|
-| `service.beta.kubernetes.io/port_{port}_no_probe_rule` | true/false | {port} is the port number in the service. When it is set to true, no health probe rule for this port will be generated. |
-| `service.beta.kubernetes.io/port_{port}_health-probe_protocol` | Health probe protocol | {port} is the port number in the service. Explicit protocol for the health probe for the service port {port}, overriding port.appProtocol if set.|
-| `service.beta.kubernetes.io/port_{port}_health-probe_port` | port number or port name in service manifest | {port} is the port number in the service. Explicit port for the health probe for the service port {port}, overriding the default value. |
-| `service.beta.kubernetes.io/port_{port}_health-probe_interval` | Health probe interval | {port} is port number of service. |
-| `service.beta.kubernetes.io/port_{port}_health-probe_num-of-probe` | The minimum number of unhealthy responses of health probe | {port} is port number of service. |
-| `service.beta.kubernetes.io/port_{port}_health-probe_request-path` | Request path of the health probe | {port} is port number of service. |
+| Annotation | Value | Description |
+|-|--|--|
+| `service.beta.kubernetes.io/azure-load-balancer-health-probe-interval` | Health probe interval | |
+| `service.beta.kubernetes.io/azure-load-balancer-health-probe-num-of-probe` | The minimum number of unhealthy responses of health probe | |
+| `service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path` | Request path of the health probe | |
+| `service.beta.kubernetes.io/port_{port}_no_lb_rule` | true/false | {port} is the service port number. When set to true, no load balancer rule or health probe rule for this port is generated. Use it for health check services that shouldn't be exposed to the public internet (for example, an istio/envoy health check service). |
+| `service.beta.kubernetes.io/port_{port}_no_probe_rule` | true/false | {port} is service port number. When set to true, no health probe rule for this port is generated. |
+| `service.beta.kubernetes.io/port_{port}_health-probe_protocol` | Health probe protocol | {port} is service port number. Explicit protocol for the health probe for the service port {port}, overriding port.appProtocol if set. |
+| `service.beta.kubernetes.io/port_{port}_health-probe_port` | port number or port name in service manifest | {port} is service port number. Explicit port for the health probe for the service port {port}, overriding the default value. |
+| `service.beta.kubernetes.io/port_{port}_health-probe_interval` | Health probe interval | {port} is service port number. |
+| `service.beta.kubernetes.io/port_{port}_health-probe_num-of-probe` | The minimum number of unhealthy responses of health probe | {port} is service port number. |
+| `service.beta.kubernetes.io/port_{port}_health-probe_request-path` | Request path of the health probe | {port} is service port number. |
As documented [here](../load-balancer/load-balancer-custom-probe-overview.md), Tcp, Http and Https are three protocols supported by load balancer service.
Since v1.20, service annotation `service.beta.kubernetes.io/azure-load-balancer-
Note that the request path would be ignored when using TCP or the `spec.ports.appProtocol` is empty. More specifically: | loadbalancer sku | `externalTrafficPolicy` | spec.ports.Protocol | spec.ports.AppProtocol | `service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path` | LB Probe Protocol | LB Probe Request Path |
-| - | -- | - | - | -- | | |
+|---|---|---|---|---|---|---|
| standard | local | any | any | any | http | `/healthz` |
| standard | cluster | udp | any | any | null | null |
| standard | cluster | tcp | | (ignored) | tcp | null |
Different ports in a service can require different health probe configurations.
The following annotations can be used to customize probe configuration per service port. | port specific annotation | global probe annotation | Usage |
-| - | | - |
+|---|---|---|
| service.beta.kubernetes.io/port_{port}_no_lb_rule | N/A (no equivalent globally) | If set to true, no lb rules and probe rules will be generated |
| service.beta.kubernetes.io/port_{port}_no_probe_rule | N/A (no equivalent globally) | If set to true, no probe rules will be generated |
| service.beta.kubernetes.io/port_{port}_health-probe_protocol | N/A (no equivalent globally) | Set the health probe protocol for this service port (e.g. Http, Https, Tcp) |
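Mirroring the client sketch earlier, per-port probe settings are just additional annotation keys with the port number substituted; the values below are illustrative only:

```python
# Illustrative per-port probe annotations for a service port 80; merge these into
# metadata.annotations on the Service object shown earlier.
probe_annotations = {
    "service.beta.kubernetes.io/port_80_health-probe_protocol": "Http",
    "service.beta.kubernetes.io/port_80_health-probe_request-path": "/healthz",
    "service.beta.kubernetes.io/port_80_health-probe_interval": "10",
    "service.beta.kubernetes.io/port_80_health-probe_num-of-probe": "2",
}
```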
To learn more about using internal load balancer for inbound traffic, see the [A
[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md [aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md [aks-sp]: kubernetes-service-principal.md#delegate-access-to-other-azure-resources
+[augmented-security-rules]: ../virtual-network/network-security-groups-overview.md#augmented-security-rules
[az-aks-show]: /cli/azure/aks#az_aks_show [az-aks-create]: /cli/azure/aks#az_aks_create [az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
aks Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/nat-gateway.md
This configuration requires bring-your-own networking (via [Kubenet][byo-vnet-ku
--assign-identity $IDENTITY_ID ```
-## Disable OutboundNAT for Windows (preview)
+## Disable OutboundNAT for Windows (Preview)
-Windows OutboundNAT can cause certain connection and communication issues with your AKS pods. Some of these issues include:
-
-* **Unhealthy backend status**: When you deploy an AKS cluster with [Application Gateway Ingress Control (AGIC)][agic] and [Application Gateway][app-gw] in different VNets, the backend health status becomes "Unhealthy." The outbound connectivity fails because the peered networked IP isn't present in the CNI config of the Windows nodes.
-* **Node port reuse**: Windows OutboundNAT uses port to translate your pod IP to your Windows node host IP, which can cause an unstable connection to the external service due to a port exhaustion issue.
-* **Invalid traffic routing to internal service endpoints**: When you create a load balancer service with `externalTrafficPolicy` set to *Local*, kube-proxy on Windows doesn't create the proper rules in the IPTables to route traffic to the internal service endpoints.
+Windows OutboundNAT can cause certain connection and communication issues with your AKS pods. One example is node port reuse: Windows OutboundNAT uses ports to translate your pod IP to your Windows node host IP, which can cause an unstable connection to the external service due to port exhaustion.
Windows enables OutboundNAT by default. You can now manually disable OutboundNAT when creating new Windows agent pools.
-> [!NOTE]
-> OutboundNAT can only be disabled on Windows Server 2019 node pools.
- ### Prerequisites
-* You need to use `aks-preview` and register the feature flag.
+* If you're using Kubernetes version 1.25 or older, you need to [update your deployment configuration][upgrade-kubernetes].
+* You need to install or update `aks-preview` and register the feature flag.
1. Install or update `aks-preview` using the [`az extension add`][az-extension-add] or [`az extension update`][az-extension-update] command.
- ```azurecli
- # Install aks-preview
-
- az extension add --name aks-preview
-
- # Update aks-preview
+ ```azurecli-interactive
+ # Install aks-preview
+ az extension add --name aks-preview
- az extension update --name aks-preview
- ```
+ # Update aks-preview
+ az extension update --name aks-preview
+ ```
2. Register the feature flag using the [`az feature register`][az-feature-register] command.
- ```azurecli
- az feature register --namespace Microsoft.ContainerService --name DisableWindowsOutboundNATPreview
- ```
+ ```azurecli-interactive
+ az feature register --namespace Microsoft.ContainerService --name DisableWindowsOutboundNATPreview
+ ```
3. Check the registration status using the [`az feature list`][az-feature-list] command.
- ```azurecli
- az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/DisableWindowsOutboundNATPreview')].{Name:name,State:properties.state}"
- ```
+ ```azurecli-interactive
+ az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/DisableWindowsOutboundNATPreview')].{Name:name,State:properties.state}"
+ ```
- 4. Refresh the registration of the `Microsoft.ContainerService` resource provider us
+ 4. Refresh the registration of the `Microsoft.ContainerService` resource provider using the [`az provider register`][az-provider-register] command.
- ```azurecli
- az provider register --namespace Microsoft.ContainerService
- ```
+ ```azurecli-interactive
+ az provider register --namespace Microsoft.ContainerService
+ ```
-* If you're using Kubernetes version 1.25 or older, you need to [update your deployment configuration][upgrade-kubernetes].
-* Cluster outbound type can't be set to load balancer.
-* If you need to switch from a load balancer to NAT gateway, you can either add a NAT gateway into the VNet or run [`az aks upgrade`][aks-upgrade] to update the outbound type.
+### Limitations
+
+* You can't set the cluster outbound type to LoadBalancer. You can set it to NAT Gateway or UDR:
+ * [NAT Gateway](./nat-gateway.md): NAT Gateway can automatically handle NAT connections and is more powerful than Standard Load Balancer. You might incur extra charges with this option.
+ * [UDR (UserDefinedRouting)](./limit-egress-traffic.md): You must keep port limitations in mind when configuring routing rules.
+ * If you need to switch from a load balancer to NAT Gateway, you can either add a NAT gateway into the VNet or run [`az aks upgrade`][aks-upgrade] to update the outbound type.
+
+> [!NOTE]
+> UserDefinedRouting has the following limitations:
+>
+> * SNAT by Load Balancer (must use the default OutboundNAT) has "64 ports on the host IP".
+> * SNAT by Azure Firewall (disable OutboundNAT) has 2496 ports per public IP.
+> * SNAT by NAT Gateway (disable OutboundNAT) has 64512 ports per public IP.
+> * If the Azure Firewall port range isn't enough for your application, you need to use NAT Gateway.
+> * Azure Firewall doesn't SNAT with Network rules when the destination IP address is in a private IP address range per [IANA RFC 1918 or shared address space per IANA RFC 6598](../firewall/snat-private-range.md).
### Manually disable OutboundNAT for Windows * Manually disable OutboundNAT for Windows when creating new Windows agent pools using the [`az aks nodepool add`][az-aks-nodepool-add] command with the `--disable-windows-outbound-nat` flag. > [!NOTE]
- > You can use an existing AKS cluster, but you may need to update the outbound type and add a node pool to enable `--disable-windows-outbound-nat`.
+ > You can use an existing AKS cluster, but you might need to update the outbound type and add a node pool to enable `--disable-windows-outbound-nat`.
- ```azurecli
+ ```azurecli-interactive
az aks nodepool add \ --resource-group myResourceGroup --cluster-name myNatCluster
For more information on Azure NAT Gateway, see [Azure NAT Gateway][nat-docs].
[az-network-nat-gateway-create]: /cli/azure/network/nat/gateway#az_network_nat_gateway_create [az-network-vnet-create]: /cli/azure/network/vnet#az_network_vnet_create [az-aks-nodepool-add]: /cli/azure/aks/nodepool#az_aks_nodepool_add
+[az-provider-register]: /cli/azure/provider#az_provider_register
api-management Self Hosted Gateway Settings Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-settings-reference.md
This guidance helps you provide the required information to define how to authen
| neighborhood.heartbeat.port | UDP port used for instances of a self-hosted gateway deployment to send heartbeats to other instances. | No | 4291 | v2.0+ | | policy.rate-limit.sync.port | UDP port used for self-hosted gateway instances to synchronize rate limiting across multiple instances. | No | 4290 | v2.0+ |
+## Kubernetes integration
+
+### Kubernetes Ingress
+
+> [!IMPORTANT]
+> Support for Kubernetes Ingress is currently experimental and not covered through Azure Support. Learn more on [GitHub](https://github.com/Azure/api-management-self-hosted-gateway-ingress).
+
+| Name | Description | Required | Default | Availability |
+|------|-------------|----------|---------|--------------|
+| k8s.ingress.enabled | Enable Kubernetes Ingress integration. | No | `false` | v1.2+ |
+| k8s.ingress.namespace | Kubernetes namespace to watch Kubernetes Ingress resources in. | No | `default` | v1.2+ |
+| k8s.ingress.dns.suffix | DNS suffix to build DNS hostname for services to send requests to. | No | `svc.cluster.local` | v2.4+ |
+| k8s.ingress.config.path | Path to Kubernetes configuration (Kubeconfig). | No | N/A | v2.4+ |
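As a hedged sketch, assuming your self-hosted gateway deployment reads these settings from a ConfigMap (the ConfigMap name and namespace below are assumptions based on a typical deployment template), the integration could be toggled like this:

```python
# Sketch: patch the gateway's ConfigMap to enable the experimental Ingress support.
from kubernetes import client, config

config.load_kube_config()

patch = {"data": {
    "k8s.ingress.enabled": "true",
    "k8s.ingress.namespace": "apps",  # namespace to watch (illustrative)
}}

# ConfigMap name/namespace are assumptions; gateway pods typically need a restart
# to pick up ConfigMap changes.
client.CoreV1Api().patch_namespaced_config_map(
    name="my-gateway-env", namespace="apim-gateway", body=patch
)
```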
+ ## Metrics | Name | Description | Required | Default | Availability |
application-gateway Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/troubleshooting-guide.md
Logs can be collected from the ALB Controller by using the _kubectl logs_ comman
You should see the following if the pod is primary: `successfully acquired lease azure-alb-system/alb-controller-leader-election` 2. Collect the logs
- Logs from ALB Controller will be returned in JSON format.
+
+ Logs from ALB Controller will be returned in JSON format.
Execute the following kubectl command, replacing the name with the pod name returned in step 1: ```bash
automation Enable Vms Monitoring Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/enable-vms-monitoring-agent.md
Last updated 06/28/2023
-# Enable Change Tracking and Inventory using Azure Monitoring Agent (Preview)
+# Enable Change Tracking and Inventory using Azure Monitoring Agent
-**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: Windows Registry :heavy_check_mark: Windows Files :heavy_check_mark: Linux Files :heavy_check_mark: Windows Software
+**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: Windows Registry :heavy_check_mark: Windows Files :heavy_check_mark: Linux Files :heavy_check_mark: Windows Software :heavy_check_mark: File Content Changes
-> [!IMPORTANT]
-> Currently, the policies to enable Change tracking and inventory with Azure monitoring Agent are in preview. For a seamless policy experience, we recommend that you begin by enabling the *Microsoft.Compute/AutomaticExtensionUpgradePreview* feature flag for your specific subscription. To register for this feature flag, go to **Azure portal** > **Subscriptions** > *Select specific subscription name*. In the **Preview features**, select **Automatic Extension Upgrade Preview** and then select **Register**. :::image type="content" source="media/enable-vms-monitoring-agent/enable-feature-flag.png" alt-text="Screenshot to register the feature flag.":::
- This article describes how you can enable [Change Tracking and Inventory](overview.md) for single and multiple Azure Virtual Machines (VMs) from the Azure portal. ## Prerequisites
automation Guidance Migration Log Analytics Monitoring Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/guidance-migration-log-analytics-monitoring-agent.md
+
+ Title: Migration guidance from Change Tracking and inventory using Log Analytics to Azure Monitoring Agent
+description: An overview on how to migrate from Change Tracking and inventory using Log Analytics to Azure Monitoring Agent.
++++ Last updated : 09/14/2023+++
+# Migration guidance from Change Tracking and inventory using Log Analytics to Change Tracking and inventory using Azure Monitoring Agent version
+
+**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: Azure Arc-enabled servers.
+
+This article provides guidance to move from Change Tracking and Inventory using Log Analytics version to the Azure Monitoring Agent version.
+
+## Onboarding to Change tracking and inventory using Azure Monitoring Agent
+
+### [Using Azure portal - for single VM](#tab/ct-single-vm)
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and select your virtual machine.
+1. Under **Operations**, select **Change tracking**.
+1. Select **Configure with AMA**, and in **Configure with Azure monitor agent**, provide the **Log analytics workspace**, and then select **Migrate** to initiate the deployment.
+
+ :::image type="content" source="media/guidance-migration-log-analytics-monitoring-agent/onboarding-single-vm-inline.png" alt-text="Screenshot of onboarding a single VM to Change tracking and inventory using Azure monitoring agent." lightbox="media/guidance-migration-log-analytics-monitoring-agent/onboarding-single-vm-expanded.png":::
+
+1. Select **Switch to CT&I with AMA** to evaluate the incoming events and logs across LA agent and AMA version.
+
+ :::image type="content" source="media/guidance-migration-log-analytics-monitoring-agent/switch-versions-inline.png" alt-text="Screenshot that shows switching between log analytics and Azure Monitoring Agent after a successful migration." lightbox="media/guidance-migration-log-analytics-monitoring-agent/switch-versions-expanded.png":::
+
+### [Using Azure portal - for Automation account](#tab/ct-at-scale)
+
+1. Sign in to [Azure portal](https://portal.azure.com) and select your Automation account.
+1. Under **Configuration Management**, select **Change tracking** and then select **Configure with AMA**.
+
+ :::image type="content" source="media/guidance-migration-log-analytics-monitoring-agent/onboarding-at-scale-inline.png" alt-text="Screenshot of onboarding at scale to Change tracking and inventory using Azure monitoring agent." lightbox="media/guidance-migration-log-analytics-monitoring-agent/onboarding-at-scale-expanded.png":::
+
+1. On the **Onboarding to Change Tracking with Azure Monitoring** page, you can view your automation account and the list of machines that are currently on Log Analytics and ready to be onboarded to the Azure Monitoring Agent version of Change Tracking and inventory.
+1. On the **Assess virtual machines** tab, select the machines and then select **Next**.
+1. On the **Assign workspace** tab, assign a new [Log Analytics workspace resource ID](#obtain-log-analytics-workspace-resource-id) in which the settings of the AMA-based solution should be stored, and select **Next**.
+
+ :::image type="content" source="media/guidance-migration-log-analytics-monitoring-agent/assign-workspace-inline.png" alt-text="Screenshot of assigning new Log Analytics resource ID." lightbox="media/guidance-migration-log-analytics-monitoring-agent/assign-workspace-expanded.png":::
+
+1. On **Review** tab, you can review the machines that are being onboarded and the new workspace.
+1. Select **Migrate** to initiate the deployment.
+
+1. After a successful migration, select **Switch to CT&I with AMA** to compare both the LA and AMA experience.
+
+ :::image type="content" source="media/guidance-migration-log-analytics-monitoring-agent/switch-versions-inline.png" alt-text="Screenshot that shows switching between log analytics and Azure Monitoring Agent after a successful migration." lightbox="media/guidance-migration-log-analytics-monitoring-agent/switch-versions-expanded.png":::
++
+### [Using PowerShell script](#tab/ps-policy)
+
+#### Prerequisites
+
+- Ensure that you have the Windows PowerShell console installed. Follow the steps to [install Windows PowerShell](https://learn.microsoft.com/powershell/scripting/windows-powershell/install/installing-windows-powershell?view=powershell-7.3).
+- We recommend that you use PowerShell version 7.1.3 or higher.
+- Obtain Read access for the specified workspace resources.
+- Ensure that you have `Az.Accounts` and `Az.OperationalInsights` modules installed. The `Az.PowerShell` module is used to pull workspace agent configuration information.
+- Ensure that you have the Azure credentials to run `Connect-AzAccount` and `Select-AzContext`, which set the context for the script to run.
+
+Follow these steps to migrate using scripts.
+
+#### Migration guidance
+
+1. Install the script that conducts the migration.
+1. Ensure that the new workspace resource ID is different from the one associated with Change Tracking and Inventory using the LA version.
+1. Migrate settings for the following data types:
+ - Windows Services
+ - Linux Files
+ - Windows Files
+ - Windows Registry
+ - Linux Daemons
+1. Generate and associate a new DCR to transfer the settings to Change Tracking and Inventory using AMA.
+
+#### Onboard at scale
+
+Use the [script](https://github.com/mayguptMSFT/AzureMonitorCommunity/blob/master/Azure%20Services/Azure%20Monitor/Agents/Migration%20Tools/DCR%20Config%20Generator/CTDcrGenerator/CTWorkSpaceSettingstoDCR.ps1) to migrate Change tracking workspace settings to data collection rule.
+
+#### Parameters
+
+| **Parameter** | **Required** | **Description** |
+|---|---|---|
+| `InputWorkspaceResourceId` | Yes | Resource ID of the workspace associated with Change Tracking & Inventory with Log Analytics. |
+| `OutputWorkspaceResourceId` | Yes | Resource ID of the workspace associated with Change Tracking & Inventory with Azure Monitoring Agent. |
+| `OutputDCRName` | Yes | Custom name of the new DCR created. |
+| `OutputDCRLocation` | Yes | Azure location of the output workspace ID. |
+| `OutputDCRTemplateFolderPath` | Yes | Folder path where DCR templates are created. |
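To show how the parameters fit together, here's a hypothetical invocation, driven from Python via PowerShell 7 (`pwsh` on PATH is an assumption, and every value is a placeholder):

```python
# Hypothetical invocation of the migration script; all values are placeholders.
import subprocess

subprocess.run([
    "pwsh", "-File", "CTWorkSpaceSettingstoDCR.ps1",
    "-InputWorkspaceResourceId", "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<la-workspace>",
    "-OutputWorkspaceResourceId", "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<ama-workspace>",
    "-OutputDCRName", "ct-ama-dcr",
    "-OutputDCRLocation", "eastus",
    "-OutputDCRTemplateFolderPath", "./dcr-templates",
], check=True)
```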
+++
+### Obtain Log Analytics Workspace Resource ID
+
+To obtain the Log Analytics Workspace resource ID, follow these steps:
+
+1. Sign in to [Azure portal](https://portal.azure.com)
+1. In **Log Analytics Workspace**, select the specific workspace and select **Json View**.
+1. Copy the **Resource ID**.
++
+## Limitations
+
+### [Using Azure portal](#tab/limit-single-vm)
+
+**For single VM and Automation Account**
+
+1. Up to 100 VMs per Automation account can be migrated in one instance.
+1. Migrating a VM with more than 100 file/registry settings through the portal isn't currently supported.
+1. Arc VM migration isn't supported through the portal; we recommend that you use the PowerShell script migration instead.
+1. For file content change settings, you must migrate manually from the LA version to the AMA version of Change Tracking & Inventory. Follow the guidance listed in [Track file contents](manage-change-tracking-monitoring-agent.md#configure-file-content-changes).
+1. Alerts that you configure using the Log Analytics Workspace must be [manually configured](configure-alerts.md).
+
+### [Using PowerShell script](#tab/limit-policy)
+
+1. For file content change settings, you must migrate manually from the LA version to the AMA version of Change Tracking & Inventory. Follow the guidance listed in [Track file contents](manage-change-tracking.md#track-file-contents).
+1. Alerts that you configure using the Log Analytics Workspace must be [manually configured](configure-alerts.md).
+++
+## Disable Change tracking using Log Analytics Agent
+
+After you enable management of your virtual machines using Change Tracking and Inventory with Azure Monitoring Agent, you might decide to stop using Change Tracking & Inventory with the LA agent version and remove the configuration from the account.
+
+To disable it, use one of the following methods:
+- [Remove change tracking with the LA agent for selected VMs within the Log Analytics workspace](remove-vms-from-change-tracking.md).
+- [Remove change tracking with the LA agent from the entire Log Analytics workspace](remove-feature.md).
+
+## Next steps
+- To enable from the Azure portal, see [Enable Change Tracking and Inventory from the Azure portal](../change-tracking/enable-vms-monitoring-agent.md).
+
automation Manage Change Tracking Monitoring Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/manage-change-tracking-monitoring-agent.md
Title: Manage change tracking and inventory in Azure Automation using Azure Monitoring Agent (Preview)
-description: This article tells how to use change tracking and inventory to track software and Microsoft service changes in your environment using Azure Monitoring Agent (Preview)
+ Title: Manage change tracking and inventory in Azure Automation using Azure Monitoring Agent
+description: This article tells how to use change tracking and inventory to track software and Microsoft service changes in your environment using Azure Monitoring Agent
Last updated 07/17/2023
-# Manage change tracking and inventory using Azure Monitoring Agent (Preview)
+# Manage change tracking and inventory using Azure Monitoring Agent
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: Windows Registry :heavy_check_mark: Windows Files :heavy_check_mark: Linux Files :heavy_check_mark: Windows Software
automation Overview Monitoring Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/overview-monitoring-agent.md
Title: Azure Automation Change Tracking and Inventory overview using Azure Monitoring Agent (Preview)
-description: This article describes the Change Tracking and Inventory feature using Azure monitoring agent (Preview), which helps you identify software and Microsoft service changes in your environment.
+ Title: Azure Automation Change Tracking and Inventory overview using Azure Monitoring Agent
+description: This article describes the Change Tracking and Inventory feature using Azure monitoring agent, which helps you identify software and Microsoft service changes in your environment.
Previously updated : 09/08/2023 Last updated : 10/02/2023
-# Overview of change tracking and inventory using Azure Monitoring Agent (Preview)
+# Overview of change tracking and inventory using Azure Monitoring Agent
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: Windows Registry :heavy_check_mark: Windows Files :heavy_check_mark: Linux Files :heavy_check_mark: Windows Software :heavy_check_mark: Windows Services & Linux Daemons

> [!Important]
-> Currently, Change tracking and inventory uses Log Analytics Agent and this is scheduled to retire by 31.August.2024. We recommend that you use Azure Monitoring Agent as the new supporting agent.
-> Guidance on migration from Change Tracking & Inventory using Log Analytics agent to Azure Monitoring Agent will be available once it is generally available.
+> - Currently, Change tracking and inventory uses the Log Analytics agent, which is scheduled to retire by 31 August 2024. We recommend that you use Azure Monitoring Agent as the new supporting agent.
+> - Guidance on migration from Change Tracking & Inventory using Log Analytics agent to Azure Monitoring Agent will be available once it is generally available. [Learn more](guidance-migration-log-analytics-monitoring-agent.md).
+> - We recommend that you use Change Tracking with Azure Monitoring Agent with the Change tracking extension version 2.20.0.0 (or above) to access the GA version of this service.
-This article explains on the latest version of change tracking support using Azure Monitoring Agent (Preview) as a singular agent for data collection.
+This article explains the latest version of change tracking support, which uses Azure Monitoring Agent as a single agent for data collection.
+
+> [!NOTE]
+> The [Current GA version](../../defender-for-cloud/file-integrity-monitoring-enable-log-analytics.md) of File Integrity Monitoring based on Log Analytics agent, will be deprecated in August 2024, and a **new version will be provided over MDE soon**.  The **[FIM Public Preview](../../defender-for-cloud/file-integrity-monitoring-enable-ama.md) based on Azure Monitor Agent (AMA), will be deprecated when the alternative is provided over MDE**. Hence, the FIM with AMA Public Preview version is not planned for GA. Read the announcement [here](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/microsoft-defender-for-cloud-strategy-and-plan-towards-log/ba-p/3883341).
## Key benefits
so that all VMs point to a single workspace for data collection and maintenance.
## Current limitations
-Change Tracking and Inventory using Azure Monitoring Agent (Preview) doesn't support or has the following limitations:
+Change Tracking and Inventory using Azure Monitoring Agent has the following limitations and unsupported features:
- Recursion for Windows registry tracking
- Network file systems
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/overview.md
The following table shows the tracked item limits per machine for Change Trackin
|Services|250|
|Daemons|250|
-The average Log Analytics data usage for a machine using Change Tracking and Inventory is approximately 40 MB per month, depending on your environment. With the Usage and Estimated Costs feature of the Log Analytics workspace, you can view the data ingested by Change Tracking and Inventory in a usage chart. Use this data view to evaluate your data usage and determine how it affects your bill. See [Understand your usage and estimate costs](../../azure-monitor/usage-estimated-costs.md).
+The average Log Analytics data usage for a machine using Change Tracking and Inventory is approximately 40 MB per month, depending on your environment. With the Usage and Estimated Costs feature of the Log Analytics workspace, you can view the data ingested by Change Tracking and Inventory in a usage chart. Use this data view to evaluate your data usage and determine how it affects your bill. See [Understand your usage and estimate costs](../../azure-monitor/cost-usage.md#usage-and-estimated-costs).
### Windows services data
automation Region Mappings Monitoring Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/region-mappings-monitoring-agent.md
Title: Supported regions for Change tracking and inventory using Azure Monitoring Agent (Preview)
+ Title: Supported regions for Change tracking and inventory using Azure Monitoring Agent
description: This article describes the supported region mappings between an Automation account and monitoring agent workspace as it relates to certain features of Azure Automation. Last updated 12/14/2022
-# Supported regions for Change tracking and inventory Azure Monitoring Agent (Preview)
+# Supported regions for Change tracking and inventory Azure Monitoring Agent
This article provides the supported regions for change tracking and inventory using Azure Monitoring Agent.
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/overview.md
Configuration Management in Azure Automation is supported by two capabilities:
### Change Tracking and Inventory
-Change Tracking and Inventory combines functions to allow you to track Linux and Windows virtual machine and server infrastructure changes. The service supports change tracking across services, daemons, software, registry, and files in your environment to help you diagnose unwanted changes and raise alerts. Inventory support allows you to query in-guest resources for visibility into installed applications and other configuration items. For details of this feature, see [Change Tracking and Inventory](change-tracking/overview.md).
+[Change Tracking and Inventory](change-tracking/overview.md) combines functions to allow you to track Linux and Windows virtual machine and server infrastructure changes. The service supports change tracking across services, daemons, software, registry, and files in your environment to help you diagnose unwanted changes and raise alerts. Inventory support allows you to query in-guest resources for visibility into installed applications and other configuration items. Change Tracking & Inventory is now supported with the Azure Monitoring Agent version. [Learn more](change-tracking/overview-monitoring-agent.md).
### Azure Automation State Configuration
Azure Automation supports management throughout the lifecycle of your infrastruc
- Subscription management. - Start-stop resources to save cost. * **Monitoring & integrate** with 1st party (through Azure Monitor) or 3rd party external systems.
- - Ensure resource creation\deletion operations is captured to SQL.
+ - Ensure resource creation/deletion operations are captured to SQL.
- Send resource usage data to web API.
- - Send monitoring data to ServiceNow, Event Hub, New Relic and so on.
+ - Send monitoring data to ServiceNow, Event Hubs, New Relic and so on.
- Collect and store information about Azure resources. - Perform SQL monitoring checks & reporting. - Check website availability.
Azure Automation supports management throughout the lifecycle of your infrastruc
* **Find changes** - Identify and isolate machine changes that can cause misconfiguration and improve operational compliance. Remediate or escalate them to management systems.
-Depending on your requirements, one or more of the following Azure services integrate with or compliment Azure Automation to help fullfil them:
+Depending on your requirements, one or more of the following Azure services integrate with or complement Azure Automation to help fulfill them:
* [Azure Arc-enabled servers](../azure-arc/servers/overview.md) enables simplified onboarding of hybrid machines to Update Management, Change Tracking and Inventory, and the Hybrid Runbook Worker role. * [Azure Alerts action groups](../azure-monitor/alerts/action-groups.md) can initiate an Automation runbook when an alert is raised.
automation Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/onboarding.md
Failed to configure automation account for diagnostic logging
#### Cause
-This error can be caused if the pricing tier doesn't match the subscription's billing model. For more information, see [Monitoring usage and estimated costs in Azure Monitor](../../azure-monitor//usage-estimated-costs.md).
+This error can be caused if the pricing tier doesn't match the subscription's billing model. For more information, see [Monitoring usage and estimated costs in Azure Monitor](../../azure-monitor/cost-usage.md#usage-and-estimated-costs).
#### Resolution
automation Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new.md
This page is updated monthly, so revisit it regularly. If you're looking for ite
## October 2023 +
+### General Availability: Change Tracking using Azure Monitoring Agent
+
+Azure Automation announces General Availability of Change Tracking using Azure Monitoring Agent. [Learn more](change-tracking/guidance-migration-log-analytics-monitoring-agent.md).
+ ### Retirement of Run As accounts **Type: Retirement**
azure-app-configuration Concept Snapshots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-snapshots.md
For stores that use HMAC authentication, both the "read snapshot" operation (to
## Billing considerations and limits
-The storage quota for snapshots is detailed in the "storage per resource section" of the [App Configuration pricing page](https://azure.microsoft.com/pricing/details/app-configuration/) There's no extra charge for snapshots before the included snapshot storage quota is exhausted.
App Configuration has two tiers, Free and Standard. Check the following details for snapshot quotas in each tier.

* **Free tier**: This tier has a snapshot storage quota of 10 MB. You can create as many snapshots as you need, as long as the total storage size of all active and archived snapshots is less than 10 MB.
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/overview.md
Title: Azure Arc resource bridge (preview) overview description: Learn how to use Azure Arc resource bridge (preview) to support VM self-servicing on Azure Stack HCI, VMware, and System Center Virtual Machine Manager. Previously updated : 10/26/2023 Last updated : 10/31/2023
While Azure has a number of redundancy features at every level of failure, if a
The following private cloud environments and their versions are officially supported for Arc resource bridge:
-* VMware vSphere version 6.7, 7.0, 8.0
+* VMware vSphere version 7.0, 8.0
* Azure Stack HCI * SCVMM
Arc resource bridge typically releases a new version on a monthly cadence, at th
* Learn more about [how Azure Arc-enabled VMware vSphere extends Azure's governance and management capabilities to VMware vSphere infrastructure](../vmware-vsphere/overview.md). * Learn more about [provisioning and managing on-premises Windows and Linux VMs running on Azure Stack HCI clusters](/azure-stack/hci/manage/azure-arc-enabled-virtual-machines).
-* Review the [system requirements](system-requirements.md) for deploying and managing Arc resource bridge.
+* Review the [system requirements](system-requirements.md) for deploying and managing Arc resource bridge.
azure-arc Prepare Extended Security Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prepare-extended-security-updates.md
We recommend you deploy your machines to Azure Arc in preparation for when the r
There are several at-scale onboarding options for Azure Arc-enabled servers, including running a [Custom Task Sequence](onboard-configuration-manager-custom-task.md) through Configuration Manager and deploying a [Scheduled Task through Group Policy](onboard-group-policy-powershell.md). > [!NOTE]
-> Delivery of ESUs through Azure Arc to virtual machines running on Virtual Desktop Infrastructure (VDI) is not supported. VDI systems should use Multiple Activation Keys (MAK) to apply ESUs. See [Access your Multiple Activation Key from the Microsoft 365 Admin Center](/windows-server/get-started/extended-security-updates-deploy) to learn more.
+> Delivery of ESUs through Azure Arc to virtual machines running on Virtual Desktop Infrastructure (VDI) is not recommended. VDI systems should use Multiple Activation Keys (MAK) to apply ESUs. See [Access your Multiple Activation Key from the Microsoft 365 Admin Center](/windows-server/get-started/extended-security-updates-deploy) to learn more.
> ### Networking
azure-arc Azure Arc Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/azure-arc-agent.md
Title: Azure Arc agent description: Learn about Azure Arc agent Previously updated : 10/23/2023 Last updated : 10/31/2023
# Azure Arc agent
-The Azure Connected Machine agent enables you to manage your Windows and Linux machines hosted outside of Azure on your corporate network or other cloud providers.
+When you [enable guest management](enable-guest-management-at-scale.md) on VMware VMs, the Azure Arc agent is installed on them. The Azure Connected Machine agent enables you to manage your Windows and Linux machines hosted outside of Azure on your corporate network or other cloud providers. This article provides an architectural overview of the Azure Connected Machine agent.
## Agent components
azure-arc Browse And Enable Vcenter Resources In Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/browse-and-enable-vcenter-resources-in-azure.md
Title: Enable your VMware vCenter resources in Azure description: Learn how to browse your vCenter inventory and represent a subset of your VMware vCenter resources in Azure to enable self-service. Previously updated : 11/06/2023 Last updated : 10/31/2023
In this section, you will enable resource pools, networks, and other non-VM reso
For information on the capabilities enabled by a guest agent, see [Manage access to VMware resources through Azure RBAC](setup-and-manage-self-service-access.md).
+>[!NOTE]
+>Moving VMware vCenter resources between Resource Groups and Subscriptions is currently not supported.
+
## Next steps
-- [Manage access to VMware resources through Azure RBAC](setup-and-manage-self-service-access.md).
+[Manage access to VMware resources through Azure RBAC](setup-and-manage-self-service-access.md).
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/overview.md
Title: What is Azure Arc-enabled VMware vSphere (preview)? description: Azure Arc-enabled VMware vSphere extends Azure governance and management capabilities to VMware vSphere infrastructure and delivers a consistent management experience across both platforms. Previously updated : 08/21/2023 Last updated : 10/31/2023
You have the flexibility to start with either option, and incorporate the other
## Supported VMware vSphere versions
-Azure Arc-enabled VMware vSphere (preview) currently works with vCenter Server versions 6.7, 7, and 8.
+Azure Arc-enabled VMware vSphere (preview) currently works with vCenter Server versions 7 and 8.
+ > [!NOTE] > Azure Arc-enabled VMware vSphere (preview) supports vCenters with a maximum of 9500 VMs. If your vCenter has more than 9500 VMs, we don't recommend you to use Arc-enabled VMware vSphere with it at this point.
azure-arc Switch To New Preview Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/switch-to-new-preview-version.md
If you're an existing **Azure Arc-enabled VMware** customer, for VMs that are Az
5. Once the resources are re-enabled, the VMs are automatically switched to the new preview version. The VM resources will now be represented as **Machine - Azure Arc (VMware)**. :::image type="content" source="media/switch-to-new-preview-version/new-vm-browse-view-inline.png" alt-text="Screenshot of New VM browse view." lightbox="media/switch-to-new-preview-version/new-vm-browse-view-expanded.png":::
-
+
## Next steps

[Quickstart: Connect VMware vCenter Server to Azure Arc by using the helper script](/azure/azure-arc/vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script).
azure-functions Dotnet Isolated In Process Differences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-in-process-differences.md
Use the following table to compare feature and functional differences between th
| Logging | [ILogger&lt;T&gt;]/[ILogger] obtained from [FunctionContext](/dotnet/api/microsoft.azure.functions.worker.functioncontext) or via [dependency injection](dotnet-isolated-process-guide.md#dependency-injection)| [ILogger] passed to the function<br/>[ILogger&lt;T&gt;] via [dependency injection](functions-dotnet-dependency-injection.md) | | Application Insights dependencies | [Supported](./dotnet-isolated-process-guide.md#application-insights) | [Supported](functions-monitoring.md#dependencies) | | Cancellation tokens | [Supported](dotnet-isolated-process-guide.md#cancellation-tokens) | [Supported](functions-dotnet-class-library.md#cancellation-tokens) |
-| Cold start times<sup>2</sup> | [Configurable optimizations (preview)](./dotnet-isolated-process-guide.md#performance-optimizations) | Optimized |
+| Cold start times<sup>2</sup> | [Configurable optimizations](./dotnet-isolated-process-guide.md#performance-optimizations) | Optimized |
| ReadyToRun | [Supported](dotnet-isolated-process-guide.md#readytorun) | [Supported](functions-dotnet-class-library.md#readytorun) | <sup>1</sup> When you need to interact with a service using parameters determined at runtime, using the corresponding service SDKs directly is recommended over using imperative bindings. The SDKs are less verbose, cover more scenarios, and have advantages for error handling and debugging purposes. This recommendation applies to both models.
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md
The following example performs clean-up actions if a cancellation request has be
This section outlines options you can enable to improve performance around [cold start](./event-driven-scaling.md#cold-start).
-### Placeholders (preview)
+In general, your app should use the latest versions of its core dependencies. At a minimum, you should update your project as follows:
-Placeholders are a platform capability that improves cold start. Normally, you do not have to be aware of them, but during the preview period for placeholders for .NET Isolated, they require some opt-in configuration. Placeholders require .NET 6 or later. To enable placeholders:
+- Upgrade [Microsoft.Azure.Functions.Worker] to version 1.19.0 or later.
+- Upgrade [Microsoft.Azure.Functions.Worker.Sdk] to version 1.15.1 or later.
+- Add a framework reference to `Microsoft.AspNetCore.App`, unless your app targets .NET Framework.
-- Set the `WEBSITE_USE_PLACEHOLDER_DOTNETISOLATED` application setting to "1"
-- Ensure that the `netFrameworkVersion` property of the function app matches your project's target framework, which must be .NET 6 or later.
-- Ensure that the function app is configured to use a 64-bit process.
-- Update your project file:
- - Upgrade [Microsoft.Azure.Functions.Worker] to version 1.19.0 or later
- - Upgrade [Microsoft.Azure.Functions.Worker.Sdk] to version 1.14.1 or later
- - Add a framework reference to `Microsoft.AspNetCore.App`
- - Set the property `FunctionsEnableWorkerIndexing` to "True".
- - Set the property `FunctionsAutoRegisterGeneratedMetadataProvider` to "True"
-
-> [!NOTE]
-> Setting `FunctionsEnableWorkerIndexing` to "True" may cause an issue when debugging locally using version 4.0.5274 or earlier of the [Azure Functions Core Tools](./functions-run-local.md). The issue manifests with the debugger not being able to attach. If you encounter this issue, remove the `FunctionsEnableWorkerIndexing` property during local testing.
-
-The following CLI commands will set the application setting, update the `netFrameworkVersion` property, and make the app run as 64-bit. Replace `<groupName>` with the name of the resource group, and replace `<appName>` with the name of your function app. Replace `<framework>` with the appropriate version string, such as "v6.0" or "v7.0", according to your target .NET version.
-
-```azurecli
-az functionapp config appsettings set -g <groupName> -n <appName> --settings 'WEBSITE_USE_PLACEHOLDER_DOTNETISOLATED=1'
-az functionapp config set -g <groupName> -n <appName> --net-framework-version <framework>
-az functionapp config set -g <groupName> -n <appName> --use-32bit-worker-process false
-```
-
-The following example shows a project file with the appropriate changes in place:
+The following example shows this configuration in the context of a project file:
```xml
-<Project Sdk="Microsoft.NET.Sdk">
- <PropertyGroup>
- <TargetFramework>net6.0</TargetFramework>
- <AzureFunctionsVersion>v4</AzureFunctionsVersion>
- <OutputType>Exe</OutputType>
- <ImplicitUsings>enable</ImplicitUsings>
- <Nullable>enable</Nullable>
- <FunctionsEnableWorkerIndexing>True</FunctionsEnableWorkerIndexing>
- <FunctionsAutoRegisterGeneratedMetadataProvider>True</FunctionsAutoRegisterGeneratedMetadataProvider>
- </PropertyGroup>
  <ItemGroup>
    <FrameworkReference Include="Microsoft.AspNetCore.App" />
    <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.19.0" />
- <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.14.1" />
- </ItemGroup>
- <ItemGroup>
- <None Update="host.json">
- <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
- </None>
- <None Update="local.settings.json">
- <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
- <CopyToPublishDirectory>Never</CopyToPublishDirectory>
- </None>
- </ItemGroup>
- <ItemGroup>
- <Using Include="System.Threading.ExecutionContext" Alias="ExecutionContext" />
+ <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.15.1" />
</ItemGroup>
-</Project>
```
-### Optimized executor (preview)
+### Placeholders
-The function executor is a component of the platform that causes invocations to run. By default, it makes use of reflection, but a newer version is available in preview which removes this performance overhead. Normally, you do not have to be aware of this component, but during the preview period of the new version, it requires some opt-in configuration.
+Placeholders are a platform capability that improves cold start for apps targeting .NET 6 or later. The feature requires some opt-in configuration. To enable placeholders:
-To enable the optimized executor, you must update your project file:
+- **Update your project as detailed in the preceding section.**
+- Additionally, when using version 1.15.1 or earlier of `Microsoft.Azure.Functions.Worker.Sdk`, you must add two properties to the project file:
+ - Set the property `FunctionsEnableWorkerIndexing` to "True".
+ - Set the property `FunctionsAutoRegisterGeneratedMetadataProvider` to "True".
+- Set the `WEBSITE_USE_PLACEHOLDER_DOTNETISOLATED` application setting to "1".
+- Ensure that the `netFrameworkVersion` property of the function app matches your project's target framework, which must be .NET 6 or later.
+- Ensure that the function app is configured to use a 64-bit process.
-- Upgrade [Microsoft.Azure.Functions.Worker] to version 1.19.0 or later
-- Upgrade [Microsoft.Azure.Functions.Worker.Sdk] to version 1.14.1 or later
-- Set the property `FunctionsEnableExecutorSourceGen` to "True"
+> [!IMPORTANT]
+> Setting the `WEBSITE_USE_PLACEHOLDER_DOTNETISOLATED` application setting to "1" requires all other aspects of the configuration to be set correctly. Any deviation can cause startup failures.
-The following example shows a project file with the appropriate changes in place:
+The following CLI commands will set the application setting, update the `netFrameworkVersion` property, and make the app run as 64-bit. Replace `<groupName>` with the name of the resource group, and replace `<appName>` with the name of your function app. Replace `<framework>` with the appropriate version string, such as "v6.0", "v7.0", or "v8.0", according to your target .NET version.
-```xml
-<Project Sdk="Microsoft.NET.Sdk">
- <PropertyGroup>
- <TargetFramework>net6.0</TargetFramework>
- <AzureFunctionsVersion>v4</AzureFunctionsVersion>
- <OutputType>Exe</OutputType>
- <ImplicitUsings>enable</ImplicitUsings>
- <Nullable>enable</Nullable>
- <FunctionsEnableExecutorSourceGen>True</FunctionsEnableExecutorSourceGen>
- </PropertyGroup>
- <ItemGroup>
- <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.19.0" />
- <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.14.1" />
- </ItemGroup>
- <ItemGroup>
- <None Update="host.json">
- <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
- </None>
- <None Update="local.settings.json">
- <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
- <CopyToPublishDirectory>Never</CopyToPublishDirectory>
- </None>
- </ItemGroup>
- <ItemGroup>
- <Using Include="System.Threading.ExecutionContext" Alias="ExecutionContext" />
- </ItemGroup>
-</Project>
+```azurecli
+az functionapp config appsettings set -g <groupName> -n <appName> --settings 'WEBSITE_USE_PLACEHOLDER_DOTNETISOLATED=1'
+az functionapp config set -g <groupName> -n <appName> --net-framework-version <framework>
+az functionapp config set -g <groupName> -n <appName> --use-32bit-worker-process false
```
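
If you want to confirm the configuration afterward, a query along these lines can help. This is a convenience sketch using standard Azure CLI commands, not part of the original steps:

```azurecli
# Inspect the site configuration for the framework version and bitness.
az functionapp config show -g <groupName> -n <appName> \
    --query "{netFrameworkVersion: netFrameworkVersion, use32BitWorkerProcess: use32BitWorkerProcess}"

# Confirm the placeholder app setting is present.
az functionapp config appsettings list -g <groupName> -n <appName> \
    --query "[?name=='WEBSITE_USE_PLACEHOLDER_DOTNETISOLATED'].value" -o tsv
```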
+### Optimized executor
+
+The function executor is a component of the platform that causes invocations to run. An optimized version of this component is available. When you use version 1.15.1 or earlier of the SDK, it requires opt-in configuration. To enable the optimized executor, update your project file:
+
+- **Update your project as detailed in the preceding section.**
+- Additionally, set the property `FunctionsEnableExecutorSourceGen` to "True".
+
### ReadyToRun

You can compile your function app as [ReadyToRun binaries](/dotnet/core/deploying/ready-to-run). ReadyToRun is a form of ahead-of-time compilation that can improve startup performance to help reduce the effect of cold starts when running in a [Consumption plan](consumption-plan.md). ReadyToRun is available in .NET 6 and later versions and requires [version 4.0 or later](functions-versions.md) of the Azure Functions runtime.
ReadyToRun requires you to build the project against the runtime architecture of
| Linux | True | N/A (not supported) | | Linux | False | `linux-x64` |
-<sup>1</sup> Only 64-bit apps are eligible for some other performance optimizations such as [placeholders](#placeholders-preview).
+<sup>1</sup> Only 64-bit apps are eligible for some other performance optimizations.
To check if your Windows app is 32-bit or 64-bit, you can run the following CLI command, substituting `<group_name>` with the name of your resource group and `<app_name>` with the name of your application. An output of "true" indicates that the app is 32-bit, and "false" indicates 64-bit.
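A sketch of such a command, assuming the standard Azure CLI site-config query (placeholders as described above):

```azurecli
# "true" means the app runs as a 32-bit process; "false" means 64-bit.
az functionapp config show -g <group_name> -n <app_name> --query use32BitWorkerProcess -o tsv
```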
azure-maps Migrate From Bing Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-app.md
If migrating an existing web application, check to see if it's using an open-sou
* [Leaflet] – Lightweight 2D map control for the web. [Leaflet code samples] \| [Leaflet plugin]
* [OpenLayers] - A 2D map control for the web that supports projections. <!--[OpenLayers code samples] \|--> [OpenLayers plugin]
-If developing using a JavaScript framework, one of the following open-source projects may be useful:
+If developing using a JavaScript framework, one of the following open-source projects can be useful:
* [ng-azure-maps] - Angular 10 wrapper around Azure maps. * [AzureMapsControl.Components] - An Azure Maps Blazor component.
Azure Maps more [open-source modules for the web SDK] that extend its capabiliti
The following are some of the key differences between the Bing Maps and Azure Maps Web SDKs to be aware of: * In addition to providing a hosted endpoint for accessing the Azure Maps Web SDK, an npm package is available for embedding the Web SDK into apps if preferred. For more information, see [Use the Azure Maps map control]. This package also includes TypeScript definitions.
-* Bing Maps provides two hosted branches of their SDK; Release and Experimental. The Experimental branch may receive multiple updates a day when new development is taking place. Azure Maps only hosts a release branch, however experimental features are created as custom modules in the open-source Azure Maps code samples project. Bing Maps used to have a frozen branch as well that was updated less frequently, thus reducing the risk of breaking changes due to a release. In Azure Maps, you can use the npm module and point to any previous minor version release.
+* Bing Maps provides two hosted branches of their SDK; Release and Experimental. The Experimental branch can receive multiple updates a day when new development is taking place. Azure Maps only hosts a release branch, however experimental features are created as custom modules in the open-source Azure Maps code samples project. Bing Maps used to have a frozen branch as well that was updated less frequently, thus reducing the risk of breaking changes due to a release. In Azure Maps, you can use the npm module and point to any previous minor version release.
> [!TIP] > Azure Maps publishes both minified and unminified versions of the SDK. Simply remove `.min` from the file names. The unminified version is useful when debugging issues but be sure to use the minified version in production to take advantage of the smaller file size.
The following code shows how to load a map with the same view in Azure Maps alon
Running this code in a browser displays a map that looks like the following image:
-![Azure Maps map](media/migrate-bing-maps-web-app/azure-maps-load-map.jpg)
For more information on how to set up and use the Azure Maps map control in a web app, see [Use the Azure Maps map control].
map = new atlas.Map('myMap', {
Here's an example of Azure Maps with the language set to "fr" and the user region set to `fr-FR`.
-![Localized Azure Maps map](media/migrate-bing-maps-web-app/bing-maps-localized-map.jpg)
+![Localized Azure Maps map](media/migrate-bing-maps-web-app/azure-maps-localized-map.jpg)
### Setting the map view
map.setStyle({
}); ```
-![Azure Maps set map view](media/migrate-bing-maps-web-app/azure-maps-set-map-view.jpg)
**More resources**
map.markers.add(new atlas.HtmlMarker({
})); ```
-![Azure Maps add marker](media/migrate-bing-maps-web-app/azure-maps-add-pushpin.jpg)
**After: Azure Maps using a Symbol Layer**
When using a Symbol layer, the data must be added to a data source, and the data
</html> ```
-![Azure Maps add symbol layer](media/migrate-bing-maps-web-app/azure-maps-add-pushpin.jpg)
**More resources**
When using a Symbol layer, the data must be added to a data source, and the data
Custom images can be used to represent points on a map. The following image is used in the below examples and uses a custom image to display a point on the map at (latitude: 51.5, longitude: -0.2) and offsets the position of the marker so that the point of the pushpin icon aligns with the correct position on the map.
-| ![Azure Maps add puspin](media/migrate-bing-maps-web-app/yellow-pushpin.png)|
+| ![Azure Maps add pushpin.](media/migrate-bing-maps-web-app/yellow-pushpin.png)|
|:--:| | yellow-pushpin.png |
layer.add(pushpin);
map.layers.insert(layer); ```
-![Bing Maps add custom puspin](media/migrate-bing-maps-web-app/bing-maps-add-custom-pushpin.jpg)
+![Bing Maps add custom pushpin](media/migrate-bing-maps-web-app/bing-maps-add-custom-pushpin.jpg)
**After: Azure Maps using HTML Markers**
map.markers.add(new atlas.HtmlMarker({
})); ```
-![Azure Maps add custom marker](media/migrate-bing-maps-web-app/azure-maps-add-custom-marker.jpg)
**After: Azure Maps using a Symbol Layer** Symbol layers in Azure Maps support custom images as well, but the image needs to be loaded into the map resources first and assigned a unique ID. The symbol layer can then reference this ID. The symbol can be offset to align to the correct point on the image by using the icon `offset` option. In Azure Maps, an `anchor` option is used to specify the relative position of the symbol relative to the position coordinate using one of nine defined reference points; "center", "top", "bottom", "left", "right", "top-left", "top-right", "bottom-left", "bottom-right". The content is anchored and set to "bottom" by default that is the bottom center of the HTML content. To make it easier to migrate code from Bing Maps, set the anchor to "top-left", and then use the `offset` option with the same offset used in Bing Maps. The offsets in Azure Maps move in the opposite direction of Bing Maps, so multiply them by minus one.
-```javascript
+```html
<!DOCTYPE html> <html> <head>
map.layers.add(new atlas.layer.LineLayer(datasource, null, {
})); ```
-![Azure Maps line](media/migrate-bing-maps-web-app/azure-maps-line.jpg)
**More resources**
layer.add(polygon);
map.layers.insert(layer); ```
-![Bing Maps polyogn](media/migrate-bing-maps-web-app/azure-maps-polygon.jpg)
+![Bing Maps polygon](media/migrate-bing-maps-web-app/bing-maps-polygon.jpg)
**After: Azure Maps**
map.layers.add(new atlas.layer.LineLayer(datasource, null, {
})); ```
-![Azure Maps polyogn](media/migrate-bing-maps-web-app/azure-maps-polygon.jpg)
**More resources**
map.events.add('click', marker, function () {
}); ```
-![Azure Maps popup](media/migrate-bing-maps-web-app/azure-maps-popup.jpg)
> [!NOTE] > To do the same thing with a symbol, bubble, line or polygon layer, pass the layer into the maps event code instead of a marker.
The `DataSource` class has the following helper function for accessing additiona
| Function | Return type | Description | |-|--|--|
-| `getClusterChildren(clusterId: number)` | `Promise<Feature<Geometry, any> | Shape>` | Retrieves the children of the given cluster on the next zoom level. These children may be a combination of shapes and subclusters. The subclusters are features with properties matching cluster properties. |
+| `getClusterChildren(clusterId: number)` | `Promise<Feature<Geometry, any> | Shape>` | Retrieves the children of the given cluster on the next zoom level. These children can be a combination of shapes and subclusters. The subclusters are features with properties matching cluster properties. |
| `getClusterExpansionZoom(clusterId: number)` | `Promise<number>` | Calculates a zoom level that the cluster starts expanding or break apart. | | `getClusterLeaves(clusterId: number, limit: number, offset: number)` | `Promise<Feature<Geometry, any> | Shape>` | Retrieves all points in a cluster. Set the `limit` to return a subset of the points and use the `offset` to page through the points. |
GeoJSON data can be directly imported in Azure Maps using the `importDataFromUrl
</html> ```
-![Azure Maps clustering](media/migrate-bing-maps-web-app/azure-maps-clustering.jpg)
**More resources**
In Azure Maps, georeferenced images can be overlaid using the `atlas.layer.Image
</html> ```
-![Azure Maps ground overlay](media/migrate-bing-maps-web-app/azure-maps-ground-overlay.jpg)
**More resources**
In Azure Maps, GeoJSON is the main data format used in the web SDK, more spatial
</html> ```
-![Azure Maps kml](media/migrate-bing-maps-web-app/azure-maps-kml.jpg)
**More resources**
In Azure Maps, the drawing tools module needs to be loaded by loading the JavaSc
</html> ```
-![Azure Maps drawing tools](media/migrate-bing-maps-web-app/azure-maps-drawing-tools.jpg)
> [!TIP] > In Azure Maps layers the drawing tools provide multiple ways that users can draw shapes. For example, when drawing a polygon the user can click to add each point, or hold the left mouse button down and drag the mouse to draw a path. This can be modified using the `interactionType` option of the `DrawingManager`.
Learn more about migrating from Bing Maps to Azure Maps.
[atlas.io.read function]: /javascript/api/azure-maps-spatial-io/atlas.io#read-stringarraybufferblob--spatialdatareadoptions- [atlas.layer.ImageLayer.getCoordinatesFromEdges]: /javascript/api/azure-maps-control/atlas.layer.imagelayer#getcoordinatesfromedges-number--number--number--number--number- [atlas.Shape]: /javascript/api/azure-maps-control/atlas.shape
-[Azure AD]: azure-maps-authentication.md#azure-ad-authentication
[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account [Azure Maps Glossary]: glossary.md [Azure Maps React Component]: https://github.com/WiredSolutions/react-azure-maps
azure-maps Migrate From Google Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-web-app.md
Also:
> [!div class="checklist"] > * How to accomplish common mapping tasks using the Azure Maps Web SDK. > * Best practices to improve performance and user experience.
-> * Tips on how to make your application using more advanced features available in Azure Maps.
+> * Tips on using more advanced features available in Azure Maps.
If migrating an existing web application, check to see if it's using an open-source map control library. Examples of open-source map control libraries are: Cesium, Leaflet, and OpenLayers. You can still migrate your application, even if it uses an open-source map control library and you don't want to use the Azure Maps Web SDK. In that case, connect your application to the Azure Maps [Render] services ([road tiles] | [satellite tiles]). The following points detail how to use Azure Maps in some commonly used open-source map control libraries.
If migrating an existing web application, check to see if it's using an open-sou
* Leaflet – Lightweight 2D map control for the web. [Leaflet code sample] \| [Leaflet documentation].
* OpenLayers - A 2D map control for the web that supports projections. [OpenLayers documentation].
-If developing using a JavaScript framework, one of the following open-source projects may be useful:
+If developing using a JavaScript framework, one of the following open-source projects can be useful:
* [ng-azure-maps] - Angular 10 wrapper around Azure Maps. * [AzureMapsControl.Components] - An Azure Maps Blazor component.
For more information on supported languages, see [Localization support in Azure
Here's an example of Azure Maps with the language set to "fr" and the user region set to "fr-FR".
-![Azure Maps localization](media/migrate-google-maps-web-app/azure-maps-localization.jpg)
### Setting the map view
map.setStyle({
}); ```
-![Azure Maps set view](media/migrate-google-maps-web-app/azure-maps-set-view.jpg)
**More resources:**
map.markers.add(new atlas.HtmlMarker({
})); ```
-![Azure Maps HTML marker](media/migrate-google-maps-web-app/azure-maps-html-marker.jpg)
**After: Azure Maps using a Symbol Layer**
For a Symbol layer, add the data to a data source. Attach the data source to the
</html> ```
-![Azure Maps symbol layer](media/migrate-google-maps-web-app/azure-maps-symbol-layer.jpg)
**More resources:**
For a Symbol layer, add the data to a data source. Attach the data source to the
### Adding a custom marker
-You may use Custom images to represent points on a map. The following map uses a custom image to display a point on the map. The point is displayed at latitude: 51.5 and longitude: -0.2. The anchor offsets the position of the marker, so that the point of the pushpin icon aligns with the correct position on the map.
+You can use custom images to represent points on a map. The following map uses a custom image to display a point on the map. The point is displayed at latitude: 51.5 and longitude: -0.2. The anchor offsets the position of the marker, so that the point of the pushpin icon aligns with the correct position on the map.
<center>
map.markers.add(new atlas.HtmlMarker({
})); ```
-![Azure Maps custom HTML marker](media/migrate-google-maps-web-app/azure-maps-custom-html-marker.jpg)
**After: Azure Maps using a Symbol Layer**
Symbol layers in Azure Maps support custom images as well. First, load the image
</html> ```
-![Azure Maps custom icon symbol layer](media/migrate-google-maps-web-app/azure-maps-custom-icon-symbol-layer.jpg)</
> [!TIP] > To render advanced custom points, use multiple rendering layers together. For example, let's say you want to have multiple pushpins that have the same icon on different colored circles. Instead of creating a bunch of images for each color overlay, add a symbol layer on top of a bubble layer. Have the pushpins reference the same data source. This approach will be more efficient than creating and maintaining a bunch of different images.
map.layers.add(new atlas.layer.LineLayer(datasource, null, {
})); ```
-![Azure Maps polyline](media/migrate-google-maps-web-app/azure-maps-polyline.jpg)
**More resources:**
map.layers.add(new atlas.layer.LineLayer(datasource, null, {
```

![Azure Maps polygon](media/migrate-google-maps-web-app/azure-maps-polygon.jpg)

**More resources:**
map.events.add('click', marker, function () {
}); ```
-![Azure Maps popup](media/migrate-google-maps-web-app/azure-maps-popup.jpg)
> [!NOTE] > You may do the same thing with a symbol, bubble, line or polygon layer by passing the chosen layer to the maps event code instead of a marker.
GeoJSON is the base data type in Azure Maps. Import it into a data source using
</html> ```
-![Azure Maps GeoJSON](media/migrate-google-maps-web-app/azure-maps-geojson.jpg)
**More resources:**
GeoJSON is the base data type in Azure Maps. Import it into a data source using
### Marker clustering
-When visualizing many data points on the map, points may overlap each other. Overlapping makes the map look cluttered, and the map becomes difficult to read and use. Clustering point data is the process of combining data points that are near each other and representing them on the map as a single clustered data point. As the user zooms into the map, the clusters break apart into their individual data points. Cluster data points to improve user experience and map performance.
+When lots of data points appear on the map, points can overlap, making the map look cluttered and difficult to read and use. Clustering point data is the process of combining data points that are near each other and representing them on the map as a single clustered data point. As the user zooms into the map, the clusters break apart into their individual data points. Clustering data points improves the user experience and map performance.
In the following examples, the code loads a GeoJSON feed of earthquake data from the past week and adds it to the map. Clusters are rendered as scaled and colored circles. The scale and color of the circles depends on the number of points they contain.
The `DataSource` class has the following helper function for accessing additiona
| Method | Return type | Description | |--|-|-|
-| `getClusterChildren(clusterId: number)` | Promise&lt;Array&lt;Feature&lt;Geometry, any&gt; \| Shape&gt;&gt; | Retrieves the children of the given cluster on the next zoom level. These children may be a combination of shapes and subclusters. The subclusters are features with properties matching ClusteredProperties. |
+| `getClusterChildren(clusterId: number)` | Promise&lt;Array&lt;Feature&lt;Geometry, any&gt; \| Shape&gt;&gt; | Retrieves the children of the given cluster on the next zoom level. These children can be a combination of shapes and subclusters. The subclusters are features with properties matching ClusteredProperties. |
| `getClusterExpansionZoom(clusterId: number)` | Promise&lt;number&gt; | Calculates a zoom level at which the cluster starts expanding or break apart. | | `getClusterLeaves(clusterId: number, limit: number, offset: number)` | Promise&lt;Array&lt;Feature&lt;Geometry, any&gt; \| Shape&gt;&gt; | Retrieves all points in a cluster. Set the `limit` to return a subset of the points, and use the `offset` to page through the points. |
Directly import GeoJSON data using the `importDataFromUrl` function on the `Data
map.layers.add([ //Create a bubble layer for rendering clustered data points. new atlas.layer.BubbleLayer(datasource, null, {
- //Scale the size of the clustered bubble based on the number of points inthe cluster.
+ //Scale the size of the clustered bubble based on the number of points in the cluster.
radius: [ 'step', ['get', 'point_count'],
Directly import GeoJSON data using the `importDataFromUrl` function on the `Data
```

![Azure Maps clustering](media/migrate-google-maps-web-app/azure-maps-clustering.jpg)

**More resources:**
Load the GeoJSON data into a data source and connect the data source to a heat m
</html> ```
-![Azure Maps heat map](media/migrate-google-maps-web-app/azure-maps-heatmap.jpg)
**More resources:**
map.overlayMapTypes.insertAt(0, new google.maps.ImageMapType({
Add a tile layer to the map similarly as any other layer. Use a formatted URL that has in x, y, zoom placeholders; `{x}`, `{y}`, `{z}` to tell the layer where to access the tiles. Azure Maps tile layers also support `{quadkey}`, `{bbox-epsg-3857}`, and `{subdomain}` placeholders. > [!TIP]
-> In Azure Maps layers can easily be rendered below other layers, including base map layers. Often it is desirable to render tile layers below the map labels so that they are easy to read. The `map.layers.add` method takes in a second parameter which is the id of the layer in which to insert the new layer below. To insert a tile layer below the map labels, use this code: `map.layers.add(myTileLayer, "labels");`
+> In Azure Maps layers can easily be rendered beneath other layers, including base map layers. Often it is desirable to render tile layers below the map labels so that they are easy to read. The `map.layers.add` method takes in a second parameter which is the id of the layer in which to insert the new layer below. To insert a tile layer below the map labels, use this code: `map.layers.add(myTileLayer, "labels");`
```javascript //Create a tile layer and add it to the map below the label layer.
map.layers.add(new atlas.layer.TileLayer({
}), 'labels'); ```
-![Azure Maps tile layer](media/migrate-google-maps-web-app/azure-maps-tile-layer.jpg)
> [!TIP] > Tile requests can be captured using the `transformRequest` option of the map. This will allow you to modify or add headers to the request if desired.
map.setTraffic({
}); ```
-![Azure Maps traffic](media/migrate-google-maps-web-app/azure-maps-traffic.jpg)
If you select one of the traffic icons in Azure Maps, more information is displayed in a popup.
-![Azure Maps traffic incident](media/migrate-google-maps-web-app/azure-maps-traffic-incident.jpg)
**More resources:**
Use the `atlas.layer.ImageLayer` class to overlay georeferenced images. This cla
</html> ```
-![Azure Maps image overlay](media/migrate-google-maps-web-app/azure-maps-image-overlay.jpg)
**More resources:**
Both Azure and Google Maps can import and render KML, KMZ and GeoRSS data on the
#### Before: Google Maps
-```javascript
+```html
<!DOCTYPE html> <html> <head>
In Azure Maps, GeoJSON is the main data format used in the web SDK, more spatial
</html> ```
-![Azure Maps KML](media/migrate-google-maps-web-app/azure-maps-kml.png)
**More resources:**
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
The tables below provide a comparison of Azure Monitor Agent with the legacy the
| **Data sent to** | | | | |
| | Azure Monitor Logs | ✔ | ✔ | |
| | Azure Monitor Metrics<sup>1</sup> | ✔ (Public preview) | | ✔ (Public preview) |
-| | Azure Storage | | | ✔ |
-| | Event Hubs | | | ✔ |
+| | Azure Storage | ✔ (Preview) | | ✔ |
+| | Event Hubs | ✔ (Preview) | | ✔ |
| **Services and features supported** | | | | |
| | Microsoft Sentinel | ✔ ([View scope](./azure-monitor-agent-migration.md#migrate-additional-services-and-features)) | ✔ | |
| | VM Insights | ✔ | ✔ | |
The tables below provide a comparison of Azure Monitor Agent with the legacy the
| **Data sent to** | | | | | |
| | Azure Monitor Logs | ✔ | ✔ | | |
| | Azure Monitor Metrics<sup>1</sup> | ✔ (Public preview) | | | ✔ (Public preview) |
-| | Azure Storage | | | ✔ | |
-| | Event Hubs | | | ✔ | |
+| | Azure Storage | ✔ (Preview) | | ✔ | |
+| | Event Hubs | ✔ (Preview) | | ✔ | |
| **Services and features supported** | | | | | |
| | Microsoft Sentinel | ✔ ([View scope](./azure-monitor-agent-migration.md#migrate-additional-services-and-features)) | ✔ | | |
| | VM Insights | ✔ | ✔ | | |
azure-monitor Azure Monitor Agent Extension Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md
We strongly recommended to always update to the latest version, or opt in to the
## Version details | Release Date | Release notes | Windows | Linux | |:|:|:|:|
-| October 2023| **Linux** <ul><li>Add support for Process metrics counters for Log Analytics upload and Azure Monitor Metrics<li>Use rsyslog omfwd TCP for improved syslog reliability</li><li>Support Palo Alto CEF logs where hostname is followed by 2 spaces</li><li>Bug and reliability improvements</li></ui> |None|1.28.0|
+| October 2023| **Windows** <ul><li>Minimize CPU spikes when resetting an Event Log subscription</li><li>Enable multiple IIS subscriptions to use same filter</li><li>Cleanup files and folders for inactive tenants in multi-tenant mode</li><li>AMA installer will not install unnecessary certs</li><li>AMA emits Telemetry table locally</li><li>Update Metric Extension to v2.2023.721.1630</li><li>Update AzureSecurityPack to v4.29.0.4</li><li>Update AzureWatson to v1.0.99</li></ul>**Linux**<ul><li> Add support for Process metrics counters for Log Analytics upload and Azure Monitor Metrics</li><li>Use rsyslog omfwd TCP for improved syslog reliability</li><li>Support Palo Alto CEF logs where hostname is followed by 2 spaces</li><li>Bug and reliability improvements</li></ul> |1.20.0|1.28.11|
| September 2023| **Windows** <ul><li>Fix issue with high CPU usage due to excessive Windows Event Logs subscription reset</li><li>Reduce fluentbit resource usage by limiting tracked files older than 3 days and limiting logging to errors only</li><li>Fix race-condition where resource_id is unavailable when agent is restarted</li><li>Fix race-condition when vm-extension provision agent (aka GuestAgent) is issuing a disable-vm-extension command to AMA.</li><li>Update MetricExtension version to 2.2023.721.1630</li><li>Update Troubleshooter to v1.5.14</li></ul>|1.20.0| None |
| August 2023| **Windows** <ul><li>AMA: Allow prefixes in the tag names to handle regression</li><li>Updating package version for AzSecPack 4.28 release</li></ul>**Linux**<ul><li>Coming soon</li></ul>|1.19.0| Coming Soon |
| July 2023| **Windows** <ul><li>Fix crash when Event Log subscription callback throws errors.</li><li>MetricExtension updated to 2.2023.609.2051</li></ul> |1.18.0|None|
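To see which agent version is actually installed on a given machine, a query like the following can help. This is a convenience sketch, not part of the original article; on Linux, substitute `AzureMonitorLinuxAgent`:

```azurecli
# Show the installed Azure Monitor Agent extension version on a Windows VM.
az vm extension show -g <resourceGroup> --vm-name <vmName> \
    --name AzureMonitorWindowsAgent --query typeHandlerVersion -o tsv
```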
azure-monitor Azure Monitor Agent Send Data To Event Hubs And Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-send-data-to-event-hubs-and-storage.md
+
+ Title: Send data to Event Hubs and Storage (Preview)
+description: This article describes how to use Azure Monitor Agent to upload data to Azure Storage and Event Hubs.
+++ Last updated : 10/09/2023+++
+# Send data to Event Hubs and Storage (Preview)
+
+This article describes how to use the Azure Monitor Agent (AMA) to upload data to Azure Storage and Event Hubs. This feature is in preview.
+
+The Azure Monitor Agent is the new, consolidated telemetry agent for collecting data from IaaS resources like virtual machines. By using the upload capability in this preview, you can also send the logs<sup>[1](#FN1)</sup> that you send to Log Analytics workspaces to Event Hubs and Storage. Both destinations use data collection rules to configure collection for the agents.
+
+> [!NOTE]
+> This functionality replaces the Windows diagnostics extension (WAD) and Linux diagnostics extension (LAD). For more information, see [Compare Azure Monitor Agent to legacy agents](./agents-overview.md#compare-to-legacy-agents).
+
+**Footnotes**
+
+<a name="FN1">1</a>: Not all data types are supported; refer to [What's supported](#whats-supported) for specifics.
+
+## What's supported
+
+### Data types
+
+- Windows:
+ - Windows Event Logs – to event hub and storage
+ - Perf counters – to event hub and storage
+ - IIS logs – to storage blob
+ - Custom logs – to storage blob
+
+- Linux:
+ - Syslog – to event hub and storage
+ - Perf counters – to event hub and storage
+ - Custom logs / log files – to event hub and storage
+
+### Operating systems
+
+- Environments that are supported by Azure Monitor Agent on Windows and Linux
+- This feature is supported, and planned to be supported, only for Azure VMs. There are no plans to bring it to on-premises or Azure Arc scenarios.
+
+## What's not supported
+
+### Data types
+
+- Windows:
+ - ETW Logs
+ - Windows Crash Dumps (not planned and won't be supported)
+ - Application Logs (not planned and won't be supported)
+ - .NET event source logs (not planned and won't be supported)
+
+## Prerequisites
+
+A [user-assigned managed identity](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) associated with the following resources (a CLI sketch for wiring these up follows the list):
+
+- [Storage account](../../storage/common/storage-account-create.md)
+- [Event Hubs namespace and event hub](../../event-hubs/event-hubs-create.md)
+- [Virtual machine](../../virtual-machines/overview.md)
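+
+The following Azure CLI sketch shows one way to set these up. All names are placeholders, and the role assignment shown is one of those called out later in the Troubleshooting section; treat this as an illustrative outline rather than part of the official procedure:
+
+```azurecli
+# Create the user-assigned managed identity.
+az identity create -g <resourceGroup> -n <identityName>
+
+# Assign the identity to the virtual machine.
+identityId=$(az identity show -g <resourceGroup> -n <identityName> --query id -o tsv)
+az vm identity assign -g <resourceGroup> -n <vmName> --identities $identityId
+
+# Grant a role the destinations require, for example blob upload.
+principalId=$(az identity show -g <resourceGroup> -n <identityName> --query principalId -o tsv)
+storageId=$(az storage account show -g <resourceGroup> -n <storageAccountName> --query id -o tsv)
+az role assignment create --assignee-object-id $principalId \
+    --assignee-principal-type ServicePrincipal \
+    --role "Storage Blob Data Contributor" --scope $storageId
+```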
+
+## Create a data collection rule
+
+Create a data collection rule that collects events and sends them to Storage and Event Hubs.
+
+1. In the Azure portal's search box, type in *template* and then select **Deploy a custom template**.
+
+ :::image type="content" source="../logs/media/tutorial-workspace-transformations-api/deploy-custom-template.png" lightbox="../logs/media/tutorial-workspace-transformations-api/deploy-custom-template.png" alt-text="Screenshot that shows the Azure portal with template entered in the search box and Deploy a custom template highlighted in the search results.":::
+
+1. Select **Build your own template in the editor**.
+
+ :::image type="content" source="../logs/media/tutorial-workspace-transformations-api/build-custom-template.png" lightbox="../logs/media/tutorial-workspace-transformations-api/build-custom-template.png" alt-text="Screenshot that shows portal screen to build template in the editor.":::
+
+1. Paste this Azure Resource Manager template into the editor:
+
+ ### [Windows](#tab/windows)
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]",
+ "metadata": {
+ "description": "Location for all resources."
+ }
+ },
+ "dataCollectionRulesName": {
+ "defaultValue": "[concat(resourceGroup().name, 'DCR')]",
+ "type": "String"
+ },
+ "storageAccountName": {
+ "defaultValue": "[concat(resourceGroup().name, 'sa')]",
+ "type": "String"
+ },
+ "eventHubNamespaceName": {
+ "defaultValue": "[concat(resourceGroup().name, 'eh')]",
+ "type": "String"
+ },
+ "eventHubInstanceName": {
+ "defaultValue": "[concat(resourceGroup().name, 'ehins')]",
+ "type": "String"
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Insights/dataCollectionRules",
+ "apiVersion": "2022-06-01",
+ "name": "[parameters('dataCollectionRulesName')]",
+ "location": "[parameters('location')]",
+ "kind": "AgentDirectToStore",
+ "properties": {
+ "dataSources": {
+ "performanceCounters": [
+ {
+ "streams": [
+ "Microsoft-Perf"
+ ],
+ "samplingFrequencyInSeconds": 10,
+ "counterSpecifiers": [
+ "\\Process(_Total)\\Working Set - Private",
+ "\\Memory\\% Committed Bytes In Use",
+ "\\LogicalDisk(_Total)\\% Free Space",
+ "\\Network Interface(*)\\Bytes Total/sec"
+ ],
+ "name": "perfCounterDataSource10"
+ }
+ ],
+ "windowsEventLogs": [
+ {
+ "streams": [
+ "Microsoft-Event"
+ ],
+ "xPathQueries": [
+ "Application!*[System[(Level=2)]]",
+ "System!*[System[(Level=2)]]"
+ ],
+ "name": "eventLogsDataSource"
+ }
+ ],
+ "iisLogs": [
+ {
+ "streams": [
+ "Microsoft-W3CIISLog"
+ ],
+ "logDirectories": [
+ "C:\\inetpub\\logs\\LogFiles\\W3SVC1\\"
+ ],
+ "name": "myIisLogsDataSource"
+ }
+ ],
+ "logFiles": [
+ {
+ "streams": [
+ "Custom-Text-logs"
+ ],
+ "filePatterns": [
+ "C:\\JavaLogs\\*.log"
+ ],
+ "format": "text",
+ "settings": {
+ "text": {
+ "recordStartTimestampFormat": "ISO 8601"
+ }
+ },
+ "name": "myTextLogs"
+ }
+ ]
+ },
+ "destinations": {
+ "eventHubsDirect": [
+ {
+ "eventHubResourceId": "[resourceId('Microsoft.EventHub/namespaces/eventhubs', parameters('eventHubNamespaceName'), parameters('eventHubInstanceName'))]",
+ "name": "myEh1"
+ }
+ ],
+ "storageBlobsDirect": [
+ {
+ "storageAccountResourceId": "[resourceId('Microsoft.Storage/storageAccounts/', parameters('storageAccountName'))]",
+ "name": "blobNamedPerf",
+ "containerName": "PerfBlob"
+ },
+ {
+ "storageAccountResourceId": "[resourceId('Microsoft.Storage/storageAccounts/', parameters('storageAccountName'))]",
+ "name": "blobNamedWin",
+ "containerName": "WinEventBlob"
+ },
+ {
+ "storageAccountResourceId": "[resourceId('Microsoft.Storage/storageAccounts/', parameters('storageAccountName'))]",
+ "name": "blobNamedIIS",
+ "containerName": "IISBlob"
+ },
+ {
+ "storageAccountResourceId": "[resourceId('Microsoft.Storage/storageAccounts/', parameters('storageAccountName'))]",
+ "name": "blobNamedTextLogs",
+ "containerName": "TxtLogBlob"
+ }
+ ],
+ "storageTablesDirect": [
+ {
+ "storageAccountResourceId": "[resourceId('Microsoft.Storage/storageAccounts/', parameters('storageAccountName'))]",
+ "name": "tableNamedPerf",
+ "tableName": "PerfTable"
+ },
+ {
+ "storageAccountResourceId": "[resourceId('Microsoft.Storage/storageAccounts/', parameters('storageAccountName'))]",
+ "name": "tableNamedWin",
+ "tableName": "WinTable"
+ },
+ {
+ "storageAccountResourceId": "[resourceId('Microsoft.Storage/storageAccounts/', parameters('storageAccountName'))]",
+ "name": "tableUnnamed"
+ }
+ ]
+ },
+ "dataFlows": [
+ {
+ "streams": [
+ "Microsoft-Perf"
+ ],
+ "destinations": [
+ "myEh1",
+ "blobNamedPerf",
+ "tableNamedPerf",
+ "tableUnnamed"
+ ]
+ },
+ {
+ "streams": [
+ "Microsoft-WindowsEvent"
+ ],
+ "destinations": [
+ "myEh1",
+ "blobNamedWin",
+ "tableNamedWin",
+ "tableUnnamed"
+ ]
+ },
+ {
+ "streams": [
+ "Microsoft-W3CIISLog"
+ ],
+ "destinations": [
+ "blobNamedIIS"
+ ]
+ },
+ {
+ "streams": [
+ "Custom-Text-logs"
+ ],
+ "destinations": [
+ "blobNamedTextLogs"
+ ]
+ }
+ ]
+ }
+ }
+ ]
+ }
+ ```
+
+ ### [Linux](#tab/linux)
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]",
+ "metadata": {
+ "description": "Location for all resources."
+ }
+ },
+ "dataCollectionRulesName": {
+ "defaultValue": "[concat(resourceGroup().name, 'DCR')]",
+ "type": "String"
+ },
+ "storageAccountName": {
+ "defaultValue": "[concat(resourceGroup().name, 'sa')]",
+ "type": "String"
+ },
+ "eventHubNamespaceName": {
+ "defaultValue": "[concat(resourceGroup().name, 'eh')]",
+ "type": "String"
+ },
+ "eventHubInstanceName": {
+ "defaultValue": "[concat(resourceGroup().name, 'ehins')]",
+ "type": "String"
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Insights/dataCollectionRules",
+ "apiVersion": "2022-06-01",
+ "name": "[parameters('dataCollectionRulesName')]",
+ "location": "[parameters('location')]",
+ "kind": "AgentDirectToStore",
+ "properties": {
+ "dataSources": {
+ "performanceCounters": [
+ {
+ "streams": [
+ "Microsoft-Perf"
+ ],
+ "samplingFrequencyInSeconds": 10,
+ "counterSpecifiers": [
+ "Processor(*)\\% Processor Time",
+ "Processor(*)\\% Idle Time",
+ "Processor(*)\\% User Time",
+ "Processor(*)\\% Nice Time",
+ "Processor(*)\\% Privileged Time",
+ "Processor(*)\\% IO Wait Time",
+ "Processor(*)\\% Interrupt Time",
+ "Processor(*)\\% DPC Time",
+ "Memory(*)\\Available MBytes Memory",
+ "Memory(*)\\% Available Memory",
+ "Memory(*)\\Used Memory MBytes",
+ "Memory(*)\\% Used Memory",
+ "Memory(*)\\Pages/sec",
+ "Memory(*)\\Page Reads/sec",
+ "Memory(*)\\Page Writes/sec",
+ "Memory(*)\\Available MBytes Swap",
+ "Memory(*)\\% Available Swap Space",
+ "Memory(*)\\Used MBytes Swap Space",
+ "Memory(*)\\% Used Swap Space",
+ "Logical Disk(*)\\% Free Inodes",
+ "Logical Disk(*)\\% Used Inodes",
+ "Logical Disk(*)\\Free Megabytes",
+ "Logical Disk(*)\\% Free Space",
+ "Logical Disk(*)\\% Used Space",
+ "Logical Disk(*)\\Logical Disk Bytes/sec",
+ "Logical Disk(*)\\Disk Read Bytes/sec",
+ "Logical Disk(*)\\Disk Write Bytes/sec",
+ "Logical Disk(*)\\Disk Transfers/sec",
+ "Logical Disk(*)\\Disk Reads/sec",
+ "Logical Disk(*)\\Disk Writes/sec",
+ "Network(*)\\Total Bytes Transmitted",
+ "Network(*)\\Total Bytes Received",
+ "Network(*)\\Total Bytes",
+ "Network(*)\\Total Packets Transmitted",
+ "Network(*)\\Total Packets Received",
+ "Network(*)\\Total Rx Errors",
+ "Network(*)\\Total Tx Errors",
+ "Network(*)\\Total Collisions"
+ ],
+ "name": "perfCounterDataSource10"
+ }
+ ],
+ "syslog": [
+ {
+ "streams": [
+ "Microsoft-Syslog"
+ ],
+ "facilityNames": [
+ "auth",
+ "authpriv",
+ "cron",
+ "daemon",
+ "mark",
+ "kern",
+ "local0",
+ "local1",
+ "local2",
+ "local3",
+ "local4",
+ "local5",
+ "local6",
+ "local7",
+ "lpr",
+ "mail",
+ "news",
+ "syslog",
+ "user",
+ "UUCP"
+ ],
+ "logLevels": [
+ "Debug",
+ "Info",
+ "Notice",
+ "Warning",
+ "Error",
+ "Critical",
+ "Alert",
+ "Emergency"
+ ],
+ "name": "syslogDataSource"
+ }
+ ],
+ "logFiles": [
+ {
+ "streams": [
+ "Custom-Text-logs"
+ ],
+ "filePatterns": [
+ "/var/log/messages"
+ ],
+ "format": "text",
+ "settings": {
+ "text": {
+ "recordStartTimestampFormat": "ISO 8601"
+ }
+ },
+ "name": "myTextLogs"
+ }
+ ]
+ },
+ "destinations": {
+ "eventHubsDirect": [
+ {
+ "eventHubResourceId": "[resourceId('Microsoft.EventHub/namespaces/eventhubs', parameters('eventHubNamespaceName'), parameters('eventHubInstanceName'))]",
+ "name": "myEh1"
+ }
+ ],
+ "storageBlobsDirect": [
+ {
+ "storageAccountResourceId": "[resourceId('Microsoft.Storage/storageAccounts/', parameters('storageAccountName'))]",
+ "name": "blobNamedPerf",
+ "containerName": "PerfBlob"
+ },
+ {
+ "storageAccountResourceId": "[resourceId('Microsoft.Storage/storageAccounts/', parameters('storageAccountName'))]",
+ "name": "blobNamedLinux",
+ "containerName": "SyslogBlob"
+ },
+ {
+ "storageAccountResourceId": "[resourceId('Microsoft.Storage/storageAccounts/', parameters('storageAccountName'))]",
+ "name": "blobNamedTextLogs",
+ "containerName": "TxtLogBlob"
+ }
+ ],
+ "storageTablesDirect": [
+ {
+ "storageAccountResourceId": "[resourceId('Microsoft.Storage/storageAccounts/', parameters('storageAccountName'))]",
+ "name": "tableNamedPerf",
+ "tableName": "PerfTable"
+ },
+ {
+ "storageAccountResourceId": "[resourceId('Microsoft.Storage/storageAccounts/', parameters('storageAccountName'))]",
+ "name": "tableNamedLinux",
+ "tableName": "LinuxTable"
+ },
+ {
+ "storageAccountResourceId": "[resourceId('Microsoft.Storage/storageAccounts/', parameters('storageAccountName'))]",
+ "name": "tableUnnamed"
+ }
+ ]
+ },
+ "dataFlows": [
+ {
+ "streams": [
+ "Microsoft-Perf"
+ ],
+ "destinations": [
+ "myEh1",
+ "blobNamedPerf",
+ "tableNamedPerf",
+ "tableUnnamed"
+ ]
+ },
+ {
+ "streams": [
+ "Microsoft-Syslog"
+ ],
+ "destinations": [
+ "myEh1",
+ "blobNamedLinux",
+ "tableNamedLinux",
+ "tableUnnamed"
+ ]
+ },
+ {
+ "streams": [
+ "Custom-Text-logs"
+ ],
+ "destinations": [
+ "blobNamedTextLogs"
+ ]
+ }
+ ]
+ }
+ }
+ ]
+ }
+ ```
+
+
+
+1. Update the following values in the Azure Resource Manager template:
+
+ **Event hub**
+
+ | Value | Description |
+ |:|:|
+ | `dataSources` | Define it per your requirements. The supported types for direct upload to Event Hubs for Windows are `performanceCounters` and `windowsEventLogs` and for Linux, they're `performanceCounters` and `syslog`. |
+ | `destinations` | Use `eventHubsDirect` for direct upload to the event hub. |
+ | `eventHubResourceId` | Resource ID of the event hub instance.<br><br>NOTE: It isn't the event hub namespace resource ID. |
+ | `dataFlows` | Under `dataFlows`, include destination name. |
+
+ **Storage table**
+
+ | Value | Description |
+ |:|:|
+ | `dataSources` | Define it per your requirements. The supported types for direct upload to storage Table for Windows are `performanceCounters`, `windowsEventLogs` and for Linux, they're `performanceCounters` and `syslog`. |
+ | `destinations` | Use `storageTablesDirect` for direct upload to table storage. |
+ | `storageAccountResourceId` | Resource ID of the storage account. |
+ | `tableName` | The name of the table to which the JSON blob with event data is uploaded. |
+ | `dataFlows` | Under `dataFlows`, include destination name. |
+
+ **Storage blob**
+
+ | Value | Description |
+ |:|:|
+ | `dataSources` | Define it per your requirements. The supported types for direct upload to storage blob for Windows are `performanceCounters`, `windowsEventLogs`, `iisLogs`, `logFiles` and for Linux, they're `performanceCounters`, `syslog` and `logFiles`. |
+ | `destinations` | Use `storageBlobsDirect` for direct upload to blob storage. |
+ | `storageAccountResourceId` | The resource ID of the storage account. |
+ | `containerName` | The name of the container to which the JSON blob with event data is uploaded. |
+ | `dataFlows` | Under `dataFlows`, include destination name. |
+
+1. Select **Save**.
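+
+If you prefer scripting over the portal, the same template can be deployed with the Azure CLI. This is a convenience sketch; the template file name is a placeholder:
+
+```azurecli
+# Deploy the DCR template saved locally, passing the parameter values it defines.
+az deployment group create -g <resourceGroup> \
+    --template-file dcr-direct-to-store.json \
+    --parameters dataCollectionRulesName=<dcrName> storageAccountName=<storageAccountName> \
+                 eventHubNamespaceName=<eventHubNamespaceName> eventHubInstanceName=<eventHubInstanceName>
+```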
+
+## Create DCR association and deploy AzureMonitorAgent
+
+Use a custom template deployment to create the DCR association and deploy the Azure Monitor Agent.
+
+1. In the Azure portal's search box, type in *template* and then select **Deploy a custom template**.
+
+ :::image type="content" source="../logs/media/tutorial-workspace-transformations-api/deploy-custom-template.png" lightbox="../logs/media/tutorial-workspace-transformations-api/deploy-custom-template.png" alt-text="Screenshot that shows the Azure portal with template entered in the search box and Deploy a custom template highlighted in the search results.":::
+
+1. Select **Build your own template in the editor**.
+
+ :::image type="content" source="../logs/media/tutorial-workspace-transformations-api/build-custom-template.png" lightbox="../logs/media/tutorial-workspace-transformations-api/build-custom-template.png" alt-text="Screenshot that shows portal screen to build template in the editor.":::
+
+1. Paste this Azure Resource Manager template into the editor:
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "vmName": {
+ "defaultValue": "[concat(resourceGroup().name, 'vm')]",
+ "type": "String"
+ },
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]",
+ "metadata": {
+ "description": "Location for all resources."
+ }
+ },
+ "dataCollectionRulesName": {
+ "defaultValue": "[concat(resourceGroup().name, 'DCR')]",
+ "type": "String",
+ "metadata": {
+ "description": "Data Collection Rule Name"
+ }
+ },
+ "dcraName": {
+ "type": "string",
+ "defaultValue": "[concat(uniquestring(resourceGroup().id), 'DCRLink')]",
+ "metadata": {
+ "description": "Name of the association."
+ }
+ },
+ "identityName": {
+ "type": "string",
+ "defaultValue": "[concat(resourceGroup().name, 'UAI')]",
+ "metadata": {
+ "description": "Managed Identity"
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Compute/virtualMachines/providers/dataCollectionRuleAssociations",
+ "name": "[concat(parameters('vmName'),'/microsoft.insights/', parameters('dcraName'))]",
+ "apiVersion": "2021-04-01",
+ "properties": {
+ "description": "Association of data collection rule. Deleting this association will break the data collection for this virtual machine.",
+ "dataCollectionRuleId": "[resourceID('Microsoft.Insights/dataCollectionRules',parameters('dataCollectionRulesName'))]"
+ }
+ },
+ {
+ "type": "Microsoft.Compute/virtualMachines/extensions",
+ "name": "[concat(parameters('vmName'), '/AMAExtension')]",
+ "apiVersion": "2020-06-01",
+ "location": "[parameters('location')]",
+ "dependsOn": [
+ "[resourceId('Microsoft.Compute/virtualMachines/providers/dataCollectionRuleAssociations', parameters('vmName'), 'Microsoft.Insights', parameters('dcraName'))]"
+ ],
+ "properties": {
+ "publisher": "Microsoft.Azure.Monitor",
+ "type": "AzureMonitorWindowsAgent",
+ "typeHandlerVersion": "1.0",
+ "autoUpgradeMinorVersion": true,
+ "settings": {
+ "authentication": {
+ "managedIdentity": {
+ "identifier-type": "mi_res_id",
+ "identifier-value": "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/',parameters('identityName'))]"
+ }
+ }
+ }
+ }
+ }
+ ]
+ }
+ ```
+
+1. Select **Save**.
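+
+Alternatively, the association and the agent extension can be created directly with the Azure CLI instead of a template. A sketch, assuming the `monitor-control-service` CLI extension is installed and using placeholder names:
+
+```azurecli
+# Associate the data collection rule with the VM.
+vmId=$(az vm show -g <resourceGroup> -n <vmName> --query id -o tsv)
+dcrId=$(az monitor data-collection rule show -g <resourceGroup> -n <dcrName> --query id -o tsv)
+az monitor data-collection rule association create --name <associationName> --resource $vmId --rule-id $dcrId
+
+# Install the agent, passing the user-assigned managed identity in the settings.
+az vm extension set -g <resourceGroup> --vm-name <vmName> \
+    --publisher Microsoft.Azure.Monitor --name AzureMonitorWindowsAgent \
+    --settings '{"authentication":{"managedIdentity":{"identifier-type":"mi_res_id","identifier-value":"<identityResourceId>"}}}'
+```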
+
+## Troubleshooting
+
+Use the following sections to troubleshoot sending data to Event Hubs and Storage. A CLI sketch for verifying the role assignments follows the checklists.
+
+### Data not found in storage account blob storage
+
+- Check that the built-in role `Storage Blob Data Contributor` is assigned to the managed identity on the storage account.
+- Check that the managed identity is assigned to the VM.
+- Check that the AMA settings include the managed identity parameter.
+
+### Data not found in storage account table storage
+
+- Check that the built-in role `Storage Table Data Contributor` is assigned to the managed identity on the storage account.
+- Check that the managed identity is assigned to the VM.
+- Check that the AMA settings include the managed identity parameter.
+
+### Data not flowing to event hub
+
+- Check that the built-in role `Azure Event Hubs Data Sender` is assigned to the managed identity on the event hub.
+- Check that the managed identity is assigned to the VM.
+- Check that the AMA settings include the managed identity parameter.
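+
+To verify the assignments from the command line, a sketch like the following can help (placeholder names; swap the scope for the event hub when checking the `Azure Event Hubs Data Sender` role):
+
+```azurecli
+# List the roles the identity holds on the destination resource.
+principalId=$(az identity show -g <resourceGroup> -n <identityName> --query principalId -o tsv)
+scopeId=$(az storage account show -g <resourceGroup> -n <storageAccountName> --query id -o tsv)
+az role assignment list --assignee $principalId --scope $scopeId --output table
+```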
+
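+For all three cases above, you can verify the prerequisites from Azure PowerShell. The following is a sketch; the subscription ID, resource group, and resource names are placeholders you replace with your own:
+
+```powershell
+# Check which roles are assigned at the storage account (or event hub namespace) scope.
+$scope = "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount"
+Get-AzRoleAssignment -Scope $scope | Select-Object DisplayName, RoleDefinitionName
+
+# Confirm that the managed identity is assigned to the VM.
+(Get-AzVM -ResourceGroupName "myResourceGroup" -Name "myVM").Identity
+
+# Assign a missing role to the user-assigned identity if needed.
+$identity = Get-AzUserAssignedIdentity -ResourceGroupName "myResourceGroup" -Name "myUAI"
+New-AzRoleAssignment -ObjectId $identity.PrincipalId `
+    -RoleDefinitionName "Storage Blob Data Contributor" `
+    -Scope $scope
+```
+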
+## AMA and WAD/LAD Convergence
+
+### Will the Azure Monitor Agent support data upload to Application Insights?
+
+No, this support isn't part of the roadmap. Application Insights is now powered by Log Analytics workspaces.
+
+### Will the Azure Monitor Agent support Windows Crash Dumps as a data type to upload?
+
+No, this support isn't part of the roadmap. The Azure Monitor Agent is designed to collect telemetry and logs, not large file types.
+
+### Does this mean the Linux (LAD) and Windows (WAD) Diagnostic Extensions are no longer supported/retired?
+
+No, not until Azure formally announces the deprecation of these agents, which would start a three-year clock until they're no longer supported.
+
+### How do I configure AMA for event hub and storage data destinations?
+
+Today, you configure these destinations by using the DCR API.
+
+### Will you still be actively developing on WAD and LAD?
+
+WAD and LAD will only receive security fixes and patches going forward. Most engineering investment has moved to the Azure Monitor Agent, and we highly recommend migrating to it to benefit from its full set of capabilities.
+
+## See also
+
+- For more information on creating a data collection rule, see [Collect events and performance counters from virtual machines with Azure Monitor Agent](./data-collection-rule-azure-monitor-agent.md).
azure-monitor Data Collection Text Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-text-log.md
Title: Collect text logs with Azure Monitor Agent
-description: Configure collection of filed-based text logs using a data collection rule on virtual machines with the Azure Monitor Agent.
+ Title: Collect logs from a text or JSON file with Azure Monitor Agent
+description: Configure a data collection rule to collect log data from a text or JSON file on a virtual machine using Azure Monitor Agent.
Previously updated : 12/11/2022 Last updated : 10/31/2023 -+
-# Collect text logs with Azure Monitor Agent
+# Collect logs from a text or JSON file with Azure Monitor Agent
-Many applications log information to text files instead of standard logging services such as Windows Event log or Syslog. This article explains how to collect text logs from monitored machines using [Azure Monitor Agent](azure-monitor-agent-overview.md) by creating a [data collection rule (DCR)](../essentials/data-collection-rule-overview.md).
+Many applications log information to text or JSON files instead of standard logging services such as Windows Event log or Syslog. This article explains how to collect log data from text and JSON files on monitored machines using [Azure Monitor Agent](azure-monitor-agent-overview.md) by creating a [data collection rule (DCR)](../essentials/data-collection-rule-overview.md).
## Prerequisites To complete this procedure, you need:
To complete this procedure, you need:
- [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace. -- A VM, Virtual Machine Scale Set, or Arc-enabled on-premises server that writes logs to a text file.
+- A VM, Virtual Machine Scale Set, or Arc-enabled on-premises server that writes logs to a text or JSON file.
- Text file requirements and best practices:
+ Text and JSON file requirements and best practices:
- Do store files on the local drive of the machine on which Azure Monitor Agent is running and in the directory that is being monitored. - Do delineate the end of a record with an end of line. - Do use ASCII or UTF-8 encoding. Other formats such as UTF-16 aren't supported.
To complete this procedure, you need:
## Create a custom table
-This step will create a new custom table, which is any table name that ends in \_CL. Currently a direct REST call to the table management endpoint is used to create a table. The script at the end of this section is the input to the REST call.
+The table created in the script has two columns:
-The table created in the script has two columns TimeGenerated: datetime and RawData: string, which is the default schema for a custom text log. If you know your final schema, then you can add columns in the script before creating the table. If you don't, columns can always be added in the log analytics table UI.
+- `TimeGenerated` (datetime)
+- `RawData` (string)
-The easiest way to make the REST call is from an Azure Cloud PowerShell command line (CLI). To open the shell, go to the Azure portal, press the Cloud Shell button, and select PowerShell. If this is your first-time using Azure Cloud PowerShell, you will need to walk through the one-time configuration wizard.
-
+This is the default table schema for log data collected from text and JSON files. If you know your final schema, you can add columns in the script before creating the table. If you don't, you can [add columns using the Log Analytics table UI](../logs/create-custom-table.md#add-or-delete-a-custom-column).
+
+The easiest way to make the REST call is from an Azure Cloud Shell PowerShell session. To open the shell, go to the Azure portal, select the Cloud Shell button, and select PowerShell. If this is your first time using Azure Cloud Shell, you'll need to walk through the one-time configuration wizard.
-Copy and paste the following script in to PowerShell to create the table in your workspace. Make sure to replace the {subscription}, {resource group}, {workspace name}, and {table name} in the script. Make sure that there are no extra blanks at the beginning or end of the parameters
+Copy and paste this script into PowerShell to create the table in your workspace. Replace the {subscription}, {resourcegroup}, {WorkspaceName}, and {TableName} placeholders, and make sure there are no extra blanks at the beginning or end of the parameters:
```code $tableParams = @'
$tableParams = @'
Invoke-AzRestMethod -Path "/subscriptions/{subscription}/resourcegroups/{resourcegroup}/providers/microsoft.operationalinsights/workspaces/{WorkspaceName}/tables/{TableName}_CL?api-version=2021-12-01-preview" -Method PUT -payload $tableParams ```
-Press return to execute the code. You should see a 200 response, and details about the table you just created will show up. To validate that the table was created go to your workspace and select Tables on the left blade. You should see your table in the list.
+You should receive a 200 response and details about the table you just created.
+ > [!Note] > The column names are case sensitive. For example `Rawdata` will not correctly collect the event data. It must be `RawData`. -
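+For reference, the complete payload might look like the following sketch, which creates a table named `MyTable_CL` (a placeholder name) with the default two-column schema. Columns you already know about can be added to the `columns` array:
+
+```powershell
+$tableParams = @'
+{
+    "properties": {
+        "schema": {
+            "name": "MyTable_CL",
+            "columns": [
+                { "name": "TimeGenerated", "type": "datetime" },
+                { "name": "RawData", "type": "string" }
+            ]
+        }
+    }
+}
+'@
+
+Invoke-AzRestMethod -Path "/subscriptions/{subscription}/resourcegroups/{resourcegroup}/providers/microsoft.operationalinsights/workspaces/{WorkspaceName}/tables/MyTable_CL?api-version=2021-12-01-preview" -Method PUT -payload $tableParams
+```
+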
-## Create data collection rule to collect text logs
+## Create a data collection rule to collect data from a text or JSON file
The data collection rule defines:
To create the data collection rule in the Azure portal:
- **Region** specifies where the DCR will be created. The virtual machines and their associations can be in any subscription or resource group in the tenant. - **Platform Type** specifies the type of resources this rule can apply to. The **Custom** option allows for both Windows and Linux types.
- - **Data Collection Endpoint** specifies the data collection endpoint used to collect data. This data collection endpoint must be in the same region as the Log Analytics workspace. For more information, see [How to set up data collection endpoints based on your deployment](../essentials/data-collection-endpoint-overview.md#how-to-set-up-data-collection-endpoints-based-on-your-deployment).
+ - **Data Collection Endpoint** specifies the data collection endpoint to which Azure Monitor Agent sends collected data. This data collection endpoint must be in the same region as the Log Analytics workspace. For more information, see [How to set up data collection endpoints based on your deployment](../essentials/data-collection-endpoint-overview.md#how-to-set-up-data-collection-endpoints-based-on-your-deployment).
:::image type="content" source="media/data-collection-rule-azure-monitor-agent/data-collection-rule-basics-updated.png" lightbox="media/data-collection-rule-azure-monitor-agent/data-collection-rule-basics-updated.png" alt-text="Screenshot that shows the Basics tab of the Data Collection Rule screen.":::
To create the data collection rule in the Azure portal:
:::image type="content" source="media/data-collection-rule-azure-monitor-agent/data-collection-rule-virtual-machines-with-endpoint.png" lightbox="media/data-collection-rule-azure-monitor-agent/data-collection-rule-virtual-machines-with-endpoint.png" alt-text="Screenshot that shows the Resources tab of the Data Collection Rule screen."::: 1. On the **Collect and deliver** tab, select **Add data source** to add a data source and set a destination.
-1. Select **Custom Text Logs**.
-
- :::image type="content" source="media/data-collection-text-log/custom-text-log-data-collection-rule.png" lightbox="media/data-collection-text-log/custom-text-log-data-collection-rule.png" alt-text="Screenshot that shows the Add data source screen for a data collection rule in Azure portal.":::
-
+1. From the **Data source type** dropdown, select **Custom Text Logs** or **JSON Logs**.
1. Specify the following information: - **File Pattern** - Identifies where the log files are located on the local disk. You can enter multiple file patterns separated by commas (on Linux, AMA version 1.26 or higher is required to collect from a comma-separated list of file patterns).
To create the data collection rule in the Azure portal:
> Multiple log files of the same type commonly exist in the same directory. For example, a machine might create a new file every day to prevent the log file from growing too large. To collect log data in this scenario, you can use a file wildcard. Use the format `C:\directoryA\directoryB\*MyLog.txt` for Windows and `/var/*.log` for Linux. There is no support for directory wildcards.
- - **Table name** - The name of the destination table you created in your Log Analytics Workspace. For more information, see [Prerequisites](#prerequisites).
+ - **Table name** - The name of the destination table you created in your Log Analytics Workspace. For more information, see [Create a custom table](#create-a-custom-table).
- **Record delimiter** - Will be used in the future to allow delimiters other than the currently supported end of line (`/r/n`). - **Transform** - Add an [ingestion-time transformation](../essentials/data-collection-transformations.md) or leave as **source** if you don't need to transform the collected data. 1. On the **Destination** tab, add one or more destinations for the data source. You can select multiple destinations of the same or different types. For instance, you can select multiple Log Analytics workspaces, which is also known as multihoming. <!-- convertborder later -->
- :::image type="content" source="media/data-collection-rule-azure-monitor-agent/data-collection-rule-destination.png" lightbox="media/data-collection-rule-azure-monitor-agent/data-collection-rule-destination.png" alt-text="Screenshot that shows the destination tabe of the Add data source screen for a data collection rule in Azure portal." border="false":::
+ :::image type="content" source="media/data-collection-rule-azure-monitor-agent/data-collection-rule-destination.png" lightbox="media/data-collection-rule-azure-monitor-agent/data-collection-rule-destination.png" alt-text="Screenshot that shows the destination tab of the Add data source screen for a data collection rule in Azure portal." border="false":::
1. Select **Review + create** to review the details of the data collection rule and association with the set of virtual machines. 1. Select **Create** to create the data collection rule.
To create the data collection rule in the Azure portal:
1. Paste this Resource Manager template into the editor:
- ```json
- {
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "dataCollectionRuleName": {
- "type": "string",
- "metadata": {
- "description": "Specifies the name of the Data Collection Rule to create."
- }
- },
- "location": {
- "type": "string",
- "metadata": {
- "description": "Specifies the location in which to create the Data Collection Rule."
- }
- },
- "workspaceName": {
- "type": "string",
- "metadata": {
- "description": "Name of the Log Analytics workspace to use."
- }
- },
- "workspaceResourceId": {
- "type": "string",
- "metadata": {
- "description": "Specifies the Azure resource ID of the Log Analytics workspace to use."
+ - To collect data from a text file, use this template:
+
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "dataCollectionRuleName": {
+ "type": "string",
+ "metadata": {
+ "description": "Specifies the name of the Data Collection Rule to create."
+ }
+ },
+ "location": {
+ "type": "string",
+ "metadata": {
+ "description": "Specifies the location in which to create the Data Collection Rule."
+ }
+ },
+ "workspaceName": {
+ "type": "string",
+ "metadata": {
+ "description": "Name of the Log Analytics workspace to use."
+ }
+ },
+ "workspaceResourceId": {
+ "type": "string",
+ "metadata": {
+ "description": "Specifies the Azure resource ID of the Log Analytics workspace to use."
+ }
+ },
+ "endpointResourceId": {
+ "type": "string",
+ "metadata": {
+ "description": "Specifies the Azure resource ID of the Data Collection Endpoint to use."
+ }
} },
- "endpointResourceId": {
- "type": "string",
- "metadata": {
- "description": "Specifies the Azure resource ID of the Data Collection Endpoint to use."
- }
- }
- },
- "resources": [
- {
- "type": "Microsoft.Insights/dataCollectionRules",
- "name": "[parameters('dataCollectionRuleName')]",
- "location": "[parameters('location')]",
- "apiVersion": "2021-09-01-preview",
- "properties": {
- "dataCollectionEndpointId": "[parameters('endpointResourceId')]",
- "streamDeclarations": {
- "Custom-MyLogFileFormat": {
- "columns": [
+ "resources": [
+ {
+ "type": "Microsoft.Insights/dataCollectionRules",
+ "name": "[parameters('dataCollectionRuleName')]",
+ "location": "[parameters('location')]",
+ "apiVersion": "2021-09-01-preview",
+ "properties": {
+ "dataCollectionEndpointId": "[parameters('endpointResourceId')]",
+ "streamDeclarations": {
+ "Custom-MyLogFileFormat": {
+ "columns": [
+ {
+ "name": "TimeGenerated",
+ "type": "datetime"
+ },
+ {
+ "name": "RawData",
+ "type": "string"
+ }
+ ]
+ }
+ },
+ "dataSources": {
+ "logFiles": [
{
- "name": "TimeGenerated",
- "type": "datetime"
+ "streams": [
+ "Custom-MyLogFileFormat"
+ ],
+ "filePatterns": [
+ "C:\\JavaLogs\\*.log"
+ ],
+ "format": "text",
+ "settings": {
+ "text": {
+ "recordStartTimestampFormat": "ISO 8601"
+ }
+ },
+ "name": "myLogFileFormat-Windows"
}, {
- "name": "RawData",
- "type": "string"
+ "streams": [
+ "Custom-MyLogFileFormat"
+ ],
+ "filePatterns": [
+                        "/var/*.log"
+ ],
+ "format": "text",
+ "settings": {
+ "text": {
+ "recordStartTimestampFormat": "ISO 8601"
+ }
+ },
+ "name": "myLogFileFormat-Linux"
+ }
+ ]
+ },
+ "destinations": {
+ "logAnalytics": [
+ {
+ "workspaceResourceId": "[parameters('workspaceResourceId')]",
+ "name": "[parameters('workspaceName')]"
} ]
- }
- },
- "dataSources": {
- "logFiles": [
+ },
+ "dataFlows": [
{ "streams": [ "Custom-MyLogFileFormat" ],
- "filePatterns": [
- "C:\\JavaLogs\\*.log"
+ "destinations": [
+ "[parameters('workspaceName')]"
],
- "format": "text",
- "settings": {
- "text": {
- "recordStartTimestampFormat": "ISO 8601"
+ "transformKql": "source",
+ "outputStream": "Custom-MyTable_CL"
+ }
+ ]
+ }
+ }
+ ],
+ "outputs": {
+ "dataCollectionRuleId": {
+ "type": "string",
+ "value": "[resourceId('Microsoft.Insights/dataCollectionRules', parameters('dataCollectionRuleName'))]"
+ }
+ }
+ }
+ ```
+
+ - To collect data from a JSON file, use this template:
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "resources": [
+ {
+ "type": "Microsoft.Insights/dataCollectionRules",
+            "name": "<dataCollectionRuleName>",
+            "location": "<location>",
+ "apiVersion": "2021-09-01-preview",
+ "properties": {
+                "dataCollectionEndpointId": "<endpointResourceId>",
+ "streamDeclarations": {
+ "Custom-JSONLog": {
+ "columns": [
+ {
+ "name": "TimeGenerated",
+ "type": "datetime"
+ },
+ {
+ "name": "RawData",
+ "type": "string"
}
- },
- "name": "myLogFileFormat-Windows"
- },
+ ]
+ }
+ },
+ "dataSources": {
+ "logFiles": [
+ {
+ "streams": [
+ "Custom-JSONLog"
+ ],
+ "filePatterns": [
+ "C:\\JavaLogs\\*.log"
+ ],
+ "format": "json",
+ "settings": {
+ },
+                            "name": "myLogFileFormat"
+ }
+ ]
+ },
+ "destinations": {
+ "logAnalytics": [
+ {
+                        "workspaceResourceId": "<workspaceResourceId>",
+                        "name": "<workspaceName>"
+ }
+ ]
+ },
+ "dataFlows": [
{ "streams": [
- "Custom-MyLogFileFormat"
+ "Custom-JSONLog"
],
- "filePatterns": [
- "//var//*.log"
+ "destinations": [
+                        "<workspaceName>"
],
- "format": "text",
- "settings": {
- "text": {
- "recordStartTimestampFormat": "ISO 8601"
- }
- },
- "name": "myLogFileFormat-Linux"
- }
- ]
- },
- "destinations": {
- "logAnalytics": [
- {
- "workspaceResourceId": "[parameters('workspaceResourceId')]",
- "name": "[parameters('workspaceName')]"
+ "transformKql": "source",
+                    "outputStream": "Custom-<TableName>_CL"
} ]
- },
- "dataFlows": [
- {
- "streams": [
- "Custom-MyLogFileFormat"
- ],
- "destinations": [
- "[parameters('workspaceName')]"
- ],
- "transformKql": "source",
- "outputStream": "Custom-MyTable_CL"
- }
- ]
+ }
+ }
+ ],
+ "outputs": {
+ "dataCollectionRuleId": {
+ "type": "string",
+            "value": "[resourceId('Microsoft.Insights/dataCollectionRules', '<dataCollectionRuleName>')]"
}
- }
- ],
- "outputs": {
- "dataCollectionRuleId": {
- "type": "string",
- "value": "[resourceId('Microsoft.Insights/dataCollectionRules', parameters('dataCollectionRuleName'))]"
} }
- }
- ```
+ ```
+ 1. Update the following values in the Resource Manager template:
To create the data collection rule in the Azure portal:
- `filePatterns`: Specifies the location and file pattern of the log files to collect. This defines a separate pattern for Windows and Linux agents. - `transformKql`: Specifies a [transformation](../logs/../essentials//data-collection-transformations.md) to apply to the incoming data before it's sent to the workspace.
- See [Structure of a data collection rule in Azure Monitor (preview)](../essentials/data-collection-rule-structure.md#custom-logs) if you want to modify the text log DCR.
+ See [Structure of a data collection rule in Azure Monitor](../essentials/data-collection-rule-structure.md#custom-logs) if you want to modify the data collection rule.
> [!IMPORTANT] > Custom data collection rules have a prefix of *Custom-*; for example, *Custom-rulename*. The *Custom-rulename* in the stream declaration must match the *Custom-rulename* name in the Log Analytics workspace.
To create the data collection rule in the Azure portal:
1. Copy the **Resource ID** for the data collection rule. You'll use this in the next step.
-1. Create a data collection association that associates the data collection rule to the agents with the log file to be collected. You can associate the same data collection rule with multiple agents:
+1. Associate the data collection rule to the virtual machine you want to collect data from. You can associate the same data collection rule with multiple machines:
1. From the **Monitor** menu in the Azure portal, select **Data Collection Rules** and select the rule that you created.
To create the data collection rule in the Azure portal:
:::image type="content" source="media/data-collection-text-log/add-resources.png" lightbox="media/data-collection-text-log/add-resources.png" alt-text="Screenshot that shows the Data Collection Rules pane in the portal with resources for the data collection rule.":::
- 1. Select either individual agents to associate the data collection rule, or select a resource group to create an association for all agents in that resource group. Select **Apply**.
+ 1. Select either individual virtual machines to associate the data collection rule, or select a resource group to create an association for all virtual machines in that resource group. Select **Apply**.
:::image type="content" source="media/data-collection-text-log/select-resources.png" lightbox="media/data-collection-text-log/select-resources.png" alt-text="Screenshot that shows the Resources pane in the portal to add resources to the data collection rule.":::
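+If you prefer to create the association from the command line, the following is a sketch using the `Az.Monitor` PowerShell module (version 3.x parameter names are assumed; newer module versions may differ). All IDs and names are placeholders:
+
+```powershell
+# Associate an existing data collection rule with a VM.
+New-AzDataCollectionRuleAssociation `
+    -AssociationName "myDCRAssociation" `
+    -RuleId "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Insights/dataCollectionRules/myDCR" `
+    -TargetResourceId "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM"
+```
+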
The column names used here are for example only. The column names for your log w
``` - ## Troubleshoot
-Use the following steps to troubleshoot collection of text logs.
+Use the following steps to troubleshoot collection of logs from text and JSON files.
-## Troubleshooting Tool
-Use the [Azure monitor troubleshooter tool](use-azure-monitor-agent-troubleshooter.md) to look for common issues and share results with Microsoft.
+## Use the Azure Monitor Agent Troubleshooter
+Use the [Azure Monitor Agent Troubleshooter](use-azure-monitor-agent-troubleshooter.md) to look for common issues and share results with Microsoft.
-### Check if any custom logs have been received
-Start by checking if any records have been collected for your custom log table by running the following query in Log Analytics. If records aren't returned, check the other sections for possible causes. This query looks for entires in the last two days, but you can modify for another time range. It can take 5-7 minutes for new data from your tables to be uploaded. Only new data will be uploaded any log file last written to prior to the DCR rules being created won't be uploaded.
+### Check if you've ingested data to your custom table
+Start by checking if any records have been ingested into your custom log table by running the following query in Log Analytics:
``` kusto
-<YourCustomLog>_CL
+<YourCustomTable>_CL
| where TimeGenerated > ago(48h) | order by TimeGenerated desc ```
+If records aren't returned, check the other sections for possible causes. This query looks for entries in the last two days, but you can modify it for another time range. It can take 5-7 minutes for new data to appear in your table. The Azure Monitor Agent only collects data written to the text or JSON file after you associate the data collection rule with the virtual machine.
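+If you'd rather run this check from PowerShell, the following is a sketch using the `Az.OperationalInsights` module; the workspace ID and table name are placeholders:
+
+```powershell
+Import-Module Az.OperationalInsights
+
+# Query the workspace for the most recent records in the custom table.
+$workspaceId = "00000000-0000-0000-0000-000000000000"   # The workspace (customer) ID, not the Azure resource ID
+$query = "MyTable_CL | where TimeGenerated > ago(48h) | order by TimeGenerated desc"
+
+$results = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query
+$results.Results | Select-Object -First 10
+```
+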
+ ### Verify that you created a custom table You must [create a custom log table](../logs/create-custom-table.md#create-a-custom-table) in your Log Analytics workspace before you can send data to it.
This file pattern should correspond to the logs on the agent machine.
:::image type="content" source="media/data-collection-text-log/text-log-files.png" lightbox="media/data-collection-text-log/text-log-files.png" alt-text="Screenshot of text log files on agent machine." border="false":::
-### Verify that the text logs are being populated
-The agent will only collect new content written to the log file being collected. If you're experimenting with the text logs collection feature, you can use the following script to generate sample logs.
+### Verify that logs are being populated
+The agent will only collect new content written to the log file being collected. If you're experimenting with collecting logs from a text or JSON file, you can use the following script to generate sample logs.
```powershell # This script writes a new log entry at the specified interval indefinitely.
azure-monitor Java Get Started Supplemental https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-get-started-supplemental.md
Title: Application Insights with containers description: This article shows you how to set-up Application Insights Previously updated : 10/10/2023 Last updated : 10/30/2023 ms.devlang: java
For more information, see [Use Application Insights Java In-Process Agent in Azu
### Docker entry point
-If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.4.17.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
+If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.4.18.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
```
-ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.4.17.jar", "-jar", "<myapp.jar>"]
+ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.4.18.jar", "-jar", "<myapp.jar>"]
```
-If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.17.jar"` somewhere before `-jar`, for example:
+If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.18.jar"` somewhere before `-jar`, for example:
```
-ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.4.17.jar" -jar <myapp.jar>
+ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.4.18.jar" -jar <myapp.jar>
```
FROM ...
COPY target/*.jar app.jar
-COPY agent/applicationinsights-agent-3.4.17.jar applicationinsights-agent-3.4.17.jar
+COPY agent/applicationinsights-agent-3.4.18.jar applicationinsights-agent-3.4.18.jar
COPY agent/applicationinsights.json applicationinsights.json ENV APPLICATIONINSIGHTS_CONNECTION_STRING="CONNECTION-STRING"
-ENTRYPOINT["java", "-javaagent:applicationinsights-agent-3.4.17.jar", "-jar", "app.jar"]
+ENTRYPOINT["java", "-javaagent:applicationinsights-agent-3.4.18.jar", "-jar", "app.jar"]
```
-In this example we have copied the `applicationinsights-agent-3.4.17.jar` and `applicationinsights.json` files from an `agent` folder (you can choose any folder of your machine). These two files have to be in the same folder in the Docker container.
+In this example we have copied the `applicationinsights-agent-3.4.18.jar` and `applicationinsights.json` files from an `agent` folder (you can choose any folder of your machine). These two files have to be in the same folder in the Docker container.
### Third-party container images
For information on setting up the Application Insights Java agent, see [Enabling
If you installed Tomcat via `apt-get` or `yum`, you should have a file `/etc/tomcat8/tomcat8.conf`. Add this line to the end of that file: ```
-JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.17.jar"
+JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.18.jar"
``` #### Tomcat installed via download and unzip
JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.17.jar"
If you installed Tomcat via download and unzip from [https://tomcat.apache.org](https://tomcat.apache.org), you should have a file `<tomcat>/bin/catalina.sh`. Create a new file in the same directory named `<tomcat>/bin/setenv.sh` with the following content: ```
-CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.17.jar"
+CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.18.jar"
```
-If the file `<tomcat>/bin/setenv.sh` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.17.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.sh` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.18.jar` to `CATALINA_OPTS`.
### Tomcat 8 (Windows)
If the file `<tomcat>/bin/setenv.sh` already exists, modify that file and add `-
Locate the file `<tomcat>/bin/catalina.bat`. Create a new file in the same directory named `<tomcat>/bin/setenv.bat` with the following content: ```
-set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.17.jar
+set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.18.jar
``` Quotes aren't necessary, but if you want to include them, the proper placement is: ```
-set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.17.jar"
+set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.18.jar"
```
-If the file `<tomcat>/bin/setenv.bat` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.17.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.bat` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.18.jar` to `CATALINA_OPTS`.
#### Run Tomcat as a Windows service
-Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.4.17.jar` to the `Java Options` under the `Java` tab.
+Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.4.18.jar` to the `Java Options` under the `Java` tab.
### JBoss EAP 7 #### Standalone server
-Add `-javaagent:path/to/applicationinsights-agent-3.4.17.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
+Add `-javaagent:path/to/applicationinsights-agent-3.4.18.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
```java ...
- JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.4.17.jar -Xms1303m -Xmx1303m ..."
+ JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.4.18.jar -Xms1303m -Xmx1303m ..."
... ``` #### Domain server
-Add `-javaagent:path/to/applicationinsights-agent-3.4.17.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.4.18.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
```xml ...
Add `-javaagent:path/to/applicationinsights-agent-3.4.17.jar` to the existing `j
<jvm-options> <option value="-server"/> <!--Add Java agent jar file here-->
- <option value="-javaagent:path/to/applicationinsights-agent-3.4.17.jar"/>
+ <option value="-javaagent:path/to/applicationinsights-agent-3.4.18.jar"/>
<option value="-XX:MetaspaceSize=96m"/> <option value="-XX:MaxMetaspaceSize=256m"/> </jvm-options>
Add these lines to `start.ini`:
``` --exec--javaagent:path/to/applicationinsights-agent-3.4.17.jar
+-javaagent:path/to/applicationinsights-agent-3.4.18.jar
``` ### Payara 5
-Add `-javaagent:path/to/applicationinsights-agent-3.4.17.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.4.18.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
```xml ... <java-config ...> <!--Edit the JVM options here--> <jvm-options>
- -javaagent:path/to/applicationinsights-agent-3.4.17.jar>
+    -javaagent:path/to/applicationinsights-agent-3.4.18.jar
</jvm-options> ... </java-config>
Add `-javaagent:path/to/applicationinsights-agent-3.4.17.jar` to the existing `j
1. In `Generic JVM arguments`, add the following JVM argument: ```
- -javaagent:path/to/applicationinsights-agent-3.4.17.jar
+ -javaagent:path/to/applicationinsights-agent-3.4.18.jar
``` 1. Save and restart the application server.
Add `-javaagent:path/to/applicationinsights-agent-3.4.17.jar` to the existing `j
Create a new file `jvm.options` in the server directory (for example, `<openliberty>/usr/servers/defaultServer`), and add this line: ```--javaagent:path/to/applicationinsights-agent-3.4.17.jar
+-javaagent:path/to/applicationinsights-agent-3.4.18.jar
``` ### Others
azure-monitor Java Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-spring-boot.md
Title: Configure Azure Monitor Application Insights for Spring Boot description: How to configure Azure Monitor Application Insights for Spring Boot applications Previously updated : 09/18/2023 Last updated : 10/30/2023 ms.devlang: java
There are two options for enabling Application Insights Java with Spring Boot: J
## Enabling with JVM argument
-Add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.17.jar"` somewhere before `-jar`, for example:
+Add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.18.jar"` somewhere before `-jar`, for example:
```
-java -javaagent:"path/to/applicationinsights-agent-3.4.17.jar" -jar <myapp.jar>
+java -javaagent:"path/to/applicationinsights-agent-3.4.18.jar" -jar <myapp.jar>
``` ### Spring Boot via Docker entry point
-If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.4.17.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
+If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.4.18.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
```
-ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.4.17.jar", "-jar", "<myapp.jar>"]
+ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.4.18.jar", "-jar", "<myapp.jar>"]
```
-If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.17.jar"` somewhere before `-jar`, for example:
+If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.18.jar"` somewhere before `-jar`, for example:
```
-ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.4.17.jar" -jar <myapp.jar>
+ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.4.18.jar" -jar <myapp.jar>
``` ### Configuration
To enable Application Insights Java programmatically, you must add the following
<dependency> <groupId>com.microsoft.azure</groupId> <artifactId>applicationinsights-runtime-attach</artifactId>
- <version>3.4.17</version>
+ <version>3.4.18</version>
</dependency> ```
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
Title: Configuration options - Azure Monitor Application Insights for Java description: This article shows you how to configure Azure Monitor Application Insights for Java. Previously updated : 10/10/2023 Last updated : 10/30/2023 ms.devlang: java
More information and configuration options are provided in the following section
## Configuration file path
-By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.4.17.jar`.
+By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.4.18.jar`.
You can specify your own configuration file path by using one of these two options: * `APPLICATIONINSIGHTS_CONFIGURATION_FILE` environment variable * `applicationinsights.configuration.file` Java system property
-If you specify a relative path, it's resolved relative to the directory where `applicationinsights-agent-3.4.17.jar` is located.
+If you specify a relative path, it's resolved relative to the directory where `applicationinsights-agent-3.4.18.jar` is located.
Alternatively, instead of using a configuration file, you can specify the entire _content_ of the JSON configuration via the environment variable `APPLICATIONINSIGHTS_CONFIGURATION_CONTENT`.
Or you can set the connection string by using the Java system property `applicat
You can also set the connection string by specifying a file to load the connection string from.
-If you specify a relative path, it's resolved relative to the directory where `applicationinsights-agent-3.4.17.jar` is located.
+If you specify a relative path, it's resolved relative to the directory where `applicationinsights-agent-3.4.18.jar` is located.
```json {
and add `applicationinsights-core` to your application:
<dependency> <groupId>com.microsoft.azure</groupId> <artifactId>applicationinsights-core</artifactId>
- <version>3.4.17</version>
+ <version>3.4.18</version>
</dependency> ```
In the preceding configuration example:
* `level` can be one of `OFF`, `ERROR`, `WARN`, `INFO`, `DEBUG`, or `TRACE`. * `path` can be an absolute or relative path. Relative paths are resolved against the directory where
-`applicationinsights-agent-3.4.17.jar` is located.
+`applicationinsights-agent-3.4.18.jar` is located.
Starting from version 3.0.2, you can also set the self-diagnostics `level` by using the environment variable `APPLICATIONINSIGHTS_SELF_DIAGNOSTICS_LEVEL`. It then takes precedence over the self-diagnostics level specified in the JSON configuration.
azure-monitor Java Standalone Upgrade From 2X https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-upgrade-from-2x.md
Title: Upgrading from 2.x - Azure Monitor Application Insights Java description: Upgrading from Azure Monitor Application Insights Java 2.x Previously updated : 09/18/2023 Last updated : 10/30/2023 ms.devlang: java
There are typically no code changes when upgrading to 3.x. The 3.x SDK dependenc
Add the 3.x Java agent to your JVM command-line args, for example ```--javaagent:path/to/applicationinsights-agent-3.4.17.jar
+-javaagent:path/to/applicationinsights-agent-3.4.18.jar
``` If you're using the Application Insights 2.x Java agent, just replace your existing `-javaagent:...` with the aforementioned example.
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
Title: Enable Azure Monitor OpenTelemetry for .NET, Java, Node.js, and Python applications description: This article provides guidance on how to enable Azure Monitor on applications by using OpenTelemetry. Previously updated : 10/10/2023 Last updated : 10/30/2023 ms.devlang: csharp, javascript, typescript, python
dotnet add package Azure.Monitor.OpenTelemetry.Exporter
#### [Java](#tab/java)
-Download the [applicationinsights-agent-3.4.17.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.4.17/applicationinsights-agent-3.4.17.jar) file.
+Download the [applicationinsights-agent-3.4.18.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.4.18/applicationinsights-agent-3.4.18.jar) file.
> [!WARNING] >
var loggerFactory = LoggerFactory.Create(builder =>
Java autoinstrumentation is enabled through configuration changes; no code changes are required.
-Point the JVM to the jar file by adding `-javaagent:"path/to/applicationinsights-agent-3.4.17.jar"` to your application's JVM args.
+Point the JVM to the jar file by adding `-javaagent:"path/to/applicationinsights-agent-3.4.18.jar"` to your application's JVM args.
> [!TIP] > For scenario-specific guidance, see [Get Started (Supplemental)](./java-get-started-supplemental.md).
To paste your Connection String, select from the following options:
B. Set via Configuration File - Java Only (Recommended)
- Create a configuration file named `applicationinsights.json`, and place it in the same directory as `applicationinsights-agent-3.4.17.jar` with the following content:
+ Create a configuration file named `applicationinsights.json`, and place it in the same directory as `applicationinsights-agent-3.4.18.jar` with the following content:
```json {
azure-monitor Pre Aggregated Metrics Log Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/pre-aggregated-metrics-log-metrics.md
There are several [ways of sending custom metrics from the Application Insights
## Custom metrics dimensions and pre-aggregation
-All metrics that you send by using [trackMetric](./api-custom-events-metrics.md#trackmetric) or [GetMetric and TrackValue](./api-custom-events-metrics.md#getmetric) API calls are automatically stored in both logs and metrics stores. Although the log-based version of your custom metric always retains all dimensions, the pre-aggregated version of the metric is stored by default with no dimensions. You can turn on collection of dimensions of custom metrics on the [usage and estimated cost](../usage-estimated-costs.md#usage-and-estimated-costs) tab by selecting the **Enable alerting on custom metric dimensions** checkbox.
+All metrics that you send by using [trackMetric](./api-custom-events-metrics.md#trackmetric) or [GetMetric and TrackValue](./api-custom-events-metrics.md#getmetric) API calls are automatically stored in both logs and metrics stores. Although the log-based version of your custom metric always retains all dimensions, the pre-aggregated version of the metric is stored by default with no dimensions. You can turn on collection of dimensions of custom metrics on the [usage and estimated cost](../cost-usage.md#usage-and-estimated-costs) tab by selecting the **Enable alerting on custom metric dimensions** checkbox.
:::image type="content" source="./media/pre-aggregated-metrics-log-metrics/001-cost.png" lightbox="./media/pre-aggregated-metrics-log-metrics/001-cost.png" alt-text="Screenshot that shows usage and estimated costs.":::
azure-monitor Best Practices Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-alerts.md
Security is one of the most important aspects of any architecture. Azure Monitor
## Cost optimization
-Cost optimization refers to ways to reduce unnecessary expenses and improve operational efficiencies. You can significantly reduce your cost for Azure Monitor by understanding your different configuration options and opportunities to reduce the amount of data that it collects. See [Azure Monitor cost and usage](usage-estimated-costs.md) to understand the different ways that Azure Monitor charges and how to view your monthly bill.
+Cost optimization refers to ways to reduce unnecessary expenses and improve operational efficiencies. You can significantly reduce your cost for Azure Monitor by understanding your different configuration options and opportunities to reduce the amount of data that it collects. See [Azure Monitor cost and usage](cost-usage.md) to understand the different ways that Azure Monitor charges and how to view your monthly bill.
> [!NOTE] > See [Optimize costs in Azure Monitor](best-practices-cost.md) for cost optimization recommendations across all features of Azure Monitor.
azure-monitor Best Practices Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-containers.md
Security is one of the most important aspects of any architecture. Azure Monitor
## Cost optimization
-Cost optimization refers to ways to reduce unnecessary expenses and improve operational efficiencies. You can significantly reduce your cost for Azure Monitor by understanding your different configuration options and opportunities to reduce the amount of data that it collects. See [Azure Monitor cost and usage](usage-estimated-costs.md) to understand the different ways that Azure Monitor charges and how to view your monthly bill.
+Cost optimization refers to ways to reduce unnecessary expenses and improve operational efficiencies. You can significantly reduce your cost for Azure Monitor by understanding your different configuration options and opportunities to reduce the amount of data that it collects. See [Azure Monitor cost and usage](cost-usage.md) to understand the different ways that Azure Monitor charges and how to view your monthly bill.
> [!NOTE] > See [Optimize costs in Azure Monitor](best-practices-cost.md) for cost optimization recommendations across all features of Azure Monitor.
azure-monitor Best Practices Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-cost.md
# Cost optimization in Azure Monitor
-Cost optimization refers to ways to reduce unnecessary expenses and improve operational efficiencies. You can significantly reduce your cost for Azure Monitor by understanding your different configuration options and opportunities to reduce the amount of data that it collects. Before you use this article, you should see [Azure Monitor cost and usage](usage-estimated-costs.md) to understand the different ways that Azure Monitor charges and how to view your monthly bill.
+Cost optimization refers to ways to reduce unnecessary expenses and improve operational efficiencies. You can significantly reduce your cost for Azure Monitor by understanding your different configuration options and opportunities to reduce the amount of data that it collects. Before you use this article, you should see [Azure Monitor cost and usage](cost-usage.md) to understand the different ways that Azure Monitor charges and how to view your monthly bill.
This article describes [Cost optimization](/azure/architecture/framework/cost/) for Azure Monitor as part of the [Azure Well-Architected Framework](/azure/architecture/framework/). This is a set of guiding tenets that can be used to improve the quality of a workload. The framework consists of five pillars of architectural excellence:
azure-monitor Best Practices Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-logs.md
Security is one of the most important aspects of any architecture. Azure Monitor
## Cost optimization
-Cost optimization refers to ways to reduce unnecessary expenses and improve operational efficiencies. You can significantly reduce your cost for Azure Monitor by understanding your different configuration options and opportunities to reduce the amount of data that it collects. See [Azure Monitor cost and usage](usage-estimated-costs.md) to understand the different ways that Azure Monitor charges and how to view your monthly bill.
+Cost optimization refers to ways to reduce unnecessary expenses and improve operational efficiencies. You can significantly reduce your cost for Azure Monitor by understanding your different configuration options and opportunities to reduce the amount of data that it collects. See [Azure Monitor cost and usage](cost-usage.md) to understand the different ways that Azure Monitor charges and how to view your monthly bill.
> [!NOTE] > See [Optimize costs in Azure Monitor](best-practices-cost.md) for cost optimization recommendations across all features of Azure Monitor.
azure-monitor Best Practices Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-plan.md
This article is part of the scenario [Recommendations for configuring Azure Moni
If you're not already familiar with monitoring concepts, start with the [Cloud monitoring guide](/azure/cloud-adoption-framework/manage/monitor), which is part of the [Microsoft Cloud Adoption Framework for Azure](/azure/cloud-adoption-framework/). That guide defines high-level concepts of monitoring and provides guidance for defining requirements for your monitoring environment and supporting processes. This article refers to sections of that guide that are relevant to particular planning steps. ## Understand Azure Monitor costs
-Minimizing costs is a core goal of your monitoring strategy. Some data collection and features in Azure Monitor have no cost. However, others have costs based on their particular configuration, amount of data collected, or frequency that they're run. The articles in this scenario identify any recommendations that include a cost, but you should be familiar with Azure Monitor pricing as you design your implementation for cost optimization. See the following pages for details and guidance on Azure Monitor pricing:
+A core goal of your monitoring strategy is minimizing costs. Some data collection and features in Azure Monitor have no cost, while others have costs based on their particular configuration, the amount of data collected, or the frequency at which they're run. The articles in this scenario identify any recommendations that include a cost, but you should be familiar with Azure Monitor pricing as you design your implementation for cost optimization. See the following articles for details and guidance on Azure Monitor pricing:
- [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/)-- [Monitor usage and estimated costs in Azure Monitor](usage-estimated-costs.md)
+- [Azure Monitor cost and usage](cost-usage.md)
+- [Cost optimization in Azure Monitor](best-practices-cost.md)
## Define strategy Before you design and implement any monitoring solution, you should establish a monitoring strategy so that you understand the goals and requirements of your plan. The strategy defines your particular requirements, the configuration that best meets those requirements, and processes to use the monitoring environment to maximize your applications' performance and reliability. The configuration options that you choose for Azure Monitor should be consistent with your strategy.
-See [Cloud monitoring guide: Formulate a monitoring strategy](/azure/cloud-adoption-framework/strategy/monitoring-strategy) for many factors that you should consider when developing a monitoring strategy. You should also refer to [Monitoring strategy for cloud deployment models](/azure/cloud-adoption-framework/manage/monitor/cloud-models-monitor-overview) for assistance with comparing completely cloud based monitoring with a hybrid model.
+See [Cloud monitoring guide: Formulate a monitoring strategy](/azure/cloud-adoption-framework/strategy/monitoring-strategy) for a number of factors that you should consider when developing a monitoring strategy. You should also refer to [Monitoring strategy for cloud deployment models](/azure/cloud-adoption-framework/manage/monitor/cloud-models-monitor-overview), which assists in comparing completely cloud-based monitoring with a hybrid model.
## Gather required information Before you determine the details of your implementation, you should gather information required to define those details. The following sections described information typically required for a complete implementation of Azure Monitor. ### What needs to be monitored?
- You don't need to necessarily configure complete monitoring for all of your cloud resources but instead focus on your critical applications and the components they depend on. This focus will not only reduce your monitoring costs but also reduce the complexity of your monitoring environment. See [Cloud monitoring guide: Collect the right data](/azure/cloud-adoption-framework/manage/monitor/data-collection) for guidance on defining the data that you require.
+ You won't necessarily configure complete monitoring for all of your cloud resources; instead, focus on your critical applications and the components they depend on. This not only reduces your monitoring costs but also reduces the complexity of your monitoring environment. See [Cloud monitoring guide: Collect the right data](/azure/cloud-adoption-framework/manage/monitor/data-collection) for guidance on defining the data that you require.
### Who needs to have access and be notified
-As you configure your monitoring environment, you need to determine the folllowing:
--- Which users should have access to monitoring data-- Which users need to be notified when an issue is detected-
-These users may be application and resource owners, or you may have a centralized monitoring team. This information determines how you configure permissions for data access and notifications for alerts. You may also require custom workbooks to present particular sets of information to different users.
+As you configure your monitoring environment, you need to determine which users should have access to monitoring data and which users need to be notified when an issue is detected. These users may be application and resource owners, or you may have a centralized monitoring team. This information determines how you configure permissions for data access and notifications for alerts. You may also require custom workbooks to present particular sets of information to different users.
### Service level agreements Your organization may have SLAs that define your commitments for performance and uptime of your applications. These SLAs may determine how you need to configure time sensitive features of Azure Monitor such as alerts. You also need to understand [data latency in Azure Monitor](logs/data-ingestion-time.md) since this affects the responsiveness of monitoring scenarios and your ability to meet SLAs. ## Identify monitoring services and products
-Azure Monitor is designed to address Health and Status monitoring. A complete monitoring solution typically involves multiple Azure services and potentially other products. Other monitoring objectives, which may require more solutions, are described in the Cloud Monitoring Guide in [primary monitoring objectives](/azure/cloud-adoption-framework/strategy/monitoring-strategy#formulate-monitoring-requirements).
+Azure Monitor is designed to address Health and Status monitoring. A complete monitoring solution typically involves multiple Azure services and potentially other products. Other monitoring objectives, which may require additional solutions, are described in the Cloud Monitoring Guide in [primary monitoring objectives](/azure/cloud-adoption-framework/strategy/monitoring-strategy#formulate-monitoring-requirements).
The following sections describe other services and products that you may use with Azure Monitor. This scenario currently doesn't include guidance on implementing these solutions so you should refer to their documentation.
azure-monitor Best Practices Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-vm.md
Security is one of the most important aspects of any architecture. Azure Monitor
## Cost optimization
-Cost optimization refers to ways to reduce unnecessary expenses and improve operational efficiencies. You can significantly reduce your cost for Azure Monitor by understanding your different configuration options and opportunities to reduce the amount of data that it collects. See [Azure Monitor cost and usage](usage-estimated-costs.md) to understand the different ways that Azure Monitor charges and how to view your monthly bill.
+Cost optimization refers to ways to reduce unnecessary expenses and improve operational efficiencies. You can significantly reduce your cost for Azure Monitor by understanding your different configuration options and opportunities to reduce the amount of data that it collects. See [Azure Monitor cost and usage](cost-usage.md) to understand the different ways that Azure Monitor charges and how to view your monthly bill.
> [!NOTE] > See [Optimize costs in Azure Monitor](best-practices-cost.md) for cost optimization recommendations across all features of Azure Monitor.
azure-monitor Container Insights Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-cost.md
Last updated 03/02/2023 + # Understand monitoring costs for Container insights This article provides pricing guidance for Container insights to help you understand how to:
-* Estimate costs up front before you enable Container insights.
* Measure costs after Container insights has been enabled for one or more containers. * Control the collection of data and make cost reductions.
This article provides pricing guidance for Container insights to help you unders
The Azure Monitor pricing model is primarily based on the amount of data ingested in gigabytes per day into your Log Analytics workspace. The cost of a Log Analytics workspace isn't based only on the volume of data collected; it also depends on the plan selected and how long you choose to store data generated from your clusters. >[!NOTE]
->All sizes and pricing are for sample estimation only. See the Azure Monitor [pricing](https://azure.microsoft.com/pricing/details/monitor/) page for the most recent pricing based on your Azure Monitor Log Analytics pricing model and Azure region.
+> See [Estimate Azure Monitor costs](../cost-estimate.md#log-data-ingestion) to estimate your costs for Container insights before you enable it.
The following types of data collected from a Kubernetes cluster with Container insights influence cost and can be customized based on your usage:
The following types of data collected from a Kubernetes cluster with Container i
- Active scraping of Prometheus metrics - [Resource log collection](../../aks/monitor-aks.md#resource-logs) of Kubernetes main node logs in your Azure Kubernetes Service (AKS) cluster to analyze log data generated by main components, such as `kube-apiserver` and `kube-controller-manager`.
-## Estimating costs to monitor your AKS cluster
-
-The following estimation is based on an AKS cluster with the following sizing example. The estimate applies only for metrics and inventory data collected. For container logs like stdout, stderr, and environmental variables, the estimate varies based on the log sizes generated by the workload. They're excluded from our estimation.
-
-If you enabled monitoring of an AKS cluster configured as follows:
-- Three nodes
-- Two disks per node
-- One network interface per node
-- 20 pods (one container in each pod = 20 containers in total)
-- Two Kubernetes namespaces
-- Five Kubernetes services (includes kube-system pods, services, and namespace)
-- Collection frequency = 60 secs (default)
-
-You can see the tables and volume of data generated per hour in the assigned Log Analytics workspace. For more information about each of these tables, see [Azure Monitor Logs tables](../../aks/monitor-aks-reference.md#azure-monitor-logs-tables).
-
-|Table | Size estimate (MB/hour) |
-|||
-|Perf | 12.9 |
-|InsightsMetrics | 11.3 |
-|KubePodInventory | 1.5 |
-|KubeNodeInventory | 0.75 |
-|KubeServices | 0.13 |
-|ContainerInventory | 3.6 |
-|KubeHealth | 0.1 |
-|KubeMonAgentEvents |0.005 |
-
-Total = 31 MB/hour = 23.1 GB/month (one month = 31 days)
-
-By using the default [pricing](https://azure.microsoft.com/pricing/details/monitor/) for Log Analytics, which is a pay-as-you-go model, you can estimate the Azure Monitor cost per month. After a capacity reservation is included, the price would be higher per month depending on the reservation selected.
## Control ingestion to reduce cost
You must be on the ContainerLogV2 schema to configure Basic Logs. For more infor
### Prometheus metrics scraping
-If you use [Prometheus metric scraping](container-insights-prometheus.md), make sure that you limit the number of metrics you collect from your cluster:
+> [!NOTE]
+> This section describes [collection of Prometheus metrics in your Log Analytics workspace](container-insights-prometheus-logs.md). This information does not apply if you're using [Managed Prometheus to scrape your Prometheus metrics](prometheus-metrics-enable.md).
+
+If you [collect Prometheus metrics in your Log Analytics workspace](container-insights-prometheus-logs.md), make sure that you limit the number of metrics you collect from your cluster:
- Ensure that the scraping frequency is optimally set. The default is 60 seconds. You can increase the frequency to 15 seconds, but you must ensure that the metrics you're scraping are published at that frequency. Otherwise, many duplicate metrics will be scraped and sent to your Log Analytics workspace at intervals that add to data ingestion and retention costs but are of less value.
- Container insights supports exclusion and inclusion lists by metric name. For example, if you're scraping **kubedns** metrics in your cluster, hundreds of them might get scraped by default, but you're most likely only interested in a subset of them. Confirm that you specified a list of metrics to scrape, or exclude all but a few, to save on data ingestion volume. It's easy to enable scraping without using many of those metrics, which will only add charges to your Log Analytics bill.
If you use [Prometheus metric scraping](container-insights-prometheus.md), make
Container insights includes a predefined set of metrics and inventory items collected that are written as log data in your Log Analytics workspace. All metrics in the following table are collected every one minute.
+
| Type | Metrics |
|:|:|
| Node metrics | `cpuUsageNanoCores`<br>`cpuCapacityNanoCores`<br>`cpuAllocatableNanoCores`<br>`memoryRssBytes`<br>`memoryWorkingSetBytes`<br>`memoryCapacityBytes`<br>`memoryAllocatableBytes`<br>`restartTimeEpoch`<br>`used` (disk)<br>`free` (disk)<br>`used_percent` (disk)<br>`io_time` (diskio)<br>`writes` (diskio)<br>`reads` (diskio)<br>`write_bytes` (diskio)<br>`write_time` (diskio)<br>`iops_in_progress` (diskio)<br>`read_bytes` (diskio)<br>`read_time` (diskio)<br>`err_in` (net)<br>`err_out` (net)<br>`bytes_recv` (net)<br>`bytes_sent` (net)<br>`Kubelet_docker_operations` (kubelet) |
The following list is the cluster inventory data collected by default:
## Next steps To help you understand what the costs are likely to be based on recent usage patterns from data collected with Container insights, see [Analyze usage in a Log Analytics workspace](../logs/analyze-usage.md).+
azure-monitor Cost Estimate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/cost-estimate.md
+
+ Title: Estimate Azure Monitor costs
+description: Guidance on using the Azure Monitor pricing calculator to estimate Azure Monitor billable usage.
+++ Last updated : 10/27/2023+
+# Estimate Azure Monitor costs
+
+Your Azure Monitor cost will vary significantly based on your expected utilization and configuration. Use the [Azure Monitor Pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=monitor) to get cost estimates for different features of Azure Monitor based on your particular environment.
+
+Since Azure Monitor has [multiple types of charges](cost-usage.md#pricing-model), its calculator has multiple categories. See the sections below for an explanation of these categories and guidance for providing estimates. See [Azure Monitor Pricing](https://azure.microsoft.com/pricing/details/monitor/) for current pricing details.
+
+Some of the values required by the calculator might be difficult to provide if you're just getting started with Azure Monitor. For example, you might have no idea of the volume of analytics logs generated from the different Azure resources that you intend to monitor. A common strategy is to enable monitoring for a small group of resources and use the observed data volumes with the calculator to determine your costs for a full environment. See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for queries and other methods to measure the billable data in your Log Analytics workspace.
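For example, a minimal sketch of sampling observed billable volume with the Az.OperationalInsights PowerShell module; the workspace ID is a placeholder, and the `Usage` table query shown is a common measurement pattern rather than the only way to measure ingestion:

```powershell
# Query the Usage table for billable volume over the last 31 days.
# Requires the Az.Accounts and Az.OperationalInsights modules.
Connect-AzAccount

$workspaceId = "00000000-0000-0000-0000-000000000000"  # workspace (customer) ID placeholder

$query = @"
Usage
| where TimeGenerated > ago(31d)
| where IsBillable == true
| summarize BillableGB = sum(Quantity) / 1024 by DataType
| sort by BillableGB desc
"@

$result = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query
$result.Results | Format-Table DataType, BillableGB
```

You can then feed the observed volumes into the calculator categories described below.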
+++
+## Log data ingestion
+This section includes the ingestion and retention of data in your Log Analytics workspaces. This includes features such as Container insights and Application Insights, in addition to resource logs collected from your Azure resources and agents installed on your virtual machines. This is typically where the bulk of monitoring costs is incurred.
+
+| Category | Description |
+|:|:|
+| Estimate Data Volume For Monitoring VMs | Data collected from your virtual machines, either by using VM insights or by creating a DCR (data collection rule) to collect events and performance data. The data collected from each VM will vary significantly depending on your particular collection settings and the workloads running on your virtual machines, so you should validate these estimates in your own environment. |
+| Estimate Data Volume Using Container Insights | Data collected from your Kubernetes clusters. The estimate is based on the number of clusters and their configuration. This estimate applies only for metrics and inventory data collected. Container logs (stdout, stderr, and environmental variables) vary significantly based on the log sizes generated by the workload, and they're excluded from this estimate. You should include their volume in the *Analytics Logs* category. |
+| Estimate Data Volume Based On Application Activity | Data collected from your workspace-based applications using Application Insights. The data collected from each application will vary significantly depending on your particular collection settings and applications, so you should validate these estimates in your own environment. |
+| Analytics Logs | Resource logs collected from Azure resources and any other data aside from those listed above sent to Log Analytics tables not configured for [basic logs](logs/basic-logs-configure.md). This can be difficult to estimate so you should enable monitoring for a small group of resources and use the observed data volumes to extrapolate for a full environment. |
+| Basic Logs | Resource logs collected from Azure resources and any other data aside from those listed above sent to Log Analytics tables configured for [basic logs](logs/basic-logs-configure.md). This can be difficult to estimate so you should enable monitoring for a small group of resources and use the observed data volumes to extrapolate for a full environment. |
+| Interactive Data Retention | [Interactive retention](logs/data-retention-archive.md) setting for your Log Analytics workspace. |
+| Data Archive | [Archive](logs/data-retention-archive.md) setting for your Log Analytics workspace. |
+| Basic Logs Search Queries | Estimated number and scanned data of the queries that you expect to run using tables configured for [basic logs](logs/basic-logs-configure.md). |
+| Search Jobs | Estimated number and scanned data of the [search jobs](logs/search-jobs.md) that you expect to run against [archived data](logs/data-retention-archive.md). |
+| Platform logs | Resource logs collected from Azure resources to an Event Hub, Storage account, or a partner. This doesn't include logs sent to your Log Analytics workspace, which are included in the **Analytics Logs** and **Basic Logs** categories. This can be difficult to estimate, so you should enable monitoring for a small group of resources and use the observed data volumes to extrapolate for a full environment (see the sketch after this table). |
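As a hypothetical illustration of that extrapolation strategy, the following sketch scales the volume observed from a pilot group of VMs to a full environment; every number here, including the pay-as-you-go rate, is an assumption to replace with your own observations and the current pricing page:

```powershell
# Extrapolate pilot ingestion to a full environment (all values are illustrative).
$observedGBPerVmPerDay = 0.08    # measured from a small pilot group
$vmCount               = 500     # size of the full environment
$pricePerGB            = 2.30    # assumed pay-as-you-go rate; varies by region

$monthlyGB   = $observedGBPerVmPerDay * $vmCount * 31
$monthlyCost = $monthlyGB * $pricePerGB

"Estimated ingestion: {0:N0} GB/month, roughly {1:N0} USD/month at pay-as-you-go" -f $monthlyGB, $monthlyCost
```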
+
+## Managed Prometheus
+This section includes charges for the ingestion and query of Prometheus metrics by your Kubernetes clusters.
+
+| Category | Description |
+|:|:|
+| Metric Sample Ingestion | Number and frequency of the Prometheus metrics collected by your AKS nodes. See [Default Prometheus metrics configuration in Azure Monitor](containers/prometheus-metrics-scrape-default.md). |
+| Query Samples Processed | Number of query samples processed, which you can estimate from the dashboards and alerting rules that query the metrics (see the sketch after this table). |
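To turn these inputs into a calculator estimate, a back-of-the-envelope sketch that converts an assumed time series count and scrape interval into metric samples per month (both inputs are assumptions):

```powershell
# Estimate Prometheus metric samples ingested per month (illustrative inputs).
$timeSeries      = 50000    # active time series across your clusters
$scrapeInterval  = 30       # seconds between scrapes
$secondsPerMonth = 60 * 60 * 24 * 30

$samplesPerMonth = $timeSeries * ($secondsPerMonth / $scrapeInterval)
"Estimated ingestion: {0:N0} metric samples/month" -f $samplesPerMonth
```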
++
+## Application Insights
+This section includes charges from [classic Application Insights resources](app/convert-classic-resource.md). Workspace-based Application Insights resources are included in the Log Data Ingestion category.
+
+| Category | Description |
+|:|:|
+| Data ingestion | Volume of data that you expect from your classic Application Insights resources. This can be difficult to estimate so you should enable monitoring for a small group of resources and use the observed data volumes to extrapolate for a full environment. |
+| Data Retention | [Data retention setting](logs/data-retention-archive.md#set-data-retention-for-classic-application-insights-resources) for your classic Application Insights resources. |
+| Multi-step Web Test | Number of legacy [multi-step web tests](/previous-versions/azure/azure-monitor/app/availability-multistep) that you expect to run. |
++
+## Alert rules
+This section includes charges for alert rules.
+
+| Category | Description |
+|:|:|
+| Metric Signals Monitored | Number of [metrics alert rules](alerts/alerts-types.md#metric-alerts) and their time series. |
+| Log Signals Monitored | Number of [log alert rules](alerts/alerts-types.md#log-alerts) and their frequency. |
+
+## ITSM connector - ticket creation/update
+This section includes charges for ITSM events, which are sent in response to alerts being triggered.
+
+| Category | Description |
+|:|:|
+| Ticket creation/update | Estimate the number of ITSM events that will be sent beyond the number included for free. |
++
+## Notifications
+This section includes charges for notifications, which are sent in response to alerts being triggered.
+
+| Category | Description |
+|:|:|
+| Emails, webhooks, and push notifications | Estimate the number of different types of notifications that will be sent beyond the number included for free. |
+++
+## Next steps
+
+- See [Azure Monitor Logs pricing details](logs/cost-logs.md) for details on how charges are calculated for data in a Log Analytics workspace and different configuration options to reduce your charges.
+- See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for details on analyzing the data in your workspace to determine the source of any higher-than-expected usage and opportunities to reduce the amount of data you collect.
+- See [Azure Monitor best practices - Cost management](best-practices-cost.md) for best practices on configuring and managing Azure Monitor to minimize your charges.
azure-monitor Cost Meters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/cost-meters.md
+
+ Title: Azure Monitor billing meter names
+description: Reference of Azure Monitor billing meter names.
+++ Last updated : 09/20/2023+
+# Azure Monitor billing meter names
+
+This article contains a reference of the billing meter names used by Azure Monitor in [Azure Cost Management + Billing](cost-usage.md#azure-cost-management--billing). Use this information to interpret your monthly charges for Azure Monitor.
+
+## Log data ingestion
+The following table lists the meters used to bill for data ingestion in your Log Analytics workspaces and whether the meter is regional. Each region has a distinct billing meter (`MeterId` in the exported usage report). Note that Basic Logs ingestion can be used when the workspace's pricing tier is Pay-as-you-go or any commitment tier.
++
+| Pricing tier |ServiceName | MeterName | Regional Meter? |
+| -- | -- | -- | -- |
+| (any) | Azure Monitor | Basic Logs Data Ingestion | yes |
+| Pay-as-you-go | Log Analytics | Pay-as-you-go Data Ingestion | yes |
+| 100 GB/day Commitment Tier | Azure Monitor | 100 GB Commitment Tier Capacity Reservation | yes |
+| 200 GB/day Commitment Tier | Azure Monitor | 200 GB Commitment Tier Capacity Reservation | yes |
+| 300 GB/day Commitment Tier | Azure Monitor | 300 GB Commitment Tier Capacity Reservation | yes |
+| 400 GB/day Commitment Tier | Azure Monitor | 400 GB Commitment Tier Capacity Reservation | yes |
+| 500 GB/day Commitment Tier | Azure Monitor | 500 GB Commitment Tier Capacity Reservation | yes |
+| 1000 GB/day Commitment Tier | Azure Monitor | 1000 GB Commitment Tier Capacity Reservation | yes |
+| 2000 GB/day Commitment Tier | Azure Monitor | 2000 GB Commitment Tier Capacity Reservation | yes |
+| 5000 GB/day Commitment Tier | Azure Monitor | 5000 GB Commitment Tier Capacity Reservation | yes |
+| Per Node (legacy tier) | Insight and Analytics | Standard Node | no |
+| Per Node (legacy tier) | Insight and Analytics | Standard Data Overage per Node | no |
+| Per Node (legacy tier) | Insight and Analytics | Standard Data Included per Node | no |
+| Standalone (legacy tier) | Log Analytics | Pay-as-you-go Data Analyzed | no |
+| Standard (legacy tier) | Log Analytics | Standard Data Analyzed | no |
+| Premium (legacy tier) | Log Analytics | Premium Data Analyzed | no |
++
+The *Standard Data Included per Node* meter is used both for the Log Analytics [Per Node tier](logs/cost-logs.md#per-node-pricing-tier) data allowance and for the [Defender for Servers data allowance](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud), for workspaces in any pricing tier.
++
+## Other Azure Monitor logs meters
+
+| ServiceName | MeterName | Regional Meter? |
+| -- | | |
+| Log Analytics | Pay-as-you-go Data Retention | yes |
+| Insight and Analytics | Standard Data Retention | no |
+| Azure Monitor | Data Archive | yes |
+| Azure Monitor | Search Queries Scanned | yes |
+| Azure Monitor | Search Jobs Scanned | yes |
+| Azure Monitor | Data Restore | yes |
+| Azure Monitor | Log Analytics data export Data Exported | yes |
+| Azure Monitor | Platform Logs Data Processed | yes |
+
+*Pay-as-you-go Data Retention* is used for workspaces in all modern pricing tiers (Pay-as-you-go and Commitment Tiers). *Standard Data Retention* is used for workspaces in the legacy Per Node and Standalone pricing tiers.
+
+## Azure Monitor metrics meters
+
+| ServiceName | MeterName | Regional Meter? |
+| -- | | |
+| Azure Monitor | Metrics ingestion Metric samples | yes |
+| Azure Monitor | Prometheus Metrics Queries Metric samples | yes |
+| Azure Monitor | Native Metric Queries API Calls | yes |
+
+## Azure Monitor alerts meters
+
+| ServiceName | MeterName | Regional Meter? |
+| -- | | |
+| Azure Monitor | Alerts Metric Monitored | no |
+| Azure Monitor | Alerts Dynamic Threshold | no |
+| Azure Monitor | Alerts System Log Monitored at 1 Minute Frequency | no |
+| Azure Monitor | Alerts System Log Monitored at 10 Minute Frequency | no |
+| Azure Monitor | Alerts System Log Monitored at 15 Minute Frequency | no |
+| Azure Monitor | Alerts System Log Monitored at 5 Minute Frequency | no |
+| Azure Monitor | Alerts Resource Monitored at 1 Minute Frequency | no |
+| Azure Monitor | Alerts Resource Monitored at 10 Minute Frequency | no |
+| Azure Monitor | Alerts Resource Monitored at 15 Minute Frequency | no |
+| Azure Monitor | Alerts Resource Monitored at 5 Minute Frequency | no |
+
+## Azure Monitor web test meters
+
+| ServiceName | MeterName | Regional Meter? |
+| -- | | |
+| Azure Monitor | Standard Web Test Execution | yes |
+| Application Insights | Multi-step Web Test | no |
+
+## Legacy classic Application Insights meters
+
+| ServiceName | MeterName | Regional Meter? |
+| -- | | |
+| Application Insights | Enterprise Node | no |
+| Application Insights | Enterprise Overage Data | no |
++
+### Legacy Application Insights meters
+
+Most Application Insights usage for both classic and workspace-based resources is reported on meters with **Log Analytics** for **Meter Category** because there's a single log back-end for all Azure Monitor components. Only Application Insights resources on legacy pricing tiers and multiple-step web tests are reported with **Application Insights** for **Meter Category**. The usage is shown in the **Consumed Quantity** column. The unit for each entry is shown in the **Unit of Measure** column. For more information, see [Understand your Microsoft Azure bill](../cost-management-billing/understand/review-individual-bill.md).
+
+To separate costs from your Log Analytics and classic Application Insights usage, [create a filter](../cost-management-billing/costs/group-filter.md) on **Resource type**. To see all Application Insights costs, filter **Resource type** to **microsoft.insights/components**. For Log Analytics costs, filter **Resource type** to **microsoft.operationalinsights/workspaces**. (All workspace-based Application Insights usage is billed to the Log Analytics workspace resource.)
azure-monitor Cost Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/cost-usage.md
+
+ Title: Azure Monitor cost and usage
+description: Overview of how Azure Monitor is billed and how to analyze billable usage.
+++ Last updated : 10/20/2023+
+# Azure Monitor cost and usage
+This article describes the different ways that Azure Monitor charges for usage and how to evaluate charges on your Azure bill.
++
+## Pricing model
+Azure Monitor uses a consumption-based pricing (pay-as-you-go) billing model where you only pay for what you use. Features of Azure Monitor that are enabled by default do not incur any charge, including collection and alerting on the [Activity log](essentials/activity-log.md) and collection and analysis of [platform metrics](essentials/metrics-supported.md).
+
+Several other features don't have a direct cost, but you instead pay for the ingestion and retention of data that they collect. The following table describes the different types of usage that are charged in Azure Monitor. Detailed current pricing for each is provided in [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
++
+| Type | Description |
+|:|:|
+| Logs | Ingestion, retention, and export of data in [Log Analytics workspaces](logs/log-analytics-workspace-overview.md) and [legacy Application Insights resources](app/convert-classic-resource.md). This will typically be the bulk of Azure Monitor charges for most customers. There is no charge for querying this data except in the case of [Basic Logs](logs/basic-logs-configure.md) or [Archived Logs](logs/data-retention-archive.md).<br><br>Charges for Logs can vary significantly based on the configuration that you choose. See [Azure Monitor Logs pricing details](logs/cost-logs.md) for details on how charges for Logs data are calculated and the different pricing tiers available. |
+| Platform Logs | Processing of [diagnostic and auditing information](essentials/resource-logs.md) is charged for [certain services](essentials/resource-logs-categories.md#costs) when sent to destinations other than a Log Analytics workspace. There's no direct charge when this data is sent to a Log Analytics workspace, but there is a charge for the workspace data ingestion and collection. |
+| Metrics | There is no charge for [standard metrics](essentials/metrics-supported.md) collected from Azure resources. There is a cost for collecting [custom metrics](essentials/metrics-custom-overview.md) and for retrieving metrics from the [REST API](essentials/rest-api-walkthrough.md#retrieve-metric-values). |
+| Prometheus Metrics | Pricing for [Azure Monitor managed service for Prometheus](essentials/prometheus-metrics-overview.md) is based on [data samples ingested](essentials/prometheus-metrics-enable.md) and [query samples processed](essentials/azure-monitor-workspace-manage.md#link-a-grafana-workspace). Data is retained for 18 months at no extra charge. |
+| Alerts | Alerts are charged based on the type and number of [signals](alerts/alerts-overview.md) used by the alert rule, its frequency, and the type of [notification](alerts/action-groups.md) used in response. For [log alerts](alerts/alerts-unified-log.md) configured for [at scale monitoring](alerts/alerts-unified-log.md#split-by-alert-dimensions), the cost will also depend on the number of time series created by the dimensions resulting from your query. |
+| Web tests | There is a cost for [standard web tests](app/availability-standard-tests.md) and [multi-step web tests](app/availability-multistep.md) in Application Insights. Multi-step web tests have been deprecated. |
++
+### Data transfer charges
+Sending data to Azure Monitor can incur data bandwidth charges. As described in the [Azure Bandwidth pricing page](https://azure.microsoft.com/pricing/details/bandwidth/), data transfer between Azure services located in two regions is charged as outbound data transfer at the normal rate. Inbound data transfer is free. However, data transfer charges for Azure Monitor are typically very small compared to the costs for data ingestion and retention. Focus on your ingested data volume to control your costs.
+
+> [!NOTE]
+> Data sent to a different region using [Diagnostic Settings](essentials/diagnostic-settings.md) does not incur data transfer charges.
+
+## View Azure Monitor usage and charges
+There are two primary tools to view and analyze your Azure Monitor billing and estimated charges. Each is described in detail in the following sections.
+
+| Tool | Description |
+|:|:|
+| [Azure Cost Management + Billing](#azure-cost-management--billing) | The primary tool that you use to analyze your usage and costs. It gives you multiple options to analyze your monthly charges for different Azure Monitor features and their projected cost over time. |
+| [Usage and Estimated Costs](#usage-and-estimated-costs) | Provides a listing of monthly charges for different Azure Monitor features. This is particularly useful for Log Analytics workspaces where it helps you to select your pricing tier by showing how your cost would change at different pricing tiers. |
++
+## Azure Cost Management + Billing
+To get started analyzing your Azure Monitor charges, open [Cost Management + Billing](../cost-management-billing/costs/quick-acm-cost-analysis.md?toc=/azure/billing/TOC.json) in the Azure portal. This tool includes several built-in dashboards for deep cost analysis like cost by resource and invoice details. Select **Cost Management** and then **Cost analysis**. Select your subscription or another [scope](../cost-management-billing/costs/understand-work-scopes.md).
+
+>[!NOTE]
+>You might need additional access to use Cost Management data. See [Assign access to Cost Management data](../cost-management-billing/costs/assign-access-acm-data.md).
+++
+To limit the view to Azure Monitor charges, [create a filter](../cost-management-billing/costs/group-filter.md) for the following **Service names**. See [Azure Monitor billing meter names](cost-meters.md) for the different charges that are included in each service.
+
+- Azure Monitor
+- Log Analytics
+- Insight and Analytics
+- Application Insights
+
+Other services such as Microsoft Defender for Cloud and Microsoft Sentinel also bill their usage against Log Analytics workspace resources, so you might want to add them to your filter. See [Common cost analysis uses](../cost-management-billing/costs/cost-analysis-common-uses.md) for details on using this view.
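If you'd rather pull a similar summary programmatically, here's a sketch using the Az.Billing consumption cmdlet; treat it as illustrative, since the available properties can vary by subscription offer type:

```powershell
# Summarize current-month Azure Monitor charges by meter category.
# Requires the Az.Billing module; property availability varies by offer type.
$monitorServices = 'Azure Monitor', 'Log Analytics', 'Insight and Analytics', 'Application Insights'

Get-AzConsumptionUsageDetail -StartDate (Get-Date -Day 1) -EndDate (Get-Date) -Expand MeterDetails |
    Where-Object { $monitorServices -contains $_.MeterDetails.MeterCategory } |
    Group-Object { $_.MeterDetails.MeterCategory } |
    ForEach-Object {
        [pscustomobject]@{
            Service = $_.Name
            Cost    = ($_.Group | Measure-Object -Property PretaxCost -Sum).Sum
        }
    }
```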
++
+>[!NOTE]
+>Alternatively, you can go to the **Overview** page of a Log Analytics workspace or Application Insights resource and click **View Cost** in the upper right corner of the **Essentials** section. This will launch the **Cost Analysis** from Azure Cost Management + Billing already scoped to the workspace or application.
+> :::image type="content" source="logs/media/view-bill/view-cost-option.png" lightbox="logs/media/view-bill/view-cost-option.png" alt-text="Screenshot of option to view cost for Log Analytics workspace.":::
+
+### Automated mails and alerts
+Rather than manually analyzing your costs in the Azure portal, you can automate delivery of information using the following methods.
+
+ - **Daily cost analysis emails.** Once you've configured your Cost Analysis view, you should click **Subscribe** at the top of the screen to receive regular email updates from Cost Analysis.
 - **Budget alerts.** To be notified if there are significant increases in your spending, create [budget alerts](../cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending.md) for a single workspace or group of workspaces; a PowerShell sketch follows this list.
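As a sketch of the budget approach, the following creates a monthly cost budget with an email notification at 80% of the amount, assuming the Az.Billing module; the name, amount, and email address are illustrative:

```powershell
# Create a monthly budget with an email alert at 80% of spend (illustrative values).
New-AzConsumptionBudget -Name "monitor-budget" `
    -Amount 1000 `
    -Category Cost `
    -TimeGrain Monthly `
    -StartDate (Get-Date -Day 1).Date `
    -ContactEmail "ops@contoso.com" `
    -NotificationKey "80PercentAlert" `
    -NotificationEnabled `
    -NotificationThreshold 80
```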
+
+### Export usage details
+
+To gain deeper understanding of your usage and costs, create exports using **Cost Analysis**. See [Tutorial: Create and manage exported data](../cost-management-billing/costs/tutorial-export-acm-data.md) to learn how to automatically create a daily export you can use for regular analysis.
+
+These exports are in CSV format and contain a list of daily usage (billed quantity and cost) by resource, billing meter, and several other fields such as [AdditionalInfo](../cost-management-billing/automate/understand-usage-details-fields.md#list-of-fields-and-descriptions). You can use Microsoft Excel to do rich analyses of your usage that aren't possible in the **Cost analysis** experience in the portal.
+
+The usage export has both the number of units of usage and their cost. Consequently, you can use this export to see the amount of benefits you are receiving from various offers such as the [Defender for Servers data allowance](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud) and the [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5, and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/).
+
+For example, usage from Log Analytics can be found by first filtering on the **Meter Category** column to show
+
+1. **Log Analytics** (for Pay-as-you-go data ingestion and interactive Data Retention),
+2. **Insight and Analytics** (used by some of the legacy pricing tiers), and
+3. **Azure Monitor** (used by most other Log Analytics features such as Commitment Tiers, Basic Logs ingestion, Data Archive, Search Queries, and Search Jobs)
+
+Add a filter on the **Instance ID** column for **contains workspace** or **contains cluster**. The usage is shown in the **Consumed Quantity** column. The unit for each entry is shown in the **Unit of Measure** column.
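The same filtering can be scripted against an exported CSV. A minimal sketch follows; the column names (`MeterCategory`, `InstanceId`, `ConsumedQuantity`) are assumptions, so check the header row of your own export, because field names differ between export schemas:

```powershell
# Summarize Log Analytics usage from a Cost Management export (column names are assumptions).
Import-Csv -Path '.\usage-export.csv' |
    Where-Object {
        ($_.MeterCategory -in @('Log Analytics', 'Insight and Analytics', 'Azure Monitor')) -and
        ($_.InstanceId -like '*workspaces*' -or $_.InstanceId -like '*clusters*')
    } |
    Group-Object MeterCategory |
    ForEach-Object {
        [pscustomobject]@{
            MeterCategory = $_.Name
            Quantity      = ($_.Group | Measure-Object -Property ConsumedQuantity -Sum).Sum
        }
    }
```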
+
+> [!NOTE]
+> See [Azure Monitor billing meter names](cost-meters.md) for a reference of the billing meter names used by Azure Monitor in Azure Cost Management + Billing.
++
+## Usage and estimated costs
+You can get additional usage details about Log Analytics workspaces and Application Insights resources from the **Usage and Estimated Costs** option for each.
+
+### Log Analytics workspace
+To learn about your usage trends and choose the most cost-effective [commitment tier](logs/cost-logs.md#commitment-tiers) for your Log Analytics workspace, select **Usage and Estimated Costs** from the **Log Analytics workspace** menu in the Azure portal.
++
+This view includes the following:
+
+A. Estimated monthly charges based on usage from the past 31 days using the current pricing tier.<br>
+B. Estimated monthly charges using different commitment tiers.<br>
+C. Billable data ingestion by solution from the past 31 days.
+
+To explore the data in more detail, click on the icon in the upper-right corner of either chart to work with the query in Log Analytics.
++
+### Application insights
+To learn about your usage trends for your classic Application Insights resource, select **Usage and Estimated Costs** from the **Applications** menu in the Azure portal.
++
+This view includes the following:
+
+A. Estimated monthly charges based on usage from the past month.<br>
+B. Billable data ingestion by table from the past month.
+
+To investigate your Application Insights usage more deeply, open the **Metrics** page, add the metric named *Data point volume*, and then select the *Apply splitting* option to split the data by "Telemetry item type".
++
+## View data allocation benefits
+
+To view data allocation benefits from sources such as [Microsoft Defender for Servers](https://azure.microsoft.com/pricing/details/defender-for-cloud/), [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5 and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/), or the [Sentinel Free Trial](https://azure.microsoft.com/pricing/details/microsoft-sentinel/), you need to [export your usage details](#export-usage-details).
+
+1. Open the exported usage spreadsheet and filter the *Instance ID* column to your workspace. To select all of your workspaces in the spreadsheet, filter the *Instance ID* column to "contains /workspaces/".
+2. Filter the *ResourceRate* column to show only rows where the rate is zero. These rows show the data allocations from these various sources; a scripted version of this filtering follows.
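A scripted version of that filtering, again with assumed column names (`InstanceId`, `ResourceRate`, `MeterName`, `ConsumedQuantity`):

```powershell
# Show zero-rate rows (data allocation benefits) for workspaces; column names are assumptions.
Import-Csv -Path '.\usage-export.csv' |
    Where-Object { $_.InstanceId -like '*/workspaces/*' -and [double]$_.ResourceRate -eq 0 } |
    Select-Object InstanceId, MeterName, ConsumedQuantity
```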
+
+> [!NOTE]
+> Data allocations from Defender for Servers (500 MB/server/day) appear in rows with the meter name "Data Included per Node" and the meter category "Insight and Analytics" (the name of a legacy offer still used with this meter). If the workspace is in the legacy Per Node Log Analytics pricing tier, this meter also includes the data allocations from this Log Analytics pricing tier.
++
+## Operations Management Suite subscription entitlements
+
+Customers who purchased Microsoft Operations Management Suite E1 and E2 are eligible for per-node data ingestion entitlements for Log Analytics and Application Insights. Each Application Insights node includes up to 200 MB of data ingested per day (separate from Log Analytics data ingestion), with 90-day data retention at no extra cost.
+
+To receive these entitlements for Log Analytics workspaces or Application Insights resources in a subscription, they must use the Per-Node (OMS) pricing tier. This entitlement isn't visible in the estimated costs shown in the Usage and estimated cost pane.
+
+Depending on the number of nodes of the suite that your organization purchased, moving some subscriptions into a Per GB (pay-as-you-go) pricing tier might be advantageous, but this requires careful consideration.
++
+Also, if a subscription has moved to the Azure monitoring pricing model introduced in April 2018, the Per GB tier is the only tier available. Moving a subscription to this pricing model isn't advisable if you have an Operations Management Suite subscription.
+
+> [!TIP]
+> If your organization has Microsoft Operations Management Suite E1 or E2, it's usually best to keep your Log Analytics workspaces in the Per-Node (OMS) pricing tier and your Application Insights resources in the Enterprise pricing tier.
+>
+
+## Next steps
+
+- See [Azure Monitor Logs pricing details](logs/cost-logs.md) for details on how charges are calculated for data in a Log Analytics workspace and different configuration options to reduce your charges.
+- See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for details on analyzing the data in your workspace to determine the source of any higher-than-expected usage and opportunities to reduce the amount of data you collect.
+- See [Set daily cap on Log Analytics workspace](logs/daily-cap.md) to control your costs by setting a daily limit on the amount of data that might be ingested in a workspace.
+- See [Azure Monitor best practices - Cost management](best-practices-cost.md) for best practices on configuring and managing Azure Monitor to minimize your charges.
azure-monitor Metrics Custom Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-custom-overview.md
For details on when billing is enabled for custom metrics and metrics queries, c
Custom metrics are retained for the [same amount of time as platform metrics](../essentials/data-platform-metrics.md#retention-of-metrics). > [!NOTE]
-> Metrics sent to Azure Monitor via the Application Insights SDK are billed as ingested log data. They incur additional metrics charges only if the Application Insights feature [Enable alerting on custom metric dimensions](../app/pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-pre-aggregation) has been selected. This checkbox sends data to the Azure Monitor metrics database by using the custom metrics API to allow the more complex alerting. Learn more about the [Application Insights pricing model](../usage-estimated-costs.md) and [prices in your region](https://azure.microsoft.com/pricing/details/monitor/).
+> Metrics sent to Azure Monitor via the Application Insights SDK are billed as ingested log data. They incur additional metrics charges only if the Application Insights feature [Enable alerting on custom metric dimensions](../app/pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-pre-aggregation) has been selected. This checkbox sends data to the Azure Monitor metrics database by using the custom metrics API to allow the more complex alerting. Learn more about the [Application Insights pricing model](../cost-usage.md) and [prices in your region](https://azure.microsoft.com/pricing/details/monitor/).
## How to send custom metrics
azure-monitor Analyze Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/analyze-usage.md
See the **Usage** tab for a breakdown of ingestion by solution and table. This i
Select **Additional Queries** for prebuilt queries that help you further understand your data patterns. ### Usage and estimated costs
-The **Data ingestion per solution** chart on the [Usage and estimated costs](../usage-estimated-costs.md#usage-and-estimated-costs) page for each workspace shows the total volume of data sent and how much is being sent by each solution over the previous 31 days. This information helps you determine trends such as whether any increase is from overall data usage or usage by a particular solution.
+The **Data ingestion per solution** chart on the [Usage and estimated costs](../cost-usage.md#usage-and-estimated-costs) page for each workspace shows the total volume of data sent and how much is being sent by each solution over the previous 31 days. This information helps you determine trends such as whether any increase is from overall data usage or usage by a particular solution.
## Querying data volumes from the Usage table
W3CIISLog
## Next steps - See [Azure Monitor Logs pricing details](cost-logs.md) for information on how charges are calculated for data in a Log Analytics workspace and different configuration options to reduce your charges.-- See [Azure Monitor cost and usage](../usage-estimated-costs.md) for a description of the different types of Azure Monitor charges and how to analyze them on your Azure bill.
+- See [Azure Monitor cost and usage](../cost-usage.md) for a description of the different types of Azure Monitor charges and how to analyze them on your Azure bill.
- See [Azure Monitor best practices - Cost management](../best-practices-cost.md) for best practices on configuring and managing Azure Monitor to minimize your charges. - See [Data collection transformations in Azure Monitor (preview)](../essentials/data-collection-transformations.md) for information on using transformations to reduce the amount of data you collected in a Log Analytics workspace by filtering unwanted records and columns.
azure-monitor Azure Monitor Data Explorer Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/azure-monitor-data-explorer-proxy.md
Perf | where ObjectName == "Memory" and (CounterName == "Available MBytes Memory
## Create an alert based on a cross-service query
-To create a new alert rule based on a cross-service query, follow the steps in [Create a new alert rule](../alerts/alerts-create-new-alert-rule.md), selecting your Log Analytics workspace on the Scope tab.
+To create a new alert rule based on a cross-service query, follow the steps in [Create a new alert rule](../alerts/alerts-create-new-alert-rule.md), selecting your Log Analytics workspace on the **Scope** tab.
## Limitations-
+### General cross-service query limitations
* Database names are case sensitive.
* Identifying the Timestamp column in the cluster isn't supported. The Log Analytics Query API won't pass the time filter.
* Cross-service queries support data retrieval only.
* [Private Link](../logs/private-link-security.md) (private endpoints) and [IP restrictions](/azure/data-explorer/security-network-restrict-public-access) do not support cross-service queries.
* `mv-expand` is limited to 2000 records.
-* Azure Resource Graph cross-queries do not support these operators: `smv-apply()`, `rand()`, `arg_max()`, `arg_min()`, `avg()`, `avg_if()`, `countif()`, `sumif()`, `percentile()`, `percentiles()`, `percentilew()`, `percentilesw()`, `stdev()`, `stdevif()`, `stdevp()`, `variance()`, `variancep()`, `varianceif()`.
+
+### Azure Resource Graph cross-service query limitations
+When you query Azure Resource Graph data from Azure Monitor:
+* The query returns the first 1000 records only.
+* Azure Monitor doesn't return Azure Resource Graph query errors.
+* The Log Analytics query editor marks valid Azure Resource Graph queries as syntax errors.
+* These operators aren't supported: `smv-apply()`, `rand()`, `arg_max()`, `arg_min()`, `avg()`, `avg_if()`, `countif()`, `sumif()`, `percentile()`, `percentiles()`, `percentilew()`, `percentilesw()`, `stdev()`, `stdevif()`, `stdevp()`, `variance()`, `variancep()`, `varianceif()`.
## Next steps * [Write queries](/azure/data-explorer/write-queries)
azure-monitor Change Pricing Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/change-pricing-tier.md
Last updated 03/25/2022
Each Log Analytics workspace in Azure Monitor can have a different [pricing tier](cost-logs.md#commitment-tiers). This article describes how to change the pricing tier for a workspace and how to track these changes. > [!NOTE]
-> This article describes how to change the commitment tier for a Log Analytics workspace once you determine which commitment tier you want to use. See [Azure Monitor Logs pricing details](cost-logs.md) for details on how commitment tiers work and [Azure Monitor cost and usage](../usage-estimated-costs.md#log-analytics-workspace) for recommendations on the most cost effective commitment based on your observed Azure Monitor usage.
+> This article describes how to change the commitment tier for a Log Analytics workspace once you determine which commitment tier you want to use. See [Azure Monitor Logs pricing details](cost-logs.md) for details on how commitment tiers work and [Azure Monitor cost and usage](../cost-usage.md#log-analytics-workspace) for recommendations on the most cost-effective commitment based on your observed Azure Monitor usage.
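For reference, a hedged sketch of switching tiers with the Az.OperationalInsights module; the resource names are placeholders, and the `-SkuCapacity` parameter for commitment tiers is only available in recent module versions, so verify it in your environment:

```powershell
# Move a workspace to the pay-as-you-go (Per GB) tier; names are placeholders.
Set-AzOperationalInsightsWorkspace -ResourceGroupName "my-rg" -Name "my-workspace" -Sku "PerGB2018"

# Or move to a 100 GB/day commitment tier; -SkuCapacity requires a recent module version.
Set-AzOperationalInsightsWorkspace -ResourceGroupName "my-rg" -Name "my-workspace" `
    -Sku "CapacityReservation" -SkuCapacity 100
```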
## Permissions required To change the pricing tier for a workspace, you must be assigned to one of the following roles:
Changes to a workspace's pricing tier are recorded in the [Activity Log](../esse
## Next steps - See [Azure Monitor Logs pricing details](cost-logs.md) for details on how charges are calculated for data in a Log Analytics workspace and different configuration options to reduce your charges.-- See [Azure Monitor cost and usage](../usage-estimated-costs.md) for a description of the different types of Azure Monitor charges and how to analyze them on your Azure bill.
+- See [Azure Monitor cost and usage](../cost-usage.md) for a description of the different types of Azure Monitor charges and how to analyze them on your Azure bill.
azure-monitor Cost Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cost-logs.md
Billing for the commitment tiers is done per workspace on a daily basis. If the
Azure Commitment Discounts, such as discounts received from [Microsoft Enterprise Agreements](https://www.microsoft.com/licensing/licensing-programs/enterprise), are applied to Azure Monitor Logs commitment-tier pricing just as they are to pay-as-you-go pricing. Discounts are applied whether the usage is being billed per workspace or per dedicated cluster. > [!TIP]
-> The **Usage and estimated costs** menu item for each Log Analytics workspace shows an estimate of what your data ingestion charges would be at each commitment level to help you choose the optimal commitment tier for your data ingestion patterns. Review this information periodically to determine if you can reduce your charges by moving to another tier. For information on this view, see [Usage and estimated costs](../usage-estimated-costs.md#usage-and-estimated-costs). To review your actual charges, use [Azure Cost Management = Billing](../usage-estimated-costs.md#azure-cost-management--billing).
+> The **Usage and estimated costs** menu item for each Log Analytics workspace shows an estimate of what your data ingestion charges would be at each commitment level to help you choose the optimal commitment tier for your data ingestion patterns. Review this information periodically to determine if you can reduce your charges by moving to another tier. For information on this view, see [Usage and estimated costs](../cost-usage.md#usage-and-estimated-costs). To review your actual charges, use [Azure Cost Management + Billing](../cost-usage.md#azure-cost-management--billing).
## Dedicated clusters
This query isn't an exact replication of how usage is calculated, but it provide
## Next steps -- See [Azure Monitor cost and usage](../usage-estimated-costs.md) for a description of the different types of Azure Monitor charges and how to analyze them on your Azure bill.
+- See [Azure Monitor cost and usage](../cost-usage.md) for a description of the different types of Azure Monitor charges and how to analyze them on your Azure bill.
- See [Analyze usage in Log Analytics workspace](analyze-usage.md) for details on analyzing the data in your workspace to determine the source of any higher-than-expected usage and opportunities to reduce your amount of data collected. - See [Set daily cap on Log Analytics workspace](daily-cap.md) to control your costs by configuring a maximum volume that might be ingested in a workspace each day. - See [Azure Monitor best practices - Cost management](../best-practices-cost.md) for best practices on configuring and managing Azure Monitor to minimize your charges.
azure-monitor Daily Cap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/daily-cap.md
The maximum cap for an Application Insights classic resource is 1,000 GB/day unl
We've removed the restriction on some subscription types that have credit that couldn't be used for Application Insights. Previously, if the subscription had a spending limit, the daily cap dialog had instructions to remove the spending limit and enable the daily cap to be raised beyond 32.3 MB/day. ## Determine your daily cap
-To help you determine an appropriate daily cap for your workspace, see [Azure Monitor cost and usage](../usage-estimated-costs.md) to understand your data ingestion trends. You can also review [Analyze usage in Log Analytics workspace](analyze-usage.md) which provides methods to analyze your workspace usage in more detail.
+To help you determine an appropriate daily cap for your workspace, see [Azure Monitor cost and usage](../cost-usage.md) to understand your data ingestion trends. You can also review [Analyze usage in Log Analytics workspace](analyze-usage.md) which provides methods to analyze your workspace usage in more detail.
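Once you settle on a value, one way to apply the cap programmatically is a direct ARM call with `Invoke-AzRestMethod`; the resource ID, cap value, and API version below are illustrative and may need updating for your environment:

```powershell
# Set a 5 GB/day cap on a workspace via the ARM API (IDs and values are illustrative).
$workspaceResourceId = "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.OperationalInsights/workspaces/my-workspace"

$body = @{ properties = @{ workspaceCapping = @{ dailyQuotaGb = 5 } } } | ConvertTo-Json -Depth 5

Invoke-AzRestMethod -Path "$($workspaceResourceId)?api-version=2022-10-01" -Method PATCH -Payload $body
```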
azure-monitor Get Started Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/get-started-queries.md
description: This article provides a tutorial for getting started writing log qu
Previously updated : 10/20/2021 Last updated : 10/31/2023
azure-monitor Migrate Splunk To Azure Monitor Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/migrate-splunk-to-azure-monitor-logs.md
The benefits of migrating to Azure Monitor include:
Your current usage in Splunk will help you decide which [pricing tier](../logs/change-pricing-tier.md) to select in Azure Monitor and estimate your future costs: - [Follow Splunk guidance](https://docs.splunk.com/Documentation/Splunk/latest/Admin/AboutSplunksLicenseUsageReportView) to view your usage report.-- [Estimate Azure Monitor usage and costs](../usage-estimated-costs.md#estimate-azure-monitor-usage-and-costs) using the [Pricing Calculator](https://azure.microsoft.com/pricing/calculator/?service=monitor).
+- [Estimate Azure Monitor costs](../cost-estimate.md) using the [Pricing Calculator](https://azure.microsoft.com/pricing/calculator/?service=monitor).
## 2. Set up a Log Analytics workspace
azure-monitor Workspace Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/workspace-design.md
Each Log Analytics workspace resides in a [particular Azure region](https://azur
- **If you have requirements for keeping data in a particular geography:** Create a separate workspace for each region with such requirements. - **If you don't have requirements for keeping data in a particular geography:** Use a single workspace for all regions.
-Also consider potential [bandwidth charges](https://azure.microsoft.com/pricing/details/bandwidth/) that might apply when you're sending data to a workspace from a resource in another region. These charges are usually minor relative to data ingestion costs for most customers. These charges typically result from sending data to the workspace from a virtual machine. Monitoring data from other Azure resources by using [diagnostic settings](../essentials/diagnostic-settings.md) doesn't [incur egress charges](../usage-estimated-costs.md#data-transfer-charges).
+Also consider potential [bandwidth charges](https://azure.microsoft.com/pricing/details/bandwidth/) that might apply when you're sending data to a workspace from a resource in another region. These charges are usually minor relative to data ingestion costs for most customers. These charges typically result from sending data to the workspace from a virtual machine. Monitoring data from other Azure resources by using [diagnostic settings](../essentials/diagnostic-settings.md) doesn't [incur egress charges](../cost-usage.md#data-transfer-charges).
Use the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator) to estimate the cost and determine which regions you need. Consider workspaces in multiple regions if bandwidth charges are significant.
You might have a requirement to segregate data or define boundaries based on own
- **If you don't require data segregation:** Use a single workspace for all data owners. ### Split billing
-You might need to split billing between different parties or perform charge back to a customer or internal business unit. You can use [Azure Cost Management + Billing](../usage-estimated-costs.md#azure-cost-management--billing) to view charges by workspace. You can also use a log query to view [billable data volume by Azure resource, resource group, or subscription](analyze-usage.md#data-volume-by-azure-resource-resource-group-or-subscription). This approach might be sufficient for your billing requirements.
+You might need to split billing between different parties or perform charge back to a customer or internal business unit. You can use [Azure Cost Management + Billing](../cost-usage.md#azure-cost-management--billing) to view charges by workspace. You can also use a log query to view [billable data volume by Azure resource, resource group, or subscription](analyze-usage.md#data-volume-by-azure-resource-resource-group-or-subscription). This approach might be sufficient for your billing requirements.
- **If you don't need to split billing or perform charge back:** Use a single workspace for all cost owners.-- **If you need to split billing or perform charge back:** Consider whether [Azure Cost Management + Billing](../usage-estimated-costs.md#azure-cost-management--billing) or a log query provides cost reporting that's granular enough for your requirements. If not, use a separate workspace for each cost owner.
+- **If you need to split billing or perform charge back:** Consider whether [Azure Cost Management + Billing](../cost-usage.md#azure-cost-management--billing) or a log query provides cost reporting that's granular enough for your requirements. If not, use a separate workspace for each cost owner.
### Data retention and archive You can configure default [data retention and archive settings](data-retention-archive.md) for a workspace or [configure different settings for each table](data-retention-archive.md#configure-retention-and-archive-at-the-table-level). You might require different settings for different sets of data in a particular table. If so, you need to separate that data into different workspaces, each with unique retention settings.
azure-monitor Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/service-limits.md
This article lists limits in different areas of Azure Monitor.
## Next steps - [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/)-- [Monitoring usage and estimated costs in Azure Monitor](./usage-estimated-costs.md)
+- [Azure Monitor cost and usage](cost-usage.md)
azure-monitor Usage Estimated Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/usage-estimated-costs.md
- Title: Azure Monitor cost and usage
-description: Overview of how Azure Monitor is billed and how to estimate and analyze billable usage.
--- Previously updated : 08/06/2023--
-# Azure Monitor cost and usage
-
-This article describes the different ways that Azure Monitor charges for usage. It also explains how to evaluate charges on your Azure bill and how to estimate charges to monitor your entire environment.
--
-## Pricing model
-
-Azure Monitor uses consumption-based pricing, which is also known as pay-as-you-go pricing. With this billing model, you only pay for what you use. Features of Azure Monitor that are enabled by default don't incur any charge. These features include collection and alerting on the [Activity log](essentials/activity-log.md) and collection and analysis of [platform metrics](essentials/metrics-supported.md).
-
-Several other features don't have a direct cost, but instead you pay for the ingestion and retention of data that they collect. The following table describes the different types of usage that are charged in Azure Monitor. Detailed pricing for each type is provided in [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
-
-| Type | Description |
-|:|:|
-| Logs | Ingestion, retention, and export of data in Log Analytics workspaces and legacy Application Insights resources. For most customers, this category typically incurs the bulk of Azure Monitor charges. There's no charge for querying this data except in the case of [Basic Logs](logs/basic-logs-configure.md) or [Archived Logs](logs/data-retention-archive.md).<br><br>Charges for logs can vary significantly on the configuration that you choose. For information on how charges for logs data are calculated and the different pricing tiers available, see [Azure Monitor logs pricing details](logs/cost-logs.md). |
-| Platform logs | Processing of [diagnostic and auditing information](essentials/resource-logs.md) is charged for [certain services](essentials/resource-logs-categories.md#costs) when sent to destinations other than a Log Analytics workspace. There's no direct charge when this data is sent to a Log Analytics workspace, but there's a charge for the workspace data ingestion and collection. |
-| Metrics | There's no charge for [standard metrics](essentials/metrics-supported.md) collected from Azure resources. There's a cost for collecting [custom metrics](essentials/metrics-custom-overview.md) and for retrieving metrics from the [REST API](essentials/rest-api-walkthrough.md#retrieve-metric-values). |
-| Prometheus Metrics | Pricing for [Azure Monitor managed service for Prometheus](essentials/prometheus-metrics-overview.md) is based on [data samples ingested](essentials/prometheus-metrics-enable.md) and [query samples processed](essentials/azure-monitor-workspace-manage.md#link-a-grafana-workspace). Data is retained for 18 months at no extra charge. |
-| Alerts | Charges are based on the type and number of signals used by the alert rule, its frequency, and the type of [notification](alerts/action-groups.md) used in response. For [Log alerts](alerts/alerts-types.md#log-alerts) configured for [at-scale monitoring](alerts/alerts-types.md#splitting-by-dimensions-in-log-alert-rules), the cost also depends on the number of time series created by the dimensions resulting from your query. |
-| Web tests | There's a cost for [standard web tests](app/availability-standard-tests.md) and [multistep web tests](/previous-versions/azure/azure-monitor/app/availability-multistep) in Application Insights. Multistep web tests have been deprecated.
-
-## Data transfer charges
-
-Sending data to Azure Monitor can incur data bandwidth charges. As described in [Bandwidth pricing](https://azure.microsoft.com/pricing/details/bandwidth/), data transfer between Azure services located in two regions is charged as outbound data transfer at the normal rate. Data sent to a different region via [Diagnostic settings](essentials/diagnostic-settings.md) doesn't incur data transfer charges. Inbound data transfer is free.
-
-Data transfer charges are typically small compared to the costs for data ingestion and retention. Focus on your ingested data volume to control costs for Log Analytics.
-
-## Estimate Azure Monitor usage and costs
-
-If you're new to Azure Monitor, use the [Azure Monitor pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=monitor) to estimate your costs. In the **Search** box, enter **Azure Monitor**, and then select the **Azure Monitor** tile. The pricing calculator helps you estimate your likely costs based on your expected utilization.
-
-The bulk of your costs typically come from data ingestion and retention for your Log Analytics workspaces and Application Insights resources. It's difficult to give accurate estimates for data volumes that you can expect because they'll vary significantly based on your configuration.
-
-A common strategy is to enable monitoring for a small group of resources and use the observed data volumes with the calculator to determine your costs for a full environment.
-
-See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for queries and other methods to measure the billable data in your Log Analytics workspace.
-
-Use the following basic guidance for common resources:
-- **Virtual machines**: With typical monitoring enabled, a virtual machine generates from 1 GB to 3 GB of data per month. This range is highly dependent on the configuration of your agents.
-- **Application Insights**: For different methods to estimate data from your applications, see the following section.
-- **Container insights**: For guidance on estimating data for your Azure Kubernetes Service (AKS) cluster, see [Estimating costs to monitor your AKS cluster](containers/container-insights-cost.md#estimating-costs-to-monitor-your-aks-cluster).
-
-The [Azure Monitor pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=monitor) includes data volume estimation calculators for these three cases.
-
->[!NOTE]
->The billable data volume is calculated by using a customer-friendly, cost-effective method. The billed data volume is defined as the size of the data that will be stored, excluding a set of standard columns and any JSON wrapper that was part of the data received for ingestion. This billable data volume is substantially smaller than the size of the entire JSON-packaged event, often less than 50%.
->
->It's essential to understand this calculation of billed data size when you estimate costs and compare them with other pricing models. For more information on pricing, see [Azure Monitor Logs pricing details](logs/cost-logs.md#data-size-calculation).
-
-## Estimate application usage
-
-There are two methods you can use to estimate the amount of data from an application monitored with Application Insights.
-
-### Learn from what similar applications collect
-
-In the Azure Monitor pricing calculator for Application Insights, enable **Estimate data volume based on application activity**. You use this option to provide inputs about your application. The calculator then tells you the median and 90th percentile amount of data collected by similar applications. These applications span the range of Application Insights configuration, so you can still use options such as [sampling](app/sampling.md) to reduce the volume of data you ingest for your application below the median level.
-
-### Data collection when you use sampling
-
-With the ASP.NET SDK's [adaptive sampling](app/sampling.md#adaptive-sampling), the data volume is adjusted automatically to keep within a specified maximum rate of traffic for default Application Insights monitoring.
-
-If the application produces a low amount of telemetry, such as when debugging or because of low usage, items won't be dropped by the sampling processor if the volume is below the configured-events-per-second level.
-
-For a high-volume application, with the default threshold of five events per second, adaptive sampling limits the number of daily events to 432,000. If you consider a typical average event size of 1 KB, this size corresponds to 13.4 GB of telemetry per 31-day month per node hosting your application because the sampling is done locally to each node.
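As a sanity check, the figures in that paragraph can be reproduced with a few lines of arithmetic; this sketch mirrors the calculation exactly, using decimal GB:

```powershell
# Reproduce the adaptive sampling volume estimate from the paragraph above.
$eventsPerSecond    = 5    # default adaptive sampling threshold
$averageEventSizeKB = 1    # assumed typical average event size

$eventsPerDay      = $eventsPerSecond * 86400                         # = 432,000 events/day
$gbPerMonthPerNode = $eventsPerDay * $averageEventSizeKB * 31 / 1e6   # ~13.4 GB per 31-day month

Write-Output "$eventsPerDay events/day, ~$([math]::Round($gbPerMonthPerNode, 1)) GB/month per node"
```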
-
-For SDKs that don't support adaptive sampling, you can employ [ingestion sampling](app/sampling.md#ingestion-sampling). This technique samples when the data is received by Application Insights based on a percentage of data to retain. Or you can use [fixed-rate sampling for ASP.NET, ASP.NET Core, and Java websites](app/sampling.md#fixed-rate-sampling) to reduce the traffic sent from your web server and web browsers.
-
-## View Azure Monitor usage and charges
-
-There are two primary tools to view and analyze your Azure Monitor billing and estimated charges:
-
-- [Azure Cost Management + Billing](#azure-cost-management--billing) is the primary tool you'll use to analyze your usage and costs. It gives you multiple options to analyze your monthly charges for different Azure Monitor features and their projected cost over time.
-- [Usage and estimated costs](#usage-and-estimated-costs) helps optimize log data ingestion costs by estimating what the data ingestion costs would be for Log Analytics in each of the available pricing tiers.
-
-## Azure Cost Management + Billing
-
-Azure Cost Management + Billing includes several built-in dashboards for deep cost analysis like cost by resource and invoice details. To get started analyzing your Azure Monitor charges, open [Cost Management + Billing](../cost-management-billing/costs/quick-acm-cost-analysis.md?toc=/azure/billing/TOC.json) in the Azure portal. Select **Cost Management** > **Cost analysis**. Select your subscription or another [scope](../cost-management-billing/costs/understand-work-scopes.md).
-
->[!NOTE]
->You might need additional access to cost management data. See [Assign access to cost management data](../cost-management-billing/costs/assign-access-acm-data.md).
-
-To create a view of just your Azure Monitor charges, [create a filter](../cost-management-billing/costs/group-filter.md) for the following service names:
-
-- Azure Monitor
-- Log Analytics
-- Insight and Analytics
-- Application Insights
-
->[!NOTE]
->Usage for Azure Monitor Logs (Log Analytics) can be billed with the **Log Analytics** service (for Pay-as-you-go Data Ingestion and Data Retention), with the **Azure Monitor** service (for Commitment Tiers, Basic Logs, Search, Search Jobs, Data Archive and Data Export), or with the **Insight and Analytics** service when using the legacy Per Node pricing tier. Except for a small set of legacy resources, classic Application Insights data ingestion and retention are billed as the **Log Analytics** service. Note that when you change your workspace from a Pay-as-you-go pricing tier to a Commitment Tier, the costs on your bill will appear to shift from Log Analytics to Azure Monitor, reflecting the service associated with each pricing tier.
-
-[Classic Application Insights](app/convert-classic-resource.md) usage is billed using Log Analytics data ingestion and retention meters. In the context of billing, the Application Insights service only includes usage for multi-step web tests and some older Application Insights resources that still use legacy classic-mode Application Insights pricing tiers.
-
-Other services such as Microsoft Defender for Cloud and Microsoft Sentinel also bill their usage against Log Analytics workspace resources, so you might want to add them to your filter.
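If you prefer to pull the same breakdown programmatically, the Az.Billing consumption cmdlets can retrieve usage rows for these service names. The following is a sketch, not a definitive implementation; the exact property shapes on the returned objects can vary by module version:

```powershell
Connect-AzAccount

# Retrieve the last 30 days of usage and summarize cost by Azure Monitor-related service.
$monitorServices = 'Azure Monitor', 'Log Analytics', 'Insight and Analytics', 'Application Insights'

$usage = Get-AzConsumptionUsageDetail -StartDate (Get-Date).AddDays(-30) -EndDate (Get-Date) -Expand MeterDetails

$usage |
    Where-Object { $_.MeterDetails.MeterCategory -in $monitorServices } |
    Group-Object { $_.MeterDetails.MeterCategory } |
    ForEach-Object {
        [pscustomobject]@{
            Service    = $_.Name
            PretaxCost = ($_.Group | Measure-Object -Property PretaxCost -Sum).Sum
        }
    }
```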
-
-### Cost analysis
-
-To get the most useful view of your cost trends in the **Cost analysis** view:
-
-1. Select the date range you want to investigate
-2. Select the desired "Granularity" of "Daily" or "Monthly" (not "Accumulated")
-3. Set the chart type to "Column (stacked)" in the top right above the chart
-4. Set "Group by" to be "Meter"
-
-See [Common cost analysis uses](../cost-management-billing/costs/cost-analysis-common-uses.md) for more information on how to use this Cost analysis view.
-
-![Screenshot that shows Cost Management with cost information.](./media/usage-estimated-costs/010.png)
-
->[!NOTE]
->Alternatively, you can go to the overview page of a Log Analytics workspace or Application Insights resource and select **View Cost** in the upper-right corner of the **Essentials** section. This option opens **Cost Analysis** from Azure Cost Management + Billing already scoped to the workspace or application.
->
-> :::image type="content" source="logs/media/view-bill/view-cost-option.png" lightbox="logs/media/view-bill/view-cost-option.png" alt-text="Screenshot of option to view cost for a Log Analytics workspace.":::
-
-### Get daily cost analysis emails
-
-After you've configured your Cost analysis view, we strongly recommend subscribing to regular email updates from Cost analysis. The **Subscribe** option is located in the list of options just above the main chart.
-
-### Create cost alerts
-
-To be notified if there are significant increases in your spending, you can set up [cost alerts](../cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending.md) (specifically a budget alert) for a single workspace or group of workspaces.
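Budgets can also be created from PowerShell. The following is a sketch using the Az.Billing module; the budget name, amount, and email address are illustrative placeholders:

```powershell
# Create a monthly cost budget that notifies at 80% of the amount (illustrative values).
New-AzConsumptionBudget `
    -Name 'monitoring-budget' `
    -Amount 1000 `
    -Category Cost `
    -TimeGrain Monthly `
    -StartDate (Get-Date -Day 1).Date `
    -NotificationKey 'Warn80' `
    -NotificationThreshold 80 `
    -NotificationEnabled `
    -ContactEmail 'ops@contoso.com'
```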
-
-### Export usage details
-
-To gain more understanding of your usage and costs, create exports using Cost Analysis in Azure Cost Management + Billing. See [Tutorial: Create and manage exported data](../cost-management-billing/costs/tutorial-export-acm-data.md) to learn how to automatically create a daily export you can use for regular analysis.
-
-These exports are in CSV format and contain a list of daily usage (billed quantity and cost) by resource, billing meter, and a few more fields such as [AdditionalInfo](../cost-management-billing/automate/understand-usage-details-fields.md#list-of-fields-and-descriptions). You can use Microsoft Excel to do rich analyses of your usage that aren't possible in the Cost analysis experiences in the portal.
-
-The usage export contains both the cost for your usage and the number of units of usage. Consequently, you can use this export to see the amount of benefit you're receiving from various offers such as the [Defender for Servers data allowance](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud) and the [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5, and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/).
-
-For instance, usage from Log Analytics can be found by first filtering on the **Meter Category** column to show
-
-1. **Log Analytics** (for Pay-as-you-go data ingestion and interactive Data Retention),
-2. **Insight and Analytics** (used by some of the legacy pricing tiers), and
-3. **Azure Monitor** (used by most other Log Analytics features such as Commitment Tiers, Basic Logs ingesting, Data Archive, Search Queries, Search Jobs, etc.)
-
-Add a filter on the **Instance ID** column for **contains workspace** or **contains cluster**. The usage is shown in the **Consumed Quantity** column. The unit for each entry is shown in the **Unit of Measure** column.
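If you analyze the export with PowerShell rather than Excel, the same filters translate to a few lines of `Import-Csv`. A sketch that assumes the default export column names (`MeterCategory`, `InstanceId`, `MeterName`, `ConsumedQuantity`), which can differ between export schema versions:

```powershell
$rows = Import-Csv '.\usage-export.csv'   # path to your exported usage file

# Keep only Log Analytics-related rows for workspaces or clusters.
$logRows = $rows | Where-Object {
    $_.MeterCategory -in @('Log Analytics', 'Insight and Analytics', 'Azure Monitor') -and
    ($_.InstanceId -like '*workspace*' -or $_.InstanceId -like '*cluster*')
}

# Summarize consumed quantity by billing meter.
$logRows | Group-Object MeterName | ForEach-Object {
    [pscustomobject]@{
        MeterName        = $_.Name
        ConsumedQuantity = ($_.Group | ForEach-Object { [double]$_.ConsumedQuantity } | Measure-Object -Sum).Sum
    }
}
```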
-
-### Azure Monitor billing meter names
-
-Below is a list of the Azure Monitor billing meter names that you'll see in the Azure Cost Management + Billing user experience and in your usage exports.
-
-Here is a list of the meters used to bill for log data ingestion, and whether each meter is regional (that is, there is a different billing meter, identified by `MeterId` in the usage export, for each region). Note that Basic Logs ingestion can be used when the workspace's pricing tier is Pay-as-you-go or any commitment tier.
--
-| Pricing tier |ServiceName | MeterName | Regional Meter? |
-| -- | -- | -- | -- |
-| (any) | Azure Monitor | Basic Logs Data Ingestion | yes |
-| Pay-as-you-go | Log Analytics | Pay-as-you-go Data Ingestion | yes |
-| 100 GB/day Commitment Tier | Azure Monitor | 100 GB Commitment Tier Capacity Reservation | yes |
-| 200 GB/day Commitment Tier | Azure Monitor | 200 GB Commitment Tier Capacity Reservation | yes |
-| 300 GB/day Commitment Tier | Azure Monitor | 300 GB Commitment Tier Capacity Reservation | yes |
-| 400 GB/day Commitment Tier | Azure Monitor | 400 GB Commitment Tier Capacity Reservation | yes |
-| 500 GB/day Commitment Tier | Azure Monitor | 500 GB Commitment Tier Capacity Reservation | yes |
-| 1000 GB/day Commitment Tier | Azure Monitor | 1000 GB Commitment Tier Capacity Reservation | yes |
-| 2000 GB/day Commitment Tier | Azure Monitor | 2000 GB Commitment Tier Capacity Reservation | yes |
-| 5000 GB/day Commitment Tier | Azure Monitor | 5000 GB Commitment Tier Capacity Reservation | yes |
-| Per Node (legacy tier) | Insight and Analytics | Standard Node | no |
-| Per Node (legacy tier) | Insight and Analytics | Standard Data Overage per Node | no |
-| Per Node (legacy tier) | Insight and Analytics | Standard Data Included per Node | no |
-| Standalone (legacy tier) | Log Analytics | Pay-as-you-go Data Analyzed | no |
-| Standard (legacy tier) | Log Analytics | Standard Data Analyzed | no |
-| Premium (legacy tier) | Log Analytics | Premium Data Analyzed | no |
--
-The "Standard Data Included per Node" meter is used both for the Log Analytics [Per Node tier](logs/cost-logs.md#per-node-pricing-tier) data allowance, and also the [Defender for Servers data allowance](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud), for workspaces in any pricing tier.
-
-Other Azure Monitor logs meters:
-
-| ServiceName | MeterName | Regional Meter? |
-| -- | | |
-| Log Analytics | Pay-as-you-go Data Retention | yes |
-| Insight and Analytics | Standard Data Retention | no |
-| Azure Monitor | Data Archive | yes |
-| Azure Monitor | Search Queries Scanned | yes |
-| Azure Monitor | Search Jobs Scanned | yes |
-| Azure Monitor | Data Restore | yes |
-| Azure Monitor | Log Analytics data export Data Exported | yes |
-| Azure Monitor | Platform Logs Data Processed | yes |
-
-"Pay-as-you-go Data Retention" is used for workspaces in all modern pricing tiers (Pay-as-you-go and Commitment Tiers). "Standard Data Retention" is used for workspaces in the legacy Per Node and Standalone pricing tiers.
-
-Azure Monitor metrics meters:
-
-| ServiceName | MeterName | Regional Meter? |
-| -- | | |
-| Azure Monitor | Metrics ingestion Metric samples | yes |
-| Azure Monitor | Prometheus Metrics Queries Metric samples | yes |
-| Azure Monitor | Native Metric Queries API Calls | yes |
-
-Azure Monitor alerts meters:
-
-| ServiceName | MeterName | Regional Meter? |
-| -- | | |
-| Azure Monitor | Alerts Metric Monitored | no |
-| Azure Monitor | Alerts Dynamic Threshold | no |
-| Azure Monitor | Alerts System Log Monitored at 1 Minute Frequency | no |
-| Azure Monitor | Alerts System Log Monitored at 10 Minute Frequency | no |
-| Azure Monitor | Alerts System Log Monitored at 15 Minute Frequency | no |
-| Azure Monitor | Alerts System Log Monitored at 5 Minute Frequency | no |
-| Azure Monitor | Alerts Resource Monitored at 1 Minute Frequency | no |
-| Azure Monitor | Alerts Resource Monitored at 10 Minute Frequency | no |
-| Azure Monitor | Alerts Resource Monitored at 15 Minute Frequency | no |
-| Azure Monitor | Alerts Resource Monitored at 5 Minute Frequency | no |
-
-Azure Monitor web test meters:
-
-| ServiceName | MeterName | Regional Meter? |
-| -- | | |
-| Azure Monitor | Standard Web Test Execution | yes |
-| Application Insights | Multi-step Web Test | no |
-
-Legacy classic Application Insights meters:
-
-| ServiceName | MeterName | Regional Meter? |
-| -- | | |
-| Application Insights | Enterprise Node | no |
-| Application Insights | Enterprise Overage Data | no |
--
-### Legacy Application Insights meters
-
-Most Application Insights usage for both classic and workspace-based resources is reported on meters with **Log Analytics** for **Meter Category** because there's a single log back-end for all Azure Monitor components. Only Application Insights resources on legacy pricing tiers and multiple-step web tests are reported with **Application Insights** for **Meter Category**. The usage is shown in the **Consumed Quantity** column. The unit for each entry is shown in the **Unit of Measure** column. For more information, see [Understand your Microsoft Azure bill](../cost-management-billing/understand/review-individual-bill.md).
-
-To separate costs from your Log Analytics and classic Application Insights usage, [create a filter](../cost-management-billing/costs/group-filter.md) on **Resource type**. To see all Application Insights costs, filter **Resource type** to **microsoft.insights/components**. For Log Analytics costs, filter **Resource type** to **microsoft.operationalinsights/workspaces**. (Workspace-based Application Insights is all billed to the Log Analytics workspace resource.)
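The same split can be applied to the usage export. A short sketch, assuming the resource type appears inside the `InstanceId` column as it does in the standard export schema:

```powershell
$rows = Import-Csv '.\usage-export.csv'

# Separate classic Application Insights resources from Log Analytics workspaces.
$appInsightsRows  = $rows | Where-Object { $_.InstanceId -like '*microsoft.insights/components*' }
$logAnalyticsRows = $rows | Where-Object { $_.InstanceId -like '*microsoft.operationalinsights/workspaces*' }
```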
-
-## Usage and estimated costs
-
-You can get more usage details about Log Analytics workspaces and Application Insights resources from the **Usage and estimated costs** option for each.
-
-### Log Analytics workspace
-
-To learn about your usage trends and choose the most cost-effective pricing tier (Pay-as-you-go or a [commitment tier](logs/cost-logs.md#commitment-tiers)) for your Log Analytics workspace, select **Usage and estimated costs** from the **Log Analytics workspace** menu in the Azure portal.
-
-> [!NOTE]
->
-> **Usage and estimated costs** does *not* show your actual billed usage. It calculates what your data ingestion charges would have been for the last 31 days of usage if your workspace had been in each of the available pricing tiers. You can use these estimated costs to select the lowest cost tier based on your workspace's data ingestion.
--
-This view includes:
-
-- Estimated monthly charges based on usage from the past 31 days by using the current pricing tier.
-- Estimated monthly charges by using different commitment tiers.
-- Billable data ingestion by table from the past 31 days.
-
-To explore the data in more detail, select the icon in the upper-right corner of either chart to work with the query in Log Analytics.
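You can also reproduce the underlying ingestion numbers yourself by querying the workspace's `Usage` table. A sketch using the Az.OperationalInsights module; the workspace GUID is a placeholder:

```powershell
# Billable ingestion by data type over the last 31 days (Quantity is reported in MB).
$query = @'
Usage
| where TimeGenerated > ago(31d)
| where IsBillable == true
| summarize IngestedGB = sum(Quantity) / 1000.0 by DataType
| order by IngestedGB desc
'@

$result = Invoke-AzOperationalInsightsQuery -WorkspaceId '<workspace-guid>' -Query $query
$result.Results | Format-Table
```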
--
-### Application Insights
-
-To learn about your usage trends for your classic Application Insights resource, select **Usage and estimated costs** from the **Applications** menu in the Azure portal.
--
-This view includes:
-
-- Estimated monthly charges based on usage from the past month.
-- Billable data ingestion by table from the past month.
-
-To investigate your Application Insights usage more deeply, open the **Metrics** page and add the metric named **Data point volume**. Then select the **Apply splitting** option to split the data by **Telemetry item type**.
-
-## View data allocation benefits
-
-To view data allocation benefits from sources such as [Microsoft Defender for Servers](https://azure.microsoft.com/pricing/details/defender-for-cloud/), [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5, and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/), or the [Sentinel Free Trial](https://azure.microsoft.com/pricing/details/microsoft-sentinel/), you need to export your usage details as described above.
-
-Open the exported usage spreadsheet and filter the **Instance ID** column to your workspace. (To select all your workspaces in the spreadsheet, filter the **Instance ID** column to **contains /workspaces/**.) Next, filter the **ResourceRate** column to show only rows where this rate is equal to zero. Now you'll see the data allocations from these various sources.
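In PowerShell, that filter looks like the following sketch (column names are assumed from the standard export schema):

```powershell
$rows = Import-Csv '.\usage-export.csv'

# Zero-rate rows against workspaces represent benefit allocations rather than billed usage.
$benefitRows = $rows | Where-Object {
    $_.InstanceId -like '*/workspaces/*' -and [double]$_.ResourceRate -eq 0
}

$benefitRows | Group-Object MeterName | Select-Object Name, Count
```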
-
-> [!NOTE]
-> Data allocations from Defender for Servers 500 MB/server/day will appear in rows with the meter name **Data Included per Node** and the meter category **Insight and Analytics**. (This name is for a legacy offer still used with this meter.) If the workspace is in the legacy Per-Node Log Analytics pricing tier, this meter also includes the data allocations from this Log Analytics pricing tier.
-
-## Operations Management Suite subscription entitlements
-
-Customers who purchased Operations Management Suite E1 and E2 are eligible for per-node data ingestion entitlements for Log Analytics and Application Insights. Each Application Insights node includes up to 200 MB of data ingested per day (separate from Log Analytics data ingestion), with 90-day data retention at no extra cost.
-
-To receive these entitlements for Log Analytics workspaces or Application Insights resources in a subscription, they must use the Per Node (Operations Management Suite) pricing tier. This entitlement isn't visible in the estimated costs shown in the **Usage and estimated cost** pane.
-
-Depending on the number of nodes of the suite that your organization purchased, moving some subscriptions into a Per GB (pay-as-you-go) pricing tier might be advantageous. This move requires careful consideration.
-
-Also, if you move a subscription to the new Azure monitoring pricing model introduced in April 2018, the Per GB tier is the only tier available. Moving a subscription to the new Azure monitoring pricing model isn't advisable if you have an Operations Management Suite subscription.
-
-> [!TIP]
-> If your organization has Operations Management Suite E1 or E2, it's usually best to keep your Log Analytics workspaces in the Per Node (Operations Management Suite) pricing tier and your Application Insights resources in the Enterprise pricing tier.
-
-## Next steps
-
-- For details on how charges are calculated for data in a Log Analytics workspace and different configuration options to reduce your charges, see [Azure Monitor Logs pricing details](logs/cost-logs.md).
-- For details on how to analyze the data in your workspace to determine the source of any higher-than-expected usage and opportunities to reduce your amount of data collected, see [Analyze usage in Log Analytics workspace](logs/analyze-usage.md).
-- To control your costs by setting a daily limit on the amount of data that can be ingested in a workspace, see [Set daily cap on Log Analytics workspace](logs/daily-cap.md).
-- For best practices on how to configure and manage Azure Monitor to minimize your charges, see [Azure Monitor best practices - Cost management](best-practices-cost.md).
-
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
Containers|[Migrate from ContainerLog to ContainerLogV2](containers/container-in
Containers|[Configure remote write for Azure managed service for Prometheus using Azure Active Directory workload identity (preview)](containers/prometheus-remote-write-azure-workload-identity.md)|New article Configure remote write for Azure Monitor managed service …|
Essentials|[Migrate from diagnostic settings storage retention to Azure Storage lifecycle management](essentials/migrate-to-azure-storage-lifecycle-policy.md)|Added CLI and template tabs showing storage lifecycle setting.|
General|[Plan your alerts and automated actions](alerts/alerts-plan.md)|Add alerts best practices article|
-General|[Azure Monitor cost and usage](usage-estimated-costs.md)|Updated information about the Cost Analysis usage report which contains both the cost for your usage, and the number of units of usage. You can use this export to see the amount of benefit you're receiving from various offers such as the [Defender for Servers data allowance](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud) and the [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5, and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/). |
+General|[Azure Monitor cost and usage](cost-usage.md)|Updated information about the Cost Analysis usage report which contains both the cost for your usage, and the number of units of usage. You can use this export to see the amount of benefit you're receiving from various offers such as the [Defender for Servers data allowance](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud) and the [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5, and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/). |
Logs|[Send log data to Azure Monitor by using the HTTP Data Collector API (deprecated)](logs/data-collector-api.md)|Added deprecation notice.|
Logs|[Azure Monitor Logs overview](logs/data-platform-logs.md)|Added code samples for the Azure Monitor Ingestion client module for Go.|
Logs|[Set a table's log data plan to Basic or Analytics](logs/basic-logs-configure.md)|Added new Virtual Network Manager, Dev Center, and Communication Services tables that now support Basic logs.|
Logs|[Set a table's log data plan to Basic or Analytics](logs/basic-logs-configu
|Subservice| Article | Description |
||||
-General|[Azure Monitor cost and usage](usage-estimated-costs.md)|Added section detailing billing meter names.|
+General|[Azure Monitor cost and usage](cost-usage.md)|Added section detailing billing meter names.|
Application-Insights|[Add, modify, and filter OpenTelemetry](app/opentelemetry-add-modify.md)|A caution has been added about using community libraries with additional information on how to request we include them in our distro.|
Application-Insights|[Add, modify, and filter OpenTelemetry](app/opentelemetry-add-modify.md)|Support and feedback options are now available across all of our OpenTelemetry pages.|
Application-Insights|[How many Application Insights resources should I deploy?](app/create-workspace-resource.md#how-many-application-insights-resources-should-i-deploy)|We added an important warning about additional network costs when monitoring across regions.|
Application-Insights|[Data Collection Basics of Azure Monitor Application Insigh
Application-Insights|[Enable a framework extension for Application Insights JavaScript SDK](app/javascript-framework-extensions.md)|The "Explore your data" section has been improved.|
Application-Insights|[Sampling overrides (preview) - Azure Monitor Application Insights for Java](app/java-standalone-sampling-overrides.md)|We've documented steps for troubleshooting sampling.|
Logs|[Set a table's log data plan to Basic or Analytics](logs/basic-logs-configure.md)|Additional Azure tables now support low-cost basic logs, including tables for the Bare Metal Machines, Managed Lustre, Nexus Clusters, and Nexus Storage Appliances services.|
-Logs|[Create and manage a dedicated cluster in Azure Monitor Logs](logs/logs-dedicated-clusters.md)|The minimum ingestion commitment for a dedicated cluster is now 100 GB per day (previously 500 GB). |
Logs|[Query Basic Logs in Azure Monitor](logs/basic-logs-query.md)|Basic log queries are now billable.|
Logs|[Restore logs in Azure Monitor](logs/restore.md)|Restored logs are now billable.|
Logs|[Run search jobs in Azure Monitor](logs/search-jobs.md)|Search jobs are now billable.|
Azure Monitor Workbooks documentation previously resided on an external GitHub r
| Article | Description |
|:|:|
-| [Azure Monitor cost and usage](usage-estimated-costs.md) | Added standard web tests to table.<br>Added explanation of billable GB calculation. |
+| [Azure Monitor cost and usage](cost-usage.md) | Added standard web tests to table.<br>Added explanation of billable GB calculation. |
| [Azure Monitor overview](overview.md) | Updated overview diagram. |

### Agents
azure-portal Capture Browser Trace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/capture-browser-trace.md
If you're troubleshooting an issue with the Azure portal, and you need to contact Microsoft support, you may want to first capture some additional information. For example, it can be helpful to share a browser trace, a step recording, and console output. This information can provide important details about what exactly is happening in the portal when your issue occurs.
-> [!IMPORTANT]
-> Microsoft support uses these traces for troubleshooting purposes only. Please be mindful who you share your traces with, as they may contain sensitive information about your environment.
+> [!WARNING]
+> Browser traces often contain sensitive information and might include authentication tokens linked to your identity. Please remove any sensitive information before sharing traces with others. Microsoft support uses these traces for troubleshooting purposes only.
You can capture this information in any [supported browser](azure-portal-supported-browsers-devices.md): Microsoft Edge, Google Chrome, Safari (on Mac), or Firefox. Steps for each browser are shown below.
azure-vmware Deploy Vsan Stretched Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-vsan-stretched-clusters.md
It should be noted that these types of failures, although rare, fall outside the
Azure VMware Solution stretched clusters are available in the following regions:

-- UK South (on AV36)
+- UK South (on AV36, and AV36P)
- West Europe (on AV36, and AV36P)
- Germany West Central (on AV36)
- Australia East (on AV36P)
azure-vmware Request Host Quota Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/request-host-quota-azure-vmware-solution.md
In this how-to, you'll request host quota/capacity for [Azure VMware Solution](i
If you have an existing Azure VMware Solution private cloud and want more hosts allocated, you'll follow the same process.

>[!IMPORTANT]
->It can take up to five business days to allocate the hosts, depending on the number requested. Therefore, request what you need for provisioning to avoid the delays associated with making additional quota increase requests.
+> It can take up to five business days to allocate the hosts, depending on the number requested. Therefore, request what you need for provisioning to avoid the delays associated with making additional quota increase requests.
## Eligibility criteria
You'll need an Azure account in an Azure subscription that adheres to one of the
- Any other details, including Availability Zone requirements for integrating with other Azure services (e.g. Azure NetApp Files, Azure Blob Storage)

>[!NOTE]
- >Azure VMware Solution requires a minimum of three hosts and recommends redundancy of N+1 hosts.
+ > - Azure VMware Solution requires a minimum of three hosts and recommends redundancy of N+1 hosts.
+ > - **New** The unused quota expires after 30 days. A new request will need to be submitted for any additional quota.
1. Select **Review + Create** to submit the request.
Access the Azure portal using the **Admin On Behalf Of** (AOBO) procedure from P
- Is intended to host multiple customers?

>[!NOTE]
- >Azure VMware Solution requires a minimum of three hosts and recommends redundancy of N+1 hosts.
+ > - Azure VMware Solution requires a minimum of three hosts and recommends redundancy of N+1 hosts.
+ > - **New** The unused quota expires after 30 days. A new request will need to be submitted for any additional quota.
1. Select **Review + Create** to submit the request.
bastion Bastion Vm Copy Paste https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-vm-copy-paste.md
description: Learn how copy and paste to and from a Windows VM using Bastion.
Previously updated : 09/20/2022 Last updated : 10/31/2023 # Customer intent: I want to copy and paste to and from VMs using Azure Bastion.
-# Copy and paste to a Windows virtual machine: Azure Bastion
+# Windows VMs - copy and paste via Bastion
This article helps you copy and paste text to and from virtual machines when using Azure Bastion.
This article helps you copy and paste text to and from virtual machines when usi
Before you proceed, make sure you have the following items.
-* A VNet with [Azure Bastion](./tutorial-create-host-portal.md) deployed.
-* A Windows VM deployed to your VNet.
+* A virtual network with [Azure Bastion](./tutorial-create-host-portal.md) deployed.
+* A Windows virtual machine deployed to your virtual network.
## <a name="configure"></a> Configure the bastion host
-By default, Azure Bastion is automatically enabled to allow copy and paste for all sessions connected through the bastion resource. You don't need to configure anything additional. This applies to both the Basic and the Standard SKU tier. If you want to disable this feature, you can disable it for web-based clients on the configuration page of your Bastion resource.
+By default, Azure Bastion is automatically enabled to allow copy and paste for all sessions connected through the bastion resource. You don't need to configure anything extra. This applies to both the Basic and the Standard SKU tier. If you want to disable this feature, you can disable it for web-based clients on the configuration page of your Bastion resource.
1. To view or change your configuration, in the portal, go to your Bastion resource.
1. Go to the **Configuration** page.
   * To enable, select the **Copy and paste** checkbox if it isn't already selected.
   * To disable, clear the checkbox. Disable is only available with the Standard SKU. You can upgrade the SKU if necessary.
-1. **Apply** changes. The bastion host will update.
-
- :::image type="content" source="./media/bastion-vm-copy-paste/configure.png" alt-text="Screenshot that shows the configuration page." lightbox="./media/bastion-vm-copy-paste/configure.png":::
+1. **Apply** changes. The bastion host updates.
## <a name="to"></a> Copy and paste
For browsers that support the advanced Clipboard API access, you can copy and pa
> [!NOTE] > Only text copy/paste is currently supported.
->
### <a name="advanced"></a> Advanced Clipboard API browsers
-1. Connect to your VM.
-1. For direct copy and paste, your browser may prompt you for clipboard access when the Bastion session is being initialized. **Allow** the web page to access the clipboard.
-
- :::image type="content" source="./media/bastion-vm-copy-paste/copy-paste.png" alt-text="Screenshot that shows allow clipboard access." lightbox="./media/bastion-vm-copy-paste/copy-paste.png":::
+1. Connect to your virtual machine.
+1. For direct copy and paste, your browser might prompt you for clipboard access when the Bastion session is being initialized. **Allow** the web page to access the clipboard.
1. You can now use keyboard shortcuts as usual to copy and paste. If you're working from a Mac, the keyboard shortcut to paste is **SHIFT-CTRL-V**.

### <a name="other"></a>Non-advanced Clipboard API browsers
-To copy text from your local computer to a VM, use the following steps.
+To copy text from your local computer to a virtual machine, use the following steps.
-1. Connect to your VM.
+1. Connect to your virtual machine.
1. Copy the text/content from the local device into your local clipboard.
-1. On the VM, launch the Bastion clipboard access tool palette by selecting the two arrows. The arrows are located on the left center of the session.
-
- :::image type="content" source="./media/bastion-vm-copy-paste/left.png" alt-text="Screenshot that shows the launch arrows for the clipboard access tool palette." lightbox="./media/bastion-vm-copy-paste/left.png":::
-1. Copy the text from your local computer. Typically, the copied text automatically shows on the Bastion clipboard access tool palette. If doesn't show up on the tool palette, then paste the text in the text area on the tool palette. Once the text is in the text area, you can paste it to the remote session. In this example, we copied text to the Bastion clipboard tool palette, then pasted it to the VM Notepad app.
+1. On the virtual machine, you'll see two arrows on the left side of the session screen about halfway down. Launch the Bastion **Clipboard** access tool palette by selecting the two arrows.
+1. Copy the text from your local computer. Typically, the copied text automatically shows on the Bastion clipboard access tool palette. If it doesn't show up on the tool palette, then paste the text in the text area on the tool palette. Once the text is in the text area, you can paste it to the remote session.
- :::image type="content" source="./media/bastion-vm-copy-paste/clipboard-paste.png" alt-text="Screenshot shows a clipboard for text copied in Bastion." lightbox="./media/bastion-vm-copy-paste/clipboard-paste.png":::
+ :::image type="content" source="./media/bastion-vm-copy-paste/clipboard-copy.png" alt-text="Screenshot shows the clipboard for text copied in Bastion." lightbox="./media/bastion-vm-copy-paste/clipboard-copy.png":::
-1. If you want to copy the text from the VM to your local computer, copy the text to the clipboard access tool. Once your text is in the text area on the palette, paste it to your local computer.
+1. If you want to copy the text from the virtual machine to your local computer, copy the text to the clipboard access tool. Once your text is in the text area on the palette, paste it to your local computer.
## Next steps
-For more VM features, see [About VM connections and features](vm-about.md).
+For more virtual machine features, see [About VM connections and features](vm-about.md).
bastion Configuration Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/configuration-settings.md
You can configure this setting using the following methods:
## <a name="instance"></a>Instances and host scaling
-An instance is an optimized Azure VM that is created when you configure Azure Bastion. It's fully managed by Azure and runs all of the processes needed for Azure Bastion. An instance is also referred to as a scale unit. You connect to client VMs via an Azure Bastion instance. When you configure Azure Bastion using the Basic SKU, two instances are created. If you use the Bastion Standard SKU, you can specify the number of instances. This is called **host scaling**.
+An instance is an optimized Azure VM that is created when you configure Azure Bastion. It's fully managed by Azure and runs all of the processes needed for Azure Bastion. An instance is also referred to as a scale unit. You connect to client VMs via an Azure Bastion instance. When you configure Azure Bastion using the Basic SKU, two instances are created. If you use the Standard SKU, you can specify the number of instances (with a minimum of two instances). This is called **host scaling**.
Each instance can support 20 concurrent RDP connections and 40 concurrent SSH connections for medium workloads (see [Azure subscription limits and quotas](../azure-resource-manager/management/azure-subscription-service-limits.md) for more information). The number of connections per instance depends on what actions you're taking when connected to the client VM. For example, if you're doing something data intensive, it creates a larger load for the instance to process. Once the concurrent sessions are exceeded, another scale unit (instance) is required.
bastion Quickstart Developer Sku https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/quickstart-developer-sku.md
-# Quickstart: Deploy Bastion using the Developer SKU (Preview)
+# Quickstart: Deploy Azure Bastion - Developer SKU (Preview)
In this quickstart, you'll learn how to deploy Azure Bastion using the Developer SKU. After Bastion is deployed, you can connect to virtual machines (VM) in the virtual network via Bastion using the private IP address of the VM. The VMs you connect to don't need a public IP address, client software, agent, or a special configuration. For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md)
In this quickstart, you'll learn how to deploy Azure Bastion using the Developer
[!INCLUDE [regions](../../includes/bastion-developer-sku-regions.md)]
+> [!NOTE]
+> VNet peering isn't currently supported for the Developer SKU.
+
## About the Developer SKU

The Bastion Developer SKU is a new [lower-cost](https://azure.microsoft.com/pricing/details/azure-bastion/), lightweight SKU. This SKU is ideal for Dev/Test users who want to securely connect to their VMs if they don't need additional features or scaling. With the Developer SKU, you can connect to one Azure VM at a time directly through the virtual machine connect page.
-When you deploy Bastion using the Developer SKU, the deployment requirements are different than when you deploy using other SKUs. Typically when you create a bastion host, a host is deployed to the AzureBastionSubnet in your virtual network. The Bastion host is dedicated for your use. When using the Developer SKU, a bastion host isn't deployed to your virtual network and you don't need a AzureBastionSubnet. However, the Developer SKU bastion host isn't a dedicated resource and is, instead, part of a shared pool.
+When you deploy Bastion using the Developer SKU, the deployment requirements are different than when you deploy using other SKUs. Typically when you create a bastion host, a host is deployed to the AzureBastionSubnet in your virtual network. The Bastion host is dedicated for your use. When using the Developer SKU, a bastion host isn't deployed to your virtual network and you don't need an AzureBastionSubnet. However, the Developer SKU bastion host isn't a dedicated resource and is, instead, part of a shared pool.
Because the Developer SKU bastion resource isn't dedicated, the features for the Developer SKU are limited. See the Bastion configuration settings [SKU](configuration-settings.md) section for features by SKU. For more information about pricing, see the [Pricing](https://azure.microsoft.com/pricing/details/azure-bastion/) page. You can always upgrade the Developer SKU to a higher SKU if you need more features. See [Upgrade a SKU](upgrade-sku.md).
When you're done using the virtual network and the virtual machines, delete the
## Next steps
-In this quickstart, you deployed Bastion using the Developer SKKU, and then connected to a virtual machine securely via Bastion. Next, you can configure more features and work with VM connections.
+In this quickstart, you deployed Bastion using the Developer SKU, and then connected to a virtual machine securely via Bastion. Next, you can configure more features and work with VM connections.
> [!div class="nextstepaction"]
> [Upgrade SKUs](upgrade-sku.md)
chaos-studio Chaos Studio Fault Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-fault-library.md
Currently, the Windows agent doesn't reduce memory pressure when other applicati
|-|-|
| Capability name | NetworkLatency-1.1 |
| Target type | Microsoft-Agent |
-| Supported OS types | Windows, Linux. |
+| Supported OS types | Windows, Linux (outbound traffic only) |
| Description | Increases network latency for a specified port range and network block. At least one destinationFilter or inboundDestinationFilter array must be provided. |
| Prerequisites | Agent (for Windows) must run as administrator. If the agent is installed as a VM extension, it runs as administrator by default. |
| Urn | urn:csci:microsoft:agent:networkLatency/1.1 |
The parameters **destinationFilters** and **inboundDestinationFilters** use the
### Limitations

* The agent-based network faults currently only support IPv4 addresses.
+* When running in a Linux environment, the agent-based network latency fault can only affect **outbound** traffic, not inbound traffic. The fault can affect **both inbound and outbound** traffic on Windows environments (via the `inboundDestinationFilters` and `destinationFilters` parameters).
## Network disconnect
Currently, only virtual machine scale sets configured with the **Uniform** orche
| Capability name | DisableCertificate-1.0 |
| Target type | Microsoft-KeyVault |
| Description | By using certificate properties, the fault disables the certificate for a specific duration (provided by the user). It enables the certificate after this fault duration. |
-| Prerequisites | For OneCert certificates, the domain must be registered with OneCert before you attempt to run the fault. |
+| Prerequisites | None. |
| Urn | urn:csci:microsoft:keyvault:disableCertificate/1.0 |
| Fault type | Continuous. |
| Parameters (key, value) | |
Currently, only virtual machine scale sets configured with the **Uniform** orche
| Capability name | IncrementCertificateVersion-1.0 |
| Target type | Microsoft-KeyVault |
| Description | Generates a new certificate version and thumbprint by using the Key Vault Certificate client library. Current working certificate is upgraded to this version. |
-| Prerequisites | For OneCert certificates, the domain must be registered with OneCert before you attempt to run the fault. |
+| Prerequisites | None. |
| Urn | urn:csci:microsoft:keyvault:incrementCertificateVersion/1.0 |
| Fault type | Discrete. |
| Parameters (key, value) | |
Currently, only virtual machine scale sets configured with the **Uniform** orche
| Capability name | UpdateCertificatePolicy-1.0 |
| Target type | Microsoft-KeyVault |
| Description | Certificate policies (for example, certificate validity period, certificate type, key size, or key type) are updated based on user input and reverted after the fault duration. |
-| Prerequisites | For OneCert certificates, the domain must be registered with OneCert before you attempt to run the fault. |
+| Prerequisites | None. |
| Urn | urn:csci:microsoft:keyvault:updateCertificatePolicy/1.0 |
| Fault type | Continuous. |
| Parameters (key, value) | |
chaos-studio Chaos Studio Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-limitations.md
During the public preview of Azure Chaos Studio, there are a few limitations and
## Limitations

- **Supported regions** - The target resources must be in [one of the regions supported by the Azure Chaos Studio Preview](https://azure.microsoft.com/global-infrastructure/services/?products=chaos-studio).
-- **Resource Move not supported** - Azure Chaos Studio tracked resources (for example, Experiments) currently do NOT support Resource Move. Experiments can be easily copied (by copying Experiment JSON) for use in other subscriptions, resource groups, or regions. Experiments can also already target resources across regions. Extension resources (Targets and Capabilities) do support Resource Move.
+- **Resource Move not supported** - Azure Chaos Studio tracked resources (for example, Experiments) currently do not support Resource Move. Experiments can be easily copied (by copying Experiment JSON) for use in other subscriptions, resource groups, or regions. Experiments can also already target resources across regions. Extension resources (Targets and Capabilities) do support Resource Move.
- **VMs require network access to Chaos studio** - For agent-based faults, the virtual machine must have outbound network access to the Chaos Studio agent service:
  - Regional endpoints to allowlist are listed in [Permissions and security in Azure Chaos Studio](chaos-studio-permissions-security.md#network-security).
  - If you're sending telemetry data to Application Insights, the IPs in [IP addresses used by Azure Monitor](../azure-monitor/app/ip-addresses.md) are also required.
- **Supported VM operating systems** - If you run an experiment that makes use of the Chaos Studio agent, the virtual machine must run one of the following operating systems:
- - Windows Server 2019, Windows Server 2016, Windows Server 2012, and Windows Server 2012 R2
- - Red Hat Enterprise Linux 8.2, SUSE Enterprise Linux 15 SP2, CentOS 8.2, Debian 10 Buster (with unzip installation required), Oracle Linux 7.8, Ubuntu Server 16.04 LTS, and Ubuntu Server 18.04 LTS
-- **Hardened Linux untested** - The Chaos Studio agent isn't tested against custom Linux distributions or hardened Linux distributions (for example, FIPS or SELinux).
+ - Windows Server 2019, Windows Server 2016, and Windows Server 2012 R2
+ - Red Hat Enterprise Linux 8, Red Hat Enterprise Linux 8.2, openSUSE Leap 15.2, CentOS 8, Debian 10 Buster (with unzip installation required), Oracle Linux 8.3, and Ubuntu Server 18.04 LTS
+- **Hardened Linux untested** - The Chaos Studio agent isn't currently tested against custom Linux distributions or hardened Linux distributions (for example, FIPS or SELinux).
- **Supported browsers** - The Chaos Studio portal experience has only been tested on the following browsers:
  * **Windows:** Microsoft Edge, Google Chrome, and Firefox
  * **MacOS:** Safari, Google Chrome, and Firefox
During the public preview of Azure Chaos Studio, there are a few limitations and
- **Agent Service Tags** Currently we don't have service tags available for our Agent-based faults. ## Known issues
-When you pick target resources for an agent-based fault in the experiment designer, it's possible to select virtual machines or virtual machine scale sets with an operating system not supported by the fault selected.
+- When selecting target resources for an agent-based fault in the experiment designer, it's possible to select virtual machines or virtual machine scale sets with an operating system not supported by the fault selected.
+- When running in a Linux environment, the agent-based network latency fault (NetworkLatency-1.1) can only affect **outbound** traffic, not inbound traffic. The fault can affect **both inbound and outbound** traffic on Windows environments (via the `inboundDestinationFilters` and `destinationFilters` parameters).
## Next steps

Get started creating and running chaos experiments to improve application resilience with Chaos Studio by using the following links:
chaos-studio Chaos Studio Private Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-private-networking.md
Azure [Virtual Network](../virtual-network/virtual-networks-overview.md) is the
Virtual network injection allows an Azure Chaos Studio Preview resource provider to inject containerized workloads into your virtual network so that resources without public endpoints can be accessed via a private IP address on the virtual network. After you've configured virtual network injection for a resource in a virtual network and enabled the resource as a target, you can use it in multiple experiments. An experiment can target a mix of private and nonprivate resources if the private resources are configured according to the instructions in this article.
+Chaos Studio now supports Private Link for **both** service-direct and agent-based experiments, so agent-based experiments can run over private endpoints. To use Private Link for the agent service, reach out to your CSA or the Chaos Studio help team for onboarding instructions. For Private Link for service-direct faults, read the following sections for instructions on how to use them.
+
## Resource type support

Currently, you can only enable certain resource types for Chaos Studio virtual network injection:
communications-gateway Monitor Azure Communications Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/monitor-azure-communications-gateway.md
Start with the article [Monitoring Azure resources with Azure Monitor](../azure-
The following sections build on this article by describing the specific data gathered for Azure Communications Gateway. These sections also provide examples for configuring data collection and analyzing this data with Azure tools.

> [!TIP]
-> To understand costs associated with Azure Monitor, see [Usage and estimated costs](../azure-monitor/usage-estimated-costs.md).
+> To understand costs associated with Azure Monitor, see [Azure Monitor cost and usage](../azure-monitor/cost-usage.md).
## Azure Monitor data for Azure Communications Gateway
communications-gateway Reliability Communications Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/reliability-communications-gateway.md
Management regions contain the infrastructure used for the ordering, monitoring
## Availability zone support
-Azure availability zones have a minimum of three physically separate groups of datacenters within each Azure region. Datacenters within each zone are equipped with independent power, cooling, and networking infrastructure. If a local zone fails, regional services, capacity, and high availability are supported by the other zones in the region. Failures can range from software and hardware failures to events such as earthquakes, floods, and fires. Tolerance to failures is achieved with redundancy and logical isolation of Azure services. For more detailed information on availability zones in Azure, see [Regions and availability zones](/azure/availability-zones/az-overview).
### Zone down experience for service regions
During a zone-wide outage, calls handled by the affected zone are terminated, wi
## Disaster recovery: fallback to other regions

This section describes the behavior of Azure Communications Gateway during a region-wide outage.
container-registry Intro Connected Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/intro-connected-registry.md
Scenarios for a connected registry include:
The following image shows a typical deployment model for the connected registry.

### Deployment
-Each connected registry is a resource you manage using a cloud-based Azure container registry. The top parent in the connected registry hierarchy is an Azure container registry in an Azure cloud or in a private deployment of [Azure Stack Hub](/azure-stack/operator/azure-stack-overview).
+Each connected registry is a resource you manage using a cloud-based Azure container registry. The top parent in the connected registry hierarchy is an Azure container registry in an Azure cloud.
Use Azure tools to install the connected registry on a server or device on your premises, or an environment that supports container workloads on-premises such as [Azure IoT Edge](../iot-edge/tutorial-nested-iot-edge.md).
A connected registry can work in one of two modes: *ReadWrite* or *ReadOnly*
The ReadWrite mode is useful when a local development environment is in place. The images are pushed to the local connected registry and from there synchronized to the cloud.

-- **ReadOnly mode** - When the connected registry is in ReadOnly mode, clients may only pull (read) artifacts. This configuration is used for nested IoT Edge scenarios, or other scenarios where clients need to pull a container image to operate.
+- **ReadOnly mode** - When the connected registry is in ReadOnly mode, clients can only pull (read) artifacts. This configuration is used for nested IoT Edge scenarios, or other scenarios where clients need to pull a container image to operate.
### Registry hierarchy
cosmos-db Tune Connection Configurations Net Sdk V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/tune-connection-configurations-net-sdk-v3.md
Direct mode can be customized through the *CosmosClientOptions* passed to the *C
| Configuration option | Default | Recommended | Details | | :: | :--: | :: | :--: | | EnableTcpConnectionEndpointRediscovery | true | true | This represents the flag to enable detection of connections closing from the server. |
-| IdleTcpConnectionTimeout | By default, idle connections are kept open indefinitely. | 20h-24h | This represents the amount of idle time after which unused connections are closed. Recommended values are between 20 minutes and 24 hours. |
+| IdleTcpConnectionTimeout | By default, idle connections are kept open indefinitely. | 20m-24h | This represents the amount of idle time after which unused connections are closed. Recommended values are between 20 minutes and 24 hours. |
| MaxRequestsPerTcpConnection | 30 | 30 | This represents the number of requests allowed simultaneously over a single TCP connection. When more requests are in flight simultaneously, the direct/TCP client opens extra connections. Don't set this value lower than four requests per connection or higher than 50-100 requests per connection. Applications with a high degree of parallelism per connection, with large requests or responses, or with tight latency requirements might get better performance with 8-16 requests per connection. |
| MaxTcpConnectionsPerEndpoint | 65535 | 65535 | This represents the maximum number of TCP connections that may be opened to each Cosmos DB back-end. Together with MaxRequestsPerTcpConnection, this setting limits the number of requests that are simultaneously sent to a single Cosmos DB back-end (MaxRequestsPerTcpConnection x MaxTcpConnectionPerEndpoint). Value must be greater than or equal to 16. |
| OpenTcpConnectionTimeout | 5 seconds | >= 5 seconds | This represents the amount of time allowed for trying to establish a connection. When the time elapses, the attempt is canceled and an error is returned. Longer timeouts delay retries and failures. |
The Gateway mode can be customized through the *CosmosClientOptions* passed to t
To learn more about performance tips for .NET SDK, see [Performance tips for Azure Cosmos DB NET SDK v3](performance-tips-dotnet-sdk-v3.md).

* If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Partial Document Update Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/partial-document-update-getting-started.md
Support for Partial Document Update (Patch API) in the [Azure Cosmos DB JavaScri
);
```

## [Python (Preview)](#tab/python)

Support for Partial Document Update (Patch API) in the [Azure Cosmos DB Python SDK](nosql/sdk-python.md) is available in Preview starting with version *4.4.0b2*. You can download it from the [pip Registry](https://pypi.org/project/azure-cosmos/4.4.0b2/).
cost-management-billing Analyze Cost Data Azure Cost Management Power Bi Template App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/analyze-cost-data-azure-cost-management-power-bi-template-app.md
To install the app:
1. Select the app that you installed.
1. On the Getting started page, select **Connect your data**.

   :::image type="content" source="./media/analyze-cost-data-azure-cost-management-power-bi-template-app/connect-your-data.png" alt-text="Screenshot highlighting the Connect your data link." lightbox="./media/analyze-cost-data-azure-cost-management-power-bi-template-app/connect-your-data.png" :::
-1. In the dialog that appears, enter your EA enrollment number for **BillingProfileIdOrEnrollmentNumber**. Specify the number of months of data to get. Leave the default **Scope** value of **Enrollment Number**, then select **Next**.
- >[!NOTE]
- > The default value for Scope is `Enrollment Number`. Do not change the value, otherwise the initial data connection will fail.
+1. In the dialog that appears, enter your EA enrollment number for **BillingProfileIdOrEnrollmentNumber**. Specify the number of months of data to get. Enter "Enrollment Number" for **Scope**, then select **Next**.
:::image type="content" source="./media/analyze-cost-data-azure-cost-management-power-bi-template-app/ea-number.png" alt-text="Screenshot showing where you enter your E A enrollment information." lightbox="./media/analyze-cost-data-azure-cost-management-power-bi-template-app/ea-number.png" ::: 1. The next installation step connects to your EA enrollment and requires an [Enterprise Administrator](../manage/understand-ea-roles.md) account. Leave all the default values. Select **Sign in and continue**.
cost-management-billing Subscription Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/subscription-transfer.md
tags: billing
Previously updated : 09/22/2023 Last updated : 10/31/2023
Dev/Test products aren't shown in the following table. Transfers for Dev/Test pr
| MPA | MPA | • For details, see [Transfer a customer's Azure subscriptions and/or Reservations (under an Azure plan) to a different CSP](/partner-center/transfer-azure-subscriptions-under-azure-plan). | | MOSP (PAYG) | MOSP (PAYG) | • If you're changing the billing owner of the subscription, see [Transfer billing ownership of an Azure subscription to another account](billing-subscription-transfer.md).<br><br> • Reservations don't automatically transfer so you must open a [billing support ticket](https://azure.microsoft.com/support/create-ticket/) to transfer them. | | MOSP (PAYG) | MCA - individual | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation transfers are supported. |
-| MOSP (PAYG) | EA | • If you're transferring the subscription to the EA enrollment, see [Transfer a subscription to an EA](mosp-ea-transfer.md#transfer-the-subscription-to-the-ea).<br><br> • If you're changing billing ownership, see [Change Azure subscription or account ownership](ea-portal-administration.md#change-azure-subscription-or-account-ownership). |
+| MOSP (PAYG) | EA | • If you're transferring the admin account to the EA enrollment, see [Transfer a subscription to an EA](mosp-ea-transfer.md#transfer-the-subscription-to-the-ea).<br><br> • If you're transferring subscriptions to the EA enrollment, you must create a [billing support ticket](https://azure.microsoft.com/support/create-ticket/). |
| MOSP (PAYG) | MCA - Enterprise | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation transfers are supported. | ## Perform resource transfers
cost-management-billing Prepare Buy Reservation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/prepare-buy-reservation.md
Previously updated : 08/21/2023 Last updated : 10/31/2023
Notifications are sent to the following users:
- Customers with Microsoft Customer Agreement (Azure Plan) - Notifications are sent to the reservation owners and the reservation administrator. - Cloud Solution Provider and new commerce partners
- - Emails are sent to the partner notification contact.
+ - Partner Center Action Center emails are sent to partners. For more information about how partners can update their transactional notifications, see [Action Center preferences](/partner-center/action-center-overview#preferences).
- Individual subscription customers with pay-as-you-go rates - Emails are sent to users who are set up as account administrators, reservation owners, and the reservation administrator.
defender-for-cloud Defender For Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-introduction.md
In this article, you learned about Microsoft Defender for Storage.
+
defender-for-cloud Defender For Storage Malware Scan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-malware-scan.md
Malware Scanning doesn't block access or change permissions to the uploaded blob
- Unsupported storage accounts: Legacy v1 storage accounts aren't supported by malware scanning. - Unsupported service: Azure Files isn't supported by malware scanning.
+- Unsupported regions: Australia Central 2, France South, Germany North, Germany West Central, Jio India West, Korea South, Switzerland West.
+ * Regions that are supported by Defender for Storage but not by malware scanning. Learn more about [availability for Defender for Storage](/azure/defender-for-cloud/defender-for-storage-introduction).
- Unsupported blob types: [Append and Page blobs](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs) aren't supported for Malware Scanning. - Unsupported encryption: Client-side encrypted blobs aren't supported as they can't be decrypted before scanning by the service. However, data encrypted at rest by Customer Managed Key (CMK) is supported. - Unsupported index tag results: Index tag scan result isn't supported in storage accounts with Hierarchical namespace enabled (Azure Data Lake Storage Gen2).
Malware Scanning doesn't block access or change permissions to the uploaded blob
### Throughput capacity and blob size limit - **Scan throughput rate limit:** Malware Scanning can process up to 2 GB per minute for each storage account. If the rate of file upload momentarily exceeds this threshold for a storage account, the system attempts to scan the files in excess of the rate limit. If the rate of file upload consistently exceeds this threshold, some blobs won't be scanned.- - **Blob scan limit:** Malware Scanning can process up to 2,000 files per minute for each storage account. If the rate of file upload momentarily exceeds this threshold for a storage account, the system attempts to scan the files in excess of the rate limit. If the rate of file upload consistently exceeds this threshold, some blobs won't be scanned.- - **Blob size limit:** The maximum size limit for a single blob to be scanned is 2 GB. Blobs that are larger than the limit won't be scanned. ### Blob uploads and index tag updates
Despite the scanning process, access to uploaded data remains unaffected, and th
## Next steps Learn more on how to [set up response for malware scanning](defender-for-storage-configure-malware-scan.md#setting-up-response-to-malware-scanning) results.++
defender-for-iot Concept Sentinel Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/concept-sentinel-integration.md
Together with the new responsibilities, SOC teams deal with new challenges, incl
- **Siloed or inefficient communication and processes** between OT and SOC organizations. -- **Limited technology and tools**, such as lack of visibility or automated security remediation for OT networks. You'll need to evaluate and link information across data sources for OT networks, and integrations with existing SOC solutions may be costly.
+- **Limited technology and tools**, such as lack of visibility or automated security remediation for OT networks. You need to evaluate and link information across data sources for OT networks, and integrations with existing SOC solutions might be costly.
-However, without OT telemetry, context and integration with existing SOC tools and workflows, OT security and operational threats may be handled incorrectly, or even go unnoticed.
+However, without OT data, context and integration with existing SOC tools and workflows, OT security and operational threats might be handled incorrectly, or even go unnoticed.
## Integrate Defender for IoT and Microsoft Sentinel
-Microsoft Sentinel is a scalable cloud service for security information event management (SIEM) security orchestration automated response (SOAR). SOC teams can use the integration between Microsoft Defender for Iot and Microsoft Sentinel to collect data across networks, detect and investigate threats, and respond to incidents.
+Microsoft Sentinel is a scalable cloud service for security information and event management (SIEM) and security orchestration, automation, and response (SOAR). SOC teams can use the integration between Microsoft Defender for IoT and Microsoft Sentinel to collect data across networks, detect and investigate threats, and respond to incidents.
In Microsoft Sentinel, the Defender for IoT data connector and solution brings out-of-the-box security content to SOC teams, helping them to view, analyze and respond to OT security alerts, and understand the generated incidents in the broader organizational threat context.
-Install the Defender for IoT data connector alone to stream your OT network alerts to Microsoft Sentinel. Then, also install the **Microsoft Defender for IoT** solution the extra value of IoT/OT-specific analytics rules, workbooks, and SOAR playbooks, as well as incident mappings to [MITRE ATT&CK for ICS techniques](https://attack.mitre.org/techniques/ics/).
+Install the Defender for IoT data connector alone to stream your OT network alerts to Microsoft Sentinel. Then, also install the **Microsoft Defender for IoT** solution for the extra value of IoT/OT-specific analytics rules, workbooks, and SOAR playbooks, and also incident mappings to [MITRE ATT&CK for ICS techniques](https://attack.mitre.org/techniques/ics/).
+
+Integrating Defender for IoT with Microsoft Sentinel also helps you ingest more data from Microsoft Sentinel's other partner integrations. For more information, see [Integrations with Microsoft and partner services](integrate-overview.md).
+
+> [!NOTE]
+> Some features of Microsoft Sentinel might incur a fee. For more information, see [Plan costs and understand Microsoft Sentinel pricing and billing](/azure/sentinel/billing).
### Integrated detection and response
After you've configured the Defender for IoT data connector and have IoT/OT aler
|Method |Description | ||| |**Use the default data connector rule** | Use the default, **Create incidents based on all alerts generated in Microsoft Defender for IOT** analytics rule provided with the data connector. This rule creates a separate incident in Microsoft Sentinel for each alert streamed from Defender for IoT. |
-|**Use out-of-the-box solution rules** | Enable some or all of the [out-of-the-box analytics rules](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-unifiedmicrosoftsocforot?tab=Overview) provided with the **Microsoft Defender for IoT** solution.<br><br> These analytics rules help to reduce alert fatigue by creating incidents only in specific situations. For example, you might choose to create incidents for excessive login attempts, but for multiple scans detected in the network. |
+|**Use out-of-the-box solution rules** | Enable some or all of the [out-of-the-box analytics rules](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-unifiedmicrosoftsocforot?tab=Overview) provided with the **Microsoft Defender for IoT** solution.<br><br> These analytics rules help to reduce alert fatigue by creating incidents only in specific situations. For example, you might choose to create incidents for excessive sign-in attempts, but not for multiple scans detected in the network. |
|**Create custom rules** | Create custom analytics rules to create incidents based only on your specific needs. You can use the out-of-the-box analytics rules as a starting point, or create rules from scratch. <br><br>Add the following filter to prevent duplicate incidents for the same alert ID: `| where TimeGenerated <= ProcessingEndTime + 60m` | Regardless of the method you choose to create alerts, only one incident should be created for each Defender for IoT alert ID.
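As one sketch of the custom-rule approach, the following example runs a query that includes the deduplication filter from the table above, using the `azure-monitor-query` library. The workspace ID is a placeholder, and the `ProductName` filter is an assumption about how Defender for IoT alerts are labeled in the `SecurityAlert` table:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# The final line is the deduplication clause recommended above.
QUERY = """
SecurityAlert
| where ProductName == "Azure Security Center for IoT"
| where TimeGenerated <= ProcessingEndTime + 60m
"""

response = client.query_workspace(
    workspace_id="<workspace-id>",  # placeholder
    query=QUERY,
    timespan=timedelta(days=1),
)

for table in response.tables:
    for row in table.rows:
        print(row)
```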
Playbooks are collections of automated remediation actions that can be run from
For example, use SOAR playbooks to: -- Open an asset ticket in ServiceNow when a new asset is detected, such as a new engineering workstation. This alert can be an unauthorized device that can be used by adversaries to reprogram PLCs.
+- Open an asset ticket in ServiceNow when a new asset is detected, such as a new engineering workstation. This alert can be an unauthorized device that might be used by adversaries to reprogram PLCs.
-- Send an email to relevant stakeholders when suspicious activity is detected, for example unplanned PLC reprogramming. The mail may be sent to OT personnel, such as a control engineer responsible on the related production line.
+- Send an email to relevant stakeholders when suspicious activity is detected, for example unplanned PLC reprogramming. The mail might be sent to OT personnel, such as a control engineer responsible for the related production line.
## Comparing Defender for IoT events, alerts, and incidents This section clarifies the differences between Defender for IoT events, alerts, and incidents in Microsoft Sentinel. Use the listed queries to view a full list of the current events, alerts, and incidents for your OT networks.
-You'll typically see more Defender for IoT *events* in Microsoft Sentinel than *alerts*, and more Defender for IoT *alerts* than *incidents*.
+You typically see more Defender for IoT *events* in Microsoft Sentinel than *alerts*, and more Defender for IoT *alerts* than *incidents*.
### Defender for IoT events in Microsoft Sentinel
After you've installed the Microsoft Defender for IoT solution and deployed the
### Defender for IoT incidents in Microsoft Sentinel
-Microsoft Sentinel creates incidents based on your analytics rules. You might have several alerts grouped in the same incident, or you may have analytics rules configured to *not* create incidents for specific alert types.
+Microsoft Sentinel creates incidents based on your analytics rules. You might have several alerts grouped in the same incident, or you might have analytics rules configured to *not* create incidents for specific alert types.
To view incidents in Microsoft Sentinel, run the following query: ```kql
For more information, see:
- [Tutorial: Connect Microsoft Defender for IoT with Microsoft Sentinel](../../sentinel/iot-solution.md) - [Detect threats out-of-the-box with Defender for IoT data](../../sentinel/iot-advanced-threat-monitoring.md#detect-threats-out-of-the-box-with-defender-for-iot-data) - [Create custom analytics rules to detect threats](../../sentinel/detect-threats-custom.md)-- [Tutorial Use playbooks with automation rules in Microsoft Sentinel](../../sentinel/tutorial-respond-threats-playbook.md)
+- [Tutorial: Use playbooks with automation rules in Microsoft Sentinel](../../sentinel/tutorial-respond-threats-playbook.md)
defender-for-iot How To Forward Alert Information To Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-forward-alert-information-to-partners.md
This article describes how to configure your OT sensor or on-premises management
## Prerequisites -- Depending on where you want to create your forwarding alert rules, you'll need to have either an [OT network sensor or on-premises management console installed](how-to-install-software.md), with access as an **Admin** user.
+- Depending on where you want to create your forwarding alert rules, you need to have either an [OT network sensor or on-premises management console installed](how-to-install-software.md), with access as an **Admin** user.
For more information, see [Install OT agentless monitoring software](how-to-install-software.md) and [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md). -- You'll also need to define SMTP settings on the OT sensor or on-premises management console.
+- You also need to define SMTP settings on the OT sensor or on-premises management console.
For more information, see [Configure SMTP mail server settings on an OT sensor](how-to-manage-individual-sensors.md#configure-smtp-mail-server-settings) and [Configure SMTP mail server settings on the on-premises management console](how-to-manage-the-on-premises-management-console.md#configure-smtp-mail-server-settings).
This article describes how to configure your OT sensor or on-premises management
|Name |Description | |||
- |**Minimal alert level** | Select the minimum [alert severity level](alert-engine-messages.md#alert-severities) you want to forward. <br><br> For example, if you select **Minor**, minor alerts and any alert above this severity level will be forwarded. |
+ |**Minimal alert level** | Select the minimum [alert severity level](alert-engine-messages.md#alert-severities) you want to forward. <br><br> For example, if you select **Minor**, minor alerts and any alert above this severity level are forwarded. |
|**Any protocol detected** | Toggle on to forward alerts from all protocol traffic or toggle off and select the specific protocols you want to include. | |**Traffic detected by any engine** | Toggle on to forward alerts from all [analytics engines](architecture.md#defender-for-iot-analytics-engines), or toggle off and select the specific engines you want to include. | |**Actions** | Select the type of server you want to forward alerts to, and then define any other required information for that server type. <br><br>To add multiple servers to the same rule, select **+ Add server** and add more details. <br><br>For more information, see [Configure alert forwarding rule actions](#configure-alert-forwarding-rule-actions). |
To edit or delete an existing rule:
|Name |Description | |||
- |**Minimal alert level** | At the top-right of the dialog, use the dropdown list to select the minimum [alert severity level](alert-engine-messages.md#alert-severities) that you want to forward. <br><br>For example, if you select **Minor**, minor alerts and any alert above this severity level will be forwarded. |
+ |**Minimal alert level** | At the top-right of the dialog, use the dropdown list to select the minimum [alert severity level](alert-engine-messages.md#alert-severities) that you want to forward. <br><br>For example, if you select **Minor**, minor alerts and any alert above this severity level are forwarded. |
|**Protocols** | Select **All** to forward alerts from all protocol traffic, or select **Specific** to add specific protocols only. |
- |**Engines**** | Select **All** to forward alerts triggered by all sensor analytics engines, or select **Specific** to add specific engines only. |
+ |**Engines** | Select **All** to forward alerts triggered by all sensor analytics engines, or select **Specific** to add specific engines only. |
|**System Notifications** | Select the **Report System Notifications** option to notify about disconnected sensors or remote backup failures. | |**Alert Notifications** | Select the **Report Alert Notifications** option to notify about an alert's date and time, title, severity, source and destination name and IP address, suspicious traffic, and the engine that detected the event. | |**Actions** | Select **Add** to add an action to apply and enter any parameters values needed for the selected action. Repeat as needed to add multiple actions. <br><br>For more information, see [Configure alert forwarding rule actions](#configure-alert-forwarding-rule-actions). |
The following sections describe the syslog output syntax for each format.
| Name | Description | |--|--|
-| Priority | User.Alert |
+| Priority | `User.Alert` |
| Date and Time | Date and time that the syslog server machine received the information. | | Hostname | Sensor IP |
-| Message | Sensor name: The name of the appliance. <br /> Alert time: The time that the alert was detected: Can vary from the time of the syslog server machine, and depends on the time-zone configuration of the forwarding rule. <br /> Alert Title:  The title of the alert. <br /> Alert message: The message of the alert. <br /> Alert severity: The severity of the alert: **Warning**, **Minor**, **Major**, or **Critical**. <br /> Alert type: **Protocol Violation**, **Policy Violation**, **Malware**, **Anomaly**, or **Operational**. <br /> Protocol: The protocol of the alert. <br /> **Source_MAC**: IP address, name, vendor, or OS of the source device. <br /> Destination_MAC: IP address, name, vendor, or OS of the destination. If data is missing, the value will be **N/A**. <br /> alert_group: The alert group associated with the alert. |
+| Message | Sensor name: The name of the appliance. <br /> Alert time: The time that the alert was detected: Can vary from the time of the syslog server machine, and depends on the time-zone configuration of the forwarding rule. <br /> Alert Title:  The title of the alert. <br /> Alert message: The message of the alert. <br /> Alert severity: The severity of the alert: **Warning**, **Minor**, **Major**, or **Critical**. <br /> Alert type: **Protocol Violation**, **Policy Violation**, **Malware**, **Anomaly**, or **Operational**. <br /> Protocol: The protocol of the alert. <br /> **Source_MAC**: IP address, name, vendor, or OS of the source device. <br /> Destination_MAC: IP address, name, vendor, or OS of the destination. If data is missing, the value is **N/A**. <br /> alert_group: The alert group associated with the alert. |
#### Syslog CEF output fields | Name | Description | |--|--|
-| Priority | User.Alert |
+| Priority | `User.Alert` |
| Date and time | Date and time that the sensor sent the information, in UTC format | | Hostname | Sensor hostname |
-| Message | CEF:0 <br />Microsoft Defender for IoT/CyberX <br />Sensor name <br />Sensor version <br />Microsoft Defender for IoT Alert <br />Alert title <br />Integer indication of severity. 1=**Warning**, 4=**Minor**, 8=**Major**, or 10=**Critical**.<br />msg= The message of the alert. <br />protocol= The protocol of the alert. <br />severity= **Warning**, **Minor**, **Major**, or **Critical**. <br />type= **Protocol Violation**, **Policy Violation**, **Malware**, **Anomaly**, or **Operational**. <br />UUID= UUID of the alert (Optional) <br /> start= The time that the alert was detected. <br />Might vary from the time of the syslog server machine, and depends on the time-zone configuration of the forwarding rule. <br />src_ip= IP address of the source device. (Optional) <br />src_mac= MAC address of the source device. (Optional) <br />dst_ip= IP address of the destination device. (Optional)<br />dst_mac= MAC address of the destination device. (Optional)<br />cat= The alert group associated with the alert. |
+| Message | *CEF:0* <br />Microsoft Defender for IoT/CyberX <br />Sensor name <br />Sensor version <br />Microsoft Defender for IoT Alert <br />Alert title <br />Integer indication of severity. 1=**Warning**, 4=**Minor**, 8=**Major**, or 10=**Critical**.<br />msg= The message of the alert. <br />protocol= The protocol of the alert. <br />severity= **Warning**, **Minor**, **Major**, or **Critical**. <br />type= **Protocol Violation**, **Policy Violation**, **Malware**, **Anomaly**, or **Operational**. <br />UUID= UUID of the alert (Optional) <br /> start= The time that the alert was detected. <br />Might vary from the time of the syslog server machine, and depends on the time-zone configuration of the forwarding rule. <br />src_ip= IP address of the source device. (Optional) <br />src_mac= MAC address of the source device. (Optional) <br />dst_ip= IP address of the destination device. (Optional)<br />dst_mac= MAC address of the destination device. (Optional)<br />cat= The alert group associated with the alert. |
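To make the CEF layout concrete, here's a short sketch that parses a message shaped like the fields above. The sample line is hypothetical, assembled from the table rather than captured from a sensor, so treat the exact field order and values as illustrative:

```python
import re

# Hypothetical alert line built from the CEF fields documented above.
SAMPLE = (
    "CEF:0|Microsoft Defender for IoT|CyberX|22.1|"
    "Microsoft Defender for IoT Alert|Unauthorized PLC Programming|8|"
    "msg=PLC logic was modified protocol=modbus severity=Major "
    "type=Policy Violation start=2023-10-31 10:00:00 "
    "src_ip=10.1.0.4 dst_ip=10.1.0.7 cat=Configuration Changes"
)

def parse_cef(line):
    # The first seven pipe-delimited values form the CEF header; the text
    # after the final pipe is the space-separated key=value extension.
    *header, extension = line.split("|", 7)
    fields = {}
    # Split only on whitespace that precedes a new key=, so values that
    # contain spaces (like "Policy Violation") stay intact.
    for token in re.split(r"\s+(?=\w+=)", extension):
        key, _, value = token.partition("=")
        fields[key] = value
    return header, fields

header, fields = parse_cef(SAMPLE)
print(fields["severity"], "|", fields["type"])  # Major | Policy Violation
```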
#### Syslog LEEF output fields | Name | Description | |--|--|
-| Priority | User.Alert |
+| Priority | `User.Alert` |
| Date and time | Date and time that the sensor sent the information, in UTC format | | Hostname | Sensor IP |
-| Message | Sensor name: The name of the Microsoft Defender for IoT appliance. <br />LEEF:1.0 <br />Microsoft Defender for IoT <br />Sensor <br />Sensor version <br />Microsoft Defender for IoT Alert <br /> Title:  The title of the alert. <br />msg: The message of the alert. <br />protocol: The protocol of the alert.<br />severity: **Warning**, **Minor**, **Major**, or **Critical**. <br />type: The type of the alert: **Protocol Violation**, **Policy Violation**, **Malware**, **Anomaly**, or **Operational**. <br />start: The time of the alert. It may be different from the time of the syslog server machine, and depends on the time-zone configuration. <br />src_ip: IP address of the source device.<br />dst_ip: IP address of the destination device. <br />cat: The alert group associated with the alert. |
+| Message | Sensor name: The name of the Microsoft Defender for IoT appliance. <br />*LEEF:1.0* <br />Microsoft Defender for IoT <br />Sensor <br />Sensor version <br />Microsoft Defender for IoT Alert <br /> Title:  The title of the alert. <br />msg: The message of the alert. <br />protocol: The protocol of the alert.<br />severity: **Warning**, **Minor**, **Major**, or **Critical**. <br />type: The type of the alert: **Protocol Violation**, **Policy Violation**, **Malware**, **Anomaly**, or **Operational**. <br />start: The time of the alert. It might be different from the time of the syslog server machine, and depends on the time-zone configuration. <br />src_ip: IP address of the source device.<br />dst_ip: IP address of the destination device. <br />cat: The alert group associated with the alert. |
### Webhook server action
In the **Actions** area, enter the following details:
|**Hostname / Port** | Enter the NetWitness server's hostname and port. | |**Time zone** | Enter the time zone you want to use in the time stamp for the alert detection at the SIEM. |
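If you use the webhook server action described earlier, the receiving endpoint only needs to accept HTTP POSTs from the sensor. Here's a minimal, hypothetical receiver sketch using Flask; the route path and port are placeholders, and the payload schema depends on your sensor version and rule configuration:

```python
from flask import Flask, request

app = Flask(__name__)

@app.post("/defender-alerts")  # placeholder path -- match the URL set in the forwarding rule
def receive_alert():
    # Accept whatever payload the sensor forwards; don't assume a fixed schema.
    payload = request.get_json(silent=True) or request.data.decode("utf-8", "replace")
    print("Received alert:", payload)
    return "", 204

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```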
-### Other partner server integrations
+## Configure forwarding rules for partner integrations
-You may be integrating Defender for IoT with a partner service to send alert or device inventory information to another security or device management system, or to communicate with partner-side firewalls.
+You might be integrating Defender for IoT with a partner service to send alert or device inventory information to another security or device management system, or to communicate with partner-side firewalls.
[Partner integrations](integrate-overview.md) can help to bridge previously siloed security solutions, enhance device visibility, and accelerate system-wide response to more rapidly mitigate risks.
-In such cases, use the **Actions** area to enter credentials and other information required to communicate with integrated partner services.
+In such cases, use supported **Actions** to enter credentials and other information required to communicate with integrated partner services.
For more information, see: -- [Integrate Qradar with Microsoft Defender for IoT](tutorial-qradar.md)-- [Integrate Splunk with Microsoft Defender for IoT](tutorial-splunk.md)-- [Integrate CyberArk with Microsoft Defender for IoT](tutorial-cyberark.md) - [Integrate Fortinet with Microsoft Defender for IoT](tutorial-fortinet.md)-- [Integrate ClearPass with Microsoft Defender for IoT](tutorial-clearpass.md)-- [Integrate Forescout with Microsoft Defender for IoT](tutorial-forescout.md)-- [Integrate Palo-Alto with Microsoft Defender for IoT](tutorial-palo-alto.md)
+- [Integrate Qradar with Microsoft Defender for IoT](tutorial-qradar.md)
-## Configure alert groups in partner services
+### Configure alert groups in partner services
When you configure forwarding rules to send alert data to Syslog servers, QRadar, and ArcSight, *alert groups* are automatically applied and are available in those partner servers.
-*Alert groups* help SOC teams using those partner solutions to manage alerts based on enterprise security policies and business priorities. For example, alerts about new detections are organized into a *discovery* group, and will include any alerts about new devices, VLANs, user accounts, MAC addresses, and more.
+*Alert groups* help SOC teams using those partner solutions to manage alerts based on enterprise security policies and business priorities. For example, alerts about new detections are organized into a *discovery* group, which includes any alerts about new devices, VLANs, user accounts, MAC addresses, and more.
Alert groups appear in partner services with the following prefixes:
defender-for-iot Integrate Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/integrate-overview.md
Title: Integrations with partner services - Microsoft Defender for IoT
-description: Learn about supported integrations with Microsoft Defender for IoT.
Previously updated : 08/02/2022
+ Title: Integrate with partner services | Microsoft Defender for IoT
+description: Learn about supported integrations across your organization's security stack with Microsoft Defender for IoT.
Last updated : 09/06/2023 # Integrations with Microsoft and partner services
-Integrate Microsoft Defender for Iot with partner services to view partner data in Defender for IoT, or to view Defender for IoT data in a partner service.
+Integrate Microsoft Defender for IoT with partner services to view data from across your security stack in Defender for IoT, or to view Defender for IoT data in one of your security ecosystem integrations.
+
+> [!IMPORTANT]
+> Defender for IoT is refreshing its security stack integrations to improve the overall robustness, scalability, and ease of maintenance of various security solutions.
+>
+> If you're integrating your security solution with cloud-based systems, we recommend that you use data connectors through [Microsoft Sentinel](concept-sentinel-integration.md). For on-premises integrations, we recommend that you either configure your OT sensor to [forward syslog events](how-to-forward-alert-information-to-partners.md), or use [Defender for IoT APIs](references-work-with-defender-for-iot-apis.md).
+>
+> The legacy [Aruba ClearPass](#aruba-clearpass), [Palo Alto Panorama](#palo-alto), and [Splunk](#splunk) integrations are supported through October 2024 using sensor version 23.1.3, and won't be supported in upcoming major software versions. For customers using legacy integration methods, we recommend moving your integrations to the standard cloud or on-premises methods.
## Aruba ClearPass |Name |Description |Support scope |Supported by |Learn more | ||||||
-|**Aruba ClearPass** | Share Defender for IoT data with ClearPass Security Exchange and update the ClearPass Policy Manager Endpoint Database with Defender for IoT data. | - OT networks<br>- Locally managed sensors and on-premises management consoles | Microsoft | [Integrate ClearPass with Microsoft Defender for IoT](tutorial-clearpass.md) |
+| **Aruba ClearPass** (cloud) | View Defender for IoT data together with Aruba ClearPass data, using Microsoft Sentinel to create custom dashboards, custom alerts, and improve your investigation ability.<br><br> Connect to [Microsoft Sentinel](concept-sentinel-integration.md), and install the [Aruba ClearPass data connector](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-arubaclearpass?tab=Overview). | - OT networks <br>- Cloud-connected or locally managed OT sensors | Microsoft | [Microsoft Sentinel documentation](/azure/sentinel/data-connectors/aruba-clearpass) |
+| **Aruba ClearPass** (on-premises) | View Defender for IoT data together with Aruba ClearPass data by doing one of the following:<br><br>- Configure your sensor to send syslog files directly to ClearPass. <br>- Use Defender for IoT APIs. | - OT networks <br>- Cloud-connected or locally managed OT sensors | Microsoft | [Forward on-premises OT alert information](how-to-forward-alert-information-to-partners.md) <br><br>[Defender for IoT API reference](references-work-with-defender-for-iot-apis.md)|
+|**Aruba ClearPass** (legacy) | Share Defender for IoT data directly with ClearPass Security Exchange and update the ClearPass Policy Manager Endpoint Database with Defender for IoT data. | - OT networks<br>- Locally managed sensors and on-premises management consoles | Microsoft | [Integrate ClearPass with Microsoft Defender for IoT](tutorial-clearpass.md) |
+ ## Axonius
Integrate Microsoft Defender for Iot with partner services to view partner data
|Name |Description |Support scope |Supported by |Learn more | ||||||
-|**Defender for IoT data connector in Microsoft Sentinel** | Displays Defender for IoT cloud data in Microsoft Sentinel, supporting end-to-end SOC investigations for Defender for IoT alerts. | - OT and Enterprise IoT networks <br>- Cloud-connected sensors | Microsoft | [Integrate Microsoft Sentinel and Microsoft Defender for IoT](../../sentinel/iot-solution.md?tabs=use-out-of-the-box-analytics-rules-recommended) |
-|**Microsoft Sentinel** | Send Defender for IoT alerts from on-premises resources to Microsoft Sentinel. | - OT networks <br>- Locally managed sensors and on-premises management consoles | Microsoft | [Connect on-premises OT network sensors to Microsoft Sentinel](integrations/on-premises-sentinel.md) |
+|**Defender for IoT data connector in Microsoft Sentinel** (cloud) | Displays Defender for IoT cloud data in Microsoft Sentinel, supporting end-to-end SOC investigations for Defender for IoT alerts. <br><br>Connects to other partner services, allowing you to synchronize your data between Defender for IoT and supported partner systems, across Microsoft Sentinel. | - OT and Enterprise IoT networks <br>- Cloud-connected sensors | Microsoft | - [OT threat monitoring in enterprise SOCs](concept-sentinel-integration.md) <br>- [Tutorial: Connect Microsoft Defender for IoT with Microsoft Sentinel](iot-solution.md) <br>- [Tutorial: Investigate and detect threats for IoT devices](iot-advanced-threat-monitoring.md) |
+| **Microsoft Sentinel** (on-premises) | View Defender for IoT data together with Microsoft Sentinel data by configuring your sensor to send syslog files directly to Microsoft Sentinel.| - OT networks <br>- Cloud-connected or locally managed OT sensors | Microsoft | [Forward on-premises OT alert information](how-to-forward-alert-information-to-partners.md) |
+|**Microsoft Sentinel** (legacy) | Send Defender for IoT alerts from on-premises resources to Microsoft Sentinel. | - OT networks <br>- Locally managed sensors and on-premises management consoles | Microsoft | [Connect on-premises OT network sensors to Microsoft Sentinel](integrations/on-premises-sentinel.md) |
## Palo Alto |Name |Description |Support scope |Supported by |Learn more | ||||||
-|**Palo Alto** | Use Defender for IoT data to block critical threats with Palo Alto firewalls, either with automatic blocking or with blocking recommendations. | - OT networks<br>- Locally managed sensors and on-premises management consoles | Microsoft | [Integrate Palo-Alto with Microsoft Defender for IoT](tutorial-palo-alto.md) |
+| **Palo Alto Panorama** (cloud) | View Defender for IoT data together with Panorama data. Use Microsoft Sentinel solutions, which include out-of-the-box workbooks, hunting queries, automation playbooks, and analytics rules, or create custom dashboards, alerts, and more. <br><br> Connect to [Microsoft Sentinel](concept-sentinel-integration.md), and install one or more of the following solutions: <br>- [Palo Alto PAN-OS Solution](/azure/sentinel/data-connectors/palo-alto-networks-firewall) <br>- [Palo Alto Networks Cortex Data Lake Solution](/azure/sentinel/data-connectors/palo-alto-networks-cortex-data-lake-cdl) <br>- [Palo Alto Prisma Cloud CSPM solution](/azure/sentinel/data-connectors/palo-alto-prisma-cloud-cspm-using-azure-function) | - OT networks <br>- Cloud-connected or locally managed OT sensors | Microsoft |Microsoft Sentinel documentation: <br>- [Palo Alto PAN-OS Solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-paloaltopanos?tab=Overview) <br>- [Palo Alto Networks Cortex Data Lake Solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-paloaltocdl?tab=Overview) <br>- [Palo Alto Prisma Cloud CSPM solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-paloaltoprisma?tab=Overview) |
+| **Palo Alto Panorama** (on-premises) | View Defender for IoT data together with Panorama data by configuring your sensor to send syslog files directly to Palo Alto Panorama.| - OT networks <br>- Cloud-connected or locally managed OT sensors | Microsoft | [Forward on-premises OT alert information](how-to-forward-alert-information-to-partners.md) |
+|**Palo Alto** (legacy) | Use Defender for IoT data to block critical threats with Palo Alto firewalls, either with automatic blocking or with blocking recommendations. | - OT networks<br>- Locally managed sensors and on-premises management consoles | Microsoft | [Integrate Palo-Alto with Microsoft Defender for IoT](tutorial-palo-alto.md) |
## RSA NetWitness
Integrate Microsoft Defender for Iot with partner services to view partner data
|Name |Description |Support scope |Supported by |Learn more | ||||||
-| **Splunk** | Send Defender for IoT alerts to Splunk | - OT networks <br>- Cloud connected sensors | Microsoft | [Stream Defender for IoT cloud alerts to a partner SIEM](integrations/send-cloud-data-to-partners.md) |
-|**Splunk** | Send Defender for IoT alerts to Splunk | - OT networks<br>- Locally managed sensors and on-premises management consoles | Microsoft | [Integrate Splunk with Microsoft Defender for IoT](tutorial-splunk.md) |
+| **Splunk** (cloud) | Send Defender for IoT alerts to Splunk using one of the following methods: <br><br>- Via the [OT Security Add-on for Splunk](https://apps.splunk.com/app/5151), which widens your capacity to ingest and monitor OT assets and provides OT vulnerability management reports that help you comply with and audit for NERC CIP. <br><br>- Via a SIEM that supports Event Hubs, such as Microsoft Sentinel | - OT networks <br>- Cloud-connected or locally managed OT sensors | Microsoft and Splunk |- Splunk documentation on [The OT Security Add-on for Splunk](https://splunk.github.io/ot-security-solution/integrationguide/) and [installing add-ins](https://docs.splunk.com/Documentation/AddOns/released/Overview/Distributedinstall) <br>- [Stream Defender for IoT cloud alerts to a partner SIEM](integrations/send-cloud-data-to-partners.md) |
+| **Splunk** (on-premises) | View Defender for IoT data together with Splunk data by configuring your sensor to send syslog files directly to Splunk.| - OT networks <br>- Cloud-connected or locally managed OT sensors | Microsoft | [Forward on-premises OT alert information](how-to-forward-alert-information-to-partners.md) |
+|**Splunk** (on-premises, legacy integration) | Send Defender for IoT alerts to Splunk | - OT networks<br>- Locally managed sensors and on-premises management consoles | Microsoft | [Integrate Splunk with Microsoft Defender for IoT](tutorial-splunk.md) |
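For the Event Hubs route mentioned in the table above, a downstream SIEM or script reads forwarded alert events off the hub. Here's a minimal sketch using the `azure-eventhub` package; the connection string and hub name are placeholders:

```python
from azure.eventhub import EventHubConsumerClient

client = EventHubConsumerClient.from_connection_string(
    conn_str="<event-hubs-namespace-connection-string>",  # placeholder
    consumer_group="$Default",
    eventhub_name="<event-hub-name>",                     # placeholder
)

def on_event(partition_context, event):
    # Each event body carries a forwarded alert; hand it to your SIEM pipeline here.
    print(f"partition {partition_context.partition_id}: {event.body_as_str()}")

with client:
    # "-1" reads from the start of the retained stream; use "@latest" for new events only.
    client.receive(on_event=on_event, starting_position="-1")
```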
## Next steps
-> [!div class="nextstepaction"]
-> [Stream Defender for IoT cloud alerts to a partner SIEM](integrations/send-cloud-data-to-partners.md)
+For more information, see:
+
+- [Stream Defender for IoT cloud alerts to a partner SIEM](integrations/send-cloud-data-to-partners.md)
defender-for-iot On Premises Sentinel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/integrations/on-premises-sentinel.md
Title: How to connect on-premises Defender for IoT resources to Microsoft Sentinel
-description: Learn how to stream data into Microsoft Sentinel from an on-premises and locally-managed Microsoft Defender for IoT OT network sensor or an on-premises management console.
+ Title: Connect Defender for IoT on-premises resources to Microsoft Sentinel (legacy)
+description: This article describes the legacy method for connecting your OT sensor or on-premises management console to Microsoft Sentinel.
Previously updated : 12/26/2022 Last updated : 08/17/2023
+#CustomerIntent: As an admin user for my locally-managed OT sensor, I want to learn how to connect my sensor to Microsoft Sentinel so that I can view alerts generated together with other Microsoft Sentinel data.
-# Connect on-premises OT network sensors to Microsoft Sentinel
+# Connect OT network sensors or on-premises management consoles to Microsoft Sentinel (legacy)
-You can [stream Microsoft Defender for IoT data into Microsoft Sentinel](../iot-solution.md) via the Azure portal, for any data coming from cloud-connected OT network sensors.
+This article describes the legacy method for connecting your OT sensor or on-premises management console to Microsoft Sentinel. Stream data into Microsoft Sentinel whenever you want to use Microsoft Sentinel's advanced threat hunting, security analytics, and automation features when responding to security incidents and threats across your network.
-However, if you're working either in a hybrid environment, or completely on-premises, you might want to stream data in from your locally-managed sensors to Microsoft Sentinel. To do this, create forwarding rules on either your OT network sensor, or for multiple sensors from an on-premises management console.
-
-Stream data into Microsoft Sentinel whenever you want to use Microsoft Sentinel's advanced threat hunting, security analytics, and automation features when responding to security incidents and threats across your network. For more information, see [Microsoft Sentinel documentation](../../../sentinel/index.yml).
+> [!IMPORTANT]
+> If you're using a cloud-connected sensor, we recommend that you connect Defender for IoT data using the Microsoft Sentinel solution instead of the legacy integration method. For more information, see:
+>
+> - [OT threat monitoring in enterprise SOCs](../concept-sentinel-integration.md)
+> - [Tutorial: Connect Microsoft Defender for IoT with Microsoft Sentinel](../iot-solution.md)
+> - [Tutorial: Investigate and detect threats for IoT devices](../iot-advanced-threat-monitoring.md)
## Prerequisites
defender-for-iot Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes-archive.md
Title: What's new archive for Microsoft Defender for IoT for organizations
-description: Learn about the features and enhancements released for Microsoft Defender for IoT for organizations more than 6 months ago.
+description: Learn about the features and enhancements released for Microsoft Defender for IoT for organizations more than six months ago.
Last updated 08/07/2022
Noted features listed below are in PREVIEW. The [Azure Preview Supplemental Term
The new **IoT OT Threat Monitoring with Defender for IoT solution** is available and provides enhanced capabilities for Microsoft Defender for IoT integration with Microsoft Sentinel. The **IoT OT Threat Monitoring with Defender for IoT solution** is a set of bundled content, including analytics rules, workbooks, and playbooks, configured specifically for Defender for IoT data. This solution currently supports only Operational Networks (OT/ICS).
-For information on integrating with Microsoft Sentinel, see [Tutorial: Integrate Defender for Iot and Sentinel](../../sentinel/iot-solution.md?tabs=use-out-of-the-box-analytics-rules-recommended)
+For information on integrating with Microsoft Sentinel, see [Tutorial: Integrate Defender for IoT and Sentinel](../../sentinel/iot-solution.md?tabs=use-out-of-the-box-analytics-rules-recommended).
### Apache Log4j vulnerability
This new functionality is available on the following alerts:
The following feature enhancements are available with version 10.5.3 of Microsoft Defender for IoT. -- The on-premises management console, has a new API to support our ServiceNow integration. For more information, see [Integration API reference for on-premises management consoles (Public preview)](api/management-integration-apis.md#integration-api-reference-for-on-premises-management-consoles-public-preview).
+- The on-premises management console has a new API to support our ServiceNow integration. For more information, see [Integration API reference for on-premises management consoles (Public preview)](api/management-integration-apis.md#integration-api-reference-for-on-premises-management-consoles-public-preview).
- Enhancements have been made to the network traffic analysis of multiple OT and ICS protocol dissectors.
Certificate and password recovery enhancements were made for this release.
This version lets you: -- Upload SSL certificates directly to the sensors and on-premises management consoles.
+- Upload TLS/SSL certificates directly to the sensors and on-premises management consoles.
- Perform validation between the on-premises management console and connected sensors, and between a management console and a High Availability management console. Validation is based on expiration dates, root CA authenticity, and Certificate Revocation Lists. If validation fails, the session won't continue. For upgrades: -- There's no change in SSL certificate or validation functionality during the upgrade.-- After upgrading, sensor and on-premises management console administrative users can replace SSL certificates, or activate SSL certificate validation from the System Settings, SSL Certificate window.
+- There's no change in TLS/SSL certificate or validation functionality during the upgrade.
+- After you update your sensors and on-premises management consoles, administrative users can replace TLS/SSL certificates, or activate TLS/SSL certificate validation from the System Settings, TLS/SSL Certificate window.
For Fresh Installations: -- During first-time sign-in, users are required to either use an SSL Certificate (recommended) or a locally generated self-signed certificate (not recommended)
+- During first-time sign-in, users are required to either use a TLS/SSL Certificate (recommended) or a locally generated self-signed certificate (not recommended).
- Certificate validation is turned on by default for fresh installations. #### Password recovery
defender-for-iot Tutorial Clearpass https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-clearpass.md
Title: Integrate ClearPass with Microsoft Defender for IoT
-description: In this tutorial, you learn how to integrate Microsoft Defender for IoT with ClearPass.
- Previously updated : 02/07/2022
+description: In this tutorial, you learn how to integrate Microsoft Defender for IoT with ClearPass using Defender for IoT's legacy, on-premises integration.
+ Last updated : 09/06/2023 # Integrate ClearPass with Microsoft Defender for IoT
-This article helps you learn how to integrate ClearPass Policy Manager (CPPM) with Microsoft Defender for IoT.
-The Defender for IoT platform delivers continuous ICS threat monitoring and device discovery, combining a deep embedded understanding of industrial protocols, devices, and applications with ICS-specific behavioral anomaly detection, threat intelligence, risk analytics, and automated threat modeling.
+This article describes how to integrate Aruba ClearPass with Microsoft Defender for IoT, in order to view both ClearPass and Defender for IoT information in a single place.
-Defender for IoT detects, discovers, and classifies OT and ICS endpoints, and share information directly with ClearPass using the ClearPass Security Exchange framework and the OpenAPI.
+Viewing both Defender for IoT and ClearPass information together provides SOC analysts with multidimensional visibility into the specialized OT protocols and devices deployed in industrial environments, along with ICS-aware behavioral analytics to rapidly detect suspicious or anomalous behavior.
-Defender for IoT automatically updates the ClearPass Policy Manager Endpoint Database with endpoint classification data and several custom security attributes.
+## Cloud-based integrations
-The integration allows for the following:
+> [!TIP]
+> Cloud-based security integrations provide several benefits over on-premises solutions, such as centralized, simpler sensor management and centralized security monitoring.
+>
+> Other benefits include real-time monitoring, efficient resource use, increased scalability and robustness, improved protection against security threats, simplified maintenance and updates, and seamless integration with third-party solutions.
+>
-- Viewing ICS and SCADA security threats identified by Defender for IoT security engines.
+If you're integrating a cloud-connected OT sensor with Aruba ClearPass, we recommend that you connect to [Microsoft Sentinel](concept-sentinel-integration.md), and then install the [Aruba ClearPass data connector](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-arubaclearpass?tab=Overview).
-- Viewing device inventory information discovered by the Defender for IoT sensor. The sensor delivers centralized visibility of all network devices and endpoints across the IT and OT infrastructure. From here, a centralized endpoint and edge security policy can be defined and administered in the ClearPass system.
+Microsoft Sentinel is a scalable cloud service for security information and event management (SIEM) and security orchestration, automation, and response (SOAR). SOC teams can use the integration between Microsoft Defender for IoT and Microsoft Sentinel to collect data across networks, detect and investigate threats, and respond to incidents.
-In this article, you learn how to:
+In Microsoft Sentinel, the Defender for IoT data connector and solution brings out-of-the-box security content to SOC teams, helping them to view, analyze and respond to OT security alerts, and understand the generated incidents in the broader organizational threat context.
-> [!div class="checklist"]
->
-> - Create a ClearPass API user
-> - Create a ClearPass operator profile
-> - Create a ClearPass OAuth API client
-> - Configure Defender for IoT to integrate with ClearPass
-> - Define the ClearPass forwarding rule
-> - Monitor ClearPass and Defender for IoT communication
+For more information, see:
-## Prerequisites
+- [Tutorial: Connect Microsoft Defender for IoT with Microsoft Sentinel](iot-solution.md)
+- [Tutorial: Investigate and detect threats for IoT devices](iot-advanced-threat-monitoring.md)
+- [Microsoft Sentinel documentation](/azure/sentinel/data-connectors/aruba-clearpass).
-Before you begin, make sure that you have the following prerequisites:
+## On-premises integrations
-### Aruba ClearPass requirements
+If you're working with an air-gapped, locally managed OT sensor, you need an on-premises solution to view Defender for IoT and ClearPass information in the same place.
-CPPM runs on hardware appliances with pre-installed software or as a Virtual Machine under the following hypervisors. Hypervisors that run on a client computer such as VMware Player aren't supported.
+In such cases, we recommend that you configure your OT sensor to send syslog files directly to ClearPass, or use Defender for IoT's built-in API.
-- VMware ESXi 5.5, 6.0, 6.5, 6.6 or higher.
+For more information, see:
-- Microsoft Hyper-V Server 2012 R2 or 2016 R2.
+- [Forward on-premises OT alert information](how-to-forward-alert-information-to-partners.md)
+- [Defender for IoT API reference](references-work-with-defender-for-iot-apis.md)
-- Hyper-V on Microsoft Windows Server 2012 R2 or 2016 R2. -- KVM on CentOS 7.5 or later.
+## On-premises integration (legacy)
-### Defender for IoT requirements
+This section describes how to integrate Defender for IoT and ClearPass Policy Manager (CPPM) using the legacy, on-premises integration.
-- Defender for IoT version 2.5.1 or higher.
+> [!IMPORTANT]
+> The legacy Aruba ClearPass integration is supported through October 2024 using sensor version 23.1.3, and won't be supported in upcoming major software versions. For customers using the legacy integration, we recommend moving to one of the following methods:
+>
+> - If you're integrating your security solution with cloud-based systems, we recommend that you use data connectors through [Microsoft Sentinel](#cloud-based-integrations).
+> - For on-premises integrations, we recommend that you either configure your OT sensor to [forward syslog events, or use Defender for IoT APIs](#on-premises-integrations).
+>
+
+### Prerequisites
-- Access to a Defender for IoT OT sensor as an Admin user. For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md).
+Before you begin, make sure that you have the following prerequisites:
-## Create a ClearPass API user
+|Prerequisite |Description |
+|||
+|**Aruba ClearPass requirements** | CPPM runs on hardware appliances with pre-installed software or as a Virtual Machine under the following hypervisors. <br>- VMware ESXi 5.5, 6.0, 6.5, 6.6 or higher. <br>- Microsoft Hyper-V Server 2012 R2 or 2016 R2. <br>- Hyper-V on Microsoft Windows Server 2012 R2 or 2016 R2. <br>- KVM on CentOS 7.5 or later. <br><br>Hypervisors that run on a client computer such as VMware Player aren't supported. |
+|**Defender for IoT requirements** | - Defender for IoT version 2.5.1 or higher. <br>- Access to a Defender for IoT OT sensor as an [Admin user](roles-on-premises.md). |
+
+### Create a ClearPass API user
As part of the communications channel between the two products, Defender for IoT uses many APIs (both TIPS and REST). Access to the TIPS APIs is validated via a username and password combination. This user ID needs to have minimum levels of access. Don't use a Super Administrator profile, but instead use API Administrator as shown below.
As part of the communications channel between the two products, Defender for IoT
1. Select **Add**.
-## Create a ClearPass operator profile
+### Create a ClearPass operator profile
Defender for IoT uses the REST API as part of the integration. REST APIs are authenticated under an OAuth framework. To sync with Defender for IoT, you need to create an API Client.
In order to secure access to the REST API for the API Client, create a restricte
| **API Services** | Set to **Allow Access** | | **Policy Manager** | Set the following: <br />- **Dictionaries**: **Attributes** set to **Read, Write, Delete**<br />- **Dictionaries**: **Fingerprints** set to **Read, Write, Delete**<br />- **Identity**: **Endpoints** set to **Read, Write, Delete** |
-## Create a ClearPass OAuth API client
+### Create a ClearPass OAuth API client
1. In the main window, select **Administrator** > **API Services** > **API Clients**.
In order to secure access to the REST API for the API Client, create a restricte
- CPPM OAuth2 API Client Secret
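As a rough sketch of how those OAuth credentials are used, the following requests-based example obtains a bearer token from CPPM and makes one authenticated REST call. The host, client ID, and secret are placeholders, and the endpoint paths are assumptions based on common ClearPass REST API conventions:

```python
import requests

CPPM_HOST = "https://clearpass.example.com"  # placeholder

# Exchange the API client credentials for a bearer token.
token_resp = requests.post(
    f"{CPPM_HOST}/api/oauth",
    json={
        "grant_type": "client_credentials",
        "client_id": "<client-id>",          # placeholder
        "client_secret": "<client-secret>",  # placeholder
    },
    timeout=30,
)
token_resp.raise_for_status()
token = token_resp.json()["access_token"]

# Example authenticated call: read the Endpoint Database entries that
# Defender for IoT keeps in sync.
endpoints = requests.get(
    f"{CPPM_HOST}/api/endpoint",
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
endpoints.raise_for_status()
print(endpoints.json())
```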
-## Configure Defender for IoT to integrate with ClearPass
+### Configure Defender for IoT to integrate with ClearPass
To enable viewing the device inventory in ClearPass, you need to set up Defender for IoT-ClearPass sync. When the sync configuration is complete, the Defender for IoT platform updates the ClearPass Policy Manager EndpointDb as it discovers new endpoints.
To enable viewing the device inventory in ClearPass, you need to set up Defender
1. Select **Save**.
-## Define a ClearPass forwarding rule
+### Define a ClearPass forwarding rule
To enable viewing the alerts discovered by Defender for IoT in Aruba, you need to set the forwarding rule. This rule defines which information about the ICS and SCADA security threats identified by Defender for IoT security engines is sent to ClearPass.
-Forwarding alert rules run only on alerts triggered after the forwarding rule is created. Alerts already in the system from before the forwarding rule was created aren't affected by the rule.
-
-**To define a ClearPass forwarding rule on the Defender for IoT sensor**:
-
-1. Sign in to the sensor, and select **Forwarding**.
-
-1. Select **+ Create new rule**.
-
-1. In the **Add forwarding rule** pane, define the rule parameters:
-
- :::image type="content" source="media/tutorial-clearpass/create-rule.png" alt-text="Screenshot of how to create a Forwarding Rule." lightbox="media/tutorial-clearpass/create-rule.png":::
-
- | Parameter | Description |
- |--|--|
- | **Rule name** | The forwarding rule name. |
- | **Minimal alert level** | The minimal security level incident to forward. For example, if Minor is selected, minor alerts and any alert above this severity level will be forwarded. |
- | **Any protocol detected** | Toggle off to select the protocols you want to include in the rule. |
- | **Traffic detected by any engine** | Toggle off to select the traffic you want to include in the rule. |
-
-1. In the **Actions** area, define the following values:
-
- | Parameter | Description |
- |--|--|
- | **Server** | Select ClearPass. |
- | **Host** | Define the ClearPass server IP to send alert information. |
- | **Port** | Define the ClearPass port to send alert information. |
-
-1. Configure which alert information you want to forward:
-
- | Parameter | Description |
- |--|--|
- | **Report illegal function codes** | Protocol violations - Illegal field value violating ICS protocol specification (potential exploit). |
- | **Report unauthorized PLC programming / firmware updates** | Unauthorized PLC changes. |
- | **Report unauthorized PLC stop** | PLC stop (downtime). |
- | **Report malware related alerts** | Industrial malware attempts, such as TRITON, NotPetya. |
- | **Report unauthorized scanning** | Unauthorized scanning (potential reconnaissance) |
-
-1. Select **Save**.
+For more information, see [On-premises integrations](#on-premises-integrations).
-## Monitor ClearPass and Defender for IoT communication
+### Monitor ClearPass and Defender for IoT communication
Once the sync has started, endpoint data is populated directly into the Policy Manager EndpointDb, you can view the last update time from the integration configuration screen.
:::image type="content" source="media/tutorial-clearpass/last-sync.png" alt-text="Screenshot of the view the time and date of your last sync." lightbox="media/tutorial-clearpass/last-sync.png":::
-If Sync isn't working, or shows an error, then itΓÇÖs likely youΓÇÖve missed capturing some of the information. Recheck the data recorded.
+If the sync isn't working or shows an error, it's likely you've missed capturing some of the information. Recheck the data recorded.
Additionally, you can view the API calls between Defender for IoT and ClearPass from **Guest** > **Administration** > **Support** > **Application Log**.
defender-for-iot Tutorial Fortinet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-fortinet.md
The FortiGate firewall can be used to block suspicious traffic.
Forwarding alert rules run only on alerts triggered after the forwarding rule is created. Alerts already in the system from before the forwarding rule was created aren't affected by the rule.
-**To set a forwarding rule to block malware-related alerts**:
+When creating your forwarding rule:
-1. Sign in to the Microsoft Defender for IoT sensor, and select **Forwarding**.
+1. In the **Actions** area, select **FortiGate**.
-1. Select **+ Create new rule**.
-
-1. In the **Add forwarding rule** pane, define the rule parameters:
-
- :::image type="content" source="media/tutorial-fortinet/forward-rule.png" alt-text="Screenshot of the Forwarding window option in a sensor." lightbox="media/tutorial-fortinet/forward-rule.png":::
-
- | Parameter | Description |
- |--|--|
- | **Rule name** | The forwarding rule name. |
- | **Minimal alert level** | The minimal security level incident to forward. For example, if Minor is selected, minor alerts and any alert above this severity level will be forwarded. |
- | **Any protocol detected** | Toggle off to select the protocols you want to include in the rule. |
- | **Traffic detected by any engine** | Toggle off to select the traffic you want to include in the rule. |
+1. Define the server IP address where you want to send the data.
-1. In the **Actions** area, define the following values:
+1. Enter an API key created in FortiGate.
- | Parameter | Description |
- |--|--|
- | **Server** | Select FortiGage. |
- | **Host** | Define the ClearPass server IP to send alert information. |
- | **API key** | Enter the [API key](#create-an-api-key-in-fortinet) that you created in FortiGate. |
- | **Incoming Interface** | Enter the incoming firewall interface port. |
- | **Outgoing Interface** | Enter the outgoing firewall interface port. |
+1. Enter the incoming and outgoing firewall interface ports.
-1. Configure which alert information you want to forward:
+1. Select to forward specific alert details. We recommend selecting one or more of the following:
- | Parameter | Description |
- |--|--|
- | **Block illegal function codes** | Protocol violations - Illegal field value violating ICS protocol specification (potential exploit) |
- | **Block unauthorized PLC programming / firmware updates** | Unauthorized PLC changes. |
- | **Block unauthorized PLC stop** | PLC stop (downtime). |
- | **Block malware related alerts** | Blocking of the industrial malware attempts (TRITON, NotPetya, etc.). |
- | **Block unauthorized scanning** | Unauthorized scanning (potential reconnaissance) |
+ - **Block illegal function codes**: Protocol violations - Illegal field value violating ICS protocol specification (potential exploit)
+ - **Block unauthorized PLC programming / firmware updates**: Unauthorized PLC changes
+ - **Block unauthorized PLC stop**: PLC stop (downtime)
+ - **Block malware related alerts**: Blocking of the industrial malware attempts, such as TRITON or NotPetya
+ - **Block unauthorized scanning**: Unauthorized scanning (potential reconnaissance)
-1. Select **Save**.
+For more information, see [Forward on-premises OT alert information](how-to-forward-alert-information-to-partners.md).
## Block the source of suspicious alerts
defender-for-iot Tutorial Palo Alto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-palo-alto.md
Title: Integrate Palo Alto with Microsoft Defender for IoT description: Defender for IoT has integrated its continuous ICS threat monitoring platform with Palo Alto's next-generation firewalls to enable blocking of critical threats, faster and more efficiently. Previously updated : 01/01/2023 Last updated : 09/06/2023
-# Integrate Palo-Alto with Microsoft Defender for IoT
+# Integrate Palo Alto with Microsoft Defender for IoT
-This article helps you learn how to integrate and use Palo Alto with Microsoft Defender for IoT.
+This article describes how to integrate Palo Alto with Microsoft Defender for IoT so that you can view both Palo Alto and Defender for IoT information in a single place, or use Defender for IoT data to configure blocking actions in Palo Alto.
-Defender for IoT has integrated its continuous ICS threat monitoring platform with Palo AltoΓÇÖs next-generation firewalls to enable blocking of critical threats, faster and more efficiently.
+Viewing both Defender for IoT and Palo Alto information together provides SOC analysts with multidimensional visibility so that they can block critical threats faster.
-The following integration types are available:
+## Cloud-based integrations
-- Automatic blocking option: Direct Defender for IoT to Palo Alto integration.--- Send recommendations for blocking to the central management system: Defender for IoT to Panorama integration.-
-In this article, you learn how to:
-
-> [!div class="checklist"]
+> [!TIP]
+> Cloud-based security integrations provide several benefits over on-premises solutions, such as centralized, simpler sensor management and centralized security monitoring.
+>
+> Other benefits include real-time monitoring, efficient resource use, increased scalability and robustness, improved protection against security threats, simplified maintenance and updates, and seamless integration with third-party solutions.
>
-> - Configure immediate blocking by a specified Palo Alto firewall
-> - Create Panorama blocking policies in Defender for IoT
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-
-## Prerequisites
-
-Before you begin, make sure that you have the following prerequisites:
--- Confirmation by the Panorama Administrator to allow automatic blocking.-- Access to a Defender for IoT OT sensor as an Admin user. For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md).-
-## Configure immediate blocking by a specified Palo Alto firewall
-
-In cases, such as malware-related alerts, you can enable automatic blocking. Defender for IoT forwarding rules are utilized to send a blocking command directly to a specific Palo Alto firewall.
-
-Forwarding alert rules run only on alerts triggered after the forwarding rule is created. Alerts already in the system from before the forwarding rule was created aren't affected by the rule.
-
-When Defender for IoT identifies a critical threat, it sends an alert that includes an option of blocking the infected source. Selecting **Block Source** in the alertΓÇÖs details activates the forwarding rule, which sends the blocking command to the specified Palo Alto firewall.
-
-**To configure immediate blocking**:
-
-1. Sign in to the sensor, and select **Forwarding**.
-
-1. Select **Create new rule**.
-
-1. In the **Add forwarding rule** pane, define the rule parameters:
- :::image type="content" source="media/tutorial-palo-alto/forwarding-rule.png" alt-text="Screenshot of creating the rules for your forwarding rule." lightbox="media/tutorial-palo-alto/forwarding-rule.png":::
+If you're integrating a cloud-connected OT sensor with Palo Alto, we recommend that you connect Defender for IoT to [Microsoft Sentinel](concept-sentinel-integration.md).
- | Parameter | Description |
- |--|--|
- | **Rule name** | The forwarding rule name. |
- | **Minimal alert level** | The minimal security level incident to forward. For example, if Minor is selected, minor alerts and any alert above this severity level will be forwarded. |
- | **Any protocol detected** | Toggle off to select the protocols you want to include in the rule. |
- | **Traffic detected by any engine** | Toggle off to select the traffic you want to include in the rule. |
+Install one or more of the following solutions to view both Palo Alto and Defender for IoT data in Microsoft Sentinel.
-1. In the **Actions** area, set the following parameters:
+|Microsoft Sentinel solution |Learn more |
+|||
+|[Palo Alto PAN-OS Solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-paloaltopanos?tab=Overview) | [Palo Alto Networks (Firewall) connector for Microsoft Sentinel](/azure/sentinel/data-connectors/palo-alto-networks-firewall) |
+|[Palo Alto Networks Cortex Data Lake Solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-paloaltocdl?tab=Overview) | [Palo Alto Networks Cortex Data Lake (CDL) connector for Microsoft Sentinel](/azure/sentinel/data-connectors/palo-alto-networks-cortex-data-lake-cdl) |
+|[Palo Alto Prisma Cloud CSPM solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-paloaltoprisma?tab=Overview) | [Palo Alto Prisma Cloud CSPM (using Azure Function) connector for Microsoft Sentinel](/azure/sentinel/data-connectors/palo-alto-prisma-cloud-cspm-using-azure-function) |
- | Parameter | Description |
- |--|--|
- | **Server** | Select Palo Alto NGFW. |
- | **Host** | Enter the NGFW server IP address. |
- | **Port** | Enter the NGFW server port. |
- | **Username** | Enter the NGFW server username. |
- | **Password** | Enter the NGFW server password. |
+Microsoft Sentinel is a scalable cloud service for security information and event management (SIEM) and security orchestration, automation, and response (SOAR). SOC teams can use the integration between Microsoft Defender for IoT and Microsoft Sentinel to collect data across networks, detect and investigate threats, and respond to incidents.
-1. Configure the following options to allow blocking of the suspicious sources by the Palo Alto firewall:
+In Microsoft Sentinel, the Defender for IoT data connector and solution brings out-of-the-box security content to SOC teams, helping them to view, analyze, and respond to OT security alerts, and understand the generated incidents in the broader organizational threat context.
- | Parameter | Description |
- |--|--|
- | **Block illegal function codes** | Protocol violations - Illegal field value violating ICS protocol specification (potential exploit). |
- | **Block unauthorized PLC programming / firmware updates** | Unauthorized PLC changes. |
- | **Block unauthorized PLC stop** | PLC stop (downtime). |
- | **Block malware related alerts** | Blocking of industrial malware attempts (TRITON, NotPetya, etc.). <br><br> You can select the option of **Automatic blocking**. <br> In that case, the blocking is executed automatically and immediately. |
- | **Block unauthorized scanning** | Unauthorized scanning (potential reconnaissance). |
+For more information, see:
-1. Select **Save**.
+- [Tutorial: Connect Microsoft Defender for IoT with Microsoft Sentinel](iot-solution.md)
+- [Tutorial: Investigate and detect threats for IoT devices](iot-advanced-threat-monitoring.md)
-You'll then need to block any suspicious source.
+## On-premises integrations
-**To block a suspicious source**:
+If you're working with an air-gapped, locally managed OT sensor, you'll need an on-premises solution to view Defender for IoT and Palo Alto information in the same place.
-1. Navigate to the **Alerts** page, and select the alert related to the Palo Alto integration.
+In such cases, we recommend that you configure your OT sensor to send syslog files directly to Palo Alto, or use Defender for IoT's built-in API.
-1. To automatically block the suspicious source, select **Block Source**.
+For more information, see:
-1. In the **Please Confirm** dialog box, select **OK**.
+- [Forward on-premises OT alert information](how-to-forward-alert-information-to-partners.md)
+- [Defender for IoT API reference](references-work-with-defender-for-iot-apis.md)
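For example, here's a minimal PowerShell sketch of the API approach, assuming the `/api/v1/alerts` endpoint documented in the API reference. The sensor address, access token, and the `severity` field name are placeholders to verify against your sensor version:

```powershell
# Minimal sketch: pull alerts from a locally managed OT sensor over the
# Defender for IoT API so they can be processed by Palo Alto tooling.
# The sensor address and access token are placeholders.
$sensor      = "https://<sensor-ip>"
$accessToken = "<sensor-access-token>"   # generated on the sensor

$alerts = Invoke-RestMethod -Uri "$sensor/api/v1/alerts" `
    -Headers @{ Authorization = $accessToken }

# Field names are illustrative; adjust to the response schema of your version.
$alerts | Where-Object { $_.severity -eq "Critical" }
```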
-The suspicious source is now blocked by the Palo Alto firewall.
+## On-premises integration (legacy)
-## Create Panorama blocking policies in Defender for IoT
+This section describes how to integrate and use Palo Alto with Microsoft Defender for IoT using the legacy, on-premises integration, which automatically creates new policies in the Palo Alto Networks NMS and Panorama.
-Defender for IoT and Palo Alto Network's integration automatically creates new policies in the Palo Alto Network's NMS and Panorama.
+> [!IMPORTANT]
+> The legacy Palo Alto Panorama integration is supported through October 2024 using sensor version 23.1.3, and won't be supported in upcoming major software versions. For customers using the legacy integration, we recommend moving to one of the following methods:
+>
+> - If you're integrating your security solution with cloud-based systems, we recommend that you use data connectors through [Microsoft Sentinel](#cloud-based-integrations).
+> - For on-premises integrations, we recommend that you either configure your OT sensor to [forward syslog events, or use Defender for IoT APIs](#on-premises-integrations).
+>
-This table shows which incidents this integration is intended for:
+The following table shows which incidents this integration is intended for:
| Incident type | Description |
|--|--|
|**Protocol Violation** | A packet structure, or field value that violates the protocol specification. This alert can represent a misconfigured application, or a malicious attempt to compromise the device. For example, causing a buffer overflow condition in the target device. |
|**PLC Stop** | A command that causes the device to stop functioning, thereby risking the physical process that is being controlled by the PLC. |
|**Industrial malware found in the ICS network** | Malware that manipulates ICS devices using their native protocols, such as TRITON and Industroyer. Defender for IoT also detects IT malware that has moved laterally into the ICS, and SCADA environment. For example, Conficker, WannaCry, and NotPetya. |
-|**Scanning malware** | Reconnaissance tools that collect data about system configuration in a pre-attack phase. For example, the Havex Trojan scans industrial networks for devices using OPC, which is a standard protocol used by Windows-based SCADA systems to communicate with ICS devices. |
+|**Scanning malware** | Reconnaissance tools that collect data about system configuration in a preattack phase. For example, the Havex Trojan scans industrial networks for devices using OPC, which is a standard protocol used by Windows-based SCADA systems to communicate with ICS devices. |
-When Defender for IoT detects a pre-configured use case, the **Block Source** button is added to the alert. Then, when the Defender for IoT user selects the **Block Source** button, Defender for IoT creates policies on Panorama by sending the predefined forwarding rule.
+When Defender for IoT detects a preconfigured use case, the **Block Source** button is added to the alert. Then, when the Defender for IoT user selects the **Block Source** button, Defender for IoT creates policies on Panorama by sending the predefined forwarding rule.
The policy is applied only when the Panorama administrator pushes it to the relevant NGFW in the network.
-In IT networks, there may be dynamic IP addresses. Therefore, for those subnets, the policy must be based on FQDN (DNS name) and not the IP address. Defender for IoT performs reverse lookup and matches devices with dynamic IP address to their FQDN (DNS name) every configured number of hours.
+In IT networks, there might be dynamic IP addresses. Therefore, for those subnets, the policy must be based on FQDN (DNS name) and not the IP address. Defender for IoT performs a reverse lookup and matches devices with dynamic IP addresses to their FQDN (DNS name) every configured number of hours.
+
+In addition, Defender for IoT sends an email to the relevant Panorama user to notify them that a new policy created by Defender for IoT is awaiting approval. The following figure presents the Defender for IoT and Panorama integration architecture:
++
+### Prerequisites
-In addition, Defender for IoT sends an email to the relevant Panorama user to notify that a new policy created by Defender for IoT is waiting for the approval. The figure below presents the Defender for IoT and Panorama integration architecture.
+Before you begin, make sure that you have the following prerequisites:
+
+- Confirmation by the Panorama Administrator to allow automatic blocking.
+- Access to a Defender for IoT OT sensor as an [Admin user](roles-on-premises.md).
+### Configure DNS lookup
The first step in creating Panorama blocking policies in Defender for IoT is to configure DNS lookup.
1. Select **Save**.
-## Block suspicious traffic with the Palo Alto firewall
+When you're done, continue by creating forwarding rules as needed:
+
+- [Configure immediate blocking by a specified Palo Alto firewall](#configure-immediate-blocking-by-a-specified-palo-alto-firewall)
+- [Block suspicious traffic with the Palo Alto firewall](#block-suspicious-traffic-with-the-palo-alto-firewall)
-Suspicious traffic needs to be blocked with the Palo Alto firewall. You can block suspicious traffic through the use forwarding rules in Defender for IoT.
+### Configure immediate blocking by a specified Palo Alto firewall
-Forwarding alert rules run only on alerts triggered after the forwarding rule is created. Alerts already in the system from before the forwarding rule was created aren't affected by the rule.
+Enable automatic blocking in cases such as malware-related alerts by configuring a Defender for IoT forwarding rule to send a blocking command directly to a specific Palo Alto firewall.
-1. Sign in to the sensor, and select **Forwarding**.
+When Defender for IoT identifies a critical threat, it sends an alert that includes an option of blocking the infected source. Selecting **Block Source** in the alert's details activates the forwarding rule, which sends the blocking command to the specified Palo Alto firewall.
-1. Select **Create new rule**.
+When creating your forwarding rule:
-1. In the **Add forwarding rule** pane, define the rule parameters:
+1. In the **Actions** area, define the server, host, port, and credentials for the Palo Alto NGFW.
- :::image type="content" source="media/tutorial-palo-alto/edit.png" alt-text="Screenshot of creating the rules for your Palo Alto Panorama forwarding rule." lightbox="media/tutorial-palo-alto/forwarding-rule.png":::
+1. Configure the following options to allow blocking of the suspicious sources by the Palo Alto firewall:
| Parameter | Description |
|--|--|
- | **Rule name** | The forwarding rule name. |
- | **Minimal alert level** | The minimal security level incident to forward. For example, if Minor is selected, minor alerts and any alert above this severity level will be forwarded. |
- | **Any protocol detected** | Toggle off to select the protocols you want to include in the rule. |
- | **Traffic detected by any engine** | Toggle off to select the traffic you want to include in the rule. |
+ | **Block illegal function codes** | Protocol violations - Illegal field value violating ICS protocol specification (potential exploit). |
+ | **Block unauthorized PLC programming / firmware updates** | Unauthorized PLC changes. |
+ | **Block unauthorized PLC stop** | PLC stop (downtime). |
+ | **Block malware related alerts** | Blocking of industrial malware attempts (TRITON, NotPetya, etc.). <br><br> You can select the option of **Automatic blocking**. <br> In that case, the blocking is executed automatically and immediately. |
+ | **Block unauthorized scanning** | Unauthorized scanning (potential reconnaissance). |
-1. In the **Actions** area, set the following parameters:
+For more information, see [Forward on-premises OT alert information](how-to-forward-alert-information-to-partners.md).
- | Parameter | Description |
- |--|--|
- | **Server** | Select Palo Alto NGFW. |
- | **Host** | Enter the NGFW server IP address. |
- | **Port** | Enter the NGFW server port. |
- | **Username** | Enter the NGFW server username. |
- | **Password** | Enter the NGFW server password. |
- | **Report Addresses** | Define how the blocking is executed, as follows: <br><br> - **By IP Address**: Always creates blocking policies on Panorama based on the IP address. <br> - **By FQDN or IP Address**: Creates blocking policies on Panorama based on FQDN if it exists, otherwise by the IP Address. |
- | **Email** | Set the email address for the policy notification email. |
+### Block suspicious traffic with the Palo Alto firewall
+
+Configure a Defender for IoT forwarding rule to block suspicious traffic with the Palo Alto firewall.
+
+When creating your forwarding rule:
+
+1. In the **Actions** area, define the server, host, port, and credentials for the Palo Alto NGFW.
+
+1. Define how the blocking is executed, as follows:
+
+ - **By IP Address**: Always creates blocking policies on Panorama based on the IP address.
+ - **By FQDN or IP Address**: Creates blocking policies on Panorama based on FQDN if it exists, otherwise by the IP Address.
+
+1. In the **Email** field, enter the email address for the policy notification email.
> [!NOTE]
> Make sure you've configured a mail server in Defender for IoT. If no email address is entered, Defender for IoT doesn't send a notification email.
| **Block malware related alerts** | Blocking of industrial malware attempts (TRITON, NotPetya, etc.). <br><br> You can select the option of **Automatic blocking**. <br> In that case, the blocking is executed automatically and immediately. |
| **Block unauthorized scanning** | Unauthorized scanning (potential reconnaissance). |
-1. Select **Save**.
+For more information, see [Forward on-premises OT alert information](how-to-forward-alert-information-to-partners.md).
-You'll then need to block any suspicious source.
+### Block specific suspicious sources
-**To block a suspicious source**:
+After you've created your forwarding rule, use the following steps to block specific, suspicious sources:
-1. Navigate to the **Alerts** page, and select the alert related to the Palo Alto integration.
+1. In the OT sensor's **Alerts** page, locate and select the alert related to the Palo Alto integration.
1. To automatically block the suspicious source, select **Block Source**.
-1. Select **OK**.
+1. In the **Please Confirm** dialog box, select **OK**.
+
+The suspicious source is now blocked by the Palo Alto firewall.
## Next step
defender-for-iot Tutorial Qradar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-qradar.md
A **QID** is a QRadar event identifier. Since all Defender for IoT reports are t
Create a forwarding rule from your on-premises management console to forward alerts to QRadar.
-Forwarding alert rules run only on alerts triggered after the forwarding rule is created. Alerts already in the system from before the forwarding rule was created aren't affected by the rule.
+Forwarding alert rules run only on alerts triggered after the forwarding rule is created. The rule doesn't affect any alerts already in the system from before the forwarding rule was created.
-**To create a QRadar forwarding rule**:
+The following code is an example of a payload sent to QRadar:
-1. Sign in to the on-premises management console and select **Forwarding**.
-
-1. Select the **+** to create a new rule.
-
-1. In the **Create Forwarding Rule** pane, define the following values:
-
- | Parameter | Description |
- |--|--|
- | **Name** | Enter a meaningful name for the forwarding rule. |
- | **Warning** | From the drop-down menu, select the minimal security level incident to forward. <br> For example, if **Minor** is selected, minor alerts and any alert above this severity level will be forwarded.|
- | **Protocols** | To select a specific protocol, select **Specific**, and select the protocol for which this rule is applied. <br> By default, all the protocols are selected. |
- | **Engines** | To select a specific security engine for which this rule is applied, select **Specific**, and select the engine. <br> By default, all the security engines are involved. |
- | **System Notifications** | Forward the sensor's *online* and *offline* status. |
- | **Alert Notifications** | Forward the sensor's alerts. |
+```sample payload
+<9>May 5 12:29:23 sensor_Agent LEEF:1.0|CyberX|CyberX platform|2.5.0|CyberX platform Alert|devTime=May 05 2019 15:28:54 devTimeFormat=MMM dd yyyy HH:mm:ss sev=2 cat=XSense Alerts title=Device is Suspected to be Disconnected (Unresponsive) score=81 reporter=192.168.219.50 rta=0 alertId=6 engine=Operational senderName=sensor Agent UUID=5-1557059334000 site=Site zone=Zone actions=handle dst=192.168.2.2 dstName=192.168.2.2 msg=Device 192.168.2.2 is suspected to be disconnected (unresponsive).
+```
-1. In the **Actions** area, select **Add**, and then select **Qradar**. For example:
+When configuring the forwarding rule:
- :::image type="content" source="media/tutorial-qradar/create.png" alt-text="Screenshot of the Create a Forwarding Rule window." lightbox="media/tutorial-qradar/create.png":::
+1. In the **Actions** area, select **Qradar**.
-1. Define the QRadar **Host**, **Port**, and **Timezone**. You can also choose to **Enable Encryption** and then **CONFIGURE ENCRYPTION**, and you can choose to **Manage alerts externally**.
+1. Enter details for the QRadar host, port, and timezone.
-1. Select **SAVE**.
+1. Optionally, enable and configure encryption, and select whether to manage alerts externally.
-The following is an example of a payload sent to QRadar:
+For more information, see [Forward on-premises OT alert information](how-to-forward-alert-information-to-partners.md).
-```sample payload
-<9>May 5 12:29:23 sensor_Agent LEEF:1.0|CyberX|CyberX platform|2.5.0|CyberX platform Alert|devTime=May 05 2019 15:28:54 devTimeFormat=MMM dd yyyy HH:mm:ss sev=2 cat=XSense Alerts title=Device is Suspected to be Disconnected (Unresponsive) score=81 reporter=192.168.219.50 rta=0 alertId=6 engine=Operational senderName=sensor Agent UUID=5-1557059334000 site=Site zone=Zone actions=handle dst=192.168.2.2 dstName=192.168.2.2 msg=Device 192.168.2.2 is suspected to be disconnected (unresponsive).
-```
## Map notifications to QRadar
For example:
| Parameter | Description |
|--|--|
- | **New Property** | Choose from the list below: <br><br> - Sensor Alert Description <br> - Sensor Alert ID <br> - Sensor Alert Score <br> - Sensor Alert Title <br> - Sensor Destination Name <br> - Sensor Direct Redirect <br> - Sensor Sender IP <br> - Sensor Sender Name <br> - Sensor Alert Engine <br> - Sensor Source Device Name |
+ | **New Property** | One of the following: <br><br> - Sensor Alert Description <br> - Sensor Alert ID <br> - Sensor Alert Score <br> - Sensor Alert Title <br> - Sensor Destination Name <br> - Sensor Direct Redirect <br> - Sensor Sender IP <br> - Sensor Sender Name <br> - Sensor Alert Engine <br> - Sensor Source Device Name |
| **Optimize Parsing** | Check on. |
| **Field Type** | `AlphaNumeric` |
| **Enabled** | Check on. |
defender-for-iot Tutorial Splunk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-splunk.md
Title: Integrate Splunk with Microsoft Defender for IoT
-description: In this tutorial, learn how to integrate Splunk with Microsoft Defender for IoT.
- Previously updated : 02/07/2022
+description: This article describes how to integrate Splunk with Microsoft Defender for IoT for multidimensional visibility across OT protocols and IIoT devices.
+ Last updated : 09/06/2023 # Integrate Splunk with Microsoft Defender for IoT
-This article helps you learn how to integrate, and use Splunk with Microsoft Defender for IoT.
+This article describes how to integrate Splunk with Microsoft Defender for IoT, in order to view both Splunk and Defender for IoT information in a single place.
-Defender for IoT mitigates IIoT, ICS, and SCADA risk with patented, ICS-aware self-learning engines that deliver immediate insights about ICS devices, vulnerabilities, and threats in less than an image hour and without relying on agents, rules or signatures, specialized skills, or prior knowledge of the environment.
+Viewing both Defender for IoT and Splunk information together provides SOC analysts with multidimensional visibility into the specialized OT protocols and IIoT devices deployed in industrial environments, along with ICS-aware behavioral analytics to rapidly detect suspicious or anomalous behavior.
-To address a lack of visibility into the security and resiliency of OT networks, Defender for IoT developed the Defender for IoT, IIoT, and ICS threat monitoring application for Splunk, a native integration between Defender for IoT and Splunk that enables a unified approach to IT and OT security.
+## Cloud-based integrations
-The application provides SOC analysts with multidimensional visibility into the specialized OT protocols and IIoT devices deployed in industrial environments, along with ICS-aware behavioral analytics to rapidly detect suspicious or anomalous behavior. The application also enables both IT, and OT incident response from within one corporate SOC. This is an important evolution given the ongoing convergence of IT and OT to support new IIoT initiatives, such as smart machines and real-time intelligence.
+> [!TIP]
+> Cloud-based security integrations provide several benefits over on-premises solutions, such as centralized, simpler sensor management and centralized security monitoring.
+>
+> Other benefits include real-time monitoring, efficient resource use, increased scalability and robustness, improved protection against security threats, simplified maintenance and updates, and seamless integration with third-party solutions.
+>
-The Splunk application can be installed locally ('Splunk Enterprise') or run on a cloud ('Splunk Cloud'). The Splunk integration along with Defender for IoT supports 'Splunk Enterprise' only.
+If you're integrating a cloud-connected OT sensor with Splunk, we recommend that you use Splunk's own [OT Security Add-on for Splunk](https://apps.splunk.com/app/5151). For more information, see:
-> [!NOTE]
-> Microsoft Defender for IoT was formally known as [CyberX](https://blogs.microsoft.com/blog/2020/06/22/microsoft-acquires-cyberx-to-accelerate-and-secure-customers-iot-deployments/). References to CyberX refer to Defender for IoT.
+- [The Splunk documentation on installing add-ins](https://docs.splunk.com/Documentation/AddOns/released/Overview/Distributedinstall)
+- [The Splunk documentation on the OT Security Add-on for Splunk](https://splunk.github.io/ot-security-solution/integrationguide/)
-In this article, you learn how to:
-> [!div class="checklist"]
->
-> - Download the Defender for IoT application in Splunk
-> - Send Defender for IoT alerts to Splunk
+## On-premises integrations
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+If you're working with an air-gapped, locally managed OT sensor, you need an on-premises solution to view Defender for IoT and Splunk information in the same place.
+
+In such cases, we recommend that you configure your OT sensor to send syslog files directly to Splunk, or use Defender for IoT's built-in API.
+
+For more information, see:
+
+- [Forward on-premises OT alert information](how-to-forward-alert-information-to-partners.md)
+- [Defender for IoT API reference](references-work-with-defender-for-iot-apis.md)
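For example, here's a minimal PowerShell sketch that pulls alerts over the Defender for IoT API and pushes them to Splunk's HTTP Event Collector (HEC). The sensor endpoint, tokens, and `sourcetype` value are placeholder assumptions, and HEC must be enabled on your Splunk instance:

```powershell
# Minimal sketch: relay Defender for IoT alerts to Splunk's HTTP Event
# Collector (HEC). The sensor address, tokens, and sourcetype are placeholders.
$sensor   = "https://<sensor-ip>"
$splunk   = "https://<splunk-host>:8088"   # default HEC port
$hecToken = "<hec-token>"

$alerts = Invoke-RestMethod -Uri "$sensor/api/v1/alerts" `
    -Headers @{ Authorization = "<sensor-access-token>" }

foreach ($alert in $alerts) {
    # Wrap each alert in the HEC event envelope.
    $event = @{ sourcetype = "defender_for_iot"; event = $alert } |
        ConvertTo-Json -Depth 5

    Invoke-RestMethod -Method Post -Uri "$splunk/services/collector/event" `
        -Headers @{ Authorization = "Splunk $hecToken" } -Body $event
}
```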
-## Prerequisites
-Before you begin, make sure that you have the following prerequisites:
-### Version requirements
+## On-premises integration (legacy)
-The following versions are required for the application to run.
+This section describes how to integrate Defender for IoT and Splunk using the legacy, on-premises integration.
-- Defender for IoT version 2.4 and above.
+> [!IMPORTANT]
+> The legacy Splunk integration is supported through October 2024 using sensor version 23.1.3, and won't be supported in upcoming major software versions. For customers using the legacy integration, we recommend moving to one of the following methods:
+>
+> - If you're integrating your security solution with cloud-based systems, we recommend that you use the [OT Security Add-on for Splunk](#cloud-based-integrations).
+> - For on-premises integrations, we recommend that you either configure your OT sensor to [forward syslog events, or use Defender for IoT APIs](#on-premises-integrations).
-- Splunkbase version 11 and above.
+Microsoft Defender for IoT was formerly known as [CyberX](https://blogs.microsoft.com/blog/2020/06/22/microsoft-acquires-cyberx-to-accelerate-and-secure-customers-iot-deployments/). References to CyberX refer to Defender for IoT.
-- Splunk Enterprise version 7.2 and above.
+### Prerequisites
-### Permission requirements
+Before you begin, make sure that you have the following prerequisites:
-Make sure you have:
+|Prerequisites |Description |
+|||
+|**Version requirements** | The following versions are required for the application to run: <br>- Defender for IoT version 2.4 and above. <br>- Splunkbase version 11 and above. <br>- Splunk Enterprise version 7.2 and above. |
+|**Permission requirements** | Make sure you have: <br>- Access to a Defender for IoT OT sensor as an [Admin user](roles-on-premises.md). <br>- Splunk user with an *Admin* level user role. |
-- Access to a Defender for IoT OT sensor as an Admin user. For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md).-- Splunk user with an *Admin* level user role.
+> [!NOTE]
+> The Splunk application can be installed locally ('Splunk Enterprise') or run on a cloud ('Splunk Cloud'). The Splunk integration along with Defender for IoT supports 'Splunk Enterprise' only.
+>
-## Download the Defender for IoT application in Splunk
+### Download the Defender for IoT application in Splunk
-To access the Defender for IoT application within Splunk, you need to download the application form the Splunkbase application store.
+To access the Defender for IoT application within Splunk, you need to download the application from the Splunkbase application store.
**To access the Defender for IoT application in Splunk**:
1. Select the **LOGIN TO DOWNLOAD** button.
-## Send Defender for IoT alerts to Splunk
-
-The Defender for IoT alerts provide information about an extensive range of security events. These events include:
--- Deviations from the learned baseline network activity.--- Malware detections.--- Detections based on suspicious operational changes.--- Network anomalies.--- Protocol deviations from protocol specifications.-
-You can also configure Defender for IoT to send alerts to the Splunk server, where alert information is displayed in the Splunk Enterprise dashboard.
--
-To send alert information to the Splunk servers from Defender for IoT, you need to create a Forwarding Rule.
-
-Forwarding alert rules run only on alerts triggered after the forwarding rule is created. Alerts already in the system from before the forwarding rule was created aren't affected by the rule.
-
-**To create the forwarding rule**:
-
-1. Sign in to the sensor, and select **Forwarding**.
-
-1. Select **Create new rule**.
-
-1. In the **Add forwarding rule** pane, define the rule parameters:
-
- :::image type="content" source="media/tutorial-splunk/forwarding-rule.png" alt-text="Screenshot of creating the rules for your forwarding rule." lightbox="media/tutorial-splunk/forwarding-rule.png":::
-
- | Parameter | Description |
- |--|--|
- | **Rule name** | The forwarding rule name. |
- | **Minimal alert level** | The minimal security level incident to forward. For example, if Minor is selected, minor alerts and any alert above this severity level will be forwarded. |
- | **Any protocol detected** | Toggle off to select the protocols you want to include in the rule. |
- | **Traffic detected by any engine** | Toggle off to select the traffic you want to include in the rule. |
-
-1. In the **Actions** area, define the following values:
-
- | Parameter | Description |
- |--|--|
- | **Server** | Select Splunk Server. |
- | **Host** | Enter the Splunk server address. |
- | **Port** | Enter 8089. |
- | **Username** | Enter the Splunk server username. |
- | **Password** | Enter the Splunk server password. |
-
-1. Select **Save**.
- ## Next steps > [!div class="nextstepaction"]
defender-for-iot Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md
Title: What's new in Microsoft Defender for IoT
-description: This article describes features available in Microsoft Defender for IoT, across both OT and Enterprise IoT networks, and both on-premises and in the Azure portal.
+description: This article describes new features available in Microsoft Defender for IoT, including both OT and Enterprise IoT networks, and both on-premises and in the Azure portal.
Previously updated : 09/14/2023 Last updated : 10/23/2023
Features released earlier than nine months ago are described in the [What's new
> Noted features listed below are in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. >
+## November 2023
+
+|Service area |Updates |
+|||
+| **OT networks** | [Updated security stack integration guidance](#updated-security-stack-integration-guidance)|
+
+### Updated security stack integration guidance
+
+Defender for IoT is refreshing its security stack integrations to improve the overall robustness, scalability, and ease of maintenance of various security solutions.
+
+If you're integrating your security solution with cloud-based systems, we recommend that you use data connectors through [Microsoft Sentinel](concept-sentinel-integration.md). For on-premises integrations, we recommend that you either configure your OT sensor to [forward syslog events](how-to-forward-alert-information-to-partners.md), or use [Defender for IoT APIs](references-work-with-defender-for-iot-apis.md).
+
+The legacy Aruba ClearPass, Palo Alto Panorama, and Splunk integrations are supported through October 2024 using sensor version 23.1.3, and won't be supported in upcoming major software versions.
+
+For customers using legacy integration methods, we recommend moving your integrations to newly recommended methods. For more information, see:
+
+- [Integrate ClearPass with Microsoft Defender for IoT](tutorial-clearpass.md)
+- [Integrate Palo Alto with Microsoft Defender for IoT](tutorial-palo-alto.md)
+- [Integrate Splunk with Microsoft Defender for IoT](tutorial-splunk.md)
+- [Integrations with Microsoft and partner services](integrate-overview.md)
+ ## September 2023 |Service area |Updates |
For more information, see [Enrich Windows workstation and server data with a loc
### Automatically resolved OS notifications
-After updating your OT sensor to version 22.3.8, no new device notifications for **Operating system changes** are generated. Existing **Operating system changes** notifications are automatically resolved if they aren't dismissed or otherwise handled within 14 days.
+After you've updated your OT sensor to version 22.3.8, no new device notifications for **Operating system changes** are generated. Existing **Operating system changes** notifications are automatically resolved if they aren't dismissed or otherwise handled within 14 days.
For more information, see [Device notification responses](how-to-work-with-the-sensor-device-map.md#device-notification-responses)
For more information, see [Malware engine alerts](alert-engine-messages.md#malware-engine-alerts).
Starting in version 22.3.6, selected notifications on the OT sensor's **Device map** page are now automatically resolved if they aren't dismissed or otherwise handled within 14 days.
-After updating your sensor version, the **Inactive devices** and **New OT devices** notifications no longer appear. While any **Inactive devices** notifications that are left over from before the update are automatically dismissed, you may still have legacy **New OT devices** notifications to handle. Handle these notifications as needed to remove them from your sensor.
+After you've updated your sensor version, the **Inactive devices** and **New OT devices** notifications no longer appear. While any **Inactive devices** notifications that are left over from before the update are automatically dismissed, you might still have legacy **New OT devices** notifications to handle. Handle these notifications as needed to remove them from your sensor.
For more information, see [Manage device notifications](how-to-work-with-the-sensor-device-map.md#manage-device-notifications). ### New Microsoft Sentinel incident experience for Defender for IoT
-Microsoft Sentinel's new [incident experience](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/the-new-incident-experience-is-here/ba-p/3717042) includes specific features for Defender for IoT customers. When investigating OT/IoT-related incidents, SOC analysts can now use the following enhancements on incident details pages:
+Microsoft Sentinel's new [incident experience](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/the-new-incident-experience-is-here/ba-p/3717042) includes specific features for Defender for IoT customers. SOC analysts who are investigating OT/IoT-related incidents can now use the following enhancements on incident details pages:
- **View related sites, zones, sensors, and device importance** to better understand an incident's business impact and physical location.
OT network sensors connect to Azure to provide alert and device data and sensor
For OT sensors with software versions 22.x and higher, Defender for IoT now supports increased security when adding outbound allow rules for connections to Azure. Now you can define your outbound allow rules to connect to Azure without using wildcards.
-When defining outbound allow rules to connect to Azure, you need to enable HTTPS traffic to each of the required endpoints on port 443. Outbound allow rules are defined once for all OT sensors onboarded to the same subscription.
+When defining outbound *allow* rules to connect to Azure, you need to enable HTTPS traffic to each of the required endpoints on port 443. Outbound *allow* rules are defined once for all OT sensors onboarded to the same subscription.
For supported sensor versions, download the full list of required secure endpoints from the following locations in the Azure portal: -- **A successful sensor registration page**: After onboarding a new OT sensor, version 22.x, the successful registration page now provides instructions for next steps, including a link to the endpoints you'll need to add as secure outbound allow rules on your network. Select the **Download endpoint details** link to download the JSON file.
+- **A successful sensor registration page**: After onboarding a new OT sensor with version 22.x, the successful registration page now provides instructions for next steps, including a link to the endpoints you'll need to add as secure outbound allow rules on your network. Select the **Download endpoint details** link to download the JSON file.
For example:
The Enterprise IoT integration with Microsoft Defender for Endpoint is now in General Availability (GA).
### Same passwords for cyberx_host and cyberx users
-During OT monitoring software installations and updates, the **cyberx** user is assigned a random password. When updating from version 10.x.x to version 22.1.7, the **cyberx_host** password is assigned with an identical password to the **cyberx** user.
+During OT monitoring software installations and updates, the **cyberx** user is assigned a random password. When you update from version 10.x.x to version 22.1.7, the **cyberx_host** password is assigned with an identical password to the **cyberx** user.
For more information, see [Install OT agentless monitoring software](how-to-install-software.md) and [Update Defender for IoT OT monitoring software](update-ot-software.md).
For more information, see [Install OT agentless monitoring software](how-to-inst
Starting in OT sensor versions 22.2.4, you can now take the following actions from the sensor console's **Device inventory** page: -- **Merge duplicate devices**. You may need to merge devices if the sensor has discovered separate network entities that are associated with a single, unique device. Examples of this scenario might include a PLC with four network cards, a laptop with both WiFi and a physical network card, or a single workstation with multiple network cards.
+- **Merge duplicate devices**. You might need to merge devices if the sensor has discovered separate network entities that are associated with a single, unique device. Examples of this scenario might include a PLC with four network cards, a laptop with both WiFi and a physical network card, or a single workstation with multiple network cards.
- **Delete single devices**. Now, you can delete a single device that hasn't communicated for at least 10 minutes.
Update your **IoT OT Threat Monitoring with Defender for IoT** solution to use t
For more information, see: -- [Tutorial: Integrate Defender for Iot and Sentinel](../../sentinel/iot-solution.md?tabs=use-out-of-the-box-analytics-rules-recommended)
+- [Tutorial: Integrate Defender for IoT and Sentinel](../../sentinel/iot-solution.md?tabs=use-out-of-the-box-analytics-rules-recommended)
- [View and manage alerts on the Defender for IoT portal (Preview)](how-to-manage-cloud-alerts.md) - [View alerts on your sensor](how-to-view-alerts.md)
For more information, see [Create custom alert rules on an OT sensor](how-to-acc
### CLI command updates
-The Defender for Iot sensor software installation is now containerized. With the now-containerized sensor, you can use the *cyberx_host* user to investigate issues with other containers or the operating system, or to send files via FTP.
+The Defender for IoT sensor software installation is now containerized. With the now-containerized sensor, you can use the *cyberx_host* user to investigate issues with other containers or the operating system, or to send files via FTP.
This *cyberx_host* user is available by default and connects to the host machine. If you need to, recover the password for the *cyberx_host* user from the **Sites and sensors** page in Defender for IoT.
For more information, see [Defender for IoT installation](how-to-install-softwar
To use all of Defender for IoT's latest features, make sure to update your sensor software versions to 22.1.x.
-If you're on a legacy version, you may need to run a series of updates in order to get to the latest version. You'll also need to update your firewall rules and reactivate your sensor with a new activation file.
+If you're on a legacy version, you might need to run a series of updates in order to get to the latest version. You'll also need to update your firewall rules and reactivate your sensor with a new activation file.
After you've upgraded to version 22.1.x, the new upgrade log can be found at the following path, accessed via SSH and the *cyberx_host* user: `/opt/sensor/logs/legacy-upgrade.log`.
deployment-environments How To Configure Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-catalog.md
Title: Add and configure a catalog
+ Title: Add and configure a catalog hosted in a GitHub or Azure DevOps repository
description: Learn how to add a catalog in your Azure Deployment Environments dev center to provide environment templates for your developers. Catalogs are repositories stored in GitHub or Azure DevOps. Previously updated : 04/25/2023 Last updated : 10/23/2023
Learn how to add and configure a [catalog](./concept-environments-key-concepts.m
You can use a catalog to provide your development teams with a curated set of infrastructure as code (IaC) templates called [environment definitions](./concept-environments-key-concepts.md#environment-definitions). Your catalog is encrypted; Azure Deployment Environments supports encryption at rest with platform-managed encryption keys, which are managed by Microsoft for Azure Services.
-For more information about environment definitions, see [Add and configure an environment definition](./configure-environment-definition.md).
+Deployment Environments supports catalogs hosted in Azure Repos (the repository service in Azure DevOps) and catalogs hosted in GitHub. Azure DevOps supports authentication by assigning permissions to a managed identity. Azure DevOps and GitHub both support the use of personal access tokens (PATs) for authentication. To further secure your templates, the catalog is encrypted; Azure Deployment Environments supports encryption at rest with platform-managed encryption keys, which Microsoft manages for Azure services.
A catalog is a repository that's hosted in [GitHub](https://github.com) or [Azure DevOps](https://dev.azure.com/). - To learn how to host a repository in GitHub, see [Get started with GitHub](https://docs.github.com/get-started). - To learn how to host a Git repository in an Azure DevOps project, see [Azure Repos](https://azure.microsoft.com/services/devops/repos/).
-We offer a [sample catalog](https://aka.ms/deployment-environments/SampleCatalog) that you can use as your repository. You also can use your own private repository, or you can fork and customize the environment definitions in the sample catalog.
+Microsoft offers a [sample catalog](https://aka.ms/deployment-environments/SampleCatalog) that you can use as your repository. You also can use your own private repository, or you can fork and customize the environment definitions in the sample catalog.
In this article, you learn how to: > [!div class="checklist"] >
+> - Configure a managed identity for the dev center.
> - Add a catalog. > - Update a catalog. > - Delete a catalog.
+## Configure a managed identity for the dev center
+
+After you create a dev center, before you can attach a catalog, you must configure a [managed identity](concept-environments-key-concepts.md#identities) to the dev center. You can attach either a system-assigned managed identity (system-assigned MSI) or a user-assigned managed identity (user-assigned MSI). You then assign roles to the managed identity to allow the dev center to create environment types in your subscription and read the Azure DevOps project that contains the catalog repo.
+
+If your dev center doesn't have an MSI attached, follow the steps in this article to create and attach one: [Configure a managed identity](how-to-configure-managed-identity.md).
++
+To learn more about managed identities, see: [What are managed identities for Azure resources?](/entra/identity/managed-identities-azure-resources/overview)
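For example, after attaching the identity, you can assign its roles with Az PowerShell. This is a minimal sketch; the object ID, role name, and scope are example values to adapt to your environment:

```powershell
# Minimal sketch: grant the dev center's managed identity a role on the
# subscription so it can create environment type resources. The object ID,
# role name, and scope are example values.
Connect-AzAccount

$principalId    = "<dev-center-managed-identity-object-id>"
$subscriptionId = "<subscription-id>"

New-AzRoleAssignment -ObjectId $principalId `
    -RoleDefinitionName "Owner" `
    -Scope "/subscriptions/$subscriptionId"
```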
+ ## Add a catalog
-In Azure Deployment Environments, catalogs help you provide a set of curated IaC templates for your development teams to create environments. You can attach either a GitHub repository or an Azure DevOps repository as a catalog.
+You can add a catalog from an Azure DevOps repository or a GitHub repository. You can choose to authenticate by assigning permissions to an MSI, also called a managed identity, or by using a PAT, which you store in a key vault.
+
+Select the tab for the type of repository and authentication you want to use.
+
+## [Azure DevOps repo with MSI](#tab/DevOpsRepoMSI/)
+
+To add a catalog, you complete these tasks:
+
+- Configure a managed identity for the dev center.
+- Assign roles for the dev center managed identity.
+- Assign permissions in Azure DevOps for the dev center managed identity.
+- Add your repository as a catalog.
+
+### Assign permissions in Azure DevOps for the dev center managed identity
+You must give the dev center managed identity permissions to the repository in Azure DevOps.
+
+1. Sign in to your [Azure DevOps organization](https://dev.azure.com).
+
+1. Select **Organization settings**.
+
+ :::image type="content" source="media/how-to-configure-catalog/devops-organization-settings.png" alt-text="Screenshot showing the Azure DevOps organization page, with Organization Settings highlighted.":::
+
+1. On the **Overview** page, select **Users**.
+
+ :::image type="content" source="media/how-to-configure-catalog/devops-organization-overview.png" alt-text="Screenshot showing the Organization overview page, with Users highlighted.":::
+
+1. On the **Users** page, select **Add users**.
+
+ :::image type="content" source="media/how-to-configure-catalog/devops-add-user.png" alt-text="Screenshot showing the Users page, with Add user highlighted.":::
+
+1. Complete **Add new users** by entering or selecting the following information, and then select **Add**:
+
+ |Name |Value |
+ ||-|
+ |**Users or Service Principals**|Enter the name of your dev center. </br> When you use a system-assigned managed identity, specify the name of the dev center, not the object ID of the managed identity. When you use a user-assigned managed identity, use the name of the managed identity. |
+ |**Access level**|Select **Basic**.|
+ |**Add to projects**|Select the project that contains your repository.|
+ |**Azure DevOps Groups**|Select **Project Readers**.|
+ |**Send email invites (to Users only)**|Clear the checkbox.|
+
+ :::image type="content" source="media/how-to-configure-catalog/devops-add-user-blade.png" alt-text="Screenshot showing Add users, with example entries and Add highlighted.":::
+
+## Add a catalog to the dev center
+Azure Deployment Environments supports attaching Azure DevOps repositories and GitHub repositories. You can store a set of curated IaC templates in a repository. Attaching the repository to a dev center as a catalog gives your development teams access to the templates and enables them to quickly create consistent environments.
+
+In this article, you attach an Azure DevOps repository.
+
+### Add a catalog to your dev center
+1. Navigate to your dev center.
+1. In the left menu under **Environment configuration**, select **Catalogs**, and then select **Add**.
+
+ :::image type="content" source="media/how-to-configure-catalog/catalogs-page.png" alt-text="Screenshot that shows the Catalogs pane.":::
+
+1. In **Add catalog**, enter the following information, and then select **Add**:
+
+ | Field | Value |
+ | -- | -- |
+ | **Name** | Enter a name for the catalog. |
+ | **Catalog location** | Select **Azure DevOps**. |
+ | **Authentication type** | Select **Managed Identity**.|
+ | **Organization** | Select your Azure DevOps organization. |
+ | **Project** | From the list of projects, select the project that stores the repo. |
+ | **Repo** | From the list of repos, select the repo you want to add. |
+ | **Branch** | Select the branch. |
+ | **Folder path** | Deployment Environments retrieves a list of folders in your branch. Select the folder that stores your IaC templates. |
+
+ :::image type="content" source="media/how-to-configure-catalog/add-catalog-to-dev-center.png" alt-text="Screenshot showing the add catalog pane with examples entries and Add highlighted.":::
+
+1. In **Catalogs** for the dev center, verify that your catalog appears. If the connection is successful, **Status** is **Connected**. Connecting to a catalog can take a few minutes the first time.
++
+## [Azure DevOps repo with PAT](#tab/DevOpsRepoPAT/)
To add a catalog, you complete these tasks:

- Get the clone URL for your repository.
- Create a personal access token.
- Store the personal access token as a key vault secret in Azure Key Vault.
- Add your repository as a catalog.
-### Get the clone URL for your repository
+### Get the clone URL for your Azure DevOps repository
-You can choose from two types of repositories:
+1. Go to the home page of your team collection (for example, `https://contoso-web-team.visualstudio.com`), and then select your project.
+1. [Get the Azure Repos Git repo clone URL](/azure/devops/repos/git/clone#get-the-clone-url-of-an-azure-repos-git-repo).
+1. Copy and save the URL. You use it later.
+
+### Create a personal access token in Azure DevOps
-- A GitHub repository
-- An Azure DevOps repository
+1. Go to the home page of your team collection (for example, `https://contoso-web-team.visualstudio.com`) and select your project.
+1. Create a [personal access token](/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate#create-a-pat).
+1. Save the generated token. You use the token later.
-#### Get the clone URL of a GitHub repository
+### Create a Key Vault
+You need an Azure Key Vault to store the personal access token (PAT) that is used to grant Azure access to your repository. Key vaults can control access with either access policies or role-based access control (RBAC). If you have an existing key vault, you can use it, but you should check whether it uses access policies or RBAC assignments to control access. For help with configuring an access policy for a key vault, see [Assign a Key Vault access policy](/azure/key-vault/general/assign-access-policy?tabs=azure-portal).
-1. Go to the home page of the GitHub repository that contains the template definitions.
-1. [Get the GitHub repo clone URL](/azure/devops/repos/git/clone#get-the-clone-url-of-a-github-repo).
-1. Copy and save the URL. You use it later.
+Use the following steps to create an RBAC key vault:
-#### Get the clone URL of an Azure DevOps repository
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. In the Search box, enter *Key Vault*.
+1. From the results list, select **Key Vault**.
+1. On the Key Vault page, select **Create**.
+1. On the Create key vault tab, provide the following information:
-1. Go to the home page of your team collection (for example, `https://contoso-web-team.visualstudio.com`), and then select your project.
-1. [Get the Git repo clone URL](/azure/devops/repos/git/clone#get-the-clone-url-of-an-azure-repos-git-repo).
-1. Copy and save the URL. You use it later.
+ |Name |Value |
+ |-|--|
+ |**Name**|Enter a name for the key vault.|
+ |**Subscription**|Select the subscription in which you want to create the key vault.|
+ |**Resource group**|Either use an existing resource group or select **Create new** and enter a name for the resource group.|
+ |**Location**|Select the location or region where you want to create the key vault.|
+
+ Leave the other options at their defaults.
+
+1. On the Access policy tab, select **Azure role-based access control**, and then select **Review + create**.
+
+1. On the Review + create tab, select **Create**.
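
If you prefer to script this step, here's a minimal Azure CLI sketch that creates an equivalent RBAC-enabled key vault; the resource group, vault name, and location are placeholder values.

```azurecli
# Create a resource group to hold the key vault (placeholder names)
az group create --name my-catalog-rg --location eastus

# Create a key vault that uses Azure RBAC instead of access policies
az keyvault create \
    --name contoso-kv \
    --resource-group my-catalog-rg \
    --location eastus \
    --enable-rbac-authorization true
```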
+
+### Store the personal access token in the key vault
+
+1. In the Key Vault, on the left menu, select **Secrets**.
+1. On the Secrets page, select **Generate/Import**.
+1. On the Create a secret page:
+ - In the **Name** box, enter a descriptive name for your secret.
+ - In the **Secret value** box, paste the personal access token that you saved earlier.
+ - Select **Create**.
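
The same step scripted with the Azure CLI, as a sketch; the vault name, secret name, and token value are placeholders. On an RBAC-enabled vault, your account needs a role such as Key Vault Secrets Officer to create secrets.

```azurecli
# Store the personal access token as a key vault secret (placeholder names)
az keyvault secret set \
    --vault-name contoso-kv \
    --name ado-repo-pat \
    --value "<your-personal-access-token>"
```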
+
+### Get the secret identifier
+
+Get the path to the secret you created in the key vault.
+
+1. In the Azure portal, navigate to your key vault.
+1. On the key vault page, from the left menu, select **Secrets**.
+1. On the Secrets page, select the secret you created earlier.
+1. On the versions page, select the **CURRENT VERSION**.
+1. On the current version page, for the **Secret identifier**, select copy.
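
Alternatively, a small Azure CLI sketch that reads the secret identifier and trims the trailing version segment, so Deployment Environments always fetches the latest secret version; the vault and secret names are placeholders.

```azurecli
# Read the versioned secret identifier, for example:
# https://contoso-kv.vault.azure.net/secrets/ado-repo-pat/9376b432b72441a1b9e795695708ea5a
secretId=$(az keyvault secret show \
    --vault-name contoso-kv \
    --name ado-repo-pat \
    --query id --output tsv)

# Strip the version segment from the end of the identifier
echo "${secretId%/*}"
```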
+
+### Add your repository as a catalog
+
+1. In the [Azure portal](https://portal.azure.com/), go to your dev center.
+1. Ensure that the [identity](./how-to-configure-managed-identity.md) that's attached to the dev center has [access to the key vault secret](./how-to-configure-managed-identity.md#grant-the-managed-identity-access-to-the-key-vault-secret) where your personal access token is stored.
+1. In the left menu under **Environment configuration**, select **Catalogs**, and then select **Add**.
+1. In **Add catalog**, enter the following information, and then select **Add**:
+
+ | Field | Value |
+ | -- | -- |
+ | **Name** | Enter a name for the catalog. |
+ | **Catalog location** | Select **Azure DevOps**. |
+ | **Authentication type** | Select **Personal Access Token**.|
+ | **Organization** | Select the organization that hosts the catalog repo. |
+ | **Project** | Select the project that stores the catalog repo.|
+ | **Repo** | Select the repo that stores the catalog.|
+ | **Folder path** | Select the folder that holds your IaC templates.|
+ | **Secret identifier**| Enter the secret identifier that contains your personal access token for the repository.<br /> When you copy a secret identifier, the connection string includes a version identifier at the end, like in this example: `https://contoso-kv.vault.azure.net/secrets/GitHub-repo-pat/9376b432b72441a1b9e795695708ea5a`.<br />Removing the version identifier ensures that Deployment Environments fetches the latest version of the secret from the key vault. If your personal access token expires, only the key vault needs to be updated. <br />*Example secret identifier:* `https://contoso-kv.vault.azure.net/secrets/GitHub-repo-pat`|
+
+ :::image type="content" source="media/how-to-configure-catalog/add-devops-catalog-pane.png" alt-text="Screenshot that shows how to add a catalog to a dev center.":::
+
+1. In **Catalogs** for the dev center, verify that your catalog appears. If the connection is successful, the **Status** is **Connected**.
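
The catalog attachment can also be scripted. The following sketch assumes the Azure CLI `devcenter` extension is installed (`az extension add --name devcenter`) and uses placeholder names for the dev center, resource group, repository, and secret identifier.

```azurecli
# Attach an Azure DevOps repo as a catalog, authenticating with the key vault secret
az devcenter admin catalog create \
    --name my-catalog \
    --dev-center-name my-devcenter \
    --resource-group my-devcenter-rg \
    --ado-git \
        uri="https://contoso@dev.azure.com/contoso/MyProject/_git/my-repo" \
        branch="main" \
        path="/Environments" \
        secret-identifier="https://contoso-kv.vault.azure.net/secrets/ado-repo-pat"
```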
++
+## [GitHub repo with PAT](#tab/GitHubRepoPAT/)
+
+To add a catalog, you complete these tasks:
+
+- Get the clone URL for your repository.
+- Create a personal access token.
+- Store the personal access token as a key vault secret in Azure Key Vault.
+- Add your repository as a catalog.
-### Create a personal access token
+### Get the clone URL of a GitHub repository
-Next, create a personal access token. Depending on the type of repository you use, create a personal access token either in GitHub or in Azure DevOps.
+1. Go to the home page of the GitHub repository that contains the template definitions.
+1. [Get the GitHub repo clone URL](/azure/devops/repos/git/clone#get-the-clone-url-of-a-github-repo).
+1. Copy and save the URL. You use it later.
-#### Create a personal access token in GitHub
+### Create a personal access token in GitHub
1. Go to the home page of the GitHub repository that contains the template definitions.
1. In the upper-right corner of GitHub, select the profile image, and then select **Settings**.
1. Select **Generate token**.
1. Save the generated token. You use the token later.
-#### Create a personal access token in Azure DevOps
-
-1. Go to the home page of your team collection (for example, `https://contoso-web-team.visualstudio.com`) and select your project.
-1. Create a [personal access token](/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate#create-a-pat).
-1. Save the generated token. You use the token later.
-
-### Store the personal access token as a key vault secret
-Store the personal access token that you generated as a [key vault secret](../key-vault/secrets/about-secrets.md) and copy the secret identifier:
-
-#### Create a Key Vault
-You need an Azure Key Vault to store the GitHub personal access token (PAT) that is used to grant Azure access to your GitHub repository. Key Vaults can control access with either access policies or role-based access control (RBAC). If you have an existing key vault, you can use it, but you should check whether it uses access policies or RBAC assignments to control access. For help with configuring an access policy for a key vault, see [Assign a Key Vault access policy](/azure/key-vault/general/assign-access-policy?branch=main&tabs=azure-portal).
+### Create a Key Vault
+You need an Azure Key Vault to store the personal access token (PAT) that is used to grant Azure access to your repository. Key vaults can control access with either access policies or role-based access control (RBAC). If you have an existing key vault, you can use it, but you should check whether it uses access policies or RBAC assignments to control access. For help with configuring an access policy for a key vault, see [Assign a Key Vault access policy](/azure/key-vault/general/assign-access-policy?tabs=azure-portal).
Use the following steps to create an RBAC key vault:
1. On the Review + create tab, select **Create**.
-#### Store the personal access token in the key vault
+### Store the personal access token in the key vault
1. In the Key Vault, on the left menu, select **Secrets**.
1. On the Secrets page, select **Generate/Import**.
- Select **Create**.
-#### Get the secret identifier
+### Get the secret identifier
Get the path to the secret you created in the key vault.
| Field | Value |
| -- | -- |
| **Name** | Enter a name for the catalog. |
- | **Git clone URI** | Enter or paste the [clone URL](#get-the-clone-url-for-your-repository) for either your GitHub repository or your Azure DevOps repository.<br />*Sample catalog example:* `https://github.com/Azure/deployment-environments.git` |
+ | **Catalog location** | Select **GitHub**. |
+ | **Repo** | Enter or paste the clone URL for either your GitHub repository or your Azure DevOps repository.<br />*Sample catalog example:* `https://github.com/Azure/deployment-environments.git` |
| **Branch** | Enter the repository branch to connect to.<br />*Sample catalog example:* `main`|
| **Folder path** | Enter the folder path relative to the clone URI that contains subfolders that hold your environment definitions. <br /> The folder path is for the folder with subfolders containing environment definition manifests, not for the folder with the environment definition manifest itself. The following image shows the sample catalog folder structure.<br />*Sample catalog example:* `/Environments`<br /> :::image type="content" source="media/how-to-configure-catalog/github-folders.png" alt-text="Screenshot showing Environments sample folder in GitHub."::: The folder path can begin with or without a forward slash (`/`).|
- | **Secret identifier**| Enter the [secret identifier](#create-a-personal-access-token) that contains your personal access token for the repository.<br /> When you copy a secret identifier, the connection string includes a version identifier at the end, like in this example: `https://contoso-kv.vault.azure.net/secrets/GitHub-repo-pat/9376b432b72441a1b9e795695708ea5a`.<br />Removing the version identifier ensures that Deployment Environments fetches the latest version of the secret from the key vault. If your personal access token expires, only the key vault needs to be updated. <br />*Example secret identifier:* `https://contoso-kv.vault.azure.net/secrets/GitHub-repo-pat`|
+ | **Secret identifier**| Enter the secret identifier that contains your personal access token for the repository.<br /> When you copy a secret identifier, the connection string includes a version identifier at the end, like in this example: `https://contoso-kv.vault.azure.net/secrets/GitHub-repo-pat/9376b432b72441a1b9e795695708ea5a`.<br />Removing the version identifier ensures that Deployment Environments fetches the latest version of the secret from the key vault. If your personal access token expires, only the key vault needs to be updated. <br />*Example secret identifier:* `https://contoso-kv.vault.azure.net/secrets/GitHub-repo-pat`|
- :::image type="content" source="media/how-to-configure-catalog/add-catalog-form-inline.png" alt-text="Screenshot that shows how to add a catalog to a dev center." lightbox="media/how-to-configure-catalog/add-catalog-form-expanded.png":::
+ :::image type="content" source="media/how-to-configure-catalog/add-github-catalog-pane.png" alt-text="Screenshot that shows how to add a catalog to a dev center." lightbox="media/how-to-configure-catalog/add-github-catalog-pane.png":::
1. In **Catalogs** for the dev center, verify that your catalog appears. If the connection is successful, **Status** is **Connected**.

++

## Update a catalog

If you update the Azure Resource Manager template (ARM template) contents or definition in the attached repository, you can provide the latest set of environment definitions to your development teams by syncing the catalog.
An ignored environment definition error occurs if you add two or more environment definitions with the same name.
An invalid environment definition error might occur for various reasons:

-- **Manifest schema errors**. Ensure that your environment definition manifest matches the [required schema](./configure-environment-definition.md#add-an-environment-definition).
+- **Manifest schema errors**. Ensure that your environment definition manifest matches the [required schema](configure-environment-definition.md#add-an-environment-definition).
- **Validation errors**. Check the following items to resolve validation errors:
- **Reference errors**. Ensure that the template path that the manifest references is a valid relative path to a file in the repository.
-## Next steps
+## Related content
-- Learn how to [create and configure a project](./quickstart-create-and-configure-projects.md).
-- Learn how to [create and configure a project environment type](how-to-configure-project-environment-types.md).
+- [Configure environment types for a dev center](how-to-configure-devcenter-environment-types.md)
+- [Create and configure a project by using the Azure CLI](how-to-create-configure-projects.md)
+- [Configure project environment types](how-to-configure-project-environment-types.md)
deployment-environments How To Configure Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-managed-identity.md
As a security best practice, if you choose to use user-assigned identities, use
## Assign a subscription role assignment to the managed identity
-The identity that's attached to the dev center in Azure Deployment Environments should be assigned the Owner role for all the deployment subscriptions and the Reader role for all subscriptions that contain the relevant project. When a user creates or deploys an environment, the service grants appropriate access to the deployment identity that's attached to the project environment type. The deployment identity uses the access to perform deployments on behalf of the user. You can use the managed identity to empower developers to create environments without granting them access to the subscription.
+The identity that's attached to the dev center should be assigned the Contributor and User Access Administrator roles for all the deployment subscriptions and the Reader role for all subscriptions that contain the relevant project. When a user creates or deploys an environment, the service grants appropriate access to the deployment identity that's attached to the project environment type. The deployment identity uses the access to perform deployments on behalf of the user. You can use the managed identity to empower developers to create environments without granting them access to the subscription.
### Add a role assignment to a system-assigned managed identity
-1. In the Azure portal, go to your dev center.
+1. In the Azure portal, navigate to your dev center.
1. On the left menu under **Settings**, select **Identity**.
1. Under **System assigned** > **Permissions**, select **Azure role assignments**.

   :::image type="content" source="./media/configure-managed-identity/system-assigned-azure-role-assignment.png" alt-text="Screenshot that shows the Azure role assignment for system-assigned identity.":::
-1. On **Azure role assignments**, select **Add role assignment (Preview)**, and then enter or select the following information:
+1. To give Contributor access to the subscription, select **Add role assignment (Preview)**, enter or select the following information, and then select **Save**:
+
+ |Name |Value |
+ ||-|
+ |**Scope**|Subscription|
+ |**Subscription**|Select the subscription in which to use the managed identity.|
+ |**Role**|Contributor|
- 1. For **Scope**, select **Subscription**.
- 1. For **Subscription**, select the subscription in which to use the managed identity.
- 1. For **Role**, select **Owner**.
- 1. Select **Save**.
+1. To give User Access Administrator access to the subscription, select **Add role assignment (Preview)**, enter or select the following information, and then select **Save**:
+
+ |Name |Value |
+ ||-|
+ |**Scope**|Subscription|
+ |**Subscription**|Select the subscription in which to use the managed identity.|
+ |**Role**|User Access Administrator|
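
For reference, here's a hedged Azure CLI sketch of the same two role assignments. It assumes the `devcenter` CLI extension is installed, uses placeholder resource names, and assigns both roles at subscription scope.

```azurecli
# Object ID of the dev center's system-assigned managed identity
principalId=$(az devcenter admin devcenter show \
    --name my-devcenter \
    --resource-group my-devcenter-rg \
    --query identity.principalId --output tsv)

subscriptionId=$(az account show --query id --output tsv)

# Contributor on the deployment subscription
az role assignment create \
    --assignee-object-id "$principalId" \
    --assignee-principal-type ServicePrincipal \
    --role "Contributor" \
    --scope "/subscriptions/$subscriptionId"

# User Access Administrator on the same subscription
az role assignment create \
    --assignee-object-id "$principalId" \
    --assignee-principal-type ServicePrincipal \
    --role "User Access Administrator" \
    --scope "/subscriptions/$subscriptionId"
```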
### Add a role assignment to a user-assigned managed identity
1. On the left menu under **Settings**, select **Identity**.
1. Under **User assigned**, select the identity.
1. On the left menu, select **Azure role assignments**.
-1. On **Azure role assignments**, select **Add role assignment (Preview)**, and then enter or select the following information:
-
- 1. For **Scope**, select **Subscription**.
- 1. For **Subscription**, select the subscription in which to use the managed identity.
- 1. For **Role**, select **Owner**.
- 1. Select **Save**.
+1. To give Contributor access to the subscription, select **Add role assignment (Preview)**, enter or select the following information, and then select **Save**:
+
+ |Name |Value |
+ ||-|
+ |**Scope**|Subscription|
+ |**Subscription**|Select the subscription in which to use the managed identity.|
+ |**Role**|Contributor|
+
+1. To give User Access Administrator access to the subscription, select **Add role assignment (Preview)**, enter or select the following information, and then select **Save**:
+
+ |Name |Value |
+ ||-|
+ |**Scope**|Subscription|
+ |**Subscription**|Select the subscription in which to use the managed identity.|
+ |**Role**|User Access Administrator|
## Grant the managed identity access to the key vault secret
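
As a sketch, this grant can be scripted with a Key Vault Secrets User role assignment at vault scope. It reuses the `principalId` variable from the previous sketch, and the vault name is a placeholder; you can also scope the assignment to the individual secret.

```azurecli
# Allow the dev center identity to read secrets from the RBAC-enabled key vault
keyVaultId=$(az keyvault show --name contoso-kv --query id --output tsv)

az role assignment create \
    --assignee-object-id "$principalId" \
    --assignee-principal-type ServicePrincipal \
    --role "Key Vault Secrets User" \
    --scope "$keyVaultId"
```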
deployment-environments How To Create Configure Dev Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-create-configure-dev-center.md
To add a catalog to your dev center, you first need to gather some information.
To add a catalog, you must specify the GitHub repo URL, the branch, and the folder that contains your environment definitions. You can gather this information before you begin the process of adding the catalog to the dev center.

> [!TIP]
-> If you are attaching an Azure DevOps repository, use these steps: [Get the clone URL of an Azure DevOps repository](how-to-configure-catalog.md#get-the-clone-url-of-an-azure-devops-repository).
+> If you are attaching an Azure DevOps repository, use these steps: [Get the clone URL of an Azure DevOps repository](how-to-configure-catalog.md#get-the-clone-url-for-your-azure-devops-repository).
1. On your [GitHub](https://github.com) account page, select **<> Code**, and then select copy.
1. Take a note of the branch that you're working in.
deployment-environments Quickstart Create And Configure Devcenter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-and-configure-devcenter.md
Previously updated : 09/06/2023 Last updated : 10/23/2023

# Quickstart: Create and configure a dev center for Azure Deployment Environments
You need to perform the steps in both quickstarts before you can create a deploy
## Prerequisites

- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- Azure role-based access control role with permissions to create and manage resources in the subscription, such as [Contributor](../role-based-access-control/built-in-roles.md#contributor).
+- Azure role-based access control role with permissions to create and manage resources in the subscription, such as [Contributor](../role-based-access-control/built-in-roles.md#contributor) or [Owner](../role-based-access-control/built-in-roles.md#owner).
+- An [Azure DevOps](https://azure.microsoft.com/products/devops/repos/) repository that contains IaC templates. You can use the [Deployment Environments sample catalog](https://github.com/azure/deployment-environments) that contains samples created and maintained by the Azure Deployment Environments team.
+ - In your Azure DevOps organization, [create a project](/azure/devops/repos/get-started/sign-up-invite-teammates?view=azure-devops&branch=main&preserve-view=true) to store your repository.
+ - Import the [Deployment Environments sample catalog](https://github.com/azure/deployment-environments) into your project's repository, as shown in the sketch after this list.
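
A minimal CLI sketch of the import, assuming the Azure DevOps CLI extension is installed (`az extension add --name azure-devops`) and using placeholder organization and project names:

```azurecli
# Create an empty repo in your project, then import the sample catalog into it
az repos create \
    --name deployment-environments \
    --organization https://dev.azure.com/contoso \
    --project MyProject

az repos import create \
    --git-source-url https://github.com/Azure/deployment-environments.git \
    --repository deployment-environments \
    --organization https://dev.azure.com/contoso \
    --project MyProject
```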
## Create a dev center
-To create and configure a Dev center in Azure Deployment Environments by using the Azure portal:
+To create and configure a dev center in Azure Deployment Environments by using the Azure portal:
1. Sign in to the [Azure portal](https://portal.azure.com).
1. Search for **Azure Deployment Environments**, and then select the service in the results.
:::image type="content" source="media/quickstart-create-and-configure-devcenter/deployment-environments-devcenter-created.png" alt-text="Screenshot that shows the Dev centers overview, to confirm that the dev center is created.":::
-### Create a Key Vault
-When you're using a GitHub repository or an Azure DevOps repository to store your [catalog](./concept-environments-key-concepts.md#catalogs), you need an Azure Key Vault to store a personal access token (PAT) that is used to grant Azure access to your repository. Key Vaults can control access with either access policies or role-based access control (RBAC). If you have an existing key vault, you can use it, but you should check whether it uses access policies or RBAC assignments to control access. This quickstart assumes you're using an RBAC Key Vault and a GitHub repository.
-
-If you don't have an existing key vault, use the following steps to create one: [Quickstart: Create a key vault using the Azure portal](../key-vault/general/quick-create-portal.md).
-
-### Configure a personal access token
-Using an authentication token like a GitHub PAT enables you to share your repository securely. GitHub offers classic PATs, and fine-grained PATs. Fine-grained and classic PATs work with Azure Deployment Environments, but fine-grained tokens give you more granular control over the repositories to which you're allowing access.
-
-> [!TIP]
-> If you are attaching an Azure DevOps repository, use these steps: [Create a personal access token in Azure DevOps](how-to-configure-catalog.md#create-a-personal-access-token-in-azure-devops).
-
-1. In a new browser tab, sign into your [GitHub](https://github.com) account.
-1. On your profile menu, select **Settings**.
-1. On your account page, on the left menu, select **< >Developer Settings**.
-1. On the Developer settings page, select **Fine-grained tokens**.
-
- :::image type="content" source="media/quickstart-create-and-configure-devcenter/github-fine-grained-pat.png" alt-text="Screenshot that shows the GitHub Fine-grained tokens option.":::
-
-1. On the Fine-grained personal access tokens page, select **Generate new token**
- :::image type="content" source="media/quickstart-create-and-configure-devcenter/generate-github-fine-grained-token.png" alt-text="Screenshot showing the GitHub Fine-grained personal access tokens page with Generate new token highlighted.":::
-
-1. On the New fine-grained personal access token page, provide the following information:
-
- |Name |Value |
- |-|--|
- |**Token name**|Enter a descriptive name for the token.|
- |**Expiration**|Select the token expiration period in days.|
- |**Description**|Enter a description for the token.|
- |**Repository access**|Select **Public Repositories (read-only)**.|
-
- Leave the other options at their defaults.
-
- :::image type="content" source="media/quickstart-create-and-configure-devcenter/github-public-repo-permissions.png" alt-text="Screenshot showing the GitHub New fine-grained personal access token page.":::
-
-1. Select **Generate token**.
-1. On the Fine-grained personal access tokens page, copy the new token.
-
- :::image type="content" source="media/quickstart-create-and-configure-devcenter/copy-new-token.png" alt-text="Screenshot that shows the new GitHub token with the copy button highlighted.":::
-
- > [!WARNING]
- > You must copy the token now. You will not be able to access it again.
-
-1. Switch back to the **Key Vault – Microsoft Azure** browser tab.
-1. In the Key Vault, on the left menu, select **Secrets**.
-1. On the Secrets page, select **Generate/Import**.
-
- :::image type="content" source="media/quickstart-create-and-configure-devcenter/import-secret.png" alt-text="Screenshot that shows the key vault Secrets page with the generate/import button highlighted.":::
-
-1. On the Create a secret page:
- - In the **Name** box, enter a descriptive name for your secret.
- - In the **Secret value** box, paste the GitHub secret you copied in step 7.
-
- :::image type="content" source="media/quickstart-create-and-configure-devcenter/create-secret-in-key-vault.png" alt-text="Screenshot that shows the Create a secret page with the Name and Secret value text boxes highlighted.":::
-
- - Select **Create**.
-1. Leave this tab open, you need to come back to the Key Vault later.
-
## Configure a managed identity for the dev center

After you create a dev center, attach an [identity](concept-environments-key-concepts.md#identities) to the dev center. You can attach either a system-assigned managed identity or a user-assigned managed identity. Learn about the two [types of identities](how-to-configure-managed-identity.md#add-a-managed-identity).
-In this quickstart, you configure a system-assigned managed identity for your dev center. You then assign roles to the managed identity to allow the dev center to create environment types in your subscription and read the key vault secret that contains the GitHub PAT.
+In this quickstart, you configure a system-assigned managed identity for your dev center. You then assign roles to the managed identity to allow the dev center to create environment types in your subscription and read the Azure DevOps repository project that contains the catalog.
### Attach a system-assigned managed identity
To attach a system-assigned managed identity to your dev center:
### Assign roles for the dev center managed identity
-The managed identity that represents your dev center requires access to the subscriptions where you configure the [project environment types](concept-environments-key-concepts.md#project-environment-types), and to the key vault secret that stores your GitHub PAT.
+The managed identity that represents your dev center requires access to the subscriptions where you configure the [project environment types](concept-environments-key-concepts.md#project-environment-types), and to the Azure DevOps repo that stores your catalog.
1. Navigate to your dev center.
1. On the left menu under **Settings**, select **Identity**.
:::image type="content" source="media/quickstart-create-configure-projects/system-assigned-managed-identity.png" alt-text="Screenshot that shows a system-assigned managed identity with Role assignments highlighted.":::
-1. To give access to the subscription, select **Add role assignment (Preview)**, enter or select the following information, and then select **Save**:
+1. To give Contributor access to the subscription, select **Add role assignment (Preview)**, enter or select the following information, and then select **Save**:
|Name |Value |
||-|
|**Scope**|Subscription|
|**Subscription**|Select the subscription in which to use the managed identity.|
|**Role**|Contributor|
-1. To give access to the subscription, select **Add role assignment (Preview)**, enter or select the following information, and then select **Save**:
-
+1. To give User Access Administrator access to the subscription, select **Add role assignment (Preview)**, enter or select the following information, and then select **Save**:
+ |Name |Value |
+ ||-|
+ |**Scope**|Subscription|
+ |**Subscription**|Select the subscription in which to use the managed identity.|
+ |**Role**|User Access Administrator|
-1. To give access to the key vault, select **Add role assignment (Preview)**, enter or select the following information, and then select **Save**:
-
- |Name |Value |
- ||-|
- |**Scope**|Key Vault|
- |**Subscription**|Select the subscription in which to use the managed identity.|
- |**Resource**|Select the key vault that you created earlier.|
- |**Role**|Key Vault Secrets User|
-
-## Add a catalog to the dev center
-Azure Deployment Environments supports attaching Azure DevOps repositories and GitHub repositories. You can store a set of curated IaC templates in a repository. Attaching the repository to a dev center as a catalog gives your development teams access to the templates and enables them to quickly create consistent environments.
-
-In this quickstart, you attach a GitHub repository that contains samples created and maintained by the Azure Deployment Environments team.
+### Assign permissions in Azure DevOps for the dev center managed identity
+You must give the dev center managed identity permissions to the repository in Azure DevOps.
-To add a catalog to your dev center, you first need to gather some information.
+1. Sign in to your [Azure DevOps organization](https://dev.azure.com).
-### Gather GitHub repo information
-To add a catalog, you must specify the GitHub repo URL, the branch, and the folder that contains your environment definitions. You can gather this information before you begin the process of adding the catalog to the dev center, and paste it somewhere accessible, like notepad.
+1. Select **Organization settings**.
+
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/devops-organization-settings.png" alt-text="Screenshot showing the Azure DevOps organization page, with Organization Settings highlighted.":::
+
+1. On the **Overview** page, select **Users**.
-> [!TIP]
-> If you are attaching an Azure DevOps repository, use these steps: [Get the clone URL of an Azure DevOps repository](how-to-configure-catalog.md#get-the-clone-url-of-an-azure-devops-repository).
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/devops-organization-overview.png" alt-text="Screenshot showing the Organization overview page, with Users highlighted.":::
-1. On your [GitHub](https://github.com) account page, select **<> Code**, and then select copy.
-1. Take a note of the branch that you're working in.
-1. Take a note of the folder that contains your environment definitions.
-
- :::image type="content" source="media/quickstart-create-and-configure-devcenter/github-info.png" alt-text="Screenshot that shows the GitHub repo with Code, branch, and folder highlighted.":::
+1. On the **Users** page, select **Add users**.
-### Gather the secret identifier
-You also need the path to the secret you created in the key vault.
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/devops-add-user.png" alt-text="Screenshot showing the Users page, with Add user highlighted.":::
-1. In the Azure portal, navigate to your key vault.
-1. On the key vault page, from the left menu, select **Secrets**.
-1. On the Secrets page, select the secret you created earlier.
+1. Complete **Add new users** by entering or selecting the following information, and then select **Add**:
- :::image type="content" source="media/quickstart-create-and-configure-devcenter/key-vault-secrets-page.png" alt-text="Screenshot that shows the list of secrets in the key vault with one highlighted.":::
+ |Name |Value |
+ ||-|
+ |**Users or Service Principals**|Enter the name of your dev center. </br> When you use a system-assigned managed identity, specify the name of the dev center, not the object ID of the managed identity. When you use a user-assigned managed identity, use the name of the managed identity. |
+ |**Access level**|Select **Basic**.|
+ |**Add to projects**|Select the project that contains your repository.|
+ |**Azure DevOps Groups**|Select **Project Readers**.|
+ |**Send email invites (to Users only)**|Clear the checkbox.|
-1. On the versions page, select the **CURRENT VERSION**.
-
- :::image type="content" source="media/quickstart-create-and-configure-devcenter/key-vault-versions-page.png" alt-text="Screenshot that shows the current version of the select secret.":::
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/devops-add-user-blade.png" alt-text="Screenshot showing Add users, with example entries and Add highlighted.":::
-1. On the current version page, for the **Secret identifier**, select copy.
+## Add a catalog to the dev center
+Azure Deployment Environments supports attaching Azure DevOps repositories and GitHub repositories. You can store a set of curated IaC templates in a repository. Attaching the repository to a dev center as a catalog gives your development teams access to the templates and enables them to quickly create consistent environments.
- :::image type="content" source="media/quickstart-create-and-configure-devcenter/key-vault-current-version-page.png" alt-text="Screenshot that shows the details current version of the select secret with the secret identifier copy button highlighted.":::
+In this quickstart, you attach an Azure DevOps repository.
### Add a catalog to your dev center

1. Navigate to your dev center.
| Field | Value |
| -- | -- |
| **Name** | Enter a name for the catalog. |
- | **Git clone URI** | Enter or paste the clone URL for either your GitHub repository or your Azure DevOps repository.<br />*Sample catalog example:* `https://github.com/Azure/deployment-environments.git` |
- | **Branch** | Enter the repository branch to connect to.<br />*Sample catalog example:* `main`|
- | **Folder path** | Enter the folder path relative to the clone URI that contains subfolders that hold your environment definitions. <br /> The folder path is for the folder with subfolders containing environment definition manifests, not for the folder with the environment definition manifest itself. The following image shows the sample catalog folder structure.<br />*Sample catalog example:* `/Environments`<br /> :::image type="content" source="media/how-to-configure-catalog/github-folders.png" alt-text="Screenshot showing Environments sample folder in GitHub."::: The folder path can begin with or without a forward slash (`/`).|
- | **Secret identifier**| Enter the [secret identifier](#configure-a-personal-access-token) that contains your personal access token for the repository.<br /> When you copy a secret identifier, the connection string includes a version identifier at the end, like in this example: `https://contoso-kv.vault.azure.net/secrets/GitHub-repo-pat/9376b432b72441a1b9e795695708ea5a`.<br />Removing the version identifier ensures that Deployment Environments fetches the latest version of the secret from the key vault. If your personal access token expires, only the key vault needs to be updated. <br />*Example secret identifier:* `https://contoso-kv.vault.azure.net/secrets/GitHub-repo-pat`|
+ | **Catalog location** | Select **Azure DevOps**. |
+ | **Authentication type** | Select **Managed Identity**.|
+ | **Organization** | Select your Azure DevOps organization. |
+ | **Project** | From the list of projects, select the project that stores the repo. |
+ | **Repo** | From the list of repos, select the repo you want to add. |
+ | **Branch** | Select the branch. |
+ | **Folder path** | Deployment Environments retrieves a list of folders in your branch. Select the folder that stores your IaC templates. |
- :::image type="content" source="media/how-to-configure-catalog/add-catalog-form-inline.png" alt-text="Screenshot that shows how to add a catalog to a dev center." lightbox="media/how-to-configure-catalog/add-catalog-form-expanded.png":::
-
-1. In **Catalogs** for the dev center, verify that your catalog appears. If the connection is successful, **Status** is **Connected**.
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/add-catalog-to-devcenter.png" alt-text="Screenshot showing the add catalog pane with examples entries and Add highlighted.":::
+1. In **Catalogs** for the dev center, verify that your catalog appears. If the connection is successful, **Status** is **Connected**. Connecting to a catalog can take a few minutes the first time.
+
## Create an environment type

Use an environment type to help you define the different types of environments your development teams can deploy. You can apply different settings for each environment type.
An environment type that you add to your dev center is available in each project in the dev center, but environment types aren't enabled by default. When you enable an environment type at the project level, the environment type determines the managed identity and subscription that are used to deploy environments.
-## Next steps
+## Next step
In this quickstart, you created a dev center and configured it with an identity, a catalog, and an environment type. To learn how to create and configure a project, advance to the next quickstart.
dms Ads Sku Recommend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/ads-sku-recommend.md
- sql-migration-content
-# Get Azure recommendations to migrate your SQL Server database (Preview)
+# Get Azure recommendations to migrate your SQL Server database
The [Azure SQL Migration extension for Azure Data Studio](/azure-data-studio/extensions/azure-sql-migration-extension) helps you to assess your database requirements, get the right-sized SKU recommendations for Azure resources, and migrate your SQL Server database to Azure.
Learn how to use this unified experience, collecting performance data from your
## Overview
-Before migrating to Azure SQL, you can use the SQL Migration extension in Azure Data Studio to help you generate right-sized recommendations (Preview) for Azure SQL Database, Azure SQL Managed Instance, and SQL Server on Azure Virtual Machines targets. The tool helps you collect performance data from your source SQL instance (running on-premises or other cloud), and recommend a compute and storage configuration to meet your workload's needs.
+Before migrating to Azure SQL, you can use the SQL Migration extension in Azure Data Studio to help you generate right-sized recommendations for Azure SQL Database, Azure SQL Managed Instance, and SQL Server on Azure Virtual Machines targets. The tool helps you collect performance data from your source SQL instance (running on-premises or other cloud), and recommend a compute and storage configuration to meet your workload's needs.
The diagram presents the workflow for Azure recommendations in the Azure SQL Migration extension for Azure Data Studio:
The diagram presents the workflow for Azure recommendations in the Azure SQL Mig
## Prerequisites
-To get started with Azure recommendations (Preview) for your SQL Server database migration, you must meet the following prerequisites:
+To get started with Azure recommendations for your SQL Server database migration, you must meet the following prerequisites:
- [Download and install Azure Data Studio](/azure-data-studio/download-azure-data-studio).
- [Install the Azure SQL Migration extension](/azure-data-studio/extensions/azure-sql-migration-extension) from Azure Data Studio Marketplace.
GRANT SELECT ON [msdb].[dbo].[sysmail_account] TO [assessment];
- Azure Recommendations don't include price estimates, because prices vary depending on region, currency, and discounts such as the [Azure Hybrid Benefit](/azure/azure-sql/azure-hybrid-benefit). To get price estimates, use the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator), or create a [SQL assessment](/azure/migrate/concepts-azure-sql-assessment-calculation) in Azure Migrate.
- Recommendations for Azure SQL Database with the [DTU-based purchasing model](/azure/azure-sql/database/migrate-dtu-to-vcore) aren't supported.
- Currently, Azure recommendations for Azure SQL Database serverless compute tier and Elastic Pools aren't supported.
+<!--
- Currently, Azure recommendations for SQL Server on Azure Virtual Machine using Premium SSD v2 aren't supported.-
+-->
## Troubleshooting - No recommendations generated

- If no recommendations were generated, this situation could mean that no configurations were identified which can fully satisfy the performance requirements of your source instance. In order to see reasons why a particular size, service tier, or hardware family was disqualified:
event-grid Monitor Mqtt Delivery Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/monitor-mqtt-delivery-reference.md
This article provides a reference of log and metric data collected to analyze th
| OperationType | The type of the operation. The available values include: <br><br>- Publish: PUBLISH requests sent from MQTT clients to MQTT broker. <br>- Deliver: PUBLISH requests sent from MQTT broker to MQTT clients. <br>- Subscribe: SUBSCRIBE requests by MQTT clients. <br>- Unsubscribe: UNSUBSCRIBE requests by MQTT clients. <br>- Connect: CONNECT requests by MQTT clients. |
| Protocol | The protocol used in the operation. The available values include: <br><br>- MQTT3: MQTT v3.1.1 <br>- MQTT5: MQTT v5 <br>- MQTT3-WS: MQTT v3.1.1 over WebSocket <br>- MQTT5-WS: MQTT v5 over WebSocket |
| Result | Result of the operation. The available values include: <br><br>- Success <br>- ClientError <br>- ServiceError |
-| Error | Error occurred during the operation. The available values include: <br><br>-QuotaExceeded: the client exceeded one or more of the throttling limits that resulted in a failure <br>- AuthenticationError: a failure because of any authentication reasons. In case of failed MQTT routing messages, the EventGrid Data Sender role for the custom topic configured as the destination for MQTT routed messages was deleted. This error doesn't apply for namespace topics since they don't need a permission to route MQTT messages. In that case for MQTT message routing, MQTT broker drops the MQTT message that was meant to be routed. <br>- AuthorizationError: a failure because of any authorization reasons.<br>- ClientError: the client sent a bad request or used one of the unsupported features that resulted in a failure. <br>-TopicNotFoundError: The custom topic that is configured to receive all the MQTT routed messages was deleted. This error doesn't apply for namespace topics since they can't be deleted if they're used as the destination for MQTT routed messages. In that case, MQTT broker drops the MQTT message that was meant to be routed.<br>-TooManyRequests: the number of MQTT routed messages per second exceeds the limit of the destination (namespace topic or custom topic) for MQTT routed messages. In that case, Event Grid retries to route the MQTT message. <br>- ServiceError: a failure because of an unexpected server error or for a server's operational reason. In that case for MQTT message routing, Event Grid retries to route the MQTT message. |
+| Error | Error occurred during the operation.<br> The available values for MQTT: RequestCount, MQTT: Failed Published Messages, MQTT: Failed Subscription Operations metrics include: <br><br>-QuotaExceeded: the client exceeded one or more of the throttling limits that resulted in a failure <br>- AuthenticationError: a failure because of any authentication reasons. <br>- AuthorizationError: a failure because of any authorization reasons.<br>- ClientError: the client sent a bad request or used one of the unsupported features that resulted in a failure. <br>- ServiceError: a failure because of an unexpected server error or for a server's operational reason. <br><br> [Learn more about the supported MQTT features.](mqtt-support.md) <br><br>The available values for MQTT: Failed Routed Messages metric include: <br><br>-AuthenticationError: the EventGrid Data Sender role for the custom topic configured as the destination for MQTT routed messages was deleted. <br>-TopicNotFoundError: The custom topic that is configured to receive all the MQTT routed messages was deleted. <br>-TooManyRequests: the number of MQTT routed messages per second exceeds the publish limit of the custom topic. <br>- ServiceError: a failure because of an unexpected server error or for a server's operational reason. <br><br> [Learn more about how the MQTT broker handles each of these routing errors.](mqtt-routing.md#mqtt-message-routing-behavior)|
| QoS | Quality of service level. The available values are: 0, 1. |
| Direction | The direction of the operation. The available values are: <br><br>- Inbound: inbound throughput to MQTT broker. <br>- Outbound: outbound throughput from MQTT broker. |
| DropReason | The reason a session was dropped. The available values include: <br><br>- SessionExpiry: a persistent session has expired. <br>- TransientSession: a non-persistent session has expired. <br>- SessionOverflow: a client didn't connect during the lifespan of the session to receive queued QOS1 messages until the queue reached its maximum limit. <br>- AuthorizationError: a session drop because of any authorization reasons. |
event-grid Mqtt Certificate Chain Client Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-certificate-chain-client-authentication.md
Use the generated CA files to create a certificate for the client.
## Upload the CA certificate to the namespace

1. In Azure portal, navigate to your Event Grid namespace.
1. Under the MQTT section in left rail, navigate to CA certificates menu.
1. Select **+ Certificate** to launch the Upload certificate page.
-1. Add certificate name and browse to find the intermediate certificate (.step/certs/intermediate_ca.crt) and select **Upload**.
-
-> [!NOTE]
-> - CA certificate name can be 3-50 characters long.
-> - CA certificate name can include alphanumeric, hyphen(-) and, no spaces.
-> - The name needs to be unique per namespace.
+1. Add a certificate name, browse to find the intermediate certificate (.step/certs/intermediate_ca.crt), and select **Upload**. You can upload a .pem, .cer, or .crt file.
+1. On the Upload certificate page, give a Certificate name and browse for the certificate file.
+1. Select **Upload** button to add the parent certificate.
-4. On the Upload certificate page, give a Certificate name and browse for the certificate file.
-5. Select **Upload** button to add the parent certificate.
+ :::image type="content" source="./media/mqtt-certificate-chain-client-authentication/event-grid-namespace-parent-certificate-added.png" alt-text="Screenshot showing the added CA certificate listed in the CA certificates page.":::
+ > [!NOTE]
+ > - CA certificate name can be 3-50 characters long.
+ > - CA certificate name can include alphanumeric, hyphen(-) and, no spaces.
+ > - The name needs to be unique per namespace.
## Configure client authentication settings

1. Navigate to the Clients page.
event-grid Mqtt Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-overview.md
IoT applications are software designed to interact with and process data from Io
### Client authentication
-Event Grid has a client registry that stores information about the clients permitted to connect to it. Before a client can connect, there must be an entry for that client in the client registry. As a client connects to MQTT broker, it needs to authenticate with MQTT broker based on credentials stored in the identity registry. MQTT broker supports X.509 certificate authentication that is the industry authentication standard in IoT devices and [Microsoft Entra](mqtt-client-azure-ad-token-and-rbac.md) that is Azure's authentication standard for applications.[Learn more about MQTT client authentication.](mqtt-client-authentication.md)
+Event Grid has a client registry that stores information about the clients permitted to connect to it. Before a client can connect, there must be an entry for that client in the client registry. As a client connects to MQTT broker, it needs to authenticate with MQTT broker based on credentials stored in the identity registry. MQTT broker supports X.509 certificate authentication, which is the industry authentication standard in IoT devices, and [Microsoft Entra ID (formerly Azure Active Directory)](mqtt-client-azure-ad-token-and-rbac.md), which is Azure's authentication standard for applications. [Learn more about MQTT client authentication.](mqtt-client-authentication.md)
### Access control
event-grid Mqtt Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-routing.md
For enrichments configuration instructions, go to [Enrichment CLI configuration]
## MQTT message routing behavior
-While routing MQTT messages to namespace topics or custom topics, Event Grid provides durable delivery as it tries to deliver each message **at least once** immediately. If there's a failure, Event Grid either retries delivery or drops the message that was meant to be routed. Event Grid doesn't guarantee order for event delivery, so subscribers might receive them out of order.
+While routing MQTT messages to custom topics, Event Grid provides durable delivery as it tries to deliver each message **at least once** immediately. If there's a failure, Event Grid either retries delivery or drops the message that was meant to be routed. Event Grid doesn't guarantee order for event delivery, so subscribers might receive them out of order.
The following table describes the behavior of MQTT message routing based on different errors.

| Error| Error description | Behavior |
| --| --|--|
-| TopicNotFoundError | The custom topic that is configured to receive all the MQTT routed messages was deleted. This error doesn't apply for namespace topics since they can't be deleted if they're used as the destination for MQTT routed messages. | Event Grid drops the MQTT message that was meant to be routed.|
-| AuthenticationError | The EventGrid Data Sender role for the custom topic configured as the destination for MQTT routed messages was deleted. This error doesn't apply for namespace topics since they don't need a permission to route MQTT messages. | Event Grid drops the MQTT message that was meant to be routed.|
-| TooManyRequests | The number of MQTT routed messages per second exceeds the limit of the destination (namespace topic or custom topic) for MQTT routed messages. | Event Grid retries to route the MQTT message.|
+| TopicNotFoundError | The custom topic that is configured to receive all the MQTT routed messages was deleted. | Event Grid drops the MQTT message that was meant to be routed.|
+| AuthenticationError | The EventGrid Data Sender role for the custom topic configured as the destination for MQTT routed messages was deleted. | Event Grid drops the MQTT message that was meant to be routed.|
+| TooManyRequests | The number of MQTT routed messages per second exceeds the publish limit for the custom topic. | Event Grid retries to route the MQTT message.|
| ServiceError | An unexpected server error for a server's operational reason. | Event Grid retries to route the MQTT message.|

During retries, Event Grid uses an exponential backoff retry policy for MQTT message routing. Event Grid retries delivery on the following schedule on a best effort basis:
event-grid Mqtt Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-support.md
MQTT v5 currently differs from the [MQTT v5 Specification](https://docs.oasis-op
- Message ordering isn't guaranteed.
- Subscription Identifiers aren't supported.
- Assigned Client Identifiers aren't supported yet.
-- The server responds to a CONNECT request with either Authentication Method or Authentication Data with a CONNACK with code 0x8C (Bad authentication method) or 0x87 (Not Authorized) respectively.
- Topic Alias Maximum is 10. The server doesn't assign any topic aliases for outgoing messages at this time. Clients can assign and use topic aliases within set limit.
- CONNACK doesn't return Response Information property even if the CONNECT request contains Request Response Information property.
+- User Properties on CONNECT, SUBSCRIBE, DISCONNECT, PUBACK, and AUTH packets aren't used by the service, so they're not supported. If any of these requests includes user properties, the request fails.
- If the server receives a PUBACK from a client with non-success response code, the connection is terminated.
- Keep Alive Maximum is 1160 seconds.
event-grid Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/overview.md
Event Grid offers a rich mixture of features. These features include:
- **[Built-in cloud integration](mqtt-routing.md)** - route your MQTT messages to Azure services or custom webhooks for further processing.
- **Flexible and fine-grained [access control model](mqtt-access-control.md)** - group clients and topic to simplify access control management, and use the variable support in topic templates for a fine-grained access control.
- **X.509 certificate [authentication](mqtt-client-authentication.md)** - authenticate your devices using the IoT industry's standard mechanism for authentication.
-- **[AAD authentication](mqtt-client-azure-ad-token-and-rbac.md)** - authenticate your applications using the Azure's standard mechanism for authentication.
- **TLS 1.2 and TLS 1.3 support** - secure your client communication using robust encryption protocols. - **Multi-session support** - connect your applications with multiple active sessions to ensure reliability and scalability. - **MQTT over WebSockets** - enable connectivity for clients in firewall-restricted environments.
event-hubs Monitor Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/monitor-event-hubs.md
Start with the article [Monitoring Azure resources with Azure Monitor](../azure-
The following sections build on this article by describing the specific data gathered for Azure Event Hubs. These sections also provide examples for configuring data collection and analyzing this data with Azure tools.

> [!TIP]
-> To understand costs associated with Azure Monitor, see [Usage and estimated costs](../azure-monitor//usage-estimated-costs.md). To understand the time it takes for your data to appear in Azure Monitor, see [Log data ingestion time](../azure-monitor/logs/data-ingestion-time.md).
+> To understand costs associated with Azure Monitor, see [Azure Monitor cost and usage](../azure-monitor/cost-usage.md). To understand the time it takes for your data to appear in Azure Monitor, see [Log data ingestion time](../azure-monitor/logs/data-ingestion-time.md).
## Monitoring data from Azure Event Hubs Azure Event Hubs collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data).
expressroute Expressroute Monitoring Metrics Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-monitoring-metrics-alerts.md
This metric shows the number of FastPath routes configured on a circuit. Set an
Aggregation type: *Avg*
-You can view near to real-time availability of [ARP](./expressroute-troubleshooting-arp-resource-manager.md) (Layer-3 connectivity) across peerings and peers (Primary and Secondary ExpressRoute routers). This dashboard shows the Private Peering ARP session status is up across both peers, but down for Microsoft peering for both peers. The default aggregation (Average) was utilized across both peers.
+You can view near real-time availability of [ARP](./expressroute-troubleshooting-arp-resource-manager.md) (Layer-2 connectivity) across peerings and peers (Primary and Secondary ExpressRoute routers). This dashboard shows that the Private Peering ARP session status is up across both peers, but down for Microsoft peering on both peers. The default aggregation (Average) was utilized across both peers.
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/erArpAvailabilityMetrics.jpg" alt-text="ARP availability per peer":::
frontdoor Front Door Url Rewrite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-url-rewrite.md
With this feature, you can redirect your end users to a different origin based o
The **source pattern** is the URL path in the initial request you want to replace. Currently, source pattern uses a prefix-based match. To match all URL paths, you can define a forward slash (`/`) as the source pattern value.
-For the source pattern in a URL rewrite action, only the path after the *patterns to match* in the route configuration is considered. For example, you have the following incoming URL format `contoso.com/patten-to-match/source-pattern`, only `/source-pattern` gets considered by the rule set as the source pattern to be rewritten. The format of the out going URL after URL rewrite gets applied is `contoso.com/pattern-to-match/destination`.
+For the source pattern in a URL rewrite action, only the path after the *patterns to match* in the route configuration is considered. For example, if you have the incoming URL format `contoso.com/pattern-to-match/source-pattern`, only `/source-pattern` is considered by the rule set as the source pattern to be rewritten. The format of the outgoing URL after the URL rewrite is applied is `contoso.com/pattern-to-match/destination`.
-For situation, when you need to remove the `/patterns-to-match` segment of the URL, set the **origin path** for the origin group in route configuration to `/`.
+For situations where you need to remove the `/pattern-to-match` segment of the URL, set the **origin path** for the origin group in the route configuration to `/`.
## Destination
governance Alerts Query Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/alerts-query-quickstart.md
+
+ Title: How Azure Resource Graph uses alerts to monitor resources
+description: In this quickstart, you learn how to create monitoring alerts for Azure resources using an Azure Resource Graph query and a Log Analytics workspace.
Last updated : 10/31/2023+++
+# Quickstart: Create alerts with Azure Resource Graph and Log Analytics
+
+In this quickstart, you learn how you can use Azure Log Analytics to create alerts on Azure Resource Graph queries. You create alerts by using an Azure Resource Graph query, a Log Analytics workspace, and a managed identity. The alert's conditions send notifications at a specified interval.
+
+You can use queries to set up alerts for your deployed Azure resources. You can create queries using Azure Resource Graph tables, or you can combine Azure Resource Graph tables and Log Analytics data from Azure Monitor Logs.
+
+This article includes two examples of alerts:
+
+- **Azure Resource Graph**: Uses the Azure Resource Graph `Resources` table to create a query that gets data for your deployed Azure resources and create an alert.
+- **Azure Resource Graph and Log Analytics**: Uses the Azure Resource Graph `Resources` table and Log Analytics data from the Azure Monitor Logs `Heartbeat` table. This example uses a virtual machine to show how to set up the query and alert.
+
+> [!NOTE]
+> Azure Resource Graph alerts integration with Log Analytics is in public preview.
+
+## Prerequisites
+
+- If you don't have an Azure account, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- Resources deployed in Azure like virtual machines or storage accounts.
+- To use the example for the Azure Resource Graph and Log Analytics query, you need at least one Azure virtual machine with the Azure Monitor Agent.
+
+## What problem will we solve?
+
+You want to use an Azure Resource Graph query to get information about your Azure resources. You can use Azure Log Analytics to set up alerts that notify you when certain conditions are met.
+
+## Create workspace
+
+Create a Log Analytics Workspace in the subscription that's being monitored.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. In the search field, type _log analytics workspaces_ and select **Log Analytics workspaces**.
+
+ If you've used Log Analytics workspaces, you can select it from **Azure services**.
+
+ :::image type="content" source="./media/alerts-query-quickstart/search-log-analytics.png" alt-text="Screenshot of the Azure home page that highlights search field and Log Analytics workspaces.":::
+
+1. Select **Create**.
+
+ - **Subscription**: Select your Azure subscription
+ - **Resource group**: _demo-arg-alert-rg_
+ - **Name**: _demo-arg-alert-workspace_
+ - **Region**: _West US3_
+
+1. Select **Review + Create** and wait for **Validation passed** to be displayed.
+1. Select **Create** to begin the deployment.
+1. Select **Go to resource** when the deployment is completed.
+
+## Create virtual machine
+
+# [Azure Resource Graph](#tab/azure-resource-graph)
+
+You don't need to create a virtual machine for the example that uses the Azure Resource Graph table.
+
+# [Azure Resource Graph and Log Analytics](#tab/arg-log-analytics)
+
+> [!NOTE]
+> This section is optional if you have existing virtual machines or know how to create a virtual machine. This example uses a virtual machine to show how to create a query using an Azure Resource Graph table and Log Analytics data.
+
+When you connect your virtual machine to the Log Analytics workspace, the Azure Monitor Agent is installed on the virtual machine to collect log information. If you don't have a virtual machine, you can create one for this example. To avoid unnecessary costs, delete the virtual machine when you're finished with the example.
+
+The following instructions are basic settings for a Linux virtual machine. Detailed steps about how to create a virtual machine are outside the scope of this article.
+
+1. In Azure, create an [Ubuntu Linux virtual machine](https://portal.azure.com/#create/canonical.0001-com-ubuntu-server-jammy22_04-lts-gen2).
+1. Select **Create**.
+1. Use the **Create a virtual machine** form. You can accept most default settings, with the following exceptions:
+
+ **Create a virtual machine**
+ - **Resource group**: _demo-arg-alert-rg_
+ - **virtual machine name**: Type a virtual machine name
+ - **Availability options**: _No infrastructure redundancy required_
+ - **Size**: _B1ms_
+ - **Administrator account**: Create key pair or username and password
+ - **Public inbound ports**: _None_
+
+ **Disks: accept defaults**
+ - Verify **Delete with VM** is selected.
+
+ **Networking**
+ - Accept defaults.
+ - Select **Delete public IP and NIC when VM is deleted**.
+
+ **Management**
+ - Accept defaults
+ - Change the **Auto-shutdown** **Time zone** to your time zone.
+
+ **Monitoring**, **Advanced**, and **Tags**
+ - No changes needed for this example.
+
+1. Select **Review and Create** and then **Create**.
+
+ If you selected SSH for authentication, you're prompted to **Generate new key pair**. Download the private key and create the virtual machine. When you're finished with the VM, delete the key file.
+
+Select **Go to resource** after the virtual machine is deployed.
+
+> [!NOTE]
+> This section is optional if you know how to connect a virtual machine to a Log Analytics workspace.
+
+Set up monitoring for a virtual machine.
+
+1. Go to your virtual machine.
+1. Select **Monitoring** > **Insights** > **Azure Monitor** > **Overview**.
+1. Select **Not Monitored**.
+1. Select **Enable** for the virtual machine's **Monitor Coverage**.
+1. Select **Enable** for the **Azure Monitor Insights Onboarding**.
+1. Set up the **Monitoring Configuration**:
+    - **Enable Insights using**: Azure Monitor Agent
+ - **Subscription**: Select your subscription.
+ - Create a new Data Collection Rule
+ - Create a name.
+ - Select your subscription.
+ - Select your Log Analytics workspace _demo-arg-alert-workspace_.
+ - Select **Create**, verify the settings are correct, and select **Configure** to begin the deployment.
+1. Close **Azure Monitor Insights Onboarding**.
+
+After a successful deployment, **Insights** > **Overview** > **Monitored** shows that the virtual machine's **Monitor Coverage** is enabled and provides a link to the data collection rule.
+
+Select the link to the data collection rule and verify the **Configuration** settings:
+
+- **Resources**: Shows the virtual machine, resource group, and subscription.
+- **Data Sources**:
+ - **Data source**: Performance Counters
+ - **Destination**: Azure Monitor Logs
+
+You can select the Performance Counters link to verify details.
+
+Go to your Log Analytics workspace _demo-arg-alert-workspace_. Select **Settings** > **Agents** > **Linux servers** and verify that one Linux computer is connected to the **Azure Monitor Linux agent**.
+
+Go to your virtual machine, select **Settings** > **Extensions + applications**, and verify that `AzureMonitorLinuxAgent` shows provisioning succeeded.
+++
+## Create query
+
+# [Azure Resource Graph](#tab/azure-resource-graph)
+
+From the Log Analytics workspace, create an Azure Resource Graph query to get a count of your Azure resources. This example uses the Azure Resource Graph `Resources` table.
+
+1. Select **Logs** from the left side of the **Log Analytics workspace** page.
+
+ Close the **Queries** window if it's displayed.
+
+1. Use the following code in the **New Query**.
+
+ ```kusto
+ arg("").Resources
+ | count
+ ```
+
+    Table names in Log Analytics use Pascal case, where the first letter of each word is capitalized, like `Resources` or `ResourceContainers`. You can also use all lowercase, like `resources` or `resourcecontainers`.
+
+ :::image type="content" source="./media/alerts-query-quickstart/log-analytics-workspace-query.png" alt-text="Screenshot of the Log Analytics workspace with a query of the Resources table that highlights logs and run button.":::
+
+1. Select **Run**.
+
+    The **Results** tab displays the **Count** of resources in your Azure subscription. Make a note of that number because you need it for the alert rule's condition. When you manually run the query, the count is based on your user identity; a fired alert uses a managed identity. The count might vary between a manual run and a fired alert.
+
+1. Remove the count from your query.
+
+ ```kusto
+ arg("").Resources
+ ```
+
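+If you want the alert to track only a subset of your resources, you can filter the query before you create the alert rule. The following is a minimal sketch, assuming storage accounts are the resources of interest; the alert rule's **Table rows** measurement then counts only the filtered rows:
+
+```kusto
+arg("").Resources
+| where type == 'microsoft.storage/storageaccounts'
+| project name, resourceGroup, location
+```
+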
+# [Azure Resource Graph and Log Analytics](#tab/arg-log-analytics)
+
+From the Log Analytics workspace, create an Azure Resource Graph query to get the last heartbeat information from your virtual machine. This example uses the Azure Resource Graph `Resources` table and Log Analytics data from the Azure Monitor Logs `Heartbeat` table.
+
+1. Go to your _demo-arg-alert-workspace_ Log Analytics workspace.
+1. Select **Logs** from the left side of the **Log Analytics workspace** page.
+
+ Close the **Queries** window if it's displayed.
+
+1. Use the following code in the **New Query**.
+
+ ```kusto
+ arg("").Resources
+ | where type == 'microsoft.compute/virtualmachines'
+ | project ResourceId = id, name, PowerState = tostring(properties.extended.instanceView.powerState.code)
+ | join (Heartbeat
+ | where TimeGenerated > ago(15m)
+ | summarize lastHeartBeat = max(TimeGenerated) by ResourceId)
+ on ResourceId
+ | project lastHeartBeat, PowerState, name, ResourceId
+ ```
+
+    Table names in Log Analytics use Pascal case, where the first letter of each word is capitalized, like `Resources` or `ResourceContainers`. You can also use all lowercase, like `resources` or `resourcecontainers`.
+
+    You can use other timeframes for `TimeGenerated`. For example, rather than minutes like `15m`, use hours like `12h`, `24h`, or `48h`.
+
+ :::image type="content" source="./media/alerts-query-quickstart/log-analytics-cross-query.png" alt-text="Screenshot of the Log Analytics workspace with a cross query of the Resources and Heartbeat tables that highlights logs and run button.":::
+
+1. Select **Run**.
+
+ The query should return the virtual machine's last heartbeat, power state, name, and resource ID. If no **Results** are displayed, continue to the next steps.
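+
+If you want the query to surface virtual machines that are *missing* a recent heartbeat instead, a `leftanti` join inverts the result. The following is a hedged variant of the query above; depending on your environment, you might need to normalize resource ID casing with `tolower()` before the join:
+
+```kusto
+arg("").Resources
+| where type == 'microsoft.compute/virtualmachines'
+| project ResourceId = id, name
+| join kind=leftanti (Heartbeat
+    | where TimeGenerated > ago(15m)
+    | summarize by ResourceId)
+    on ResourceId
+| project name, ResourceId
+```
+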
+++
+## Create alert rule
+
+# [Azure Resource Graph](#tab/azure-resource-graph)
+
+From the Log Analytics workspace, select **New alert rule**. The query from your Log Analytics workspace is copied to the alert rule. The **Create an alert rule** page has several tabs that you need to update to create the alert.
++
+### Scope
+
+Verify that the scope is set to your Log Analytics workspace named _demo-arg-alert-workspace_.
+
+If you need to change the scope, do the following steps.
+
+1. Go to the **Scope** tab and select **Select scope**.
+1. At the bottom of the **Selected resources** screen, delete the current scope.
+1. Expand the **demo-arg-alert-rg** from the list of resources and select **demo-arg-alert-workspace**.
+1. Select **Apply**.
+1. Select **Next: Condition**.
+
+### Condition
+
+The form has several fields to complete.
+
+- **Signal name**: Custom log search
+- **Search query**: Displays the query code.
+
+**Measurement**
+
+- **Measure**: Table rows
+- **Aggregation type**: Count
+- **Aggregation granularity**: 5 minutes
+
+**Alert logic**
+
+- **Operator**: Greater than
+- **Threshold value**: Use a number that's less than the number returned from the resources count.
+
+    For example, if your resource count was 50, use 45. This value causes the alert to fire when the rule evaluates your resources, because the number of resources is greater than the threshold value.
+
+- **Frequency of evaluation**: 5 minutes
+
+Select **Next: Actions**.
+
+### Actions
+
+Select **Create action group**.
+
+- **Subscription**: Select your Azure subscription.
+- **Resource group**: _demo-arg-alert-rg_
+- **Region**: Global
+- **Action group name**: _demo-arg-alert-action-group_
+- **Display name**: _demo-action_ (limit is 12 characters)
+
+Select **Next: Notifications**.
+
+- **Notification type**: Select **Email/SMS message/Push/Voice**.
+- **Name**: _email-alert_
+- Select the **Email** checkbox and type your email address.
+- Select **OK**.
+
+Select **Review + Create**, verify the summary is correct, and select **Create**. You're returned to the **Actions** tab of the **Create an alert rule** page. The **Action group name** shows the action group you created.
+
+Select **Next: Details**.
+
+### Details
+
+Use the following information on the **Details** tab.
+
+ - **Subscription**: Select your Azure subscription
+ - **Resource group**: _demo-arg-alert-rg_
+ - **Severity**: Accept the default value **3 - Informational**
+ - **Alert rule name**: _demo-arg-alert-rule_
+ - **Alert rule description**: _Email alert for count of Azure resources_
+ - **Identity**: Select _System assigned managed identity_
+
+Select **Review + Create**, verify the summary is correct, and select **Create**. You're returned to the **Logs** page of your **Log Analytics workspace**.
+
+You receive an email notification to confirm you were added to the action group.
+
+### Assign role
+
+Assign the _Log Analytics Reader_ role to the system-assigned managed identity so that it has permissions to fire alerts that send email notifications.
+
+1. Select **Monitoring** > **Alerts** in the Log Analytics workspace.
+
+ Select **OK** if you're prompted that **Your unsaved edits will be discarded**.
+
+1. Select **Alert rules**.
+1. Select _demo-arg-alert-rule_.
+1. Select **Settings** > **Identity** > **System assigned**.
+
+ - **Status**: On
+ - **Object ID**: Shows the GUID for your Enterprise Application (service principal) in Microsoft Entra ID.
+ - **Permission**: Select **Azure role assignments**
+ - Verify the correct subscription is selected.
+ - Select **Add role assignment**
+ - **Scope**: _Subscription_
+ - **Subscription**: Your Azure subscription name
+ - **Role**: _Log Analytics Reader_
+1. Select **Save**.
+
+It takes a few minutes for the _Log Analytics Reader_ to display on the **Azure role assignments** page. Select **Refresh** to update the page.
+
+Use your browser's back button to return to the **Identity** and then select **Overview** to return to the alert rule. Select the link to your resource group named _demo-arg-alert-rg_.
+
+# [Azure Resource Graph and Log Analytics](#tab/arg-log-analytics)
+
+From the Log Analytics workspace, select **New alert rule**. The query from your Log Analytics workspace is copied to the alert rule. The **Create an alert rule** page has several tabs that you need to update.
++
+### Scope
+
+Verify that the scope is set to your Log Analytics workspace.
+
+If you need to change the scope, do the following steps.
+
+1. Go to the **Scope** tab and select **Select scope**.
+1. At the bottom of the **Selected resources** screen, delete the current scope.
+1. Expand the **demo-arg-alert-rg** from the list of resources and select **demo-arg-alert-workspace**.
+1. Select **Apply**.
+1. Select **Next: Condition**.
+
+### Condition
+
+The form has several fields to complete.
+
+- **Signal name**: Custom log search
+- **Search query**: Displays the query code.
+
+**Measurement**
+
+- **Measure**: Table rows
+- **Aggregation type**: Count
+- **Aggregation granularity**: 5 minutes
+
+**Alert logic**
+
+- **Operator**: Less than
+- **Threshold value**: 2
+- **Frequency of evaluation**: 5 minutes
+
+Select **Next: Actions**.
+
+### Actions
+
+Select **Create action group**.
+
+- **Subscription**: Select your Azure subscription.
+- **Resource group**: _demo-arg-alert-rg_
+- **Region**: Global
+- **Action group name**: _demo-arg-la-alert-action-group_
+- **Display name**: _demo-argla_ (limit is 12 characters)
+
+Select **Next: Notifications**.
+
+- **Notification type**: Select **Email/SMS message/Push/Voice**
+- **Name**: _email-alert-arg-la_
+- Select the **Email** checkbox and type your email address
+- Select **OK**
+
+Select **Review + Create**, verify the summary is correct, and select **Create**. You're returned to the **Actions** tab of the **Create an alert rule** page. The **Action group name** shows the action group you created.
+
+Select **Next: Details**.
+
+### Details
+
+Use the following information on the **Details** tab.
+
+ - **Subscription**: Select your Azure subscription
+ - **Resource group**: _demo-arg-alert-rg_
+ - **Severity**: Accept the default value **2 - Warning**
+ - **Alert rule name**: _demo-arg-la-alert-rule_
+ - **Alert rule description**: _Email alert for ARG-LA query of Azure virtual machine_
+ - **Identity**: Select _System assigned managed identity_
+
+Select **Review + Create**, verify the summary is correct, and select **Create**. You're returned to the **Logs** page of your **Log Analytics workspace**.
+
+You receive an email notification to confirm you were added to the action group.
+
+### Assign role
+
+Assign the _Log Analytics Reader_ role to the system-assigned managed identity so that it has permissions to fire alerts that send email notifications.
+
+1. Select **Monitoring** > **Alerts** in the Log Analytics workspace.
+
+ Select **OK** if you're prompted that **Your unsaved edits will be discarded**.
+
+1. Select **Alert rules**
+1. Select _demo-arg-la-alert-rule_
+1. Select **Settings** > **Identity** > **System assigned**
+
+ - **Status**: On
+ - **Object ID**: Shows the GUID for your Enterprise Application (service principal) in Microsoft Entra ID.
+ - **Permission**: Select **Azure role assignments**
+ - Verify the correct subscription is selected.
+ - Select **Add role assignment**
+ - **Scope**: _Subscription_
+ - **Subscription**: Your Azure subscription name
+ - **Role**: _Log Analytics Reader_
+1. Select **Save**.
+
+It takes a few minutes for the _Log Analytics Reader_ to display on the **Azure role assignments** page. Select **Refresh** to update the page.
+
+Use your browser's back button to return to the **Identity** and select **Overview** to return to the alert rule. Select the link to your resource group named _demo-arg-alert-rg_.
+++
+## Verify alerts
+
+# [Azure Resource Graph](#tab/azure-resource-graph)
+
+After the role is assigned to your alert rule, you begin to receive email alert messages. The rule was created to send alerts every five minutes, and it takes a few minutes to get the first alert.
+
+You can also view the alerts in the Azure portal.
+
+1. Go to the resource group _demo-arg-alert-rg_.
+1. Select _demo-arg-alert-workspace_ in your list of resources.
+1. Select **Monitoring** > **Alerts**.
+1. A list of alerts is displayed.
+
+ :::image type="content" source="./media/alerts-query-quickstart/alert-fired.png" alt-text="Screenshot of the Log Analytics workspace that shows list of alerts that fired.":::
++
+# [Azure Resource Graph and Log Analytics](#tab/arg-log-analytics)
+
+After the role is assigned to your alert rule, you begin to receive email alert messages. The rule was created to send alerts every five minutes, and it takes a few minutes to get the first alert.
+
+You can also view the alerts in the Azure portal.
+
+1. Go to the resource group _demo-arg-alert-rg_.
+1. Select your virtual machine.
+1. Select **Monitoring** > **Alerts**.
+1. A list of alerts is displayed.
+
+ :::image type="content" source="./media/alerts-query-quickstart/vm-alert-fired.png" alt-text="Screenshot of the virtual machine monitoring alerts that shows list of alerts that fired.":::
+
+> [!NOTE]
+> It might take 30 minutes for log information to become available to create alerts.
+++
+## How did we solve the problem?
+
+You created an Azure Resource Graph query and a Log Analytics workspace to monitor Azure resources. You also set up alerts to notify you about events and assigned a role to the system-assigned managed identity. After the alert was created, you received email alerts based on conditions in the alert rule.
+
+## Clean up resources
+
+If you want to keep the alert configuration but stop the alert from firing and sending email notifications, you can disable it. Go to your alert rule _demo-arg-alert-rule_ or _demo-arg-la-alert-rule_ and select **Disable**.
+
+If you don't need this alert or the resources you created in this example, delete the resource group with the following steps:
+
+1. Go to your resource group _demo-arg-alert-rg_.
+1. Select **Delete resource group**.
+1. Type the resource group name to confirm.
+1. Select **Delete**.
+
+## Related content
+
+For more information about the query language or how to explore resources, go to the following articles.
+
+- [Understanding the Azure Resource Graph query language](./concepts/query-language.md)
+- [Explore your Azure resources with Resource Graph](./concepts/explore-resources.md)
+- [Overview of Log Analytics in Azure Monitor](../../azure-monitor/logs/log-analytics-overview.md)
+- [Troubleshoot Azure Resource Graph alerts](./troubleshoot/alerts.md)
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/overview.md
Title: Overview of Azure Resource Graph description: Understand how the Azure Resource Graph service enables complex querying of resources at scale across subscriptions and tenants. Previously updated : 08/15/2023 Last updated : 10/31/2023 + # What is Azure Resource Graph? Azure Resource Graph is an Azure service designed to extend Azure Resource Management by
provide the following abilities:
- Query resources with complex filtering, grouping, and sorting by resource properties. - Explore resources iteratively based on governance requirements.-- Assess the impact of applying policies in a vast cloud environment.
+- Assess the effect of applying policies in a vast cloud environment.
- [Query changes made to resource properties](./how-to/get-resource-changes.md). In this documentation, you review each feature in detail.
With Azure Resource Graph, you can:
## How Resource Graph is kept current
-When an Azure resource is updated, Resource Graph is notified by Resource Manager of the change.
-Resource Graph then updates its database. Resource Graph also does a regular _full scan_. This scan
-ensures that Resource Graph data is current if there are missed notifications or when a resource is
-updated outside of Resource Manager.
+When an Azure resource is updated, Azure Resource Manager notifies Azure Resource Graph about the change. Azure Resource Graph then updates its database. Azure Resource Graph also does a regular _full scan_. This scan ensures that Azure Resource Graph data is current if there are missed notifications or when a resource is updated outside of Azure Resource Manager.
> [!NOTE] > Resource Graph uses a `GET` to the latest non-preview application programming interface (API) of each resource provider to gather
structured the same for each language. Learn how to enable Resource Graph with:
- [Azure PowerShell](./first-query-powershell.md#add-the-resource-graph-module) - [Python](./first-query-python.md#add-the-resource-graph-library)
+## Alerts integration with Log Analytics
+
+> [!NOTE]
+> Azure Resource Graph alerts integration with Log Analytics is in public preview.
+
+You can create alert rules by using Azure Resource Graph queries on their own, or by integrating Log Analytics with Azure Resource Graph queries through Azure Monitor. Both methods can be used to create alerts for Azure resources. For examples, go to [Quickstart: Create alerts with Azure Resource Graph and Log Analytics](./alerts-query-quickstart.md).
+ ## Next steps - Learn more about the [query language](./concepts/query-language.md).
governance Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/troubleshoot/alerts.md
+
+ Title: Troubleshoot Azure Resource Graph alerts
+description: Learn how to troubleshoot issues with Azure Resource Graph alerts integration with Log Analytics.
Last updated : 10/31/2023+++
+# Troubleshoot Azure Resource Graph alerts
+
+> [!NOTE]
+> Azure Resource Graph alerts integration with Log Analytics is in public preview.
+
+The following descriptions help you troubleshoot queries for Azure Resource Graph alerts that integrate with Log Analytics.
+
+## Azure Resource Graph operators
+
+Only the operators supported in Azure Resource Graph Explorer can be used as part of this integration with Log Analytics for alerts. For more information, go to [supported operators](../concepts/query-language.md#supported-kql-language-elements).
+
+## Pagination
+
+Azure Resource Graph supports pagination in its dedicated APIs. However, because of the way Log Analytics interacts with Azure Resource Graph, pagination isn't supported in this integration, which is why only 1,000 results are returned.
+
+- Cross queries between Azure Resource Graph and Log Analytics don't support pagination and only show the first 1,000 results.
+- You must set a limit of 400 when you write a query with the [mv-expand](../concepts/query-language.md#supported-tabulartop-level-operators) operator, as shown in the sketch after this list.
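+
+For example, a minimal sketch of the `limit` clause; the resource type and `securityRules` property are illustrative only:
+
+```kusto
+arg("").Resources
+| where type == 'microsoft.network/networksecuritygroups'
+| mv-expand rule = properties.securityRules limit 400
+| project name, ruleName = tostring(rule.name)
+```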
++
+## Managed identities
+
+The managed identity for your alert must have the role [Log Analytics Contributor](../../../role-based-access-control/built-in-roles.md#log-analytics-contributor) or [Log Analytics Reader](../../../role-based-access-control/built-in-roles.md#log-analytics-reader). The role provides the permissions to get monitoring information.
+
+When you set up an alert, the results can be different from the results after the alert fires. The reason is that a fired alert runs based on the managed identity, but when you manually test an alert, it runs based on the user's identity.
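+
+If you prefer to script the role assignment, a minimal Azure CLI sketch looks like the following; the object ID and subscription ID are placeholders for your alert rule's system-assigned identity and your subscription:
+
+```azurecli
+az role assignment create \
+    --assignee-object-id <alert-rule-identity-object-id> \
+    --assignee-principal-type ServicePrincipal \
+    --role "Log Analytics Reader" \
+    --scope "/subscriptions/<subscription-id>"
+```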
+
+## Table names
+
+Azure Resource Graph table names use Pascal case, where the first letter of each word is capitalized, like `Resources` or `ResourceContainers`. You can also use all lowercase, like `resources` or `resourcecontainers`.
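+
+For example, the following query works the same whether you spell the table `ResourceContainers` or `resourcecontainers`:
+
+```kusto
+// The all-lowercase form 'resourcecontainers' is also accepted here.
+arg("").ResourceContainers
+| where type == 'microsoft.resources/subscriptions'
+| count
+```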
hdinsight-aks Sink Sql Server Table Using Flink Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/sink-sql-server-table-using-flink-sql.md
The SQLServer CDC connector is a Flink Source connector, which reads database sn
We have already covered this section in detail on how to use [secure shell](./flink-web-ssh-on-portal-to-flink-sql.md) with Flink.
-## Prepare table and enable cdc feature on SQL Server sqldb
+### Prepare table and enable CDC feature on SQL Server SQLDB
Let's prepare a table and enable CDC. You can refer to the detailed steps listed in the [SQL documentation](/sql/relational-databases/track-changes/enable-and-disable-change-data-capture-sql-server?)
GO
``` **Verify that the user has access to the CDC table**+ ``` SQL USE inventory GO
VALUES ('21-FEB-2016', 1003, 1, 107);
EXEC sys.sp_cdc_enable_table @source_schema = 'dbo', @source_name = 'orders', @role_name = NULL, @supports_net_changes = 0; GO ```
-## Download SQLServer CDC connector and its dependencies on SSH
-
-**WSL to ubuntu on local to check all dependencies related *flink-sql-connector-sqlserver-cdc* jar**
+### Download SQLServer CDC connector on SSH
```
-myvm@MININT-481C9TJ:/mnt/c/Work/99_tools/apache-maven-3.9.0/bin$ vim pom.xml
-
-<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
- <modelVersion>4.0.0</modelVersion>
- <groupId>com.dep.download</groupId>
- <artifactId>dep-download</artifactId>
- <version>1.0-SNAPSHOT</version>
-<!-- https://mvnrepository.com/artifact/com.ververica/flink-sql-connector-sqlserver-cdc -->
- <dependency>
- <groupId>com.ververica</groupId>
- <artifactId>flink-sql-connector-sqlserver-cdc</artifactId>
- <version>2.3.0</version>
- </dependency>
-</project>
-
-myvm@MININT-481C9TJ:/mnt/c/Work/99_tools/apache-maven-3.9.0/bin$ mkdir target
-
-myvm@MININT-481C9TJ:/mnt/c/Work/99_tools/apache-maven-3.9.0/bin$ /mnt/c/Work/99_tools/apache-maven-3.9.0/bin/mvn -DoutputDirectory=target -f pom.xml dependency:copy-dependencies
-[INFO] Scanning for projects...
-
-myvm@MININT-481C9TJ:/mnt/c/Work/99_tools/apache-maven-3.9.0/bin$ cd target
-myvm@MININT-481C9TJ:/mnt/c/Work/99_tools/apache-maven-3.9.0/bin/target$ ll
-total 19436
-drwxrwxrwx 1 msdata msdata 4096 Feb 9 08:39 ./
-drwxrwxrwx 1 msdata msdata 4096 Feb 9 08:37 ../
--rwxrwxrwx 1 msdata msdata 85388 Feb 9 08:39 awaitility-4.0.1.jar*--rwxrwxrwx 1 msdata msdata 3085931 Feb 9 08:39 flink-shaded-guava-30.1.1-jre-16.0.jar*--rwxrwxrwx 1 msdata msdata 16556459 Feb 9 08:39 flink-sql-connector-sqlserver-cdc-2.3.0.jar*--rwxrwxrwx 1 msdata msdata 123103 Feb 9 08:39 hamcrest-2.1.jar*--rwxrwxrwx 1 msdata msdata 40502 Feb 9 08:39 slf4j-api-1.7.15.jar*
-```
-**Let us download jars to SSH**
-```sql
-wget https://repo1.maven.org/maven2/com/ververica/flink-connector-sqlserver-cdc/2.4.0/flink-connector-sqlserver-cdc-2.4.0.jar
-wget https://repo1.maven.org/maven2/org/apache/flink/flink-shaded-guava/30.1.1-jre-16.0/flink-shaded-guava-30.1.1-jre-16.0.jar
-wget https://repo1.maven.org/maven2/org/awaitility/awaitility/4.0.1/awaitility-4.0.1.jar
-wget https://repo1.maven.org/maven2/org/hamcrest/hamcrest/2.1/hamcrest-2.1.jar
-wget https://repo1.maven.org/maven2/net/java/loci/jsr308-all/1.1.2/jsr308-all-1.1.2.jar
-
-msdata@pod-0 [ ~/jar ]$ ls -l
-total 6988
--rw-r-- 1 msdata msdata 85388 Sep 6 2019 awaitility-4.0.1.jar--rw-r-- 1 msdata msdata 107097 Jun 25 03:47 flink-connector-sqlserver-cdc-2.4.0.jar--rw-r-- 1 msdata msdata 3085931 Sep 27 2022 flink-shaded-guava-30.1.1-jre-16.0.jar--rw-r-- 1 msdata msdata 123103 Dec 20 2018 hamcrest-2.1.jar--rw-r-- 1 msdata msdata 3742993 Mar 30 2011 jsr308-all-1.1.2.jar
+wget https://repo1.maven.org/maven2/com/ververica/flink-sql-connector-sqlserver-cdc/2.4.1/flink-sql-connector-sqlserver-cdc-2.4.1.jar
``` ### Add jar into sql-client.sh and connect to Flink SQL Client ```sql
-msdata@pod-0 [ ~ ]$ bin/sql-client.sh -j jar/flink-sql-connector-sqlserver-cdc-2.4.0.jar -j jar/flink-shaded-guava-30.1.1-jre-16.0.jar -j jar/hamcrest-2.1.jar -j jar/awaitility-4.0.1.jar -j jar/jsr308-all-1.1.2.jar
+bin/sql-client.sh -j flink-sql-connector-sqlserver-cdc-2.4.1.jar
```
-## Create SQLServer CDC table
+### Create SQLServer CDC table
``` sql SET 'sql-client.execution.result-mode' = 'tableau';
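For reference, a source table for the SQLServer CDC connector might look like the following sketch. The columns mirror the sample `orders` table used earlier; the connection values are placeholders, and the exact option names should be verified against the connector version you downloaded:

```sql
CREATE TABLE orders (
    order_id INT,
    order_date DATE,
    purchaser INT,
    quantity INT,
    product_id INT,
    PRIMARY KEY (order_id) NOT ENFORCED
) WITH (
    'connector' = 'sqlserver-cdc',
    'hostname' = '<sql-server-hostname>',
    'port' = '1433',
    'username' = '<username>',
    'password' = '<password>',
    'database-name' = 'inventory',
    'table-name' = 'dbo.orders'
);
```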
select * from orders;
:::image type="content" source="./media/sink-sql-server-table-using-flink-sql/insert-sql-table.png" alt-text="Screenshot showing making changes on SQL Table.":::
-## Validation
+### Validation
Monitor the table on Flink SQL
machine-learning Concept V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-v2.md
Title: 'CLI & SDK v2'
+ Title: 'Azure Machine Learning CLI & SDK v2'
-description: This article explains the difference between the v1 and v2 versions of Azure Machine Learning v1 and v2.
+description: This article explains the difference between the v1 and v2 versions of Azure Machine Learning.
Last updated 11/04/2022
-#Customer intent: As a data scientist, I want to know whether to use v1 or v2 of CLI, SDK.
+#Customer intent: As a data scientist, I want to know whether to use v1 or v2 of CLI and SDK.
-# What is Azure Machine Learning CLI & Python SDK v2?
+# What is Azure Machine Learning CLI and Python SDK v2?
[!INCLUDE [dev v2](includes/machine-learning-dev-v2.md)]
-Azure Machine Learning CLI v2 and Azure Machine Learning Python SDK v2 introduce a consistency of features and terminology across the interfaces. In order to create this consistency, the syntax of commands differs, in some cases significantly, from the first versions (v1).
+Azure Machine Learning CLI v2 (CLI v2) and Azure Machine Learning Python SDK v2 (SDK v2) introduce a consistency of features and terminology across the interfaces. To create this consistency, the syntax of commands differs, in some cases significantly, from the first versions (v1).
-There are no differences in functionality between SDK v2 and CLI v2. The command line based CLI may be more convenient in CI/CD MLOps type of scenarios, while the SDK may be more convenient for development.
+There are no differences in functionality between CLI v2 and SDK v2. The command line-based CLI might be more convenient in CI/CD MLOps types of scenarios, while the SDK might be more convenient for development.
## Azure Machine Learning CLI v2
-The Azure Machine Learning CLI v2 (CLI v2) is the latest extension for the [Azure CLI](/cli/azure/what-is-azure-cli). The CLI v2 provides commands in the format *az ml __\<noun\> \<verb\> \<options\>__* to create and maintain Azure Machine Learning assets and workflows. The assets or workflows themselves are defined using a YAML file. The YAML file defines the configuration of the asset or workflow - what is it, where should it run, and so on.
+Azure Machine Learning CLI v2 is the latest extension for the [Azure CLI](/cli/azure/what-is-azure-cli). CLI v2 provides commands in the format *az ml __\<noun\> \<verb\> \<options\>__* to create and maintain Machine Learning assets and workflows. The assets or workflows themselves are defined by using a YAML file. The YAML file defines the configuration of the asset or workflow. For example, what is it, and where should it run?
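+
+For instance, a training job could be submitted with a command like the following (a sketch; the file name, resource group, and workspace are placeholders):
+
+```azurecli
+az ml job create --file job.yml --resource-group my-resource-group --workspace-name my-workspace
+```
+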
A few examples of CLI v2 commands:
A few examples of CLI v2 commands:
### Use cases for CLI v2
-The CLI v2 is useful in the following scenarios:
+CLI v2 is useful in the following scenarios:
-* On board to Azure Machine Learning without the need to learn a specific programming language
+* Onboard to Machine Learning without the need to learn a specific programming language.
- The YAML file defines the configuration of the asset or workflow - what is it, where should it run, and so on. Any custom logic/IP used, say data preparation, model training, model scoring can remain in script files, which are referred to in the YAML, but not part of the YAML itself. Azure Machine Learning supports script files in python, R, Java, Julia or C#. All you need to learn is YAML format and command lines to use Azure Machine Learning. You can stick with script files of your choice.
+ The YAML file defines the configuration of the asset or workflow, such as what is it and where should it run? Any custom logic or IP used, say data preparation, model training, and model scoring, can remain in script files. These files are referred to in the YAML but aren't part of the YAML itself. Machine Learning supports script files in Python, R, Java, Julia, or C#. All you need to learn is YAML format and command lines to use Machine Learning. You can stick with script files of your choice.
-* Ease of deployment and automation
+* Take advantage of ease of deployment and automation.
- The use of command-line for execution makes deployment and automation simpler, since workflows can be invoked from any offering/platform, which allows users to call the command line.
+ The use of command line for execution makes deployment and automation simpler because you can invoke workflows from any offering or platform, which allows users to call the command line.
-* Managed inference deployments
+* Use managed inference deployments.
- Azure Machine Learning offers [endpoints](concept-endpoints.md) to streamline model deployments for both real-time and batch inference deployments. This functionality is available only via CLI v2 and SDK v2.
+ Machine Learning offers [endpoints](concept-endpoints.md) to streamline model deployments for both real-time and batch inference deployments. This functionality is available only via CLI v2 and SDK v2.
-* Reusable components in pipelines
-
- Azure Machine Learning introduces [components](concept-component.md) for managing and reusing common logic across pipelines. This functionality is available only via CLI v2 and SDK v2.
+* Reuse components in pipelines.
+ Machine Learning introduces [components](concept-component.md) for managing and reusing common logic across pipelines. This functionality is available only via CLI v2 and SDK v2.
## Azure Machine Learning Python SDK v2 Azure Machine Learning Python SDK v2 is an updated Python SDK package, which allows users to:
-* Submit training jobs
-* Manage data, models, environments
-* Perform managed inferencing (real time and batch)
-* Stitch together multiple tasks and production workflows using Azure Machine Learning pipelines
+* Submit training jobs.
+* Manage data, models, and environments.
+* Perform managed inferencing (real time and batch).
+* Stitch together multiple tasks and production workflows by using Machine Learning pipelines.
-The SDK v2 is on par with CLI v2 functionality and is consistent in how assets (nouns) and actions (verbs) are used between SDK and CLI. For example, to list an asset, the `list` action can be used in both CLI and SDK. The same `list` action can be used to list a compute, model, environment, and so on.
+SDK v2 is on par with CLI v2 functionality and is consistent in how assets (nouns) and actions (verbs) are used between SDK and CLI. For example, to list an asset, you can use the `list` action in both SDK and CLI. You can use the same `list` action to list a compute, model, environment, and so on.
### Use cases for SDK v2
-The SDK v2 is useful in the following scenarios:
+SDK v2 is useful in the following scenarios:
+
+* Use Python functions to build a single step or a complex workflow.
-* Use Python functions to build a single step or a complex workflow
+ SDK v2 allows you to build a single command or a chain of commands like Python functions. The command has a name and parameters, expects input, and returns output.
- SDK v2 allows you to build a single command or a chain of commands like Python functions - the command has a name, parameters, expects input, and returns output.
+* Move from simple to complex concepts incrementally.
-* Move from simple to complex concepts incrementally
+ SDK v2 allows you to:
- SDK v2 allows you to:
* Construct a single command.
- * Add a hyperparameter sweep on top of that command,
- * Add the command with various others into a pipeline one after the other.
+ * Add a hyperparameter sweep on top of that command.
+ * Add the command with various others into a pipeline one after the other.
- This construction is useful, given the iterative nature of machine learning.
+ This construction is useful because of the iterative nature of machine learning.
-* Reusable components in pipelines
+* Reuse components in pipelines.
- Azure Machine Learning introduces [components](concept-component.md) for managing and reusing common logic across pipelines. This functionality is available only via CLI v2 and SDK v2.
+ Machine Learning introduces [components](concept-component.md) for managing and reusing common logic across pipelines. This functionality is available only via CLI v2 and SDK v2.
-* Managed inferencing
+* Use managed inferencing.
- Azure Machine Learning offers [endpoints](concept-endpoints.md) to streamline model deployments for both real-time and batch inference deployments. This functionality is available only via CLI v2 and SDK v2.
+ Machine Learning offers [endpoints](concept-endpoints.md) to streamline model deployments for both real-time and batch inference deployments. This functionality is available only via CLI v2 and SDK v2.
## Should I use v1 or v2?
+Here are some considerations to help you decide which version to use.
+ ### CLI v2
-The Azure Machine Learning CLI v1 has been deprecated. We recommend you to use CLI v2 if:
+Azure Machine Learning CLI v1 has been deprecated. We recommend that you use CLI v2 if:
-* You were a CLI v1 user
-* You want to use new features like - reusable components, managed inferencing
-* You don't want to use a Python SDK - CLI v2 allows you to use YAML with scripts in python, R, Java, Julia or C#
-* You were a user of R SDK previously - Azure Machine Learning won't support an SDK in `R`. However, the CLI v2 has support for `R` scripts.
-* You want to use command line based automation/deployments
+* You were a CLI v1 user.
+* You want to use new features like reusable components and managed inferencing.
+* You don't want to use a Python SDK. CLI v2 allows you to use YAML with scripts in Python, R, Java, Julia, or C#.
+* You were a user of R SDK previously. Machine Learning won't support an SDK in `R`. However, CLI v2 has support for `R` scripts.
+* You want to use command line-based automation or deployments.
* You don't need Spark Jobs. This feature is currently available in preview in CLI v2. ### SDK v2
-The Azure Machine Learning Python SDK v1 doesn't have a planned deprecation date. If you have significant investments in Python SDK v1 and don't need any new features offered by SDK v2, you can continue to use SDK v1. However, you should consider using SDK v2 if:
+Azure Machine Learning Python SDK v1 doesn't have a planned deprecation date. If you have significant investments in Python SDK v1 and don't need any new features offered by SDK v2, you can continue to use SDK v1. However, you should consider using SDK v2 if:
-* You want to use new features like - reusable components, managed inferencing
-* You're starting a new workflow or pipeline - all new features and future investments will be introduced in v2
-* You want to take advantage of the improved usability of the Python SDK v2 - ability to compose jobs and pipelines using Python functions, easy evolution from simple to complex tasks etc.
+* You want to use new features like reusable components and managed inferencing.
+* You're starting a new workflow or pipeline. All new features and future investments will be introduced in v2.
+* You want to take advantage of the improved usability of Python SDK v2, such as the ability to compose jobs and pipelines by using Python functions, with easy evolution from simple to complex tasks.
## Next steps
-* [How to upgrade from v1 to v2](how-to-migrate-from-v1.md)
-* Get started with CLI v2
+* [Upgrade from v1 to v2](how-to-migrate-from-v1.md)
+* Get started with CLI v2:
* [Install and set up CLI (v2)](how-to-configure-cli.md)
- * [Train models with the CLI (v2)](how-to-train-model.md)
+ * [Train models with CLI (v2)](how-to-train-model.md)
* [Deploy and score models with online endpoints](how-to-deploy-online-endpoints.md)
-* Get started with SDK v2
+* Get started with SDK v2:
* [Install and set up SDK (v2)](https://aka.ms/sdk-v2-install)
- * [Train models with the Azure Machine Learning Python SDK v2](how-to-train-model.md)
- * [Tutorial: Create production ML pipelines with Python SDK v2 in a Jupyter notebook](tutorial-pipeline-python-sdk.md)
+ * [Train models with Azure Machine Learning Python SDK v2](how-to-train-model.md)
+ * [Tutorial: Create production Machine Learning pipelines with Python SDK v2 in a Jupyter notebook](tutorial-pipeline-python-sdk.md)
machine-learning How To Network Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-network-security-overview.md
This article is part of a series on securing an Azure Machine Learning workflow.
This article is part of a series on securing an Azure Machine Learning workflow. See the other articles in this series: :::moniker range="azureml-api-2"
-* [Use managed networks](how-to-managed-network.md) (preview)
+* [Use managed networks](how-to-managed-network.md)
* [Secure the workspace resources](how-to-secure-workspace-vnet.md) * [Secure machine learning registries](how-to-registry-network-isolation.md) * [Secure the training environment](how-to-secure-training-vnet.md)
machine-learning Monitor Azure Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/monitor-azure-machine-learning.md
Start with the article [Monitoring Azure resources with Azure Monitor](../azure-
The following sections build on this article by describing the specific data gathered for Azure Machine Learning. These sections also provide examples for configuring data collection and analyzing this data with Azure tools. > [!TIP]
-> To understand costs associated with Azure Monitor, see [Usage and estimated costs](../azure-monitor/usage-estimated-costs.md). To understand the time it takes for your data to appear in Azure Monitor, see [Log data ingestion time](../azure-monitor/logs/data-ingestion-time.md).
+> To understand costs associated with Azure Monitor, see [Azure Monitor cost and usage](../azure-monitor/cost-usage.md). To understand the time it takes for your data to appear in Azure Monitor, see [Log data ingestion time](../azure-monitor/logs/data-ingestion-time.md).
## Monitoring data from Azure Machine Learning
machine-learning Quickstart Create Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/quickstart-create-resources.md
Last updated 08/17/2023 adobe-target: true
-content_well_notification:
- - AI-contribution
#Customer intent: As a data scientist, I want to create a workspace so that I can start to use Azure Machine Learning.
managed-grafana How To Connect To Data Source Privately https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-connect-to-data-source-privately.md
Once you've set up the private link service, you can create a managed private en
> The *Private link service url* field is optional unless you need TLS. If you specify a URL, Managed Grafana ensures that the host IP address for that URL matches the private endpoint's IP address. For security reasons, Azure Managed Grafana maintains an allowlist of URLs. 1. Click **Create** to add the managed private endpoint resource.
-1. Contact the owner of target Azure Monitor workspace to approve the connection request.
+1. Contact the owner of the target private link service to approve the connection request.
1. After the connection request is approved, click **Refresh** to see the connection status and private IP address. > [!NOTE]
managed-grafana Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/overview.md
Azure Managed Grafana is available in the two service tiers presented below.
| Essential (preview) | Provides the core Grafana functionalities for use with Azure data sources. Since it doesn't provide an SLA guarantee, this tier should be used only for non-production environments. | | Standard | The default tier, offering better performance, more features, and an SLA. It's recommended for most situations. |
-> [!NOTE]
-> The Essential plan (preview) is currently being rolled out and will be available in all cloud regions on October 30, 2023.
- The following table lists the main features supported in each tier: | Feature | Essential (preview) | Standard |
nat-gateway Nat Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/nat-metrics.md
Azure NAT Gateway provides the following diagnostic capabilities:
## Metrics overview
-NAT gateway resources provide the following multi-dimensional metrics in Azure Monitor:
+NAT gateway provides the following multi-dimensional metrics in Azure Monitor:
| Metric | Description | Recommended aggregation | Dimensions | ||||| | Bytes | Bytes processed inbound and outbound | Sum | Direction (In; Out), Protocol (6 TCP; 17 UDP) | | Packets | Packets processed inbound and outbound | Sum | Direction (In; Out), Protocol (6 TCP; 17 UDP) |
-| Dropped packets | Packets dropped by the NAT gateway | Sum | / |
-| SNAT Connection Count | Number of new SNAT connections over a given interval of time | Sum | Connection State (Attempted, Established, Failed, Closed, Timed Out), Protocol (6 TCP; 17 UDP) |
-| Total SNAT connection count | Total number of active SNAT connections | Sum | Protocol (6 TCP; 17 UDP) |
-| Datapath availability | Availability of the data path of the NAT gateway. Used to determine whether the NAT gateway endpoints are available for outbound traffic flow. | Avg | Availability (0, 100) |
+| Dropped Packets | Packets dropped by the NAT gateway | Sum | / |
+| SNAT Connection Count | Number of new SNAT connections over a given interval of time | Sum | Connection State (Attempted, Failed), Protocol (6 TCP; 17 UDP) |
+| Total SNAT Connection Count | Total number of active SNAT connections | Sum | Protocol (6 TCP; 17 UDP) |
+| Datapath Availability | Availability of the data path of the NAT gateway. Used to determine whether the NAT gateway endpoints are available for outbound traffic flow. | Avg | Availability (0, 100) |
+
+>[!NOTE]
+> Count aggregation is not recommended for any of the NAT gateway metrics. Count aggregation adds up the number of metric values and not the metric values themselves. Use Sum aggregation instead to get the best representation of data values for connection count, bytes, and packets metrics.
+>
+> Use the Average aggregation for the best representation of health data for the datapath availability metric.
+>
+> See [aggregation types](/azure/azure-monitor/essentials/metrics-aggregation-explained#aggregation-types) for more information.
## Where to find my NAT gateway metrics
To view any one of your metrics for a given NAT gateway resource:
3. In the **Aggregation** drop-down menu, select the recommended aggregation listed in the [metrics overview](#metrics-overview) table.
- :::image type="content" source="./media/nat-metrics/nat-metrics-1.png" alt-text="Screenshot of the metrics setup configuration in NAT gateway resource.":::
+ :::image type="content" source="./media/nat-metrics/nat-metrics-1.png" alt-text="Screenshot of the metrics set up in NAT gateway resource.":::
4. To adjust the time frame over which the chosen metric is presented on the metrics graph or to adjust how frequently the chosen metric is measured, select the **Time** window in the top right corner of the metrics page and make your adjustments.
To view any one of your metrics for a given NAT gateway resource:
The **Bytes** metric shows you the amount of data going outbound through NAT gateway and returning inbound in response to an outbound connection.
-Use this metric for the following measurements:
+Use this metric to:
-- Assess the amount of data being processed through NAT gateway to connect outbound or return inbound.
+- View the amount of data being processed through NAT gateway to connect outbound or return inbound.
-To view the amount of data sent in one or both directions when connecting outbound through NAT gateway:
+To view the amount of data passing through NAT gateway:
1. Select the NAT gateway resource you would like to monitor.
To view the amount of data sent in one or both directions when connecting outbou
### Packets
-The packets metric shows you the number of data packets transmitted through the NAT gateway.
+The packets metric shows you the number of data packets passing through NAT gateway.
Use this metric to: -- To confirm that traffic is being sent through your NAT gateway to go outbound to the internet or return inbound.
+- Verify that traffic is passing outbound or returning inbound through NAT gateway.
-- To assess the amount of traffic being directed through your NAT gateway resource outbound or inbound (when in response to an outbound directed flow).
+- View the amount of traffic going outbound through NAT gateway or returning inbound.
-To view the number of packets sent in one or both directions when connecting outbound through NAT gateway, follow the same steps in the [Bytes](#bytes) section.
+To view the number of packets sent in one or both directions through NAT gateway, follow the same steps in the [Bytes](#bytes) section.
### Dropped packets
-The dropped packets metric shows you the number of data packets dropped by NAT gateway when directing traffic outbound or inbound in response to an outbound connection.
+The dropped packets metric shows you the number of data packets dropped by NAT gateway when traffic goes outbound or returns inbound in response to an outbound connection.
Use this metric to: -- Assess whether or not you're nearing or possibly experiencing SNAT exhaustion with a given NAT gateway resource. Check to see if periods of dropped packets coincide with periods of failed SNAT connections with the [SNAT Connection Count](#snat-connection-count) metric.
+- Check if periods of dropped packets coincide with periods of failed SNAT connections with the [SNAT Connection Count](#snat-connection-count) metric.
-- Help assess if you're experiencing a pattern of failed outbound connections.
+- Help determine if you're experiencing a pattern of failed outbound connections or SNAT port exhaustion.
-Reasons for why you may see dropped packets:
+Possible reasons for dropped packets:
-- If you're seeing a high rate of dropped packets, it may be due to outbound connectivity failure. Connectivity failure may happen for various reasons. See the NAT gateway [troubleshooting guide](./troubleshoot-nat.md) to help you further diagnose.
+- Outbound connectivity failure can cause packets to drop. Connectivity failure can happen for various reasons. See the [NAT gateway connectivity troubleshooting guide](/azure/nat-gateway/troubleshoot-nat-connectivity) to help you further diagnose.
### SNAT connection count
-The SNAT connection count metric shows you the number of new SNAT connections within a specified time frame. This metric can be broken out to view different connection states including: attempted, established, failed, closed, and timed out connections. A failed connection volume greater than zero may indicate SNAT port exhaustion.
+The SNAT connection count metric shows you the number of new SNAT connections within a specified time frame. This metric can be filtered by **Attempted** and **Failed** connection states. A failed connection volume greater than zero can indicate SNAT port exhaustion.
Use this metric to: - Evaluate the health of your outbound connections. -- Assess whether or not you're nearing or possibly experiencing SNAT port exhaustion.
+- Help diagnose if your NAT gateway is experiencing SNAT port exhaustion.
-- Evaluate whether your NAT gateway resource should be scaled out further by adding more public IPs. --- Assess if you're experiencing a pattern of failed outbound connections.
+- Determine if you're experiencing a pattern of failed outbound connections.
To view the connection state of your connections:
### Total SNAT connection count
-The **Total SNAT connection count** metric shows you the total number of active SNAT connections over a period of time.
+The **Total SNAT connection count** metric shows you the total number of active SNAT connections passing through NAT gateway.
You can use this metric to: -- Assess if you're nearing the connection limit of your NAT gateway resource.
+- Evaluate the volume of connections passing through NAT gateway.
+
+- Determine if you're nearing the connection limit of NAT gateway.
- Help assess if you're experiencing a pattern of failed outbound connections.
-Reasons for why you may see failed connections:
+Possible reasons for failed connections:
-- If you're seeing a pattern of failed connections for your NAT gateway resource, there could be multiple possible reasons. See the NAT gateway [troubleshooting guide](./troubleshoot-nat.md) to help you further diagnose.
+- A pattern of failed connections can happen for various reasons. See the [NAT gateway connectivity troubleshooting guide](/azure/nat-gateway/troubleshoot-nat-connectivity) to help you further diagnose.
+
+>[!NOTE]
+> When NAT gateway is attached to a subnet and public IP address, the Azure platform verifies NAT gateway is healthy by conducting health checks. These health checks may appear in NAT gateway's SNAT connection metrics, but are negligible and don't impact NAT gateway's ability to connect outbound.
### Datapath availability
-The datapath availability metric measures the status of the NAT gateway resource over time. This metric informs on whether or not NAT gateway is available for directing outbound traffic to the internet. This metric is a reflection of the health of the Azure infrastructure.
+The datapath availability metric measures the health of the NAT gateway resource over time. This metric indicates if NAT gateway is available for directing outbound traffic to the internet. This metric is a reflection of the health of the Azure infrastructure.
You can use this metric to: -- Monitor the availability of your NAT gateway resource.
+- Monitor the availability of NAT gateway.
- Investigate the platform where your NAT gateway is deployed and determine if it's healthy. - Isolate whether an event is related to your NAT gateway or to the underlying data plane.
-Reasons for why you may see a drop in data path availability include:
+Possible reasons for a drop in data path availability include:
- An infrastructure outage has occurred. -- There aren't healthy VMs available in your NAT gateway configured subnet. For more information, see the NAT gateway [troubleshooting guide](./troubleshoot-nat.md).
+- There aren't healthy VMs available in your NAT gateway configured subnet. For more information, see the [NAT gateway connectivity troubleshooting guide](/azure/nat-gateway/troubleshoot-nat-connectivity).
## Alerts
-Alerts can be configured in Azure Monitor for each of the preceding metrics. These alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address potential issues with your NAT gateway resource.
+Alerts can be configured in Azure Monitor for all NAT gateway metrics. These alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address potential issues with NAT gateway.
For more information about how metric alerts work, see [Azure Monitor Metric Alerts](../azure-monitor/alerts/alerts-metric-overview.md). The following guidance describes how to configure some common and recommended types of alerts for your NAT gateway.
-### Alerts for datapath availability droppage
+### Alerts for datapath availability degradation
-If the datapath of your NAT gateway resource begins to experience drops in availability, you can set up an alert to be fired when it hits a specific threshold in availability.
+Set up an alert on datapath availability to help you detect issues with the health of NAT gateway.
-The recommended guidance is to alert on NAT gateway's datapath availability when it drops below 90% over a 15 minute period. This configuration is indicative of a NAT gateway resource being in a degraded state.
+The recommended guidance is to alert on NAT gateway's datapath availability when it drops below 90% over a 15-minute period. This configuration is indicative of a NAT gateway resource being in a degraded state.
To set up a datapath availability alert, follow these steps:
5. From the **Aggregation type** drop-down menu, select **Average**.
-6. In the **Threshold value** box, enter **90%** as the value that the datapath availability must drop below before an alert is fired.
+6. In the **Threshold value** box, enter **90%**.
7. From the **Unit** drop-down menu, select **Count**.
Setting the aggregation granularity to less than 5 minutes may trigger false pos
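If you manage alerts as code, a rule equivalent to the portal steps above can be sketched with Azure PowerShell. This is a hedged example, not the article's own sample: the metric name `DatapathAvailability` and all resource identifiers are assumptions to confirm in your subscription.

```powershell
# Minimal sketch: alert when average datapath availability drops below 90%
# over a 15-minute window, evaluated every 5 minutes. Resource names and the
# metric name 'DatapathAvailability' are assumptions.
$natGatewayId = '/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Network/natGateways/myNATgateway'

$condition = New-AzMetricAlertRuleV2Criteria -MetricName 'DatapathAvailability' `
    -TimeAggregation Average `
    -Operator LessThan `
    -Threshold 90

Add-AzMetricAlertRuleV2 -Name 'nat-datapath-availability-alert' `
    -ResourceGroupName 'myResourceGroup' `
    -TargetResourceId $natGatewayId `
    -WindowSize 00:15:00 `
    -Frequency 00:05:00 `
    -Severity 1 `
    -Condition $condition
```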
### Alerts for SNAT port exhaustion
-Use the **SNAT connection count** metric and alerts to help determine if you're experiencing SNAT port exhaustion. A failed connection volume greater than zero may indicate SNAT port exhaustion. You may need to investigate further to determine the root cause of these failures.
+Set up an alert on the **SNAT connection count** metric to notify you of connection failures on your NAT gateway. A failed connection volume greater than zero can indicate that either you have reached the connection limit on your NAT gateway or that you have hit SNAT port exhaustion. Investigate further to determine the root cause of these failures.
To create the alert, use the following steps:
11. Select **Create** to create the alert rule. >[!NOTE]
->SNAT port exhaustion on your NAT gateway resource is uncommon. If you see SNAT port exhaustion, your NAT gateway's idle timeout timer may be holding on to SNAT ports too long or your may need to scale with additional public IPs. To troubleshoot these kinds of issues, refer to the [NAT gateway connectivity troubleshooting guide](./troubleshoot-nat-connectivity.md#snat-exhaustion-due-to-nat-gateway-configuration).
+>SNAT port exhaustion on your NAT gateway resource is uncommon. If you see SNAT port exhaustion, check if NAT gateway's idle timeout timer is set higher than the default of 4 minutes. A long idle timeout setting can cause SNAT ports to be held down for longer, which exhausts SNAT port inventory sooner. You can also scale your NAT gateway with additional public IPs to increase NAT gateway's overall SNAT port inventory. To troubleshoot these kinds of issues, refer to the [NAT gateway connectivity troubleshooting guide](/azure/nat-gateway/troubleshoot-nat-connectivity#snat-exhaustion-due-to-nat-gateway-configuration).
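As with the datapath availability alert, this rule can also be sketched in Azure PowerShell. The metric name `SNATConnectionCount` and the `ConnectionState` dimension with a `Failed` value are assumptions; verify both against the metric definitions in your subscription before relying on this.

```powershell
# Minimal sketch: alert when any SNAT connections in the 'Failed' state are
# observed in a 5-minute window. Metric and dimension names are assumptions.
$natGatewayId = '/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Network/natGateways/myNATgateway'

$failedState = New-AzMetricAlertRuleV2DimensionSelection -DimensionName 'ConnectionState' -ValuesToInclude 'Failed'

$condition = New-AzMetricAlertRuleV2Criteria -MetricName 'SNATConnectionCount' `
    -DimensionSelection $failedState `
    -TimeAggregation Total `
    -Operator GreaterThan `
    -Threshold 0

Add-AzMetricAlertRuleV2 -Name 'nat-failed-snat-connections-alert' `
    -ResourceGroupName 'myResourceGroup' `
    -TargetResourceId $natGatewayId `
    -WindowSize 00:05:00 `
    -Frequency 00:05:00 `
    -Severity 2 `
    -Condition $condition
```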
## Network Insights
-[Azure Monitor Network Insights](../network-watcher/network-insights-overview.md) allows you to visualize your Azure infrastructure setup and to review all metrics for your NAT gateway resource from a pre-configured metrics dashboard. These visual tools help you diagnose and troubleshoot any issues with your NAT gateway resource.
+[Azure Monitor Network Insights](../network-watcher/network-insights-overview.md) allows you to visualize your Azure infrastructure setup and to review all metrics for your NAT gateway resource from a preconfigured metrics dashboard. These visual tools help you diagnose and troubleshoot any issues with your NAT gateway resource.
### View the topology of your Azure architectural setup
To view a topological map of your setup in Azure:
1. From your NAT gateway's resource page, select **Insights** from the **Monitoring** section.
-2. On the landing page for **Insights**, there is a topology map of your NAT gateway setup. This map shows the relationship between the different components of your network (subnets, virtual machines, public IP addresses).
+2. On the landing page for **Insights**, there's a topology map of your NAT gateway setup. This map shows the relationship between the different components of your network (subnets, virtual machines, public IP addresses).
3. Hover over any component in the topology map to view configuration information.
For more information on what each metric is showing you and how to analyze these
* Learn about [NAT gateway resource](nat-gateway-resource.md) * Learn about [Azure Monitor](../azure-monitor/overview.md) * Learn about [troubleshooting NAT gateway resources](troubleshoot-nat.md).
+* Learn about [troubleshooting NAT gateway connectivity](/azure/nat-gateway/troubleshoot-nat-connectivity)
network-watcher Connection Monitor Install Azure Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-install-azure-monitor-agent.md
Title: Install Azure Monitor Agent for connection monitor
-description: This article describes how to install Azure Monitor Agent.
-
+ Title: Install and upgrade Azure Monitor Agent - Azure Arc-enabled servers
+
+description: Learn how to install, upgrade, and uninstall Azure Monitor Agent on Azure Arc-enabled servers.
+ - Previously updated : 10/25/2022-
-#Customer intent: I need to monitor a connection by using Azure Monitor Agent.
Last updated : 10/31/2023++
+#Customer intent: As an Azure administrator, I need to install the Azure Monitor Agent on Azure Arc-enabled servers so I can monitor a connection using the Connection Monitor.
-# Install Azure Monitor Agent
+# Install and upgrade Azure Monitor Agent on Azure Arc-enabled servers
-Azure Monitor Agent is implemented as an Azure virtual machine (VM) extension. You can install Azure Monitor Agent by using any of the methods for installing virtual machine extensions, including those described in the [Azure Monitor Agent overview](../azure-monitor/agents/agents-overview.md) article.
+Azure Monitor Agent is implemented as an Azure virtual machine (VM) extension. You can install Azure Monitor Agent using any of the methods described in [Azure Monitor Agent overview](../azure-monitor/agents/agents-overview.md?toc=/azure/network-watcher/toc.json).
-The following section covers installing Azure Monitor Agent on Azure Arc-enabled servers by using PowerShell and the Azure CLI. For more information, see [Manage Azure Monitor Agent](../azure-monitor/agents/azure-monitor-agent-manage.md?tabs=ARMAgentPowerShell%2CPowerShellWindows%2CPowerShellWindowsArc%2CCLIWindows%2CCLIWindowsArc).
+This article covers installing Azure Monitor Agent on Azure Arc-enabled servers using PowerShell or the Azure CLI. For more information, see [Manage Azure Monitor Agent](../azure-monitor/agents/azure-monitor-agent-manage.md?tabs=ARMAgentPowerShell%2CPowerShellWindows%2CPowerShellWindowsArc%2CCLIWindows%2CCLIWindowsArc).
## Use PowerShell
New-AzConnectedMachineExtension -Name AzureNetworkWatcherExtension -ExtensionTyp
```
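For comparison, installing the Azure Monitor Agent itself on a Windows Arc-enabled server follows the same `New-AzConnectedMachineExtension` pattern. This is a minimal sketch with placeholder values, not the article's own sample:

```powershell
# Minimal sketch: install the Azure Monitor Agent extension on a Windows
# Arc-enabled server. Placeholder values are assumptions.
New-AzConnectedMachineExtension -Name AzureMonitorWindowsAgent `
    -ExtensionType AzureMonitorWindowsAgent `
    -Publisher Microsoft.Azure.Monitor `
    -ResourceGroupName '<resource-group-name>' `
    -MachineName '<arc-server-name>' `
    -Location '<arc-server-region>'
```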
-## Next steps
--- After you've installed the monitoring agents, [create a connection monitor](connection-monitor-create-using-portal.md#create-a-connection-monitor). Then, after you've created a connection monitor, analyze your monitoring data, set alerts, and diagnose issues in your connection monitor and your network.
+## Next step
-- Monitor the network connectivity of your Azure and non-Azure setups by using [Connection Monitor](connection-monitor-overview.md).
+> [!div class="nextstepaction"]
+> [Create a connection monitor](connection-monitor-create-using-portal.md)
network-watcher Connection Monitor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-overview.md
Title: Connection monitor
+ Title: Connection monitor overview
-description: Learn how to use Azure Network Watcher connection monitor to monitor network communication in a distributed environment.
+description: Learn about Azure Network Watcher connection monitor and how to use it to monitor network communication in a distributed environment.
Previously updated : 10/04/2022 Last updated : 10/31/2023
-#CustomerIntent: I need to monitor communication between one VM and another. If the communication fails, I need to know why so that I can resolve the problem.
+#CustomerIntent: As an Azure administrator, I need to monitor communication between one VM and another. If the communication fails, I need to know why so that I can resolve the problem.
# Connection monitor overview
Last updated 10/04/2022
> > To minimize service disruption to your current workloads, [migrate your tests from Network Performance Monitor](migrate-to-connection-monitor-from-network-performance-monitor.md), or [migrate from Connection Monitor (Classic)](migrate-to-connection-monitor-from-connection-monitor-classic.md) to the new Connection Monitor in Azure Network Watcher before February 29, 2024.
-> [!IMPORTANT]
-> Connection Monitor will now support end-to-end connectivity checks from and to *Azure Virtual Machine Scale Sets*, enabling faster performance monitoring and network troubleshooting across scale sets
- Connection Monitor provides unified, end-to-end connection monitoring in Azure Network Watcher. The Connection Monitor feature supports hybrid and Azure cloud deployments. Network Watcher provides tools to monitor, diagnose, and view connectivity-related metrics for your Azure deployments. Here are some use cases for Connection Monitor:
Here are some benefits of Connection Monitor:
* Support for connectivity checks that are based on HTTP, Transmission Control Protocol (TCP), and Internet Control Message Protocol (ICMP) * Metrics and Log Analytics support for both Azure and non-Azure test setups
-![Diagram showing how Connection Monitor interacts with Azure VMs, non-Azure hosts, endpoints, and data storage locations.](./media/connection-monitor-2-preview/hero-graphic-new.png)
-To start using Connection Monitor for monitoring, do the following:
+To start using Connection Monitor for monitoring, follow these steps:
1. [Install monitoring agents](#install-monitoring-agents). 1. [Enable Network Watcher on your subscription](#enable-network-watcher-on-your-subscription).
Rules for a network security group (NSG) or firewall can block communication bet
If you wish to skip the installation process for enabling the Network Watcher extension, you can proceed with the creation of Connection Monitor and allow auto enablement of the Network Watcher extension on your Azure VMs and scale sets.
- > [!Note]
- > In case the virtual machine scale sets is set for manual upgradation, the user will have to upgrade the scale set post Network Watcher extension installation in order to continue setting up the Connection Monitor with virtual machine scale sets as endpoints. Incase the virtual machine scale set is set to auto upgradation, the user need not worry about any upgradation after Network Watcher extension installation.
- > As Connection Monitor now supports unified auto enablement of monitoring extensions, user can consent to auto upgradation of VM scale set with auto enablement of Network Watcher extension during the creation on Connection Monitor for VM scale sets with manual upgradation.
+> [!NOTE]
+> If the Automatic Extension Upgrade isn't enabled on the virtual machine scale sets, then you have to manually upgrade the Network Watcher extension whenever a new version is released.
+>
+> Because Connection Monitor now supports unified auto enablement of monitoring extensions, you can consent to auto upgrade of the virtual machine scale set along with auto enablement of the Network Watcher extension during the creation of Connection Monitor for virtual machine scale sets with manual upgrade.
### Agents for on-premises machines
Connection Monitor includes the following entities:
* **Test group**: The group that contains source endpoints, destination endpoints, and test configurations. A connection monitor can contain more than one test group. * **Test**: The combination of a source endpoint, destination endpoint, and test configuration. A test is the most granular level at which monitoring data is available. The monitoring data includes the percentage of checks that failed and the round-trip time (RTT).
- ![Diagram showing a connection monitor, defining the relationship between test groups and tests.](./media/connection-monitor-2-preview/cm-tg-2.png)
You can create a connection monitor by using the [Azure portal](./connection-monitor-create-using-portal.md), [ARMClient](./connection-monitor-create-using-template.md), or [Azure PowerShell](connection-monitor-create-using-powershell.md).
All sources, destinations, and test configurations that you add to a test group
| 10 | C | D | Config 2 | | 11 | C | E | Config 1 | | 12 | C | E | Config 2 |
-| | |
-- ### Scale limits
When you use metrics, set the resource type as **Microsoft.Network/networkWatche
| ChecksFailedPercent | % Checks Failed | Percentage | Average | Percentage of failed checks for a test. | ConnectionMonitorResourceId <br>SourceAddress <br>SourceName <br>SourceResourceId <br>SourceType <br>Protocol <br>DestinationAddress <br>DestinationName <br>DestinationResourceId <br>DestinationType <br>DestinationPort <br>TestGroupName <br>TestConfigurationName <br>Region <br>SourceIP <br>DestinationIP <br>SourceSubnet <br>DestinationSubnet | | RoundTripTimeMs | Round-trip time (ms) | Milliseconds | Average | RTT for checks sent between source and destination. This value isn't averaged. | ConnectionMonitorResourceId <br>SourceAddress <br>SourceName <br>SourceResourceId <br>SourceType <br>Protocol <br>DestinationAddress <br>DestinationName <br>DestinationResourceId <br>DestinationType <br>DestinationPort <br>TestGroupName <br>TestConfigurationName <br>Region <br>SourceIP <br>DestinationIP <br>SourceSubnet <br>DestinationSubnet | | TestResult | Test Result | Count | Average | Connection monitor test results. <br>Interpretation of result values: <br>0-&nbsp;Indeterminate <br>1- Pass <br>2- Warning <br>3- Fail| SourceAddress <br>SourceName <br>SourceResourceId <br>SourceType <br>Protocol <br>DestinationAddress <br>DestinationName <br>DestinationResourceId <br>DestinationType <br>DestinationPort <br>TestGroupName <br>TestConfigurationName <br>SourceIP <br>DestinationIP <br>SourceSubnet <br>DestinationSubnet |
-| | |
#### Metric-based alerts for Connection Monitor
openshift Howto Enable Nsg Flowlogs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-enable-nsg-flowlogs.md
metadata:
name: cluster spec: azEnvironment: "AzurePublicCloud"
- resourceId: "subscriptions/{subscriptionID}/resourceGroups/{resourceGroupName}/providers/Microsoft.RedHatOpenShift/openShiftClusters/{clusterID}"
+ resourceId: "/subscriptions/{subscriptionID}/resourceGroups/{resourceGroupName}/providers/Microsoft.RedHatOpenShift/openShiftClusters/{clusterID}"
nsgFlowLogs: enabled: true
- networkWatcherID: "subscriptions/{subscriptionID}/resourceGroups/{networkWatcherRG}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}"
+ networkWatcherID: "/subscriptions/{subscriptionID}/resourceGroups/{networkWatcherRG}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}"
flowLogName: "{flowlogName}" retentionDays: {retentionDays}
- storageAccountResourceId: "subscriptions/{subscriptionID}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/{storageAccountName}"
+ storageAccountResourceId: "/subscriptions/{subscriptionID}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/{storageAccountName}"
version: {version} ``` See [Tutorial: Log network traffic to and from a virtual machine using the Azure portal](../network-watcher/network-watcher-nsg-flow-logging-portal.md) for possible values for `version` and `retentionDays`.
partner-solutions Dynatrace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-overview.md
Azure Native Dynatrace Service provides the following capabilities:
## Dynatrace links
-For more help using Azure Native Dynatrace Service, visit the [Dynatrace](https://aka.ms/partners/Dynatrace/PartnerDocs) documentation.
+For more help using Azure Native Dynatrace Service, visit the [Dynatrace](https://dt-url.net/azurenativedynatraceservice) documentation.
## Next steps
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview.md
One advantage of running your workload in Azure is global reach. The flexible se
| Qatar Central | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :x: | | South Africa North | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | South Central US | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| South India | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :heavy_check_mark: |
+| South India | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: |
| Southeast Asia | :heavy_check_mark:(v3/v4 only) | :x: $ | :heavy_check_mark: | :heavy_check_mark: | | Sweden Central | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Switzerland North | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Switzerland West | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :heavy_check_mark: |
-| UAE North | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :x: |
+| UAE North | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :x: |
| US Gov Arizona | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :x: | | US Gov Texas | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :x: | | US Gov Virginia | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :x: |
One advantage of running your workload in Azure is global reach. The flexible se
| West Central US | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: | | West Europe | :heavy_check_mark: | :x: $ | :heavy_check_mark: | :heavy_check_mark: | | West US | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: |
-| West US 2 | :heavy_check_mark: | :x: $ | :x: $ | :heavy_check_mark: |
+| West US 2 | :heavy_check_mark: (v3/v4 only) | :x: $ | :x: $ | :heavy_check_mark: |
| West US 3 | :heavy_check_mark: | :heavy_check_mark: ** | :heavy_check_mark: | :x: | $ New Zone-redundant high availability deployments are temporarily blocked in these regions. Already provisioned HA servers are fully supported.
postgresql Quickstart Create Server Database Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/quickstart-create-server-database-portal.md
Go to the [Azure portal](https://portal.azure.com/) to create an Azure Database
1. Select **+ Create**.
-2. On the Create a Azure Database for PostgreSQL page , select **Single server**.
+2. On the Create an Azure Database for PostgreSQL page, select **Single server**.
>[!div class="mx-imgBorder"] > :::image type="content" source="./media/quickstart-create-database-portal/select-single-server.png" alt-text="Select single server":::
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-dns.md
For Azure services, use the recommended zone names as described in the following
| Azure Data Factory (Microsoft.DataFactory/factories) | portal | privatelink.adf.azure.com | adf.azure.com | | Azure Cache for Redis (Microsoft.Cache/Redis) | redisCache | privatelink.redis.cache.windows.net | redis.cache.windows.net | | Azure Cache for Redis Enterprise (Microsoft.Cache/RedisEnterprise) | redisEnterprise | privatelink.redisenterprise.cache.azure.net | redisenterprise.cache.azure.net |
-| Microsoft Purview (Microsoft.Purview) | account | privatelink.purview.azure.com | purview.azure.com |
-| Microsoft Purview (Microsoft.Purview) | portal | privatelink.purviewstudio.azure.com | purview.azure.com </br> purviewstudio.azure.com |
-| Azure Digital Twins (Microsoft.DigitalTwins) | digitalTwinsInstances | privatelink.digitaltwins.azure.net | digitaltwins.azure.net |
+| Microsoft Purview (Microsoft.Purview/accounts) | account | privatelink.purview.azure.com | purview.azure.com |
+| Microsoft Purview (Microsoft.Purview/accounts) | portal | privatelink.purviewstudio.azure.com | purview.azure.com </br> purviewstudio.azure.com |
+| Azure Digital Twins (Microsoft.DigitalTwins/digitalTwinsInstances) | digitalTwinsInstances | privatelink.digitaltwins.azure.net | digitaltwins.azure.net |
| Azure HDInsight (Microsoft.HDInsight/clusters) | N/A | privatelink.azurehdinsight.net | azurehdinsight.net |
-| Azure Arc (Microsoft.HybridCompute) | hybridcompute | privatelink.his.arc.azure.com <br/> privatelink.guestconfiguration.azure.com </br> privatelink.dp.kubernetesconfiguration.azure.com | his.arc.azure.com <br/> guestconfiguration.azure.com </br> dp.kubernetesconfiguration.azure.com |
-| Azure Media Services (Microsoft.Media) | keydelivery </br> liveevent </br> streamingendpoint | privatelink.media.azure.net | media.azure.net |
+| Azure Arc (Microsoft.HybridCompute/privateLinkScopes) | hybridcompute | privatelink.his.arc.azure.com <br/> privatelink.guestconfiguration.azure.com </br> privatelink.dp.kubernetesconfiguration.azure.com | his.arc.azure.com <br/> guestconfiguration.azure.com </br> dp.kubernetesconfiguration.azure.com |
+| Azure Media Services (Microsoft.Media/mediaservices) | keydelivery </br> liveevent </br> streamingendpoint | privatelink.media.azure.net | media.azure.net |
| Azure Data Explorer (Microsoft.Kusto/Clusters) | cluster | privatelink.{regionName}.kusto.windows.net | {regionName}.kusto.windows.net | | Azure Static Web Apps (Microsoft.Web/staticSites) | staticSites | privatelink.azurestaticapps.net </br> privatelink.{partitionId}.azurestaticapps.net | azurestaticapps.net </br> {partitionId}.azurestaticapps.net | | Azure Migrate (Microsoft.Migrate/migrateProjects) | Default | privatelink.prod.migration.windowsazure.com | prod.migration.windowsazure.com |
For Azure services, use the recommended zone names as described in the following
| Azure Automation / (Microsoft.Automation/automationAccounts) | Webhook </br> DSCAndHybridWorker | privatelink.azure-automation.us | azure-automation.us | | Azure SQL Database (Microsoft.Sql/servers) | sqlServer | privatelink.database.usgovcloudapi.net | database.usgovcloudapi.net | | Azure SQL Managed Instance (Microsoft.Sql/managedInstances) | managedInstance | privatelink.{dnsPrefix}.database.usgovcloudapi.net | {instanceName}.{dnsPrefix}.database.usgovcloudapi.net |
+| Azure Synapse Analytics (Microsoft.Synapse/workspaces) | Sql | privatelink.sql.azuresynapse.usgovcloudapi.net | sql.azuresynapse.usgovcloudapi.net |
+| Azure Synapse Analytics (Microsoft.Synapse/workspaces) | SqlOnDemand | privatelink.sql.azuresynapse.usgovcloudapi.net | {workspaceName}-ondemand.sql.azuresynapse.usgovcloudapi.net |
+| Azure Synapse Analytics (Microsoft.Synapse/workspaces) | Dev | privatelink.dev.azuresynapse.usgovcloudapi.net | dev.azuresynapse.usgovcloudapi.net |
+| Azure Synapse Studio (Microsoft.Synapse/privateLinkHubs) | Web | privatelink.azuresynapse.usgovcloudapi.net | azuresynapse.usgovcloudapi.net |
| Storage account (Microsoft.Storage/storageAccounts) | blob </br> blob_secondary | privatelink.blob.core.usgovcloudapi.net | blob.core.usgovcloudapi.net | | Storage account (Microsoft.Storage/storageAccounts) | table </br> table_secondary | privatelink.table.core.usgovcloudapi.net | table.core.usgovcloudapi.net | | Storage account (Microsoft.Storage/storageAccounts) | queue </br> queue_secondary | privatelink.queue.core.usgovcloudapi.net | queue.core.usgovcloudapi.net | | Storage account (Microsoft.Storage/storageAccounts) | file </br> file_secondary | privatelink.file.core.usgovcloudapi.net | file.core.usgovcloudapi.net | | Storage account (Microsoft.Storage/storageAccounts) | web </br> web_secondary | privatelink.web.core.usgovcloudapi.net | web.core.usgovcloudapi.net |
+| Azure Data Lake File System Gen2 (Microsoft.Storage/storageAccounts) | dfs </br> dfs_secondary | privatelink.dfs.core.usgovcloudapi.net | dfs.core.usgovcloudapi.net |
| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) | Sql | privatelink.documents.azure.us | documents.azure.us |
+| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) | MongoDB | privatelink.mongo.cosmos.azure.us | mongo.cosmos.azure.us |
| Azure Batch (Microsoft.Batch/batchAccounts) | batchAccount | privatelink.batch.usgovcloudapi.net | {regionName}.batch.usgovcloudapi.net | | Azure Batch (Microsoft.Batch/batchAccounts) | nodeManagement | privatelink.batch.usgovcloudapi.net | {regionName}.service.batch.usgovcloudapi.net | | Azure Database for PostgreSQL - Single server (Microsoft.DBforPostgreSQL/servers) | postgresqlServer | privatelink.postgres.database.usgovcloudapi.net | postgres.database.usgovcloudapi.net |
For Azure services, use the recommended zone names as described in the following
| Azure Database for MariaDB (Microsoft.DBforMariaDB/servers) | mariadbServer | privatelink.mariadb.database.usgovcloudapi.net| mariadb.database.usgovcloudapi.net | | Azure Key Vault (Microsoft.KeyVault/vaults) | vault | privatelink.vaultcore.usgovcloudapi.net | vault.usgovcloudapi.net <br> vaultcore.usgovcloudapi.net | | Azure Search (Microsoft.Search/searchServices) | searchService | privatelink.search.windows.us | search.windows.us |
+| Azure Container Registry (Microsoft.ContainerRegistry/registries) | registry | privatelink.azurecr.us </br> {regionName}.privatelink.azurecr.us | azurecr.us </br> {regionName}.azurecr.us |
| Azure App Configuration (Microsoft.AppConfiguration/configurationStores) | configurationStores | privatelink.azconfig.azure.us | azconfig.azure.us | | Azure Backup (Microsoft.RecoveryServices/vaults) | AzureBackup | privatelink.{regionCode}.backup.windowsazure.us | {regionCode}.backup.windowsazure.us | | Azure Site Recovery (Microsoft.RecoveryServices/vaults) | AzureSiteRecovery | privatelink.siterecovery.windowsazure.us | {regionCode}.siterecovery.windowsazure.us |
For Azure services, use the recommended zone names as described in the following
| Azure IoT Hub (Microsoft.Devices/IotHubs) | iotHub | privatelink.azure-devices.us<br/>privatelink.servicebus.windows.us<sup>1</sup> | azure-devices.us<br/>servicebus.usgovcloudapi.net | | Azure IoT Hub Device Provisioning Service (Microsoft.Devices/ProvisioningServices) | iotDps | privatelink.azure-devices-provisioning.us | azure-devices-provisioning.us | | Azure Relay (Microsoft.Relay/namespaces) | namespace | privatelink.servicebus.usgovcloudapi.net | servicebus.usgovcloudapi.net |
+| Azure Event Grid (Microsoft.EventGrid/topics) | topic | privatelink.eventgrid.azure.us | eventgrid.azure.us |
+| Azure Event Grid (Microsoft.EventGrid/domains) | domain | privatelink.eventgrid.azure.us | eventgrid.azure.us |
| Azure Web Apps (Microsoft.Web/sites) | sites | privatelink.azurewebsites.us </br> scm.privatelink.azurewebsites.us | azurewebsites.us </br> scm.azurewebsites.us | Azure Monitor (Microsoft.Insights/privateLinkScopes) | azuremonitor | privatelink.monitor.azure.us <br/> privatelink.adx.monitor.azure.us <br/> privatelink.oms.opinsights.azure.us <br/> privatelink.ods.opinsights.azure.us <br/> privatelink.agentsvc.azure-automation.us <br/> privatelink.blob.core.usgovcloudapi.net | monitor.azure.us <br/> adx.monitor.azure.us <br/> oms.opinsights.azure.us<br/> ods.opinsights.azure.us<br/> agentsvc.azure-automation.us <br/> blob.core.usgovcloudapi.net | | Azure AI services (Microsoft.CognitiveServices/accounts) | account | privatelink.cognitiveservices.azure.us | cognitiveservices.azure.us | | Azure Cache for Redis (Microsoft.Cache/Redis) | redisCache | privatelink.redis.cache.usgovcloudapi.net | redis.cache.usgovcloudapi.net |
+| Microsoft Purview (Microsoft.Purview) | account | privatelink.purview.azure.com | purview.azure.com |
+| Microsoft Purview (Microsoft.Purview) | portal | privatelink.purviewstudio.azure.com | purview.azure.com </br> purviewstudio.azure.com |
| Azure HDInsight (Microsoft.HDInsight) | N/A | privatelink.azurehdinsight.us | azurehdinsight.us | | Azure Machine Learning (Microsoft.MachineLearningServices/workspaces) | amlworkspace | privatelink.api.ml.azure.us<br/>privatelink.notebooks.usgovcloudapi.net | api.ml.azure.us<br/>notebooks.usgovcloudapi.net <br/> instances.azureml.us<br/>aznbcontent.net <br/> inference.ml.azure.us |
+| Azure Health Data Services (Microsoft.HealthcareApis/workspaces) | healthcareworkspace | privatelink.workspace.azurehealthcareapis.us </br> privatelink.fhir.azurehealthcareapis.us </br> privatelink.dicom.azurehealthcareapis.us | workspace.azurehealthcareapis.us </br> fhir.azurehealthcareapis.us </br> dicom.azurehealthcareapis.us |
+| Azure Databricks (Microsoft.Databricks/workspaces) | databricks_ui_api </br> browser_authentication | privatelink.databricks.azure.us | databricks.azure.us |
| Azure Virtual Desktop (Microsoft.DesktopVirtualization/workspaces) | global | privatelink-global.wvd.azure.us | wvd.azure.us | | Azure Virtual Desktop (Microsoft.DesktopVirtualization/workspaces </br> Microsoft.DesktopVirtualization/hostpools) | feed <br> connection | privatelink.wvd.azure.us | wvd.azure.us |
reliability Reliability Azure Container Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-azure-container-apps.md
Last updated 08/29/2023
# Reliability in Azure Container Apps
-This article describes reliability support in Azure Container Apps, and covers both regional resiliency with availability zones and cross-region resiliency with disaster recovery. For a more detailed overview of reliability in Azure, see [Azure reliability](/azure/well-architected/resiliency/).
+This article describes reliability support in [Azure Container Apps](/azure/container-apps/overview), and covers both regional resiliency with availability zones and cross-region resiliency with disaster recovery. For a more detailed overview of reliability in Azure, see [Azure reliability](/azure/well-architected/resiliency/).
## Availability zone support Azure Container Apps uses [availability zones](availability-zones-overview.md#zonal-and-zone-redundant-services) in regions where they're available to provide high-availability protection for your applications and data from data center failures.
If you have enabled [session affinity](../container-apps/sticky-sessions.md), an
To take advantage of availability zones, enable zone redundancy as you create the Container Apps environment. The environment must include a virtual network with an available subnet. You can't migrate an existing Container Apps environment from nonavailability zone support to availability zone support.
-## Disaster recovery: cross-region failover
+## Cross-region disaster recovery and business continuity
+ In the unlikely event of a full region outage, you have the option of using one of two strategies:
reliability Reliability Azure Storage Mover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-azure-storage-mover.md
# Reliability in Azure Storage Mover
-This article describes reliability support in Azure Storage Mover and covers cross-region resiliency with disaster recovery. For a more detailed overview of reliability in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview).
+This article describes reliability support in [Azure Storage Mover](/azure/storage-mover/service-overview) and covers both intra-regional resiliency with [availability zones](#availability-zone-support) and [cross-region disaster recovery and business continuity](#cross-region-disaster-recovery-and-business-continuity). For a more detailed overview of reliability principles in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview).
-## Regional reliability
-When deploying an Azure Storage Mover resource, you must select a location in which the resource's instance metadata is stored. Instance metadata includes projects, endpoints, agents, job definitions, and job run history, but doesn't include the actual data to be migrated. Azure storage accounts to be used as migration targets have their own reliability support. Disaster recovery for on-premises data sources is the responsibility of the customer.
+## Availability zone support
-Instance metadata is replicated across multiple availability zones in regions where availability zones are available. Availability zones are unique physical locations within an Azure region. Each zone is made up of one or more data centers equipped with independent power, cooling, and networking.
-Some regions are paired in order to allow cross-region replication. When cross-region replication is utilized, instance metadata is replicated to each region, but is never permitted to leave the geography.
+Azure Storage Mover supports a zone-redundant deployment model.
-When a Storage Mover agent is registered, it connects to the region in which the Storage Mover resource is registered. If an agent's Azure region experiences an outage, the agent itself isn't affected, but management operations that rely on Azure may be unable to complete. In addition, any active data migrations to storage accounts located within the affected region may fail.
+When you deploy an Azure Storage Mover resource, you must [select a particular region](/azure/storage-mover/deployment-planning#select-an-azure-region-for-your-deployment) in which the resource's instance metadata is stored.
-In the unlikely event of a full region outage, you have the option of using one of the following strategies:
+If the region supports availability zones, the instance metadata is automatically replicated across multiple availability zones within that region.
-- Wait for Azure to recover the region-- Redeploy your resources to a different region-- Deploy a redundant Storage Mover in advance
+>[!IMPORTANT]
+>Azure Storage Mover instance metadata includes projects, endpoints, agents, job definitions, and job run history, but doesn't include the actual data to be migrated. Azure storage accounts that are used as migration targets have their own reliability support.
-The last two options are a matter of timing, since deployment will occur either before or after any future outage.
-## Determining reliability for target storage accounts
+### Prerequisites
-Any migration target storage account may require its own recovery steps. This requirement depends on the redundancy options chosen for each storage account. See the [storage account disaster recovery](/azure/storage/common/storage-disaster-recovery-guidance) article to determine whether more steps are necessary.
+- To deploy with availability zone support, you must choose a region that supports availability zones. To see which regions support availability zones, see the [list of supported regions](availability-zones-service-support.md#azure-regions-with-availability-zone-support).
-If a local storage was chosen in lieu of redundancy options, you may need to create a new storage account for use in migrations during the outage.
+- (Optional) If your target storage account doesn't support availability zones, and you would like to migrate the account to AZ support, see [Migrate Azure Storage accounts to availability zone support](migrate-storage.md).
### Zone down experience
-During a zone-wide outage, no action is required during zone recovery. Azure Storage Mover is designed to self-heal and rebalance itself to take advantage of the healthy zone automatically.
+During a zone-wide outage, no action is required during zone recovery. Azure Storage Mover is designed to self-heal and rebalance itself to take advantage of the healthy zone automatically.
+
+Any migration target storage account may require its own recovery steps. This requirement depends on the redundancy options chosen for each storage account. See the [storage account disaster recovery guide](/azure/storage/common/storage-disaster-recovery-guidance) to determine whether more steps are necessary.
+
+If a local storage was chosen in lieu of redundancy options, you may need to create a new storage account for use in migrations during the outage.
++
+## Cross-region disaster recovery and business continuity
++
+When a Storage Mover agent is registered, it connects to the region in which the Storage Mover resource is registered. If an agent's Azure region experiences an outage, the agent itself isn't affected, but management operations that rely on Azure may be unable to complete. In addition, any active data migrations to storage accounts located within the affected region may fail.
+
+Storage Mover supports two forms of disaster recovery:
+
+- [Azure initiated disaster recovery](#azure-initiated-disaster-recovery)
+- [Customer initiated disaster recovery](#customer-initiated-disaster-recovery)
-## Disaster recovery: cross-region failover
+>[!IMPORTANT]
+>Disaster recovery for on-premises data sources is the responsibility of the customer.
-Azure can provide disaster recovery protection against a region-wide or large geography disaster by making use of another region. For more information on Azure disaster recovery architecture, see the article on [Azure to Azure disaster recovery architecture](/azure/site-recovery/azure-to-azure-architecture).
-Azure initiated disaster recovery is only applicable for those regions that have are paired with a cross-region replication region. Azure Storage Mover uses Cosmos DB for storing instance metadata. Data loss may occur only with an unrecoverable disaster in the Azure Cosmos DB region. For more information, see [Region outages](/azure/cosmos-db/high-availability). Azure initiated recovery is active-passive, and full recovery of a region may be up to 24 hours.
+### Azure initiated disaster recovery
-Customers can minimize downtime by following the customer enabled disaster recovery steps described in this section. These strategies may require that further steps be taken prior to a disaster, so be sure to review and plan accordingly.
+Azure initiated disaster recovery is only applicable to those [regions that have region pairs](./cross-region-replication-azure.md#azure-paired-regions). When cross-region replication is utilized, instance metadata is replicated to each region, but is never permitted to leave the geography.
-## Customer enabled disaster recovery
+Azure Storage Mover uses Cosmos DB for storing instance metadata. Data loss may occur only with an unrecoverable disaster in the Azure Cosmos DB region. For more information, see [Region outages](/azure/cosmos-db/high-availability). Azure initiated recovery is active-passive, and full recovery of a region may take up to 24 hours.
-### Deploy resources to a different region
-Since access to your resources may be impacted during an outage. To redeploy resources to a different region, you must first have a snapshot of the resources you wish to redeploy. To ensure that you're restoring the most recent data, taking a snapshot should be done periodically, either on a schedule or after you make substantial changes. Storing the snapshots using a version control system is a good way to store and track history of the snapshots.
+### Customer initiated disaster recovery
+
+Customer initiated disaster recovery isn't restricted to paired regions.
+
+**Before a regional outage occurs:**
+
+- Deploy a zone-redundant Storage Mover by creating Storage Mover resources in a region that supports availability zones.
+
+- Periodically - either on a schedule or after you make substantial changes - take a snapshot of your Storage Mover resources. Storing the snapshots using a version control system is a good way to store and track history of the snapshots. You'll use the last good snapshot in the event of a disaster where you need to recover your resources in a new region.
+
+**During a regional outage:**
+
+You can do one of two things:
+
+- Choose to wait for Azure to recover the region.
+- Minimize downtime by [redeploying your resources to a different region](#deploy-resources-to-a-different-region). Since access to your resources may be impacted during an outage, you'll want to use the last good snapshot of your resources.
+
+>[!TIP]
+>Either of these strategies may require that you take further steps prior to a disaster, so be sure to review and plan accordingly.
++
+#### Deploy resources to a different region
See the documentation on [exporting templates](/azure/azure-resource-manager/templates/export-template-portal) for further instructions on exporting resources as an Azure Resource Manager (ARM) template.
To use the exported template for disaster recovery, a few changes to the templat
After completing the previous steps and verifying that the template parameters are correct, the template is ready for deployment to a new region. You should deploy the template to a new resource group that has the same default region as the location property in the template.
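As a hedged illustration of that deployment step, the following sketch assumes the edited template was saved locally as `template.json` and that the resource group name and region are placeholders you replace:

```powershell
# Minimal sketch: create the target resource group in the recovery region and
# deploy the exported, edited ARM template into it. Names are assumptions.
New-AzResourceGroup -Name 'storage-mover-dr-rg' -Location 'westus2'

New-AzResourceGroupDeployment -ResourceGroupName 'storage-mover-dr-rg' `
    -TemplateFile './template.json'
```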
-### Registering the new agent
+#### Registering the new agent
Follow the steps within the [deploy an Azure Storage Mover agent](/azure/storage-mover/agent-deploy) article to register a new agent in the new Storage Mover resource.
-### Assigning the agent to job definitions
+#### Assigning the agent to job definitions
After the new agent has been registered and reports as online, use the Azure portal or PowerShell to associate the existing job definitions to the new agent. The following PowerShell example is provided for convenience.
Update-AzStorageMoverJobDefinition `
-AgentName $agentName ```
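For readers following along outside the portal, a self-contained version of that reassignment looks roughly like the following; every resource name here is a placeholder assumption:

```powershell
# Minimal sketch: point an existing job definition at the newly registered
# agent. All names are placeholder assumptions.
Import-Module Az.StorageMover

$agentName = '<new-agent-name>'

Update-AzStorageMoverJobDefinition `
    -ResourceGroupName '<resource-group-name>' `
    -StorageMoverName '<storage-mover-name>' `
    -ProjectName '<project-name>' `
    -Name '<job-definition-name>' `
    -AgentName $agentName
```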
-### Granting agent access to the target storage container
+#### Granting agent access to the target storage container
You need to assign the data contributor role to the managed identity to successfully perform a migration job. Assign the Hybrid Compute resource's system managed identity access to the target storage account resource. The [assign a managed identity access to a resource](/azure/active-directory/managed-identities-azure-resources/howto-assign-access-portal) article provides guidance on how to grant access to the target resource.
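A minimal sketch of that role assignment with Azure PowerShell follows, assuming the target is a blob container and therefore the **Storage Blob Data Contributor** role; the principal ID and scope are placeholder assumptions:

```powershell
# Minimal sketch: grant the Hybrid Compute resource's system-assigned identity
# data access on the target storage account. IDs are assumptions.
$principalId = '<system-assigned-identity-principal-id>'
$storageAccountId = '/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<account-name>'

New-AzRoleAssignment -ObjectId $principalId `
    -RoleDefinitionName 'Storage Blob Data Contributor' `
    -Scope $storageAccountId
```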
You're now ready to start migration jobs using the newly deployed Storage Mover
## Next steps
-Read more about any of the following features or options.
-
-| Guide | Description |
-|||
-| [Azure resiliency and reliability](/azure/architecture/framework/resiliency/overview) | A detailed overview of resiliency and reliability in Azure.
-| [storage account disaster recovery](/azure/storage/common/storage-disaster-recovery-guidance) | Concepts and processes involved with a storage account failover and recovery. |
+- [Reliability in Azure](./overview.md)
+- [Storage account disaster recovery](/azure/storage/common/storage-disaster-recovery-guidance)
reliability Reliability Bot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-bot.md
During a zone-wide outage, the customer should expect a brief degradation of per
### Cross-region disaster recovery in multi-region geography + Azure Bot Service runs in active-active mode for both global and regional services. When an outage occurs, you don't need to detect errors or manage the service. Azure Bot Service automatically performs autofailover and auto recovery in a multi-region geographical architecture. For the EU bot regional service, Azure Bot Service provides two full regions inside Europe with active/active replication to ensure redundancy. For the global bot service, all available regions/geographies can be served as the global footprint. ## Next steps
reliability Reliability Deployment Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-deployment-environments.md
+
+ Title: Reliability and availability in Azure Deployment Environments
+description: Learn how Azure Deployment Environments supports disaster recovery. Understand reliability and availability within a single region and across regions.
++++ Last updated : 08/25/2023+++
+# Reliability in Azure Deployment Environments
+
+This article describes reliability support in Azure Deployment Environments, and covers intra-regional resiliency with availability zones and inter-region resiliency with disaster recovery. For a more detailed overview of reliability in Azure, see [Azure reliability](/azure/well-architected/resiliency/overview).
+
+## Availability zone support
+++
+Availability zone support for all resources in Azure Deployment Environments is enabled automatically. There's no action for you to take.
+
+Regions supported:
+- West US 2
+- South Central US
+- UK South
+- West Europe
+- East US
+- Australia East
+- East US 2
+- North Europe
+- West US 3
+- Japan East
+- East Asia
+- Central India
+- Korea Central
+- Canada Central
+
+For more detailed information on availability zones in Azure, seeΓÇ»[Regions and availability zones](../reliability/availability-zones-overview.md).
+
+## Cross-region disaster recovery and business continuity
++
+You can replicate the following Deployment Environments resources in an alternate region to prevent data loss if a cross-region failover occurs:
+
+- Dev center
+- Project
+- Catalog
+- Catalog items
+- Dev center environment type
+- Project environment type
+- Environments
+++
+For more information on Azure disaster recovery architecture, seeΓÇ»[Azure to Azure disaster recovery architecture](../site-recovery/azure-to-azure-architecture.md).
+
+## Next steps
+
+- To learn more about how Azure supports reliability, see [Azure reliability](/azure/reliability).
+- To learn more about Deployment Environments resources, see [Azure Deployment Environments key concepts](../deployment-environments/concept-environments-key-concepts.md).
+- To get started with Deployment Environments, see [Quickstart: Create and configure the Azure Deployment Environments dev center](../deployment-environments/quickstart-create-and-configure-devcenter.md).
reliability Reliability Energy Data Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-energy-data-services.md
+
+ Title: Reliability in Azure Data Manager for Energy
+description: Find out about reliability in Azure Data Manager for Energy
+++++ Last updated : 06/07/2023+++
+# Reliability in Azure Data Manager for Energy
+
+This article describes reliability support in [Azure Data Manager for Energy](/azure/energy-data-services/), and covers both regional resiliency with availability zones and cross-region resiliency with disaster recovery. For a more detailed overview of reliability in Azure, see [Azure reliability](/azure/well-architected/resiliency/overview).
+
+## Availability zone support
++
+Azure Data Manager for Energy supports zone-redundant instances by default, and no additional configuration is required.
+
+### Prerequisites
+
+Azure Data Manager for Energy supports availability zones in the following regions:
++
+| Americas | Europe |
+||-|
+| South Central US | North Europe |
+| East US | West Europe |
+| Brazil South | |
+
+### Zone down experience
+During a zone-wide outage, no action is required during zone recovery. There may be a brief degradation of performance until the service self-heals and rebalances underlying capacity to adjust to healthy zones. During this period, you may experience 5xx errors and you may have to retry API calls until the service is restored.
+
+## Cross-region disaster recovery and business continuity
+++
+### Disaster recovery in multi-region geography
+
+Azure Data Manager for Energy is a regional service and is therefore susceptible to region-down service failures. Azure Data Manager for Energy follows an active-passive failover configuration to recover from a regional disaster. An active-passive configuration keeps a warm Azure Data Manager for Energy resource running in the secondary region, but doesn't send traffic there unless the primary region fails.
++
+The following table lists the primary and secondary regions where disaster recovery is supported:
+
+| Geography | Primary | Secondary |
+||-||
+|Americas | South Central US | North Central US |
+|Americas | East US | West US |
+|Europe | North Europe | West Europe |
+|Europe | West Europe | North Europe |
+
+Azure Data Manager for Energy uses Azure Storage, Azure Cosmos DB, and Elasticsearch indexes as underlying data stores for persisting your data partition data. These data stores offer high durability, availability, and scalability. Azure Data Manager for Energy uses [geo-zone-redundant storage](../storage/common/storage-redundancy.md#geo-zone-redundant-storage) (GZRS) to automatically replicate data to a secondary region that's hundreds of miles away from the primary region. The same security features enabled in the primary region to protect your data (for example, encryption at rest using your encryption key) are applicable to the secondary region. Similarly, Azure Cosmos DB is a globally distributed data service that replicates the metadata (catalog) across regions. Elasticsearch index snapshots are taken at regular intervals and geo-replicated to the secondary region. All in-flight data is ephemeral and therefore subject to loss. For example, in-transit data that is part of an ongoing ingestion job and isn't yet persisted is lost, and you must restart the ingestion process upon recovery.
+
+> [!IMPORTANT]
+> In the following regions, disaster recovery is not available. For more information, contact your Microsoft sales or customer representative.
+> 1. Brazil South
+
+#### Set up disaster recovery and outage detection
+
+The Azure Data Manager for Energy service continuously monitors service health in the primary region. If a hard service-down failure is detected in the primary region, we attempt recovery before initiating failover to the secondary region on your behalf. We notify you about the failover progress. Once the failover completes, you can connect to the Azure Data Manager for Energy resource in the secondary region and continue operations. However, there could be slight degradation in performance due to capacity constraints in the secondary region.
+
+##### Managing the resources in your subscription
+You must handle the failover of your business apps connecting to Azure Data Manager for Energy resource and hosted in the same primary region. Additionally, you're responsible for recovering any diagnostic logs stored in your Log Analytics Workspace.
+
+If you [set up private links](../energy-data-services/how-to-set-up-private-links.md) to your Azure Data Manager for Energy resource in the primary region, then you must create a secondary private endpoint to the same resource in the [paired region](cross-region-replication-azure.md#azure-paired-regions).
+
+> [!CAUTION]
+> If you don't enable public access networks or create a secondary private endpoint before an outage, you'll lose access to the failed over Azure Data Manager for Energy resource in the secondary region. You will be able to access the Azure Data Manager for Energy resource only after the primary region failback is complete.
+
+> [!IMPORTANT]
+> After failover and until the primary region failback completes, you will be unable to perform state modifications to Azure Data Manager for Energy resource created in your subscription. For example,
+> - you cannot **Enable** or **Disable** public access networks.
+> - you cannot **Approve** or **Reject** private endpoint connection to Azure Data Manager for Energy resource
+> - you cannot create a new data partition.
+
+## Next steps
+
+- [Reliability in Azure](availability-zones-overview.md)
reliability Reliability Guidance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-guidance-overview.md
Azure reliability guidance contains the following:
[Azure Cache for Redis](../azure-cache-for-redis/cache-how-to-zone-redundancy.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure Cognitive Search](../search/search-reliability.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure Communications Gateway](../communications-gateway/reliability-communications-gateway.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
+[Azure Container Apps](reliability-azure-container-apps.md)|
[Azure Container Instances](reliability-containers.md)| [Azure Container Registry](../container-registry/zone-redundancy.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure Data Explorer](/azure/data-explorer/create-cluster-database-portal?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
Azure reliability guidance contains the following:
Azure Service Manager (ASM) is the old control plane of Azure, responsible for creating, managing, and deleting VMs and performing other control plane operations, and it has been in use since 2011. ASM is retiring in August 2024, and customers can now migrate to [Azure Resource Manager (ARM)](/azure/azure-resource-manager/management/overview). For more information on specific retirement dates and migration documentation, see [Azure Service Manager Retirement](./asm-retirement.md).
+
## Next steps
reliability Reliability Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-virtual-machines.md
To run production workloads, you can use:
When you create your VMs, use availability zones to protect your applications and data against unlikely datacenter failure. For more information about availability zones for VMs, see [Availability zone support](#availability-zone-support) in this document.
-For information on how to enable availability zones support when you create your VM, see [create availability zone support](#create-a-resource-with-availability-zone-enabled).
+For information on how to enable availability zones support when you create your VM, see [create availability zone support](#create-a-resource-with-availability-zones-enabled).
For information on how to migrate your existing VMs to availability zone support, see [migrate to availability zone support](#migrate-to-availability-zone-support).
To learn more about availability zone readiness options, see:
Because availability zones are physically separate and provide distinct power source, network, and cooling, SLAs (Service-level agreements) increase. For more information, see the [SLA for Virtual Machines](https://azure.microsoft.com/support/legal/sla/virtual-machines/v1_9/).
-#### Create a resource with availability zone enabled
+### Create a resource with availability zones enabled
Get started by creating a virtual machine (VM) with availability zones enabled by using one of the following deployment options:
- [Azure CLI](../virtual-machines/linux/create-cli-availability-zone.md)
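As a sketch, you can also deploy a zonal VM with Azure PowerShell. The resource group, image alias, and VM size below are illustrative assumptions; adjust them for your subscription and target region:

```powershell
# Prompt for the local administrator credentials of the new VM.
$cred = Get-Credential

# Create a VM pinned to availability zone 1 in East US 2.
New-AzVM -ResourceGroupName "myResourceGroup" `
    -Name "myZonalVM" `
    -Location "eastus2" `
    -Zone "1" `
    -Image "Ubuntu2204" `
    -Size "Standard_D2s_v5" `
    -Credential $cred
```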
role-based-access-control Built In Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles.md
Previously updated : 08/07/2023 Last updated : 10/30/2023
The following table provides a brief description of each built-in role. Click th
> | [Cognitive Services OpenAI User](#cognitive-services-openai-user) | Read access to view files, models, deployments. The ability to create completion and embedding calls. | 5e0bd9bd-7b93-4f28-af87-19fc36ad61bd | > | [Cognitive Services QnA Maker Editor](#cognitive-services-qna-maker-editor) | Lets you create, edit, import, and export a KB. You cannot publish or delete a KB. | f4cc2bf9-21be-47a1-bdf1-5c5804381025 | > | [Cognitive Services QnA Maker Reader](#cognitive-services-qna-maker-reader) | Lets you read and test a KB only. | 466ccd10-b268-4a11-b098-b4849f024126 |
+> | [Cognitive Services Usages Reader](#cognitive-services-usages-reader) | Minimal permission to view Cognitive Services usages. | bba48692-92b0-4667-a9ad-c31c7b334ac2 |
> | [Cognitive Services User](#cognitive-services-user) | Lets you read and list keys of Cognitive Services. | a97b65f3-24c7-4388-baec-2e87135dc908 | > | **Internet of things** | | | > | [Device Update Administrator](#device-update-administrator) | Gives you full access to management and content operations | 02ca0879-e8e4-47a5-a61e-5c618b76e64a |
List cluster monitoring user credential action.
"/" ], "description": "List cluster monitoring user credential action.",
- "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/1afdec4b-e479-420e-99e7-f82237c7c5e6",
+ "id": "/providers/Microsoft.Authorization/roleDefinitions/1afdec4b-e479-420e-99e7-f82237c7c5e6",
"name": "1afdec4b-e479-420e-99e7-f82237c7c5e6", "permissions": [ {
Can perform all actions within an Azure Machine Learning workspace, except for c
### Cognitive Services Contributor
-Lets you create, read, update, delete and manage keys of Cognitive Services. [Learn more](../ai-services/cognitive-services-virtual-networks.md)
+Lets you create, read, update, delete and manage keys of Cognitive Services. [Learn more](../ai-services/openai/how-to/role-based-access-control.md)
> [!div class="mx-tableFixed"] > | Actions | Description |
Full access to the project, including the system level configuration. [Learn mor
### Cognitive Services OpenAI Contributor
-Full access including the ability to fine-tune, deploy and generate text
+Full access, including the ability to fine-tune, deploy, and generate text. [Learn more](../ai-services/openai/how-to/role-based-access-control.md)
> [!div class="mx-tableFixed"] > | Actions | Description | > | | | > | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/*/read | |
+> | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/accounts/deployments/write | Writes deployments. |
+> | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/accounts/deployments/delete | Deletes deployments. |
+> | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/accounts/raiPolicies/read | Gets all applicable policies under the account including default policies. |
+> | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/accounts/raiPolicies/write | Create or update a custom Responsible AI policy. |
+> | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/accounts/raiPolicies/delete | Deletes a custom Responsible AI policy that's not referenced by an existing deployment. |
+> | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/accounts/commitmentplans/read | Reads commitment plans. |
+> | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/accounts/commitmentplans/write | Writes commitment plans. |
+> | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/accounts/commitmentplans/delete | Deletes commitment plans. |
> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/roleAssignments/read | Get information about a role assignment. | > | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/roleDefinitions/read | Get information about a role definition. | > | **NotActions** | |
Full access including the ability to fine-tune, deploy and generate text
{ "actions": [ "Microsoft.CognitiveServices/*/read",
+ "Microsoft.CognitiveServices/accounts/deployments/write",
+ "Microsoft.CognitiveServices/accounts/deployments/delete",
+ "Microsoft.CognitiveServices/accounts/raiPolicies/read",
+ "Microsoft.CognitiveServices/accounts/raiPolicies/write",
+ "Microsoft.CognitiveServices/accounts/raiPolicies/delete",
+ "Microsoft.CognitiveServices/accounts/commitmentplans/read",
+ "Microsoft.CognitiveServices/accounts/commitmentplans/write",
+ "Microsoft.CognitiveServices/accounts/commitmentplans/delete",
"Microsoft.Authorization/roleAssignments/read", "Microsoft.Authorization/roleDefinitions/read" ],
Full access including the ability to fine-tune, deploy and generate text
### Cognitive Services OpenAI User
-Read access to view files, models, deployments. The ability to create completion and embedding calls.
+Read access to view files, models, deployments. The ability to create completion and embedding calls. [Learn more](../ai-services/openai/how-to/role-based-access-control.md)
> [!div class="mx-tableFixed"] > | Actions | Description |
Read access to view files, models, deployments. The ability to create completion
> | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/accounts/OpenAI/deployments/extensions/chat/completions/action | Creates a completion for the chat message with extensions | > | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/accounts/OpenAI/deployments/embeddings/action | Return the embeddings for a given prompt. | > | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/accounts/OpenAI/deployments/completions/write | |
+> | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/accounts/OpenAI/images/generations/action | Create image generations. |
> | **NotDataActions** | | > | *none* | |
Read access to view files, models, deployments. The ability to create completion
"assignableScopes": [ "/" ],
- "description": "Ability to view files, models, deployments. Readers can't make any changes They can inference",
+ "description": "Ability to view files, models, deployments. Readers are able to call inference operations such as chat completions and image generation.",
"id": "/providers/Microsoft.Authorization/roleDefinitions/5e0bd9bd-7b93-4f28-af87-19fc36ad61bd", "name": "5e0bd9bd-7b93-4f28-af87-19fc36ad61bd", "permissions": [
Read access to view files, models, deployments. The ability to create completion
"Microsoft.CognitiveServices/accounts/OpenAI/deployments/chat/completions/action", "Microsoft.CognitiveServices/accounts/OpenAI/deployments/extensions/chat/completions/action", "Microsoft.CognitiveServices/accounts/OpenAI/deployments/embeddings/action",
- "Microsoft.CognitiveServices/accounts/OpenAI/deployments/completions/write"
+ "Microsoft.CognitiveServices/accounts/OpenAI/deployments/completions/write",
+ "Microsoft.CognitiveServices/accounts/OpenAI/images/generations/action"
], "notDataActions": [] }
Let's you read and test a KB only. [Learn more](../ai-services/qnamaker/index.ym
} ```
+### Cognitive Services Usages Reader
+
+Minimal permission to view Cognitive Services usages. [Learn more](../ai-services/openai/how-to/role-based-access-control.md)
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/locations/usages/read | Read all usages data |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Minimal permission to view Cognitive Services usages.",
+ "id": "/providers/Microsoft.Authorization/roleDefinitions/bba48692-92b0-4667-a9ad-c31c7b334ac2",
+ "name": "bba48692-92b0-4667-a9ad-c31c7b334ac2",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.CognitiveServices/locations/usages/read"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Cognitive Services Usages Reader",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
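To grant this role, create a role assignment at a scope that covers the `locations/usages/read` action, such as the subscription. The following is a minimal sketch with Azure PowerShell; the principal object ID and subscription ID are placeholders:

```powershell
# Assign the Cognitive Services Usages Reader role at subscription scope.
New-AzRoleAssignment `
    -ObjectId "<principal-object-id>" `
    -RoleDefinitionName "Cognitive Services Usages Reader" `
    -Scope "/subscriptions/<subscription-id>"
```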
+
### Cognitive Services User
Lets you read and list keys of Cognitive Services. [Learn more](../ai-services/authentication.md)
sap Configure Control Plane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/configure-control-plane.md
This table shows the parameters related to the deployer VM.
The VM image is defined by using the following structure:
-```python
-{
- "os_type" = ""
- "source_image_id" = ""
- "publisher" = "Canonical"
- "offer" = "0001-com-ubuntu-server-focal"
- "sku" = "20_04-lts"
- "version" = "latest"
- "type" = "marketplace"
+```terraform
+xxx_vm_image = {
+ os_type = ""
+ source_image_id = ""
+ publisher = "Canonical"
+ offer = "0001-com-ubuntu-server-focal"
+ sku = "20_04-lts"
+ version = "latest"
+ type = "marketplace"
} ```
sap Configure System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/configure-system.md
description: Define the SAP system properties for SAP Deployment Automation Fram
Previously updated : 05/04/2023 Last updated : 10/31/2023
To configure this topology, define the database tier values and set `database_hi
This section contains the parameters that define the environment settings. > [!div class="mx-tdCol2BreakAll "]
-> | Variable | Description | Type | Notes |
-> | | -- | - | - |
-> | `environment` | Identifier for the workload zone (maximum five characters) | Mandatory | For example, `PROD` for a production environment and `NP` for a nonproduction environment. |
-> | `location` | The Azure region in which to deploy | Required | |
-> | `custom_prefix` | Specifies the custom prefix used in the resource naming | Optional | |
-> | `use_prefix` | Controls if the resource naming includes the prefix | Optional | DEV-WEEU-SAP01-X00_xxxx |
-> | 'name_override_file' | Name override file | Optional | See [Custom naming](naming-module.md). |
-> | 'save_naming_information | Creates a sample naming JSON file | Optional | See [Custom naming](naming-module.md). |
+> | Variable | Description | Type | Notes |
+> | - | -- | - | - |
+> | `environment` | Identifier for the workload zone (max five characters) | Mandatory | For example, `PROD` for a production environment and `NP` for a nonproduction environment. |
+> | `location` | The Azure region in which to deploy | Required | |
+>