Updates from: 11/01/2023 02:14:00
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-domain.md
zone_pivot_groups: b2c-policy-type
This article describes how to enable custom domains in your redirect URLs for Azure Active Directory B2C (Azure AD B2C). Using a verified custom domain gives you benefits such as:
- It provides a more seamless user experience. From the user's perspective, they remain in your domain during the sign-in process rather than being redirected to the Azure AD B2C default domain *<tenant-name>.b2clogin.com*, as shown in the example after this list.
+- By staying in the same domain for your application during sign-in, you mitigate the impact of [third-party cookie blocking](/azure/active-directory/develop/reference-third-party-cookies-spas).
- You increase the number of objects (user accounts and applications) you can create in your Azure AD B2C tenant from the default 1.25 million to 5.25 million.
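For example (the values here are illustrative, not from the article), the authorization endpoint a user is redirected to changes from the default domain to your own:

```powershell
# Illustrative endpoints only - substitute your own tenant, verified domain,
# user flow (policy) name, and query parameters.
$defaultAuthority = "https://contoso.b2clogin.com/contoso.onmicrosoft.com/B2C_1_susi/oauth2/v2.0/authorize"
$customAuthority  = "https://login.contoso.com/contoso.onmicrosoft.com/B2C_1_susi/oauth2/v2.0/authorize"
```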
active-directory-b2c Force Password Reset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/force-password-reset.md
Previously updated : 06/26/2023 Last updated : 10/31/2023 -+ zone_pivot_groups: b2c-policy-type
Once a password expiration policy has been set, you must also configure force pa
### Password expiry duration
-By default, the password is set not to expire. However, the value is configurable by using the [Set-MsolPasswordPolicy](/powershell/module/msonline/set-msolpasswordpolicy) cmdlet from the Azure AD PowerShell module. This command updates the tenant, so that all users' passwords expire after number of days you configure.
+By default, the password is set not to expire. However, the value is configurable by using the [Update-MgDomain](/powershell/module/microsoft.graph.identity.directorymanagement/update-mgdomain) cmdlet from the Microsoft Graph PowerShell module. This command updates the tenant so that all users' passwords expire after a number of days you configure. For example:
+
+```powershell
+Import-Module Microsoft.Graph.Identity.DirectoryManagement
+
+Connect-MgGraph -Scopes 'Domain.ReadWrite.All'
+
+$domainId = "contoso.com"
+$params = @{
+ passwordValidityPeriodInDays = 90
+ passwordNotificationWindowInDays = 15
+}
+
+Update-MgDomain -DomainId $domainId -BodyParameter $params
+```
+
+> [!NOTE]
+> `passwordValidityPeriodInDays` indicates the length of time in days that a password remains valid before it must be changed. `passwordNotificationWindowInDays` indicates the length of time in days before the password expiration date when users receive their first notification to indicate that their password is about to expire.
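To verify the change, you can read the policy back with the same module (a quick check, assuming the session from the example above):

```powershell
# Confirm the updated password policy on the domain.
Get-MgDomain -DomainId $domainId |
    Select-Object Id, PasswordValidityPeriodInDays, PasswordNotificationWindowInDays
```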
## Next steps

Set up a [self-service password reset](add-password-reset-policy.md).
ai-services Concept Custom Classifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-custom-classifier.md
Custom classification models can analyze single- or multi-file documents to id
* A single file containing multiple instances of the same document. For instance, a collection of scanned invoices.
-Training a custom classifier requires at least two distinct classes and a minimum of five samples per class. The model response contains the page ranges for each of the classes of documents identified.
+✔️ Training a custom classifier requires at least `two` distinct classes and a minimum of `five` samples per class. The model response contains the page ranges for each of the classes of documents identified.
+
+✔️ The maximum allowed number of classes is `500`. The maximum allowed number of samples per class is `100`.
The model classifies each page of the input document to one of the classes in the labeled dataset. Use the confidence score from the response to set the threshold for your application.
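As a sketch of that thresholding step (the response object below is mocked up for illustration; it isn't the service's exact schema):

```powershell
# Mocked-up classification results; real values come from the Document
# Intelligence classify operation's response.
$threshold = 0.80
$documents = @(
    [pscustomobject]@{ DocType = 'invoice'; Confidence = 0.95; Pages = '1-2' }
    [pscustomobject]@{ DocType = 'receipt'; Confidence = 0.62; Pages = '3' }
)

foreach ($doc in $documents) {
    if ($doc.Confidence -ge $threshold) {
        "Pages $($doc.Pages): classified as '$($doc.DocType)' ($($doc.Confidence))"
    }
    else {
        "Pages $($doc.Pages): confidence $($doc.Confidence) below threshold; route to review"
    }
}
```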
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/concepts/language-support.md
Previously updated : 03/09/2023 Last updated : 10/24/2023
Use this article to learn about the languages currently supported by different features.
-> [!NOTE]
-> Some of the languages listed below are only supported in some [model versions](../concepts/model-lifecycle.md#choose-the-model-version-used-on-your-data). See the linked feature-level language support article for details.
| Language | Language code | [Custom text classification](../custom-text-classification/language-support.md) | [Custom named entity recognition(NER)](../custom-named-entity-recognition/language-support.md) | [Conversational language understanding](../conversational-language-understanding/language-support.md) | [Entity linking](../entity-linking/language-support.md) | [Language detection](../language-detection/language-support.md) | [Key phrase extraction](../key-phrase-extraction/language-support.md) | [Named entity recognition(NER)](../named-entity-recognition/language-support.md) | [Orchestration workflow](../orchestration-workflow/language-support.md) | [Personally Identifiable Information (PII)](../personally-identifiable-information/language-support.md?tabs=documents) | [Conversation PII](../personally-identifiable-information/language-support.md?tabs=conversations) | [Question answering](../question-answering/language-support.md) | [Sentiment analysis](../sentiment-opinion-mining/language-support.md#sentiment-analysis-language-support) | [Opinion mining](../sentiment-opinion-mining/language-support.md#opinion-mining-language-support) | [Text Analytics for health](../text-analytics-for-health/language-support.md) | [Summarization](../summarization/language-support.md?tabs=document-summarization) | [Conversation summarization](../summarization/language-support.md?tabs=conversation-summarization) |
|::|:-:|:-:|:-:|:--:|:-:|::|::|:--:|:--:|:-:|:-:|::|::|:-:|:--:|::|:--:|
| Afrikaans | `af` | ✓ | ✓ | ✓ | | ✓ | ✓ | | | ✓ | | | ✓ | ✓ | | | |
Use this article to learn about the languages currently supported by different f
## See also
-See the following service-level language support articles for information on model version support for each language:
+See the following service-level articles for more information on language support:
* [Custom text classification](../custom-text-classification/language-support.md)
* [Custom named entity recognition(NER)](../custom-named-entity-recognition/language-support.md)
* [Conversational language understanding](../conversational-language-understanding/language-support.md)
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/entity-linking/language-support.md
Previously updated : 11/02/2021 Last updated : 10/24/2023

# Entity linking language support
-> [!NOTE]
-> Languages are added as new model versions are released for specific features. The current model version for Entity Linking is `2020-02-01`.
-
-| Language | Language code | v3 support | Starting with v3 model version: | Notes |
-|:|:-:|:-:|:--:|:--:|
-| English | `en` | ✓ | 2019-10-01 | |
-| Spanish | `es` | ✓ | 2019-10-01 | |
+| Language | Language code | Notes |
+|:|:-:|:--:|
+| English | `en` | |
+| Spanish | `es` | |
## Next steps
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/key-phrase-extraction/language-support.md
Previously updated : 09/18/2023 Last updated : 10/24/2023
Use this article to find the natural languages supported by Key Phrase Extractio
## Supported languages
-> [!NOTE]
-> Languages are added as new [model versions](how-to/call-api.md#specify-the-key-phrase-extraction-model) are released for specific features. The current model version for Key Phrase Extraction is `2022-07-01`.
Total supported language codes: 94
-| Language | Language code | Starting with model version | Notes |
-|--||-|--|
-| Afrikaans      |     `af`  |                2020-07-01                 |                    |
-| Albanian     |     `sq`  |                2022-10-01                 |                    |
-| Amharic     |     `am`  |                2022-10-01                 |                    |
-| Arabic    |     `ar`  |                2022-10-01                 |                    |
-| Armenian    |     `hy`  |                2022-10-01                 |                    |
-| Assamese    |     `as`  |                2022-10-01                 |                    |
-| Azerbaijani    |     `az`  |                2022-10-01                 |                    |
-| Basque    |     `eu`  |                2022-10-01                 |                    |
-| Belarusian |     `be`  |                2022-10-01                 |                    |
-| Bengali     |     `bn`  |                2022-10-01                 |                    |
-| Bosnian    |     `bs`  |                2022-10-01                 |                    |
-| Breton    |     `br`  |                2022-10-01                 |                    |
-| Bulgarian      |     `bg`  |                2020-07-01                 |                    |
-| Burmese    |     `my`  |                2022-10-01                 |                    |
-| Catalan    |     `ca`  |                2020-07-01                 |                    |
-| Chinese-Simplified    |     `zh-hans` |                2021-06-01                 |                    |
-| Chinese-Traditional |     `zh-hant` |                2022-10-01                 |                    |
-| Croatian | `hr` | 2020-07-01 | |
-| Czech    |     `cs`  |                2022-10-01                 |                    |
-| Danish | `da` | 2019-10-01 | |
-| Dutch                 |     `nl`      |                2019-10-01                 |                    |
-| English               |     `en`      |                2019-10-01                 |                    |
-| Esperanto    |     `eo`  |                2022-10-01                 |                    |
-| Estonian              |     `et`      |                2020-07-01                 |                    |
-| Filipino    |     `fil`  |                2022-10-01                 |                    |
-| Finnish               |     `fi`      |                2019-10-01                 |                    |
-| French                |     `fr`      |                2019-10-01                 |                    |
-| Galician    |     `gl`  |                2022-10-01                 |                    |
-| Georgian    |     `ka`  |                2022-10-01                 |                    |
-| German                |     `de`      |                2019-10-01                 |                    |
-| Greek    |     `el`  |                2020-07-01                 |                    |
-| Gujarati    |     `gu`  |                2022-10-01                 |                    |
-| Hausa      |     `ha`  |                2022-10-01                 |                    |
-| Hebrew    |     `he`  |                2022-10-01                 |                    |
-| Hindi      |     `hi`  |                2022-10-01                 |                    |
-| Hungarian    |     `hu`  |                2020-07-01                 |                    |
-| Indonesian            |     `id`      |                2020-07-01                 |                    |
-| Irish            |     `ga`      |                2022-10-01                 |                    |
-| Italian               |     `it`      |                2019-10-01                 |                    |
-| Japanese              |     `ja`      |                2019-10-01                 |                    |
-| Javanese            |     `jv`      |                2022-10-01                 |                    |
-| Kannada            |     `kn`      |                2022-10-01                 |                    |
-| Kazakh            |     `kk`      |                2022-10-01                 |                    |
-| Khmer            |     `km`      |                2022-10-01                 |                    |
-| Korean                |     `ko`      |                2019-10-01                 |                    |
-| Kurdish (Kurmanji)   |     `ku`      |                2022-10-01                 |                    |
-| Kyrgyz            |     `ky`      |                2022-10-01                 |                    |
-| Lao            |     `lo`      |                2022-10-01                 |                    |
-| Latin            |     `la`      |                2022-10-01                 |                    |
-| Latvian               |     `lv`      |                2020-07-01                 |                    |
-| Lithuanian            |     `lt`      |                2022-10-01                 |                    |
-| Macedonian            |     `mk`      |                2022-10-01                 |                    |
-| Malagasy            |     `mg`      |                2022-10-01                 |                    |
-| Malay            |     `ms`      |                2022-10-01                 |                    |
-| Malayalam            |     `ml`      |                2022-10-01                 |                    |
-| Marathi            |     `mr`      |                2022-10-01                 |                    |
-| Mongolian            |     `mn`      |                2022-10-01                 |                    |
-| Nepali            |     `ne`      |                2022-10-01                 |                    |
-| Norwegian (Bokmål)    |     `no`      |                2020-07-01                 | `nb` also accepted |
-| Odia            |     `or`      |                2022-10-01                 |                    |
-| Oromo            |     `om`      |                2022-10-01                 |                    |
-| Pashto            |     `ps`      |                2022-10-01                 |                    |
-| Persian       |     `fa`      |                2022-10-01                 |                    |
-| Polish                |     `pl`      |                2019-10-01                 |                    |
-| Portuguese (Brazil)   |    `pt-BR`    |                2019-10-01                 |                    |
-| Portuguese (Portugal) |    `pt-PT`    |                2019-10-01                 | `pt` also accepted |
-| Punjabi            |     `pa`      |                2022-10-01                 |                    |
-| Romanian              |     `ro`      |                2020-07-01                 |                    |
-| Russian               |     `ru`      |                2019-10-01                 |                    |
-| Sanskrit            |     `sa`      |                2022-10-01                 |                    |
-| Scottish Gaelic       |     `gd`      |                2022-10-01                 |                    |
-| Serbian            |     `sr`      |                2022-10-01                 |                    |
-| Sindhi            |     `sd`      |                2022-10-01                 |                    |
-| Sinhala            |     `si`      |                2022-10-01                 |                    |
-| Slovak                |     `sk`      |                2020-07-01                 |                    |
-| Slovenian             |     `sl`      |                2020-07-01                 |                    |
-| Somali            |     `so`      |                2022-10-01                 |                    |
-| Spanish               |     `es`      |                2019-10-01                 |                    |
-| Sudanese            |     `su`      |                2022-10-01                 |                    |
-| Swahili            |     `sw`      |                2022-10-01                 |                    |
-| Swedish               |     `sv`      |                2019-10-01                 |                    |
-| Tamil            |     `ta`      |                2022-10-01                 |                    |
-| Telugu           |     `te`      |                2022-10-01                 |                    |
-| Thai            |     `th`      |                2022-10-01                 |                    |
-| Turkish              |     `tr`      |                2020-07-01                 |                    |
-| Ukrainian           |     `uk`      |                2022-10-01                 |                    |
-| Urdu            |     `ur`      |                2022-10-01                 |                    |
-| Uyghur            |     `ug`      |                2022-10-01                 |                    |
-| Uzbek            |     `uz`      |                2022-10-01                 |                    |
-| Vietnamese            |     `vi`      |                2022-10-01                 |                    |
-| Welsh            |     `cy`      |                2022-10-01                 |                    |
-| Western Frisian       |     `fy`      |                2022-10-01                 |                    |
-| Xhosa            |     `xh`      |                2022-10-01                 |                    |
-| Yiddish            |     `yi`      |                2022-10-01                 |                    |
+| Language | Language code | Notes |
+|--||--|
+| Afrikaans      |     `af`  |                    |
+| Albanian     |     `sq`  |                    |
+| Amharic     |     `am`  |                    |
+| Arabic    |     `ar`  |                    |
+| Armenian    |     `hy`  |                    |
+| Assamese    |     `as`  |                    |
+| Azerbaijani    |     `az`  |                    |
+| Basque    |     `eu`  |                    |
+| Belarusian |     `be`  |                    |
+| Bengali     |     `bn`  |                    |
+| Bosnian    |     `bs`  |                    |
+| Breton    |     `br`  |                    |
+| Bulgarian      |     `bg`  |                    |
+| Burmese    |     `my`  |                    |
+| Catalan    |     `ca`  |                    |
+| Chinese-Simplified    |     `zh-hans` |                    |
+| Chinese-Traditional |     `zh-hant` |                    |
+| Croatian | `hr` | |
+| Czech    |     `cs`  |                    |
+| Danish | `da` | |
+| Dutch                 |     `nl`      |                    |
+| English               |     `en`      |                    |
+| Esperanto    |     `eo`  |                    |
+| Estonian              |     `et`      |                    |
+| Filipino    |     `fil`  |                    |
+| Finnish               |     `fi`      |                    |
+| French                |     `fr`      |                    |
+| Galician    |     `gl`  |                    |
+| Georgian    |     `ka`  |                    |
+| German                |     `de`      |                    |
+| Greek    |     `el`  |                    |
+| Gujarati    |     `gu`  |                    |
+| Hausa      |     `ha`  |                    |
+| Hebrew    |     `he`  |                    |
+| Hindi      |     `hi`  |                    |
+| Hungarian    |     `hu`  |                    |
+| Indonesian            |     `id`      |                    |
+| Irish            |     `ga`      |                    |
+| Italian               |     `it`      |                    |
+| Japanese              |     `ja`      |                    |
+| Javanese            |     `jv`      |                    |
+| Kannada            |     `kn`      |                    |
+| Kazakh            |     `kk`      |                    |
+| Khmer            |     `km`      |                    |
+| Korean                |     `ko`      |                    |
+| Kurdish (Kurmanji)   |     `ku`      |                    |
+| Kyrgyz            |     `ky`      |                    |
+| Lao            |     `lo`      |                    |
+| Latin            |     `la`      |                    |
+| Latvian               |     `lv`      |                    |
+| Lithuanian            |     `lt`      |                    |
+| Macedonian            |     `mk`      |                    |
+| Malagasy            |     `mg`      |                    |
+| Malay            |     `ms`      |                    |
+| Malayalam            |     `ml`      |                    |
+| Marathi            |     `mr`      |                    |
+| Mongolian            |     `mn`      |                    |
+| Nepali            |     `ne`      |                    |
+| Norwegian (Bokmål)    |     `no`      | `nb` also accepted |
+| Odia            |     `or`      |                    |
+| Oromo            |     `om`      |                    |
+| Pashto            |     `ps`      |                    |
+| Persian       |     `fa`      |                    |
+| Polish                |     `pl`      |                    |
+| Portuguese (Brazil)   |    `pt-BR`    |                    |
+| Portuguese (Portugal) |    `pt-PT`    | `pt` also accepted |
+| Punjabi            |     `pa`      |                    |
+| Romanian              |     `ro`      |                    |
+| Russian               |     `ru`      |                    |
+| Sanskrit            |     `sa`      |                    |
+| Scottish Gaelic       |     `gd`      |                    |
+| Serbian            |     `sr`      |                    |
+| Sindhi            |     `sd`      |                    |
+| Sinhala            |     `si`      |                    |
+| Slovak                |     `sk`      |                    |
+| Slovenian             |     `sl`      |                    |
+| Somali            |     `so`      |                    |
+| Spanish               |     `es`      |                    |
+| Sundanese            |     `su`      |                    |
+| Swahili            |     `sw`      |                    |
+| Swedish               |     `sv`      |                    |
+| Tamil            |     `ta`      |                    |
+| Telugu           |     `te`      |                    |
+| Thai            |     `th`      |                    |
+| Turkish              |     `tr`      |                    |
+| Ukrainian           |     `uk`      |                    |
+| Urdu            |     `ur`      |                    |
+| Uyghur            |     `ug`      |                    |
+| Uzbek            |     `uz`      |                    |
+| Vietnamese            |     `vi`      |                    |
+| Welsh            |     `cy`      |                    |
+| Western Frisian       |     `fy`      |                    |
+| Xhosa            |     `xh`      |                    |
+| Yiddish            |     `yi`      |                    |
## Next steps
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/language-detection/language-support.md
Previously updated : 11/02/2021 Last updated : 10/24/2023

# Language support for Language Detection
-Use this article to learn which natural languages are supported by Language Detection.
--
-> [!NOTE]
-> Languages are added as new [model versions](how-to/call-api.md#specify-the-language-detection-model) are released. The current model version for Language Detection is `2022-10-01`.
+Use this article to learn which natural languages language detection supports.
The Language Detection feature can detect a wide range of languages, variants, dialects, and some regional/cultural languages, and return detected languages with their name and code. The returned language code parameters conform to the [BCP-47](https://tools.ietf.org/html/bcp47) standard, with most of them conforming to [ISO-639-1](https://www.iso.org/iso-639-language-codes.html) identifiers.
-If you have content expressed in a less frequently used language, you can try Language Detection to see if it returns a code. The response for languages that cannot be detected is `unknown`.
+If you have content expressed in a less frequently used language, you can try Language Detection to see if it returns a code. The response for languages that can't be detected is `unknown`.
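A minimal sketch of calling language detection through the Language service's `analyze-text` REST operation (endpoint, key, and sample text are placeholders):

```powershell
# Placeholders - use your own Language resource endpoint and key.
$endpoint = 'https://<your-resource>.cognitiveservices.azure.com'
$key      = '<your-key>'

$body = @{
    kind          = 'LanguageDetection'
    analysisInput = @{ documents = @(@{ id = '1'; text = 'Ce texte est en français.' }) }
} | ConvertTo-Json -Depth 5

$result = Invoke-RestMethod -Method Post `
    -Uri "$endpoint/language/:analyze-text?api-version=2023-04-01" `
    -Headers @{ 'Ocp-Apim-Subscription-Key' = $key } `
    -ContentType 'application/json' `
    -Body $body

# Each document returns a detected language name and code; text that can't be
# detected comes back as 'unknown'.
$result.results.documents | ForEach-Object { $_.detectedLanguage }
```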
## Languages supported by Language Detection
-| Language | Language Code | Starting with model version: |
-||||
-| Afrikaans | `af` | |
-| Albanian | `sq` | |
-| Amharic | `am` | 2021-01-05 |
-| Arabic | `ar` | |
-| Armenian | `hy` | |
-| Assamese | `as` | 2021-01-05 |
-| Azerbaijani | `az` | 2021-01-05 |
-| Bashkir | `ba` | 2022-10-01 |
-| Basque | `eu` | |
-| Belarusian | `be` | |
-| Bengali | `bn` | |
-| Bosnian | `bs` | 2020-09-01 |
-| Bulgarian | `bg` | |
-| Burmese | `my` | |
-| Catalan | `ca` | |
-| Central Khmer | `km` | |
-| Chinese | `zh` | |
-| Chinese Simplified | `zh_chs` | |
-| Chinese Traditional | `zh_cht` | |
-| Chuvash | `cv` | 2022-10-01 |
-| Corsican | `co` | 2021-01-05 |
-| Croatian | `hr` | |
-| Czech | `cs` | |
-| Danish | `da` | |
-| Dari | `prs` | 2020-09-01 |
-| Divehi | `dv` | |
-| Dutch | `nl` | |
-| English | `en` | |
-| Esperanto | `eo` | |
-| Estonian | `et` | |
-| Faroese | `fo` | 2022-10-01 |
-| Fijian | `fj` | 2020-09-01 |
-| Finnish | `fi` | |
-| French | `fr` | |
-| Galician | `gl` | |
-| Georgian | `ka` | |
-| German | `de` | |
-| Greek | `el` | |
-| Gujarati | `gu` | |
-| Haitian | `ht` | |
-| Hausa | `ha` | 2021-01-05 |
-| Hebrew | `he` | |
-| Hindi | `hi` | |
-| Hmong Daw | `mww` | 2020-09-01 |
-| Hungarian | `hu` | |
-| Icelandic | `is` | |
-| Igbo | `ig` | 2021-01-05 |
-| Indonesian | `id` | |
-| Inuktitut | `iu` | |
-| Irish | `ga` | |
-| Italian | `it` | |
-| Japanese | `ja` | |
-| Javanese | `jv` | 2021-01-05 |
-| Kannada | `kn` | |
-| Kazakh | `kk` | 2020-09-01 |
-| Kinyarwanda | `rw` | 2021-01-05 |
-| Kirghiz | `ky` | 2022-10-01 |
-| Korean | `ko` | |
-| Kurdish | `ku` | |
-| Lao | `lo` | |
-| Latin | `la` | |
-| Latvian | `lv` | |
-| Lithuanian | `lt` | |
-| Luxembourgish | `lb` | 2021-01-05 |
-| Macedonian | `mk` | |
-| Malagasy | `mg` | 2020-09-01 |
-| Malay | `ms` | |
-| Malayalam | `ml` | |
-| Maltese | `mt` | |
-| Maori | `mi` | 2020-09-01 |
-| Marathi | `mr` | 2020-09-01 |
-| Mongolian | `mn` | 2021-01-05 |
-| Nepali | `ne` | 2021-01-05 |
-| Norwegian | `no` | |
-| Norwegian Nynorsk | `nn` | |
-| Odia | `or` | |
-| Pasht | `ps` | |
-| Persian | `fa` | |
-| Polish | `pl` | |
-| Portuguese | `pt` | |
-| Punjabi | `pa` | |
-| Queretaro Otomi | `otq` | 2020-09-01 |
-| Romanian | `ro` | |
-| Russian | `ru` | |
-| Samoan | `sm` | 2020-09-01 |
-| Serbian | `sr` | |
-| Shona | `sn` | 2021-01-05 |
-| Sindhi | `sd` | 2021-01-05 |
-| Sinhala | `si` | |
-| Slovak | `sk` | |
-| Slovenian | `sl` | |
-| Somali | `so` | |
-| Spanish | `es` | |
-| Sundanese | `su` | 2021-01-05 |
-| Swahili | `sw` | |
-| Swedish | `sv` | |
-| Tagalog | `tl` | |
-| Tahitian | `ty` | 2020-09-01 |
-| Tajik | `tg` | 2021-01-05 |
-| Tamil | `ta` | |
-| Tatar | `tt` | 2021-01-05 |
-| Telugu | `te` | |
-| Thai | `th` | |
-| Tibetan | `bo` | 2021-01-05 |
-| Tigrinya | `ti` | 2021-01-05 |
-| Tongan | `to` | 2020-09-01 |
-| Turkish | `tr` | 2021-01-05 |
-| Turkmen | `tk` | 2021-01-05 |
-| Upper Sorbian | `hsb` | 2022-10-01 |
-| Uyghur | `ug` | 2022-10-01 |
-| Ukrainian | `uk` | |
-| Urdu | `ur` | |
-| Uzbek | `uz` | |
-| Vietnamese | `vi` | |
-| Welsh | `cy` | |
-| Xhosa | `xh` | 2021-01-05 |
-| Yiddish | `yi` | |
-| Yoruba | `yo` | 2021-01-05 |
-| Yucatec Maya | `yua` | |
-| Zulu | `zu` | 2021-01-05 |
+| Language | Language Code |
+|||
+| Afrikaans | `af` |
+| Albanian | `sq` |
+| Amharic | `am` |
+| Arabic | `ar` |
+| Armenian | `hy` |
+| Assamese | `as` |
+| Azerbaijani | `az` |
+| Bashkir | `ba` |
+| Basque | `eu` |
+| Belarusian | `be` |
+| Bengali | `bn` |
+| Bosnian | `bs` |
+| Bulgarian | `bg` |
+| Burmese | `my` |
+| Catalan | `ca` |
+| Central Khmer | `km` |
+| Chinese | `zh` |
+| Chinese Simplified | `zh_chs` |
+| Chinese Traditional | `zh_cht` |
+| Chuvash | `cv` |
+| Corsican | `co` |
+| Croatian | `hr` |
+| Czech | `cs` |
+| Danish | `da` |
+| Dari | `prs` |
+| Divehi | `dv` |
+| Dutch | `nl` |
+| English | `en` |
+| Esperanto | `eo` |
+| Estonian | `et` |
+| Faroese | `fo` |
+| Fijian | `fj` |
+| Finnish | `fi` |
+| French | `fr` |
+| Galician | `gl` |
+| Georgian | `ka` |
+| German | `de` |
+| Greek | `el` |
+| Gujarati | `gu` |
+| Haitian | `ht` |
+| Hausa | `ha` |
+| Hebrew | `he` |
+| Hindi | `hi` |
+| Hmong Daw | `mww` |
+| Hungarian | `hu` |
+| Icelandic | `is` |
+| Igbo | `ig` |
+| Indonesian | `id` |
+| Inuktitut | `iu` |
+| Irish | `ga` |
+| Italian | `it` |
+| Japanese | `ja` |
+| Javanese | `jv` |
+| Kannada | `kn` |
+| Kazakh | `kk` |
+| Kinyarwanda | `rw` |
+| Kirghiz | `ky` |
+| Korean | `ko` |
+| Kurdish | `ku` |
+| Lao | `lo` |
+| Latin | `la` |
+| Latvian | `lv` |
+| Lithuanian | `lt` |
+| Luxembourgish | `lb` |
+| Macedonian | `mk` |
+| Malagasy | `mg` |
+| Malay | `ms` |
+| Malayalam | `ml` |
+| Maltese | `mt` |
+| Maori | `mi` |
+| Marathi | `mr` |
+| Mongolian | `mn` |
+| Nepali | `ne` |
+| Norwegian | `no` |
+| Norwegian Nynorsk | `nn` |
+| Odia | `or` |
+| Pashto | `ps` |
+| Persian | `fa` |
+| Polish | `pl` |
+| Portuguese | `pt` |
+| Punjabi | `pa` |
+| Queretaro Otomi | `otq` |
+| Romanian | `ro` |
+| Russian | `ru` |
+| Samoan | `sm` |
+| Serbian | `sr` |
+| Shona | `sn` |
+| Sindhi | `sd` |
+| Sinhala | `si` |
+| Slovak | `sk` |
+| Slovenian | `sl` |
+| Somali | `so` |
+| Spanish | `es` |
+| Sundanese | `su` |
+| Swahili | `sw` |
+| Swedish | `sv` |
+| Tagalog | `tl` |
+| Tahitian | `ty` |
+| Tajik | `tg` |
+| Tamil | `ta` |
+| Tatar | `tt` |
+| Telugu | `te` |
+| Thai | `th` |
+| Tibetan | `bo` |
+| Tigrinya | `ti` |
+| Tongan | `to` |
+| Turkish | `tr` |
+| Turkmen | `tk` |
+| Upper Sorbian | `hsb` |
+| Uyghur | `ug` |
+| Ukrainian | `uk` |
+| Urdu | `ur` |
+| Uzbek | `uz` |
+| Vietnamese | `vi` |
+| Welsh | `cy` |
+| Xhosa | `xh` |
+| Yiddish | `yi` |
+| Yoruba | `yo` |
+| Yucatec Maya | `yua` |
+| Zulu | `zu` |
## Romanized Indic Languages supported by Language Detection
-| Language | Language Code | Starting with model version: |
-||||
-| Assamese | `as` | 2022-10-01 |
-| Bengali | `bn` | 2022-10-01 |
-| Gujarati | `gu` | 2022-10-01 |
-| Hindi | `hi` | 2022-10-01 |
-| Kannada | `kn` | 2022-10-01 |
-| Malayalam | `ml` | 2022-10-01 |
-| Marathi | `mr` | 2022-10-01 |
-| Odia | `or` | 2022-10-01 |
-| Punjabi | `pa` | 2022-10-01 |
-| Tamil | `ta` | 2022-10-01 |
-| Telugu | `te` | 2022-10-01 |
-| Urdu | `ur` | 2022-10-01 |
+| Language | Language Code |
+|||
+| Assamese | `as` |
+| Bengali | `bn` |
+| Gujarati | `gu` |
+| Hindi | `hi` |
+| Kannada | `kn` |
+| Malayalam | `ml` |
+| Marathi | `mr` |
+| Odia | `or` |
+| Punjabi | `pa` |
+| Tamil | `ta` |
+| Telugu | `te` |
+| Urdu | `ur` |
## Next steps
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/named-entity-recognition/language-support.md
Previously updated : 06/27/2022 Last updated : 10/24/2023
Use this article to learn which natural languages are supported by the NER feature of Azure AI Language.

> [!NOTE]
-> * Languages are added as new [model versions](how-to-call.md#specify-the-ner-model) are released.
-> * The language support below is for model version `2023-04-15-preview` for the Generally Available API.
> * You can additionally find the language support for the Preview API in the second tab.

## NER language support
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/personally-identifiable-information/language-support.md
Previously updated : 08/02/2022 Last updated : 10/24/2023
Use this article to learn which natural languages are supported by the PII and conversation PII (preview) features of Azure AI Language.
-> [!NOTE]
-> * Languages are added as new [model versions](how-to-call.md#specify-the-pii-detection-model) are released.
# [PII for documents](#tab/documents)

## PII language support
-|Language |Language code|Starting with model version|Notes |
-||-|||
-|Afrikaans |`af` |2023-04-15-preview | |
-|Amharic |`am` |2023-04-15-preview | |
-|Arabic |`ar` |2023-01-01-preview | |
-|Assamese |`as` |2023-04-15-preview | |
-|Azerbaijani |`az` |2023-04-15-preview | |
-|Bulgarian |`bg` |2023-04-15-preview | |
-|Bengali |`bn` |2023-04-15-preview | |
-|Bosnian |`bs` |2023-04-15-preview | |
-|Catalan |`ca` |2023-04-15-preview | |
-|Czech |`cs` |2023-01-01-preview | |
-|Welsh |`cy` |2020-04-01 | |
-|Danish |`da` |2023-01-01-preview | |
-|German |`de` |2021-01-15 | |
-|Greek |`el` |2023-04-15-preview | |
-|English |`en` |2020-07-01 | |
-|Spanish |`es` |2020-04-01 | |
-|Estonian |`et` |2023-04-15-preview | |
-|Basque |`eu` |2023-04-15-preview | |
-|Persian |`fa` |2023-04-15-preview | |
-|Finnish |`fi` |2023-01-01-preview | |
-|French |`fr` |2021-01-15 | |
-|Irish |`ga` |2023-04-15-preview | |
-|Galician |`gl` |2023-04-15-preview | |
-|Gujarati |`gu` |2023-04-15-preview | |
-|Hebrew |`he` |2023-01-01-preview | |
-|Hindi |`hi` |2023-01-01-preview | |
-|Croatian |`hr` |2023-04-15-preview | |
-|Hungarian |`hu` |2023-01-01-preview | |
-|Armenian |`hy` |2023-04-15-preview | |
-|Italian |`it` |2021-01-15 | |
-|Indonesian |`id` |2023-04-15-preview | |
-|Japanese |`ja` |2021-01-15 | |
-|Georgian |`ka` |2023-04-15-preview | |
-|Kazakh |`kk` |2023-04-15-preview | |
-|Khmer |`km` |2023-04-15-preview | |
-|Kannada |`kn` |2023-04-15-preview | |
-|Korean |`ko` |2021-01-15 | |
-|Kurdish(Kurmanji) |`ku` |2023-04-15-preview | |
-|Kyrgyz |`ky` |2023-04-15-preview | |
-|Lao |`lo` |2023-04-15-preview | |
-|Lithuanian |`lt` |2023-04-15-preview | |
-|Latvian |`lv` |2023-04-15-preview | |
-|Malagasy |`mg` |2023-04-15-preview | |
-|Macedonian |`mk` |2023-04-15-preview | |
-|Malayalam |`ml` |2023-04-15-preview | |
-|Mongolian |`mn` |2023-04-15-preview | |
-|Marathi |`mr` |2023-04-15-preview | |
-|Malay |`ms` |2023-04-15-preview | |
-|Burmese |`my` |2023-04-15-preview | |
-|Nepali |`ne` |2023-04-15-preview | |
-|Dutch |`nl` |2023-01-01-preview | |
-|Norwegian (Bokmål) |`no` |2023-01-01-preview |`nb` also accepted|
-|Odia |`or` |2023-04-15-preview | |
-|Punjabi |`pa` |2023-04-15-preview | |
-|Polish |`pl` |2023-01-01-preview | |
-|Pashto |`ps` |2023-04-15-preview | |
-|Portuguese (Brazil) |`pt-BR` |2021-01-15 | |
-|Portuguese (Portugal)|`pt-PT` |2021-01-15 |`pt` also accepted|
-|Romanian |`ro` |2023-04-15-preview | |
-|Russian |`ru` |2023-01-01-preview | |
-|Slovak |`sk` |2023-04-15-preview | |
-|Slovenian |`sl` |2023-04-15-preview | |
-|Somali |`so` |2023-04-15-preview | |
-|Albanian |`sq` |2023-04-15-preview | |
-|Serbian |`sr` |2023-04-15-preview | |
-|Swazi |`ss` |2023-04-15-preview | |
-|Swedish |`sv` |2023-01-01-preview | |
-|Swahili |`sw` |2023-04-15-preview | |
-|Tamil |`ta` |2023-04-15-preview | |
-|Telugu |`te` |2023-04-15-preview | |
-|Thai |`th` |2023-04-15-preview | |
-|Turkish |`tr` |2023-01-01-preview | |
-|Uyghur |`ug` |2023-04-15-preview | |
-|Ukrainian |`uk` |2023-04-15-preview | |
-|Urdu |`ur` |2023-04-15-preview | |
-|Uzbek |`uz` |2023-04-15-preview | |
-|Vietnamese |`vi` |2023-04-15-preview | |
-|Chinese-Simplified |`zh-hans` |2021-01-15 |`zh` also accepted|
-|Chinese-Traditional |`zh-hant` |2023-01-01-preview | |
+|Language |Language code|Notes |
+||-||
+|Afrikaans |`af` | |
+|Amharic |`am` | |
+|Arabic |`ar` | |
+|Assamese |`as` | |
+|Azerbaijani |`az` | |
+|Bulgarian |`bg` | |
+|Bengali |`bn` | |
+|Bosnian |`bs` | |
+|Catalan |`ca` | |
+|Czech |`cs` | |
+|Welsh |`cy` | |
+|Danish |`da` | |
+|German |`de` | |
+|Greek |`el` | |
+|English |`en` | |
+|Spanish |`es` | |
+|Estonian |`et` | |
+|Basque |`eu` | |
+|Persian |`fa` | |
+|Finnish |`fi` | |
+|French |`fr` | |
+|Irish |`ga` | |
+|Galician |`gl` | |
+|Gujarati |`gu` | |
+|Hebrew |`he` | |
+|Hindi |`hi` | |
+|Croatian |`hr` | |
+|Hungarian |`hu` | |
+|Armenian |`hy` | |
+|Italian |`it` | |
+|Indonesian |`id` | |
+|Japanese |`ja` | |
+|Georgian |`ka` | |
+|Kazakh |`kk` | |
+|Khmer |`km` | |
+|Kannada |`kn` | |
+|Korean |`ko` | |
+|Kurdish(Kurmanji) |`ku` | |
+|Kyrgyz |`ky` | |
+|Lao |`lo` | |
+|Lithuanian |`lt` | |
+|Latvian |`lv` | |
+|Malagasy |`mg` | |
+|Macedonian |`mk` | |
+|Malayalam |`ml` | |
+|Mongolian |`mn` | |
+|Marathi |`mr` | |
+|Malay |`ms` | |
+|Burmese |`my` | |
+|Nepali |`ne` | |
+|Dutch |`nl` | |
+|Norwegian (Bokmål) |`no` |`nb` also accepted|
+|Odia |`or` | |
+|Punjabi |`pa` | |
+|Polish |`pl` | |
+|Pashto |`ps` | |
+|Portuguese (Brazil) |`pt-BR` | |
+|Portuguese (Portugal)|`pt-PT` |`pt` also accepted|
+|Romanian |`ro` | |
+|Russian |`ru` | |
+|Slovak |`sk` | |
+|Slovenian |`sl` | |
+|Somali |`so` | |
+|Albanian |`sq` | |
+|Serbian |`sr` | |
+|Swazi |`ss` | |
+|Swedish |`sv` | |
+|Swahili |`sw` | |
+|Tamil |`ta` | |
+|Telugu |`te` | |
+|Thai |`th` | |
+|Turkish |`tr` | |
+|Uyghur |`ug` | |
+|Ukrainian |`uk` | |
+|Urdu |`ur` | |
+|Uzbek |`uz` | |
+|Vietnamese |`vi` | |
+|Chinese-Simplified |`zh-hans` |`zh` also accepted|
+|Chinese-Traditional |`zh-hant` | |
# [PII for conversations (preview)](#tab/conversations)

## PII language support
-| Language | Language code | Starting with model version | Notes |
-|--|||--|
-|German |`de` |2023-04-15-preview | |
-|English |`en` |2022-05-15-preview | |
-|Spanish |`es` |2023-04-15-preview | |
-|French |`fr` |2023-04-15-preview | |
+| Language | Language code | Notes |
+|--||--|
+|German |`de` | |
+|English |`en` | |
+|Spanish |`es` | |
+|French |`fr` | |
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/sentiment-opinion-mining/language-support.md
Previously updated : 09/18/2023 Last updated : 10/24/2023
Use this article to learn which languages are supported by Sentiment Analysis and Opinion Mining. Both the cloud-based API and [Docker containers](./how-to/use-containers.md) support the same languages.
-> [!NOTE]
-> Languages are added as new [model versions](../concepts/model-lifecycle.md) are released.
## Sentiment Analysis language support

Total supported language codes: 94
-| Language | Language code | Starting with model version | Notes |
-|-|-|-|-|
-| Afrikaans | `af` | 2022-10-01 | |
-| Albanian | `sq` | 2022-10-01 | |
-| Amharic | `am` | 2022-10-01 | |
-| Arabic | `ar` | 2022-06-01 | |
-| Armenian | `hy` | 2022-10-01 | |
-| Assamese | `as` | 2022-10-01 | |
-| Azerbaijani | `az` | 2022-10-01 | |
-| Basque | `eu` | 2022-10-01 | |
-| Belarusian (new) | `be` | 2022-10-01 | |
-| Bengali | `bn` | 2022-10-01 | |
-| Bosnian | `bs` | 2022-10-01 | |
-| Breton (new) | `br` | 2022-10-01 | |
-| Bulgarian | `bg` | 2022-10-01 | |
-| Burmese | `my` | 2022-10-01 | |
-| Catalan | `ca` | 2022-10-01 | |
-| Chinese (Simplified) | `zh-hans` | 2019-10-01 | `zh` also accepted |
-| Chinese (Traditional) | `zh-hant` | 2019-10-01 | |
-| Croatian | `hr` | 2022-10-01 | |
-| Czech | `cs` | 2022-10-01 | |
-| Danish | `da` | 2022-06-01 | |
-| Dutch | `nl` | 2019-10-01 | |
-| English | `en` | 2019-10-01 | |
-| Esperanto (new) | `eo` | 2022-10-01 | |
-| Estonian | `et` | 2022-10-01 | |
-| Filipino | `fil` | 2022-10-01 | |
-| Finnish | `fi` | 2022-06-01 | |
-| French | `fr` | 2019-10-01 | |
-| Galician | `gl` | 2022-10-01 | |
-| Georgian | `ka` | 2022-10-01 | |
-| German | `de` | 2019-10-01 | |
-| Greek | `el` | 2022-06-01 | |
-| Gujarati | `gu` | 2022-10-01 | |
-| Hausa (new) | `ha` | 2022-10-01 | |
-| Hebrew | `he` | 2022-10-01 | |
-| Hindi | `hi` | 2020-04-01 | |
-| Hungarian | `hu` | 2022-10-01 | |
-| Indonesian | `id` | 2022-10-01 | |
-| Irish | `ga` | 2022-10-01 | |
-| Italian | `it` | 2019-10-01 | |
-| Japanese | `ja` | 2019-10-01 | |
-| Javanese (new) | `jv` | 2022-10-01 | |
-| Kannada | `kn` | 2022-10-01 | |
-| Kazakh | `kk` | 2022-10-01 | |
-| Khmer | `km` | 2022-10-01 | |
-| Korean | `ko` | 2019-10-01 | |
-| Kurdish (Kurmanji) | `ku` | 2022-10-01 | |
-| Kyrgyz | `ky` | 2022-10-01 | |
-| Lao | `lo` | 2022-10-01 | |
-| Latin (new) | `la` | 2022-10-01 | |
-| Latvian | `lv` | 2022-10-01 | |
-| Lithuanian | `lt` | 2022-10-01 | |
-| Macedonian | `mk` | 2022-10-01 | |
-| Malagasy | `mg` | 2022-10-01 | |
-| Malay | `ms` | 2022-10-01 | |
-| Malayalam | `ml` | 2022-10-01 | |
-| Marathi | `mr` | 2022-10-01 | |
-| Mongolian | `mn` | 2022-10-01 | |
-| Nepali | `ne` | 2022-10-01 | |
-| Norwegian | `no` | 2019-10-01 | |
-| Odia | `or` | 2022-10-01 | |
-| Oromo (new) | `om` | 2022-10-01 | |
-| Pashto | `ps` | 2022-10-01 | |
-| Persian | `fa` | 2022-10-01 | |
-| Polish | `pl` | 2022-06-01 | |
-| Portuguese (Portugal) | `pt-PT` | 2019-10-01 | `pt` also accepted |
-| Portuguese (Brazil) | `pt-BR` | 2019-10-01 | |
-| Punjabi | `pa` | 2022-10-01 | |
-| Romanian | `ro` | 2022-10-01 | |
-| Russian | `ru` | 2022-06-01 | |
-| Sanskrit (new) | `sa` | 2022-10-01 | |
-| Scottish Gaelic (new) | `gd` | 2022-10-01 | |
-| Serbian | `sr` | 2022-10-01 | |
-| Sindhi (new) | `sd` | 2022-10-01 | |
-| Sinhala (new) | `si` | 2022-10-01 | |
-| Slovak | `sk` | 2022-10-01 | |
-| Slovenian | `sl` | 2022-10-01 | |
-| Somali | `so` | 2022-10-01 | |
-| Spanish | `es` | 2019-10-01 | |
-| Sundanese (new) | `su` | 2022-10-01 | |
-| Swahili | `sw` | 2022-10-01 | |
-| Swedish | `sv` | 2022-06-01 | |
-| Tamil | `ta` | 2022-10-01 | |
-| Telugu | `te` | 2022-10-01 | |
-| Thai | `th` | 2022-10-01 | |
-| Turkish | `tr` | 2022-10-01 | |
-| Ukrainian | `uk` | 2022-10-01 | |
-| Urdu | `ur` | 2022-10-01 | |
-| Uyghur | `ug` | 2022-10-01 | |
-| Uzbek | `uz` | 2022-10-01 | |
-| Vietnamese | `vi` | 2022-10-01 | |
-| Welsh | `cy` | 2022-10-01 | |
-| Western Frisian (new) | `fy` | 2022-10-01 | |
-| Xhosa (new) | `xh` | 2022-10-01 | |
-| Yiddish (new) | `yi` | 2022-10-01 | |
+| Language | Language code | Notes |
+|-|-|-|
+| Afrikaans | `af` | |
+| Albanian | `sq` | |
+| Amharic | `am` | |
+| Arabic | `ar` | |
+| Armenian | `hy` | |
+| Assamese | `as` | |
+| Azerbaijani | `az` | |
+| Basque | `eu` | |
+| Belarusian (new) | `be` | |
+| Bengali | `bn` | |
+| Bosnian | `bs` | |
+| Breton (new) | `br` | |
+| Bulgarian | `bg` | |
+| Burmese | `my` | |
+| Catalan | `ca` | |
+| Chinese (Simplified) | `zh-hans` | `zh` also accepted |
+| Chinese (Traditional) | `zh-hant` | |
+| Croatian | `hr` | |
+| Czech | `cs` | |
+| Danish | `da` | |
+| Dutch | `nl` | |
+| English | `en` | |
+| Esperanto (new) | `eo` | |
+| Estonian | `et` | |
+| Filipino | `fil` | |
+| Finnish | `fi` | |
+| French | `fr` | |
+| Galician | `gl` | |
+| Georgian | `ka` | |
+| German | `de` | |
+| Greek | `el` | |
+| Gujarati | `gu` | |
+| Hausa (new) | `ha` | |
+| Hebrew | `he` | |
+| Hindi | `hi` | |
+| Hungarian | `hu` | |
+| Indonesian | `id` | |
+| Irish | `ga` | |
+| Italian | `it` | |
+| Japanese | `ja` | |
+| Javanese (new) | `jv` | |
+| Kannada | `kn` | |
+| Kazakh | `kk` | |
+| Khmer | `km` | |
+| Korean | `ko` | |
+| Kurdish (Kurmanji) | `ku` | |
+| Kyrgyz | `ky` | |
+| Lao | `lo` | |
+| Latin (new) | `la` | |
+| Latvian | `lv` | |
+| Lithuanian | `lt` | |
+| Macedonian | `mk` | |
+| Malagasy | `mg` | |
+| Malay | `ms` | |
+| Malayalam | `ml` | |
+| Marathi | `mr` | |
+| Mongolian | `mn` | |
+| Nepali | `ne` | |
+| Norwegian | `no` | |
+| Odia | `or` | |
+| Oromo (new) | `om` | |
+| Pashto | `ps` | |
+| Persian | `fa` | |
+| Polish | `pl` | |
+| Portuguese (Portugal) | `pt-PT` | `pt` also accepted |
+| Portuguese (Brazil) | `pt-BR` | |
+| Punjabi | `pa` | |
+| Romanian | `ro` | |
+| Russian | `ru` | |
+| Sanskrit (new) | `sa` | |
+| Scottish Gaelic (new) | `gd` | |
+| Serbian | `sr` | |
+| Sindhi (new) | `sd` | |
+| Sinhala (new) | `si` | |
+| Slovak | `sk` | |
+| Slovenian | `sl` | |
+| Somali | `so` | |
+| Spanish | `es` | |
+| Sundanese (new) | `su` | |
+| Swahili | `sw` | |
+| Swedish | `sv` | |
+| Tamil | `ta` | |
+| Telugu | `te` | |
+| Thai | `th` | |
+| Turkish | `tr` | |
+| Ukrainian | `uk` | |
+| Urdu | `ur` | |
+| Uyghur | `ug` | |
+| Uzbek | `uz` | |
+| Vietnamese | `vi` | |
+| Welsh | `cy` | |
+| Western Frisian (new) | `fy` | |
+| Xhosa (new) | `xh` | |
+| Yiddish (new) | `yi` | |
### Opinion Mining language support

Total supported language codes: 94
-| Language | Language code | Starting with model version | Notes |
-|-|-|-|-|
-| Afrikaans (new) | `af` | 2022-11-01 | |
-| Albanian (new) | `sq` | 2022-11-01 | |
-| Amharic (new) | `am` | 2022-11-01 | |
-| Arabic | `ar` | 2022-11-01 | |
-| Armenian (new) | `hy` | 2022-11-01 | |
-| Assamese (new) | `as` | 2022-11-01 | |
-| Azerbaijani (new) | `az` | 2022-11-01 | |
-| Basque (new) | `eu` | 2022-11-01 | |
-| Belarusian (new) | `be` | 2022-11-01 | |
-| Bengali | `bn` | 2022-11-01 | |
-| Bosnian (new) | `bs` | 2022-11-01 | |
-| Breton (new) | `br` | 2022-11-01 | |
-| Bulgarian (new) | `bg` | 2022-11-01 | |
-| Burmese (new) | `my` | 2022-11-01 | |
-| Catalan (new) | `ca` | 2022-11-01 | |
-| Chinese (Simplified) | `zh-hans` | 2022-11-01 | `zh` also accepted |
-| Chinese (Traditional) (new) | `zh-hant` | 2022-11-01 | |
-| Croatian (new) | `hr` | 2022-11-01 | |
-| Czech (new) | `cs` | 2022-11-01 | |
-| Danish | `da` | 2022-11-01 | |
-| Dutch | `nl` | 2022-11-01 | |
-| English | `en` | 2020-04-01 | |
-| Esperanto (new) | `eo` | 2022-11-01 | |
-| Estonian (new) | `et` | 2022-11-01 | |
-| Filipino (new) | `fil` | 2022-11-01 | |
-| Finnish | `fi` | 2022-11-01 | |
-| French | `fr` | 2021-10-01 | |
-| Galician (new) | `gl` | 2022-11-01 | |
-| Georgian (new) | `ka` | 2022-11-01 | |
-| German | `de` | 2021-10-01 | |
-| Greek | `el` | 2022-11-01 | |
-| Gujarati (new) | `gu` | 2022-11-01 | |
-| Hausa (new) | `ha` | 2022-11-01 | |
-| Hebrew (new) | `he` | 2022-11-01 | |
-| Hindi | `hi` | 2022-11-01 | |
-| Hungarian | `hu` | 2022-11-01 | |
-| Indonesian | `id` | 2022-11-01 | |
-| Irish (new) | `ga` | 2022-11-01 | |
-| Italian | `it` | 2021-10-01 | |
-| Japanese | `ja` | 2022-11-01 | |
-| Javanese (new) | `jv` | 2022-11-01 | |
-| Kannada (new) | `kn` | 2022-11-01 | |
-| Kazakh (new) | `kk` | 2022-11-01 | |
-| Khmer (new) | `km` | 2022-11-01 | |
-| Korean | `ko` | 2022-11-01 | |
-| Kurdish (Kurmanji) | `ku` | 2022-11-01 | |
-| Kyrgyz (new) | `ky` | 2022-11-01 | |
-| Lao (new) | `lo` | 2022-11-01 | |
-| Latin (new) | `la` | 2022-11-01 | |
-| Latvian (new) | `lv` | 2022-11-01 | |
-| Lithuanian (new) | `lt` | 2022-11-01 | |
-| Macedonian (new) | `mk` | 2022-11-01 | |
-| Malagasy (new) | `mg` | 2022-11-01 | |
-| Malay (new) | `ms` | 2022-11-01 | |
-| Malayalam (new) | `ml` | 2022-11-01 | |
-| Marathi | `mr` | 2022-11-01 | |
-| Mongolian (new) | `mn` | 2022-11-01 | |
-| Nepali (new) | `ne` | 2022-11-01 | |
-| Norwegian | `no` | 2022-11-01 | |
-| Odia (new) | `or` | 2022-11-01 | |
-| Oromo (new) | `om` | 2022-11-01 | |
-| Pashto (new) | `ps` | 2022-11-01 | |
-| Persian (new) | `fa` | 2022-11-01 | |
-| Polish | `pl` | 2022-11-01 | |
-| Portuguese (Portugal) | `pt-PT` | 2021-10-01 | `pt` also accepted |
-| Portuguese (Brazil) | `pt-BR` | 2021-10-01 | |
-| Punjabi (new) | `pa` | 2022-11-01 | |
-| Romanian (new) | `ro` | 2022-11-01 | |
-| Russian | `ru` | 2022-11-01 | |
-| Sanskrit (new) | `sa` | 2022-11-01 | |
-| Scottish Gaelic (new) | `gd` | 2022-11-01 | |
-| Serbian (new) | `sr` | 2022-11-01 | |
-| Sindhi (new) | `sd` | 2022-11-01 | |
-| Sinhala (new) | `si` | 2022-11-01 | |
-| Slovak (new) | `sk` | 2022-11-01 | |
-| Slovenian (new) | `sl` | 2022-11-01 | |
-| Somali (new) | `so` | 2022-11-01 | |
-| Spanish | `es` | 2021-10-01 | |
-| Sundanese (new) | `su` | 2022-11-01 | |
-| Swahili (new) | `sw` | 2022-11-01 | |
-| Swedish | `sv` | 2022-11-01 | |
-| Tamil | `ta` | 2022-11-01 | |
-| Telugu | `te` | 2022-11-01 | |
-| Thai (new) | `th` | 2022-11-01 | |
-| Turkish | `tr` | 2022-11-01 | |
-| Ukrainian (new) | `uk` | 2022-11-01 | |
-| Urdu (new) | `ur` | 2022-11-01 | |
-| Uyghur (new) | `ug` | 2022-11-01 | |
-| Uzbek (new) | `uz` | 2022-11-01 | |
-| Vietnamese (new) | `vi` | 2022-11-01 | |
-| Welsh (new) | `cy` | 2022-11-01 | |
-| Western Frisian (new) | `fy` | 2022-11-01 | |
-| Xhosa (new) | `xh` | 2022-11-01 | |
-| Yiddish (new) | `yi` | 2022-11-01 | |
+| Language | Language code | Notes |
+|-|-|-|
+| Afrikaans (new) | `af` | |
+| Albanian (new) | `sq` | |
+| Amharic (new) | `am` | |
+| Arabic | `ar` | |
+| Armenian (new) | `hy` | |
+| Assamese (new) | `as` | |
+| Azerbaijani (new) | `az` | |
+| Basque (new) | `eu` | |
+| Belarusian (new) | `be` | |
+| Bengali | `bn` | |
+| Bosnian (new) | `bs` | |
+| Breton (new) | `br` | |
+| Bulgarian (new) | `bg` | |
+| Burmese (new) | `my` | |
+| Catalan (new) | `ca` | |
+| Chinese (Simplified) | `zh-hans` | `zh` also accepted |
+| Chinese (Traditional) (new) | `zh-hant` | |
+| Croatian (new) | `hr` | |
+| Czech (new) | `cs` | |
+| Danish | `da` | |
+| Dutch | `nl` | |
+| English | `en` | |
+| Esperanto (new) | `eo` | |
+| Estonian (new) | `et` | |
+| Filipino (new) | `fil` | |
+| Finnish | `fi` | |
+| French | `fr` | |
+| Galician (new) | `gl` | |
+| Georgian (new) | `ka` | |
+| German | `de` | |
+| Greek | `el` | |
+| Gujarati (new) | `gu` | |
+| Hausa (new) | `ha` | |
+| Hebrew (new) | `he` | |
+| Hindi | `hi` | |
+| Hungarian | `hu` | |
+| Indonesian | `id` | |
+| Irish (new) | `ga` | |
+| Italian | `it` | |
+| Japanese | `ja` | |
+| Javanese (new) | `jv` | |
+| Kannada (new) | `kn` | |
+| Kazakh (new) | `kk` | |
+| Khmer (new) | `km` | |
+| Korean | `ko` | |
+| Kurdish (Kurmanji) | `ku` | |
+| Kyrgyz (new) | `ky` | |
+| Lao (new) | `lo` | |
+| Latin (new) | `la` | |
+| Latvian (new) | `lv` | |
+| Lithuanian (new) | `lt` | |
+| Macedonian (new) | `mk` | |
+| Malagasy (new) | `mg` | |
+| Malay (new) | `ms` | |
+| Malayalam (new) | `ml` | |
+| Marathi | `mr` | |
+| Mongolian (new) | `mn` | |
+| Nepali (new) | `ne` | |
+| Norwegian | `no` | |
+| Odia (new) | `or` | |
+| Oromo (new) | `om` | |
+| Pashto (new) | `ps` | |
+| Persian (new) | `fa` | |
+| Polish | `pl` | |
+| Portuguese (Portugal) | `pt-PT` | `pt` also accepted |
+| Portuguese (Brazil) | `pt-BR` | |
+| Punjabi (new) | `pa` | |
+| Romanian (new) | `ro` | |
+| Russian | `ru` | |
+| Sanskrit (new) | `sa` | |
+| Scottish Gaelic (new) | `gd` | |
+| Serbian (new) | `sr` | |
+| Sindhi (new) | `sd` | |
+| Sinhala (new) | `si` | |
+| Slovak (new) | `sk` | |
+| Slovenian (new) | `sl` | |
+| Somali (new) | `so` | |
+| Spanish | `es` | |
+| Sundanese (new) | `su` | |
+| Swahili (new) | `sw` | |
+| Swedish | `sv` | |
+| Tamil | `ta` | |
+| Telugu | `te` | |
+| Thai (new) | `th` | |
+| Turkish | `tr` | |
+| Ukrainian (new) | `uk` | |
+| Urdu (new) | `ur` | |
+| Uyghur (new) | `ug` | |
+| Uzbek (new) | `uz` | |
+| Vietnamese (new) | `vi` | |
+| Welsh (new) | `cy` | |
+| Western Frisian (new) | `fy` | |
+| Xhosa (new) | `xh` | |
+| Yiddish (new) | `yi` | |
## Multi-lingual option (Custom sentiment analysis only)
ai-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/text-analytics-for-health/how-to/call-api.md
There are two ways to call the service:
## Development options

---
-## Specify the Text Analytics for health model
-
-By default, Text Analytics for health will use the ("2022-03-01") model version on your text. You can also configure your API requests to use a specific model version. The model you specify will be used to perform operations provided by the Text Analytics for health. Extraction of social determinants of health entities along with their assertions and relationships (**only in English**) is supported with the latest preview model version "2023-04-15-preview".
-
-| Supported Versions | Status |
-|--|--|
-| `2023-04-15-preview` | Preview |
-| `2023-04-01` | Generally available |
-| `2023-01-01-preview` | Preview |
-| `2022-08-15-preview` | Preview |
-| `2022-03-01` | Generally available |
-
-## Specify the Text Analytics for health API version
-
-When making a Text Analytics for health API call, you must specify an API version. The latest generally available API version is "2023-04-01" which supports relationship confidence scores in the results. The latest preview API version is "2023-04-15-preview", offering the latest feature which is support for [temporal assertions](../concepts/assertion-detection.md).
-
-| Supported Versions | Status |
-|--|--|
-| `2023-04-15-preview`| Preview |
-| `2023-04-01`| Generally available |
-| `2022-10-01-preview` | Preview |
-| `2022-05-01` | Generally available |
--
-### Text Analytics for health container
-
-The [Text Analytics for health container](use-containers.md) uses separate model versioning than the REST API and client libraries. Only one model version is available per container image.
-
-| Endpoint | Container Image Tag | Model version |
-||--||
-| `/entities/health` | `3.0.59413252-onprem-amd64` (latest) | `2022-03-01` |
-| `/entities/health` | `3.0.59413252-latin-onprem-amd64` (latin) | `2022-08-15-preview` |
-| `/entities/health` | `3.0.59413252-semitic-onprem-amd64` (semitic) | `2022-08-15-preview` |
-| `/entities/health` | `3.0.016230002-onprem-amd64` | `2021-05-15` |
-| `/entities/health` | `3.0.015370001-onprem-amd64` | `2021-03-01` |
-| `/entities/health` | `1.1.013530001-amd64-preview` | `2020-09-03` |
-| `/entities/health` | `1.1.013150001-amd64-preview` | `2020-07-24` |
-| `/domains/health` | `1.1.012640001-amd64-preview` | `2020-05-08` |
-| `/domains/health` | `1.1.012420001-amd64-preview` | `2020-05-08` |
-| `/domains/health` | `1.1.012070001-amd64-preview` | `2020-04-16` |
### Input languages
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/text-analytics-for-health/language-support.md
Previously updated : 01/04/2023 Last updated : 10/24/2023
Use this article to learn which natural languages are supported by Text Analytic
## Hosted API Service
-The hosted API service supports English language, model version 03-01-2022. Additional languages, English, Spanish, French, German Italian, Portuguese and Hebrew are supported with model version 2022-08-15-preview.
+The hosted API service supports the English, Spanish, French, German, Italian, Portuguese and Hebrew languages.
When structuring the API request, the relevant language tags must be added for these languages:
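For example, a request body carrying per-document language tags might look like the following sketch (the exact task shape is an assumption based on the Language service's async `Healthcare` task):

```powershell
# Illustrative request body; each document carries one of the supported
# language tags (en, es, fr, de, it, pt, he).
$body = @'
{
  "analysisInput": {
    "documents": [
      { "id": "1", "language": "en", "text": "Patient denies chest pain." },
      { "id": "2", "language": "es", "text": "El paciente niega dolor en el pecho." }
    ]
  },
  "tasks": [ { "kind": "Healthcare" } ]
}
'@
```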
## Docker container
-The docker container supports English language, model version 2022-03-01.
-Additional languages are also supported when using a docker container to deploy the API: Spanish, French, German Italian, Portuguese and Hebrew. This functionality is currently in preview, model version 2022-08-15-preview.
+The docker container supports the English, Spanish, French, German, Italian, Portuguese and Hebrew languages.
Full details for deploying the service in a container can be found [here](../text-analytics-for-health/how-to/use-containers.md). In order to download the new container images from the Microsoft public container registry, use the following [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command.
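As a sketch (the repository path below is an assumption based on the commonly documented image location; confirm it in the linked article), the pull commands map to the featured tags in the table that follows:

```powershell
# Pull the image variant matching the languages you need; 'latest' covers
# English, while 'latin' and 'semitic' cover the additional language sets.
docker pull mcr.microsoft.com/azure-cognitive-services/textanalytics/healthcare:latest
docker pull mcr.microsoft.com/azure-cognitive-services/textanalytics/healthcare:latin
docker pull mcr.microsoft.com/azure-cognitive-services/textanalytics/healthcare:semitic
```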
-## Details of the supported model versions for each language:
+## Details of the supported container tags:
-| Language Code | Model Version: | Featured Tag | Specific Tag |
-|:--|:-:|:-:|::|
-| `en` | 2022-03-01 | latest | 3.0.59413252-onprem-amd64 |
-| `en`, `es`, `it`, `fr`, `de`, `pt` | 2022-08-15-preview | latin | 3.0.60903415-latin-onprem-amd64 |
-| `he` | 2022-08-15-preview | semitic | 3.0.60903415-semitic-onprem-amd64 |
+| Language Code | Featured Tag | Specific Tag |
+|:--|:-:|::|
+| `en` | latest | 3.0.59413252-onprem-amd64 |
+| `en`, `es`, `it`, `fr`, `de`, `pt` | latin | 3.0.60903415-latin-onprem-amd64 |
+| `he` | semitic | 3.0.60903415-semitic-onprem-amd64 |
ai-services Content Filter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/content-filter.md
keywords:
> [!IMPORTANT]
> The content filtering system isn't applied to prompts and completions processed by the Whisper model in Azure OpenAI Service. Learn more about the [Whisper model in Azure OpenAI](models.md#whisper-preview).
-Azure OpenAI Service includes a content filtering system that works alongside core models. This system works by running both the prompt and completion through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions. Variations in API configurations and application design may affect completions and thus filtering behavior.
+Azure OpenAI Service includes a content filtering system that works alongside core models. This system works by running both the prompt and completion through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions. Variations in API configurations and application design might affect completions and thus filtering behavior.
-The content filtering models have been specifically trained and tested on the following languages: English, German, Japanese, Spanish, French, Italian, Portuguese, and Chinese. However, the service can work in many other languages, but the quality may vary. In all cases, you should do your own testing to ensure that it works for your application.
+The content filtering models have been specifically trained and tested on the following languages: English, German, Japanese, Spanish, French, Italian, Portuguese, and Chinese. However, the service can work in many other languages, but the quality might vary. In all cases, you should do your own testing to ensure that it works for your application.
-In addition to the content filtering system, the Azure OpenAI Service performs monitoring to detect content and/or behaviors that suggest use of the service in a manner that may violate applicable product terms. For more information about understanding and mitigating risks associated with your application, see the [Transparency Note for Azure OpenAI](/legal/cognitive-services/openai/transparency-note?tabs=text). For more information about how data is processed in connection with content filtering and abuse monitoring, see [Data, privacy, and security for Azure OpenAI Service](/legal/cognitive-services/openai/data-privacy?context=/azure/ai-services/openai/context/context#preventing-abuse-and-harmful-content-generation).
+In addition to the content filtering system, the Azure OpenAI Service performs monitoring to detect content and/or behaviors that suggest use of the service in a manner that might violate applicable product terms. For more information about understanding and mitigating risks associated with your application, see the [Transparency Note for Azure OpenAI](/legal/cognitive-services/openai/transparency-note?tabs=text). For more information about how data is processed in connection with content filtering and abuse monitoring, see [Data, privacy, and security for Azure OpenAI Service](/legal/cognitive-services/openai/data-privacy?context=/azure/ai-services/openai/context/context#preventing-abuse-and-harmful-content-generation).
The following sections provide information about the content filtering categories, the filtering severity levels and their configurability, and API scenarios to be considered in application design and implementation.
The content filtering system integrated in the Azure OpenAI Service contains neu
|Category|Description| |--|--|
-|Safe | Content may be related to violence, self-harm, sexual, or hate categories but the terms are used in general, journalistic, scientific, medical, and similar professional contexts, which are appropriate for most audiences. |
+|Safe | Content might be related to violence, self-harm, sexual, or hate categories but the terms are used in general, journalistic, scientific, medical, and similar professional contexts, which are appropriate for most audiences. |
|Low | Content that expresses prejudiced, judgmental, or opinionated views, includes offensive use of language, stereotyping, use cases exploring a fictional world (for example, gaming, literature) and depictions at low intensity.| | Medium | Content that uses offensive, insulting, mocking, intimidating, or demeaning language towards specific identity groups, includes depictions of seeking and executing harmful instructions, fantasies, glorification, promotion of harm at medium intensity. | |High | Content that displays explicit and severe harmful instructions, actions, damage, or abuse; includes endorsement, glorification, or promotion of severe harmful acts, extreme or illegal forms of harm, radicalization, or non-consensual power exchange or abuse.|
Content filtering configurations are created within a Resource in Azure AI Studi
## Scenario details
-When the content filtering system detects harmful content, you'll receive either an error on the API call if the prompt was deemed inappropriate or the `finish_reason` on the response will be `content_filter` to signify that some of the completion was filtered. When building your application or system, you'll want to account for these scenarios where the content returned by the Completions API is filtered, which may result in content that is incomplete. How you act on this information will be application specific. The behavior can be summarized in the following points:
+When the content filtering system detects harmful content, you'll receive either an error on the API call (if the prompt was deemed inappropriate), or the `finish_reason` on the response will be `content_filter`, signifying that some of the completion was filtered. When building your application or system, you'll want to account for these scenarios where the content returned by the Completions API is filtered, which might result in content that is incomplete. How you act on this information will be application specific. The behavior can be summarized in the following points:
- Prompts that are classified at a filtered category and severity level will return an HTTP 400 error. - Non-streaming completions calls won't return any content when the content is filtered. The `finish_reason` value will be set to content_filter. In rare cases with longer responses, a partial result can be returned. In these cases, the `finish_reason` will be updated.
When annotations are enabled as shown in the code snippet below, the following i
Annotations are currently in preview for Completions and Chat Completions (GPT models); the following code snippet shows how to use annotations in preview:
+# [Python](#tab/python)
++ ```python # Note: The openai-python library support for Azure OpenAI is in preview. # os.getenv() for the endpoint and key assumes that you are using environment variables.
except openai.error.InvalidRequestError as e:
```
+# [JavaScript](#tab/javascript)
+
+[Azure OpenAI JavaScript SDK source code & samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/openai/openai)
+
+```javascript
+
+import { OpenAIClient, AzureKeyCredential } from "@azure/openai";
+
+// Load the .env file if it exists
+import * as dotenv from "dotenv";
+dotenv.config();
+
+// You will need to set these environment variables or edit the following values
+const endpoint = process.env["ENDPOINT"] || "<endpoint>";
+const azureApiKey = process.env["AZURE_API_KEY"] || "<api key>";
+
+const messages = [
+ { role: "system", content: "You are a helpful assistant. You will talk like a pirate." },
+ { role: "user", content: "Can you help me?" },
+ { role: "assistant", content: "Arrrr! Of course, me hearty! What can I do for ye?" },
+ { role: "user", content: "What's the best way to train a parrot?" },
+];
+
+export async function main() {
+ console.log("== Get completions Sample ==");
+
+ const client = new OpenAIClient(endpoint, new AzureKeyCredential(azureApiKey));
+ const deploymentId = "gpt-35-turbo"; //This needs to correspond to the name you chose when you deployed the model.
+ const events = await client.listChatCompletions(deploymentId, messages, { maxTokens: 128 });
+
+ for await (const event of events) {
+ for (const choice of event.choices) {
+ console.log(choice.message);
+ if (!choice.contentFilterResults) {
+ console.log("No content filter is found");
+ return;
+ }
+ if (choice.contentFilterResults.error) {
+ console.log(
+ `Content filter ran into the error ${choice.contentFilterResults.error.code}: ${choice.contentFilterResults.error.message}`
+ );
+ } else {
+ const { hate, sexual, selfHarm, violence } = choice.contentFilterResults;
+ console.log(
+ `Hate category is filtered: ${hate?.filtered} with ${hate?.severity} severity`
+ );
+ console.log(
+ `Sexual category is filtered: ${sexual?.filtered} with ${sexual?.severity} severity`
+ );
+ console.log(
+ `Self-harm category is filtered: ${selfHarm?.filtered} with ${selfHarm?.severity} severity`
+ );
+ console.log(
+ `Violence category is filtered: ${violence?.filtered} with ${violence?.severity} severity`
+ );
+ }
+ }
+ }
+}
+
+main().catch((err) => {
+ console.error("The sample encountered an error:", err);
+});
+```
++ For details on the inference REST API endpoints for Azure OpenAI and how to create Chat and Completions, see the [Azure OpenAI Service REST API reference guidance](../reference.md). Annotations are returned for all scenarios when using `2023-06-01-preview`. ### Example scenario: An input prompt containing content that is classified at a filtered category and severity level is sent to the completions API
ai-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/reference.md
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
- `2023-06-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json) - `2023-07-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json) - `2023-08-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
+- `2023-09-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
**Request body**
ai-services How To Migrate To Prebuilt Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-migrate-to-prebuilt-neural-voice.md
# Migrate from prebuilt standard voice to prebuilt neural voice > [!IMPORTANT]
-> We are retiring the standard voices from September 1, 2021 through August 31, 2024. If you used a standard voice with your Speech resource that was created prior to September 1, 2021 then you can continue to do so until August 31, 2024. All other Speech resources can only use prebuilt neural voices. You can choose from the supported [neural voice names](language-support.md?tabs=tts). After August 31, 2024 the standard voices won't be supported with any Speech resource.
+> We are retiring the standard voices from September 1, 2021 through August 31, 2024. If you used a standard voice with your Speech resource that was created prior to September 1, 2021 then you can continue to do so until August 31, 2024. To use neural voices, choose voice names that include 'Neural' in their name, for example: en-US-JennyMultilingualNeural. All other Speech resources can only use prebuilt neural voices. You can choose from the supported [neural voice names](language-support.md?tabs=tts). After August 31, 2024 the standard voices won't be supported with any Speech resource.
> > The pricing for prebuilt standard voice is different from prebuilt neural voice. Go to the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) and check the pricing details in the collapsible "Deprecated" section. Prebuilt standard voice (retired) is referred to as **Standard**.
aks Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-security.md
Node authorization is a special-purpose authorization mode that specifically aut
### Node deployment
-Nodes are deployed onto a private virtual network subnet, with no public IP addresses assigned. For troubleshooting and management purposes, SSH is enabled by default and only accessible using the internal IP address. Disabling SSH is during cluster and node pool creation, or for an existing cluster or node pool is in preview. See [Manage SSH access][manage-ssh-access] for more information.
+Nodes are deployed onto a private virtual network subnet, with no public IP addresses assigned. For troubleshooting and management purposes, SSH is enabled by default and only accessible using the internal IP address. Disabling SSH during cluster and node pool creation, or for an existing cluster or node pool, is in preview. See [Manage SSH access][manage-ssh-access] for more information.
### Node storage
For more information on core Kubernetes and AKS concepts, see:
[network-policy]: use-network-policies.md [microsoft-vulnerability-management-aks]: concepts-vulnerability-management.md [aks-vulnerability-management-nodes]: concepts-vulnerability-management.md#worker-nodes
-[manage-ssh-access]: manage-ssh-node-access.md
+[manage-ssh-access]: manage-ssh-node-access.md
aks Load Balancer Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/load-balancer-standard.md
description: Learn how to use a public load balancer with a Standard SKU to expo
Previously updated : 07/14/2023 Last updated : 10/30/2023 #Customer intent: As a cluster operator or developer, I want to learn how to create a service in AKS that uses an Azure Load Balancer with a Standard SKU.
spec:
The following annotations are supported for Kubernetes services with type `LoadBalancer`, and they only apply to **INBOUND** flows.
-| Annotation | Value | Description
-| -- | - |
-| `service.beta.kubernetes.io/azure-load-balancer-internal` | `true` or `false` | Specify whether the load balancer should be internal. If not set, it defaults to public.
-| `service.beta.kubernetes.io/azure-load-balancer-internal-subnet` | Name of the subnet | Specify which subnet the internal load balancer should be bound to. If not set, it defaults to the subnet configured in cloud config file.
-| `service.beta.kubernetes.io/azure-dns-label-name` | Name of the DNS label on Public IPs | Specify the DNS label name for the **public** service. If it's set to an empty string, the DNS entry in the Public IP isn't used.
-| `service.beta.kubernetes.io/azure-shared-securityrule` | `true` or `false` | Specify that the service should be exposed using an Azure security rule that might be shared with another service. Trade specificity of rules for an increase in the number of services that can be exposed. This annotation relies on the Azure [Augmented Security Rules](../virtual-network/network-security-groups-overview.md#augmented-security-rules) feature of Network Security groups.
-| `service.beta.kubernetes.io/azure-load-balancer-resource-group` | Name of the resource group | Specify the resource group of load balancer public IPs that aren't in the same resource group as the cluster infrastructure (node resource group).
-| `service.beta.kubernetes.io/azure-allowed-service-tags` | List of allowed service tags | Specify a list of allowed [service tags][service-tags] separated by commas.
-| `service.beta.kubernetes.io/azure-load-balancer-tcp-idle-timeout` | TCP idle timeouts in minutes | Specify the time in minutes for TCP connection idle timeouts to occur on the load balancer. The default and minimum value is 4. The maximum value is 30. The value must be an integer.
-| `service.beta.kubernetes.io/azure-load-balancer-ipv4` | IPv4 address | Specify the IPv4 address to assign to the load balancer.
-| `service.beta.kubernetes.io/azure-load-balancer-ipv6` | IPv6 address | Specify the IPv6 address to assign to the load balancer.
-
-> [!NOTE]
-> `service.beta.kubernetes.io/azure-load-balancer-disable-tcp-reset` was deprecated in Kubernetes 1.18 and removed in 1.20.
+| Annotation | Value | Description |
+|--|-|--|
+| `service.beta.kubernetes.io/azure-load-balancer-internal` | `true` or `false` | Specify whether the load balancer should be internal. If not set, it defaults to public. |
+| `service.beta.kubernetes.io/azure-load-balancer-internal-subnet` | Name of the subnet | Specify which subnet the internal load balancer should be bound to. If not set, it defaults to the subnet configured in cloud config file. |
+| `service.beta.kubernetes.io/azure-dns-label-name` | Name of the DNS label on Public IPs | Specify the DNS label name for the **public** service. If it's set to an empty string, the DNS entry in the Public IP isn't used. |
+| `service.beta.kubernetes.io/azure-shared-securityrule` | `true` or `false` | Specify that the service should be exposed through an Azure security rule that might be shared with another service, trading rule specificity for a greater number of exposable services. Relies on the Azure [Augmented Security Rules][augmented-security-rules] feature of network security groups. |
+| `service.beta.kubernetes.io/azure-load-balancer-resource-group` | Name of the resource group | Specify the resource group of load balancer public IPs that aren't in the same resource group as the cluster infrastructure (node resource group). |
+| `service.beta.kubernetes.io/azure-allowed-service-tags` | List of allowed service tags | Specify a list of allowed [service tags][service-tags] separated by commas. |
+| `service.beta.kubernetes.io/azure-load-balancer-tcp-idle-timeout` | TCP idle timeouts in minutes | Specify the time in minutes for TCP connection idle timeouts to occur on the load balancer. The default and minimum value is 4. The maximum value is 30. The value must be an integer. |
+| `service.beta.kubernetes.io/azure-load-balancer-disable-tcp-reset` | `true` or `false` | Specify whether the load balancer should disable TCP reset on idle timeout. |
+| `service.beta.kubernetes.io/azure-load-balancer-ipv4` | IPv4 address | Specify the IPv4 address to assign to the load balancer. |
+| `service.beta.kubernetes.io/azure-load-balancer-ipv6` | IPv6 address | Specify the IPv6 address to assign to the load balancer. |
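As an illustrative sketch (not taken from the upstream article), a Service manifest combining a few of these annotations might look like the following; the annotation names come from the table above, while the service name, subnet name, port, and timeout value are placeholder assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: internal-app        # placeholder name
  annotations:
    # Create an internal (private) load balancer instead of a public one
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    # Bind the internal load balancer to a specific subnet (assumed to exist in the cluster virtual network)
    service.beta.kubernetes.io/azure-load-balancer-internal-subnet: "apps-subnet"
    # Raise the TCP idle timeout from the default 4 minutes to 15 (integer, 4-30)
    service.beta.kubernetes.io/azure-load-balancer-tcp-idle-timeout: "15"
spec:
  type: LoadBalancer
  selector:
    app: internal-app
  ports:
    - port: 80
      targetPort: 8080
```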
### Customize the load balancer health probe
-| Annotation | Value | Description |
-| - | -- | -- |
-| `service.beta.kubernetes.io/azure-load-balancer-health-probe-interval` | Health probe interval | |
-| `service.beta.kubernetes.io/azure-load-balancer-health-probe-num-of-probe` | The minimum number of unhealthy responses of health probe | |
-| `service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path` | Request path of the health probe | |
-| `service.beta.kubernetes.io/port_{port}_no_lb_rule` | true/false | {port} is the port number in the service. When it is set to true, no lb rule and health probe rule for this port will be generated. health check service should not be exposed to the public internet(e.g. istio/envoy health check service)|
-| `service.beta.kubernetes.io/port_{port}_no_probe_rule` | true/false | {port} is the port number in the service. When it is set to true, no health probe rule for this port will be generated. |
-| `service.beta.kubernetes.io/port_{port}_health-probe_protocol` | Health probe protocol | {port} is the port number in the service. Explicit protocol for the health probe for the service port {port}, overriding port.appProtocol if set.|
-| `service.beta.kubernetes.io/port_{port}_health-probe_port` | port number or port name in service manifest | {port} is the port number in the service. Explicit port for the health probe for the service port {port}, overriding the default value. |
-| `service.beta.kubernetes.io/port_{port}_health-probe_interval` | Health probe interval | {port} is port number of service. |
-| `service.beta.kubernetes.io/port_{port}_health-probe_num-of-probe` | The minimum number of unhealthy responses of health probe | {port} is port number of service. |
-| `service.beta.kubernetes.io/port_{port}_health-probe_request-path` | Request path of the health probe | {port} is port number of service. |
+| Annotation | Value | Description |
+|-|--|--|
+| `service.beta.kubernetes.io/azure-load-balancer-health-probe-interval` | Health probe interval | |
+| `service.beta.kubernetes.io/azure-load-balancer-health-probe-num-of-probe` | The minimum number of unhealthy responses of health probe | |
+| `service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path` | Request path of the health probe | |
+| `service.beta.kubernetes.io/port_{port}_no_lb_rule` | true/false | {port} is the service port number. When set to true, no load balancer rule or health probe rule for this port is generated. Use this when the health check service shouldn't be exposed to the public internet (for example, an istio/envoy health check service). |
+| `service.beta.kubernetes.io/port_{port}_no_probe_rule` | true/false | {port} is service port number. When set to true, no health probe rule for this port is generated. |
+| `service.beta.kubernetes.io/port_{port}_health-probe_protocol` | Health probe protocol | {port} is service port number. Explicit protocol for the health probe for the service port {port}, overriding port.appProtocol if set. |
+| `service.beta.kubernetes.io/port_{port}_health-probe_port` | port number or port name in service manifest | {port} is service port number. Explicit port for the health probe for the service port {port}, overriding the default value. |
+| `service.beta.kubernetes.io/port_{port}_health-probe_interval` | Health probe interval | {port} is service port number. |
+| `service.beta.kubernetes.io/port_{port}_health-probe_num-of-probe` | The minimum number of unhealthy responses of health probe | {port} is service port number. |
+| `service.beta.kubernetes.io/port_{port}_health-probe_request-path` | Request path of the health probe | {port} is service port number. |
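As a hedged sketch of per-port probe customization (with `{port}` replaced by the service port, and a placeholder service name, path, and thresholds), the annotations from the table above could be applied like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: probe-demo          # placeholder name
  annotations:
    # Probe service port 80 over HTTP at a custom path
    service.beta.kubernetes.io/port_80_health-probe_protocol: "Http"
    service.beta.kubernetes.io/port_80_health-probe_request-path: "/healthz"
    # Custom probe interval and minimum number of unhealthy responses for port 80
    service.beta.kubernetes.io/port_80_health-probe_interval: "15"
    service.beta.kubernetes.io/port_80_health-probe_num-of-probe: "2"
spec:
  type: LoadBalancer
  selector:
    app: probe-demo
  ports:
    - port: 80
```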
As documented [here](../load-balancer/load-balancer-custom-probe-overview.md), Tcp, Http, and Https are the three protocols supported by the load balancer service.
Since v1.20, service annotation `service.beta.kubernetes.io/azure-load-balancer-
Note that the request path is ignored when the probe protocol is TCP or when `spec.ports.appProtocol` is empty. More specifically: | loadbalancer sku | `externalTrafficPolicy` | spec.ports.Protocol | spec.ports.AppProtocol | `service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path` | LB Probe Protocol | LB Probe Request Path |
-| - | -- | - | - | -- | | |
+|--|--|--|--|--|--|--|
| standard | local | any | any | any | http | `/healthz` | | standard | cluster | udp | any | any | null | null | | standard | cluster | tcp | | (ignored) | tcp | null |
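To illustrate the mapping above with a minimal sketch (the service name and path are placeholder assumptions), setting a non-empty `spec.ports.appProtocol` makes the generated probe use that protocol together with the request path annotation:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: app-protocol-demo   # placeholder name
  annotations:
    # Applied to the probe because appProtocol below is non-empty (per the mapping above)
    service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path: "/healthz"
spec:
  type: LoadBalancer
  selector:
    app: app-protocol-demo
  ports:
    - port: 443
      appProtocol: https    # probe protocol follows appProtocol when set
```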
Different ports in a service can require different health probe configurations.
The following annotations can be used to customize probe configuration per service port. | port specific annotation | global probe annotation | Usage |
-| - | | - |
+|--|--|--|
| service.beta.kubernetes.io/port_{port}_no_lb_rule | N/A (no equivalent globally) | if set true, no lb rules and probe rules will be generated | | service.beta.kubernetes.io/port_{port}_no_probe_rule | N/A (no equivalent globally) | if set true, no probe rules will be generated | | service.beta.kubernetes.io/port_{port}_health-probe_protocol | N/A (no equivalent globally) | Set the health probe protocol for this service port (e.g. Http, Https, Tcp) |
To learn more about using internal load balancer for inbound traffic, see the [A
[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md [aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md [aks-sp]: kubernetes-service-principal.md#delegate-access-to-other-azure-resources
+[augmented-security-rules]: ../virtual-network/network-security-groups-overview.md#augmented-security-rules
[az-aks-show]: /cli/azure/aks#az_aks_show [az-aks-create]: /cli/azure/aks#az_aks_create [az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
aks Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/nat-gateway.md
This configuration requires bring-your-own networking (via [Kubenet][byo-vnet-ku
--assign-identity $IDENTITY_ID ```
-## Disable OutboundNAT for Windows (preview)
+## Disable OutboundNAT for Windows (Preview)
-Windows OutboundNAT can cause certain connection and communication issues with your AKS pods. Some of these issues include:
-
-* **Unhealthy backend status**: When you deploy an AKS cluster with [Application Gateway Ingress Control (AGIC)][agic] and [Application Gateway][app-gw] in different VNets, the backend health status becomes "Unhealthy." The outbound connectivity fails because the peered networked IP isn't present in the CNI config of the Windows nodes.
-* **Node port reuse**: Windows OutboundNAT uses port to translate your pod IP to your Windows node host IP, which can cause an unstable connection to the external service due to a port exhaustion issue.
-* **Invalid traffic routing to internal service endpoints**: When you create a load balancer service with `externalTrafficPolicy` set to *Local*, kube-proxy on Windows doesn't create the proper rules in the IPTables to route traffic to the internal service endpoints.
+Windows OutboundNAT can cause certain connection and communication issues with your AKS pods. An example is node port reuse: Windows OutboundNAT uses ports to translate your pod IP to your Windows node host IP, which can cause unstable connections to external services due to port exhaustion.
Windows enables OutboundNAT by default. You can now manually disable OutboundNAT when creating new Windows agent pools.
-> [!NOTE]
-> OutboundNAT can only be disabled on Windows Server 2019 node pools.
- ### Prerequisites
-* You need to use `aks-preview` and register the feature flag.
+* If you're using Kubernetes version 1.25 or older, you need to [update your deployment configuration][upgrade-kubernetes].
+* You need to install or update `aks-preview` and register the feature flag.
1. Install or update `aks-preview` using the [`az extension add`][az-extension-add] or [`az extension update`][az-extension-update] command.
- ```azurecli
- # Install aks-preview
-
- az extension add --name aks-preview
-
- # Update aks-preview
+ ```azurecli-interactive
+ # Install aks-preview
+ az extension add --name aks-preview
- az extension update --name aks-preview
- ```
+ # Update aks-preview
+ az extension update --name aks-preview
+ ```
2. Register the feature flag using the [`az feature register`][az-feature-register] command.
- ```azurecli
- az feature register --namespace Microsoft.ContainerService --name DisableWindowsOutboundNATPreview
- ```
+ ```azurecli-interactive
+ az feature register --namespace Microsoft.ContainerService --name DisableWindowsOutboundNATPreview
+ ```
3. Check the registration status using the [`az feature list`][az-feature-list] command.
- ```azurecli
- az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/DisableWindowsOutboundNATPreview')].{Name:name,State:properties.state}"
- ```
+ ```azurecli-interactive
+ az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/DisableWindowsOutboundNATPreview')].{Name:name,State:properties.state}"
+ ```
- 4. Refresh the registration of the `Microsoft.ContainerService` resource provider us
+ 4. Refresh the registration of the `Microsoft.ContainerService` resource provider using the [`az provider register`][az-provider-register] command.
- ```azurecli
- az provider register --namespace Microsoft.ContainerService
- ```
+ ```azurecli-interactive
+ az provider register --namespace Microsoft.ContainerService
+ ```
-* If you're using Kubernetes version 1.25 or older, you need to [update your deployment configuration][upgrade-kubernetes].
-* Cluster outbound type can't be set to load balancer.
-* If you need to switch from a load balancer to NAT gateway, you can either add a NAT gateway into the VNet or run [`az aks upgrade`][aks-upgrade] to update the outbound type.
+### Limitations
+
+* You can't set the cluster outbound type to LoadBalancer. You can set it to NAT Gateway or UDR:
+ * [NAT Gateway](./nat-gateway.md): NAT Gateway can automatically handle NAT connections and provides more SNAT ports than Standard Load Balancer. You might incur extra charges with this option.
+ * [UDR (UserDefinedRouting)](./limit-egress-traffic.md): You must keep port limitations in mind when configuring routing rules.
+ * If you need to switch from a load balancer to NAT Gateway, you can either add a NAT gateway into the VNet or run [`az aks upgrade`][aks-upgrade] to update the outbound type.
+
+> [!NOTE]
+> UserDefinedRouting has the following limitations:
+>
> * SNAT by Load Balancer (must use the default OutboundNAT) provides 64 ports on the host IP.
+> * SNAT by Azure Firewall (disable OutboundNAT) has 2496 ports per public IP.
+> * SNAT by NAT Gateway (disable OutboundNAT) has 64512 ports per public IP.
+> * If the Azure Firewall port range isn't enough for your application, you need to use NAT Gateway.
+> * Azure Firewall doesn't SNAT with Network rules when the destination IP address is in a private IP address range per [IANA RFC 1918 or shared address space per IANA RFC 6598](../firewall/snat-private-range.md).
### Manually disable OutboundNAT for Windows * Manually disable OutboundNAT for Windows when creating new Windows agent pools using the [`az aks nodepool add`][az-aks-nodepool-add] command with the `--disable-windows-outbound-nat` flag. > [!NOTE]
- > You can use an existing AKS cluster, but you may need to update the outbound type and add a node pool to enable `--disable-windows-outbound-nat`.
+ > You can use an existing AKS cluster, but you might need to update the outbound type and add a node pool to enable `--disable-windows-outbound-nat`.
- ```azurecli
+ ```azurecli-interactive
az aks nodepool add \ --resource-group myResourceGroup --cluster-name myNatCluster
For more information on Azure NAT Gateway, see [Azure NAT Gateway][nat-docs].
[az-network-nat-gateway-create]: /cli/azure/network/nat/gateway#az_network_nat_gateway_create [az-network-vnet-create]: /cli/azure/network/vnet#az_network_vnet_create [az-aks-nodepool-add]: /cli/azure/aks/nodepool#az_aks_nodepool_add
+[az-provider-register]: /cli/azure/provider#az_provider_register
api-management Self Hosted Gateway Settings Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-settings-reference.md
This guidance helps you provide the required information to define how to authen
| neighborhood.heartbeat.port | UDP port used for instances of a self-hosted gateway deployment to send heartbeats to other instances. | No | 4291 | v2.0+ | | policy.rate-limit.sync.port | UDP port used for self-hosted gateway instances to synchronize rate limiting across multiple instances. | No | 4290 | v2.0+ |
+## Kubernetes integration
+
+### Kubernetes Ingress
+
+> [!IMPORTANT]
+> Support for Kubernetes Ingress is currently experimental and not covered through Azure Support. Learn more on [GitHub](https://github.com/Azure/api-management-self-hosted-gateway-ingress).
+
+| Name | Description | Required | Default | Availability |
+|-|-|-|-|-|
+| k8s.ingress.enabled | Enable Kubernetes Ingress integration. | No | `false` | v1.2+ |
+| k8s.ingress.namespace | Kubernetes namespace to watch Kubernetes Ingress resources in. | No | `default` | v1.2+ |
+| k8s.ingress.dns.suffix | DNS suffix to build DNS hostname for services to send requests to. | No | `svc.cluster.local` | v2.4+ |
+| k8s.ingress.config.path | Path to Kubernetes configuration (Kubeconfig). | No | N/A | v2.4+ |
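As a minimal sketch, assuming the gateway reads these settings from the ConfigMap referenced by its Kubernetes deployment (the ConfigMap name and values below are placeholders), enabling the experimental Ingress integration might look like:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: contoso-gateway-environment   # placeholder; match the name your gateway deployment references
data:
  # Gateway configuration endpoint (placeholder value)
  config.service.endpoint: "contoso.configuration.azure-api.net"
  # Watch Kubernetes Ingress resources in the "apps" namespace
  k8s.ingress.enabled: "true"
  k8s.ingress.namespace: "apps"
```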
+ ## Metrics | Name | Description | Required | Default | Availability |
application-gateway Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/troubleshooting-guide.md
Logs can be collected from the ALB Controller by using the _kubectl logs_ comman
You should see the following if the pod is primary: `successfully acquired lease azure-alb-system/alb-controller-leader-election` 2. Collect the logs
- Logs from ALB Controller will be returned in JSON format.
+
+ Logs from ALB Controller will be returned in JSON format.
Execute the following kubectl command, replacing the name with the pod name returned in step 1: ```bash
automation Enable Vms Monitoring Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/enable-vms-monitoring-agent.md
Last updated 06/28/2023
-# Enable Change Tracking and Inventory using Azure Monitoring Agent (Preview)
+# Enable Change Tracking and Inventory using Azure Monitoring Agent
-**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: Windows Registry :heavy_check_mark: Windows Files :heavy_check_mark: Linux Files :heavy_check_mark: Windows Software
+**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: Windows Registry :heavy_check_mark: Windows Files :heavy_check_mark: Linux Files :heavy_check_mark: Windows Software :heavy_check_mark: File Content Changes
-> [!IMPORTANT]
-> Currently, the policies to enable Change tracking and inventory with Azure monitoring Agent are in preview. For a seamless policy experience, we recommend that you begin by enabling the *Microsoft.Compute/AutomaticExtensionUpgradePreview* feature flag for your specific subscription. To register for this feature flag, go to **Azure portal** > **Subscriptions** > *Select specific subscription name*. In the **Preview features**, select **Automatic Extension Upgrade Preview** and then select **Register**. :::image type="content" source="media/enable-vms-monitoring-agent/enable-feature-flag.png" alt-text="Screenshot to register the feature flag.":::
- This article describes how you can enable [Change Tracking and Inventory](overview.md) for single and multiple Azure Virtual Machines (VMs) from the Azure portal. ## Prerequisites
automation Guidance Migration Log Analytics Monitoring Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/guidance-migration-log-analytics-monitoring-agent.md
+
+ Title: Migration guidance from Change Tracking and inventory using Log Analytics to Azure Monitoring Agent
+description: An overview on how to migrate from Change Tracking and inventory using Log Analytics to Azure Monitoring Agent.
++++ Last updated : 09/14/2023+++
+# Migration guidance from Change Tracking and inventory using Log Analytics to Change Tracking and inventory using Azure Monitoring Agent
+
+**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: Azure Arc-enabled servers.
+
+This article provides guidance to move from Change Tracking and Inventory using the Log Analytics version to the Azure Monitoring Agent version.
+
+## Onboarding to Change tracking and inventory using Azure Monitoring Agent
+
+### [Using Azure portal - for single VM](#tab/ct-single-vm)
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and select your virtual machine.
+1. Under **Operations**, select **Change tracking**.
+1. Select **Configure with AMA** and, in the **Configure with Azure monitor agent** pane, provide the **Log analytics workspace**. Select **Migrate** to initiate the deployment.
+
+ :::image type="content" source="media/guidance-migration-log-analytics-monitoring-agent/onboarding-single-vm-inline.png" alt-text="Screenshot of onboarding a single VM to Change tracking and inventory using Azure monitoring agent." lightbox="media/guidance-migration-log-analytics-monitoring-agent/onboarding-single-vm-expanded.png":::
+
+1. Select **Switch to CT&I with AMA** to evaluate the incoming events and logs across the LA agent and AMA versions.
+
+ :::image type="content" source="media/guidance-migration-log-analytics-monitoring-agent/switch-versions-inline.png" alt-text="Screenshot that shows switching between log analytics and Azure Monitoring Agent after a successful migration." lightbox="media/guidance-migration-log-analytics-monitoring-agent/switch-versions-expanded.png":::
+
+### [Using Azure portal - for Automation account](#tab/ct-at-scale)
+
+1. Sign in to [Azure portal](https://portal.azure.com) and select your Automation account.
+1. Under **Configuration Management**, select **Change tracking** and then select **Configure with AMA**.
+
+ :::image type="content" source="media/guidance-migration-log-analytics-monitoring-agent/onboarding-at-scale-inline.png" alt-text="Screenshot of onboarding at scale to Change tracking and inventory using Azure monitoring agent." lightbox="media/guidance-migration-log-analytics-monitoring-agent/onboarding-at-scale-expanded.png":::
+
+1. On the **Onboarding to Change Tracking with Azure Monitoring** page, you can view your automation account and list of machines that are currently on Log Analytics and ready to be onboarded to Azure Monitoring Agent of Change Tracking and inventory.
+1. On the **Assess virtual machines** tab, select the machines and then select **Next**.
+1. On the **Assign workspace** tab, assign a new [Log Analytics workspace resource ID](#obtain-log-analytics-workspace-resource-id) in which the settings of the AMA-based solution should be stored, and select **Next**.
+
+ :::image type="content" source="media/guidance-migration-log-analytics-monitoring-agent/assign-workspace-inline.png" alt-text="Screenshot of assigning new Log Analytics resource ID." lightbox="media/guidance-migration-log-analytics-monitoring-agent/assign-workspace-expanded.png":::
+
+1. On the **Review** tab, review the machines that are being onboarded and the new workspace.
+1. Select **Migrate** to initiate the deployment.
+
+1. After a successful migration, select **Switch to CT&I with AMA** to compare both the LA and AMA experience.
+
+ :::image type="content" source="media/guidance-migration-log-analytics-monitoring-agent/switch-versions-inline.png" alt-text="Screenshot that shows switching between log analytics and Azure Monitoring Agent after a successful migration." lightbox="media/guidance-migration-log-analytics-monitoring-agent/switch-versions-expanded.png":::
++
+### [Using PowerShell script](#tab/ps-policy)
+
+#### Prerequisites
+
+- Ensure that you have the Windows PowerShell console installed. Follow the steps to [install Windows PowerShell](https://learn.microsoft.com/powershell/scripting/windows-powershell/install/installing-windows-powershell?view=powershell-7.3).
+- We recommend that you use PowerShell version 7.1.3 or higher.
+- Obtain Read access for the specified workspace resources.
+- Ensure that you have the `Az.Accounts` and `Az.OperationalInsights` modules installed. The `Az.PowerShell` module is used to pull workspace agent configuration information.
+- Ensure that you have the Azure credentials to run `Connect-AzAccount` and `Select-AzContext`, which set the context for the script to run.
+
+Follow these steps to migrate using scripts.
+
+#### Migration guidance
+
+1. Install the script that conducts the migration.
+1. Ensure that the new workspace resource ID is different from the workspace ID associated with Change Tracking and Inventory using the LA version.
+1. Migrate settings for the following data types:
+ - Windows Services
+ - Linux Files
+ - Windows Files
+ - Windows Registry
+ - Linux Daemons
+1. Generate and associate a new DCR to transfer the settings to Change Tracking and Inventory using AMA.
+
+#### Onboard at scale
+
+Use the [script](https://github.com/mayguptMSFT/AzureMonitorCommunity/blob/master/Azure%20Services/Azure%20Monitor/Agents/Migration%20Tools/DCR%20Config%20Generator/CTDcrGenerator/CTWorkSpaceSettingstoDCR.ps1) to migrate Change tracking workspace settings to a data collection rule (DCR).
+
+#### Parameters
+
| **Parameter** | **Required** | **Description** |
|---|---|---|
| `InputWorkspaceResourceId` | Yes | Resource ID of the workspace associated with Change Tracking & Inventory using Log Analytics. |
| `OutputWorkspaceResourceId` | Yes | Resource ID of the workspace associated with Change Tracking & Inventory using Azure Monitoring Agent. |
| `OutputDCRName` | Yes | Custom name of the new DCR created. |
| `OutputDCRLocation` | Yes | Azure location of the output workspace ID. |
| `OutputDCRTemplateFolderPath` | Yes | Folder path where DCR templates are created. |
+++
+### Obtain Log Analytics Workspace Resource ID
+
+To obtain the Log Analytics Workspace resource ID, follow these steps:
+
+1. Sign in to [Azure portal](https://portal.azure.com)
+1. In **Log Analytics Workspace**, select the specific workspace and select **Json View**.
+1. Copy the **Resource ID**.
++
+## Limitations
+
+### [Using Azure portal](#tab/limit-single-vm)
+
+**For single VM and Automation Account**
+
+1. Up to 100 VMs per Automation account can be migrated in one instance.
+1. Migrating a VM with more than 100 file/registry settings through the portal isn't currently supported.
+1. Arc VM migration isn't supported through the portal; we recommend that you use the PowerShell script migration.
+1. For File Content changes-based settings, you have to migrate manually from the LA version to the AMA version of Change Tracking & Inventory. Follow the guidance listed in [Track file contents](manage-change-tracking-monitoring-agent.md#configure-file-content-changes).
+1. Alerts that you configure using the Log Analytics Workspace must be [manually configured](configure-alerts.md).
+
+### [Using PowerShell script](#tab/limit-policy)
+
+1. For File Content changes-based settings, you have to migrate manually from the LA version to the AMA version of Change Tracking & Inventory. Follow the guidance listed in [Track file contents](manage-change-tracking.md#track-file-contents).
+1. Alerts that you configure using the Log Analytics Workspace must be [manually configured](configure-alerts.md).
+++
+## Disable Change tracking using Log Analytics Agent
+
+After you enable management of your virtual machines using Change Tracking and Inventory with Azure Monitoring Agent, you might decide to stop using Change Tracking & Inventory with the LA agent version and remove the configuration from the account.
+
+The disable method involves the following:
+- [Remove change tracking with the LA agent for selected VMs within the Log Analytics workspace](remove-vms-from-change-tracking.md).
+- [Remove change tracking with the LA agent from the entire Log Analytics workspace](remove-feature.md).
+
+## Next steps
+- To enable from the Azure portal, see [Enable Change Tracking and Inventory from the Azure portal](../change-tracking/enable-vms-monitoring-agent.md).
+
automation Manage Change Tracking Monitoring Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/manage-change-tracking-monitoring-agent.md
Title: Manage change tracking and inventory in Azure Automation using Azure Monitoring Agent (Preview)
-description: This article tells how to use change tracking and inventory to track software and Microsoft service changes in your environment using Azure Monitoring Agent (Preview)
+ Title: Manage change tracking and inventory in Azure Automation using Azure Monitoring Agent
+description: This article tells how to use change tracking and inventory to track software and Microsoft service changes in your environment using Azure Monitoring Agent
Last updated 07/17/2023
-# Manage change tracking and inventory using Azure Monitoring Agent (Preview)
+# Manage change tracking and inventory using Azure Monitoring Agent
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: Windows Registry :heavy_check_mark: Windows Files :heavy_check_mark: Linux Files :heavy_check_mark: Windows Software
automation Overview Monitoring Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/overview-monitoring-agent.md
Title: Azure Automation Change Tracking and Inventory overview using Azure Monitoring Agent (Preview)
-description: This article describes the Change Tracking and Inventory feature using Azure monitoring agent (Preview), which helps you identify software and Microsoft service changes in your environment.
+ Title: Azure Automation Change Tracking and Inventory overview using Azure Monitoring Agent
+description: This article describes the Change Tracking and Inventory feature using Azure monitoring agent, which helps you identify software and Microsoft service changes in your environment.
Previously updated : 09/08/2023 Last updated : 10/02/2023
-# Overview of change tracking and inventory using Azure Monitoring Agent (Preview)
+# Overview of change tracking and inventory using Azure Monitoring Agent
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: Windows Registry :heavy_check_mark: Windows Files :heavy_check_mark: Linux Files :heavy_check_mark: Windows Software :heavy_check_mark: Windows Services & Linux Daemons > [!Important]
-> Currently, Change tracking and inventory uses Log Analytics Agent and this is scheduled to retire by 31.August.2024. We recommend that you use Azure Monitoring Agent as the new supporting agent.
-> Guidance on migration from Change Tracking & Inventory using Log Analytics agent to Azure Monitoring Agent will be available once it is generally available.
+> - Currently, Change tracking and inventory uses the Log Analytics agent, which is scheduled to retire by 31 August 2024. We recommend that you use Azure Monitoring Agent as the new supporting agent.
+> - Guidance on migration from Change Tracking & Inventory using Log Analytics agent to Azure Monitoring Agent will be available once it is generally available. [Learn more](guidance-migration-log-analytics-monitoring-agent.md).
+> - We recommend that you use Change Tracking with Azure Monitoring Agent with the Change tracking extension version 2.20.0.0 (or above) to access the GA version of this service.
-This article explains on the latest version of change tracking support using Azure Monitoring Agent (Preview) as a singular agent for data collection.
+This article explains the latest version of change tracking support using Azure Monitoring Agent as a singular agent for data collection.
+
+> [!NOTE]
+> The [Current GA version](../../defender-for-cloud/file-integrity-monitoring-enable-log-analytics.md) of File Integrity Monitoring based on Log Analytics agent, will be deprecated in August 2024, and a **new version will be provided over MDE soon**.  The **[FIM Public Preview](../../defender-for-cloud/file-integrity-monitoring-enable-ama.md) based on Azure Monitor Agent (AMA), will be deprecated when the alternative is provided over MDE**. Hence, the FIM with AMA Public Preview version is not planned for GA. Read the announcement [here](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/microsoft-defender-for-cloud-strategy-and-plan-towards-log/ba-p/3883341).
## Key benefits
so that all VMs point to a single workspace for data collection and maintenance.
## Current limitations
-Change Tracking and Inventory using Azure Monitoring Agent (Preview) doesn't support or has the following limitations:
+Change Tracking and Inventory using Azure Monitoring Agent doesn't support the following features or has these limitations:
- Recursion for Windows registry tracking - Network file systems
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/overview.md
The following table shows the tracked item limits per machine for Change Trackin
|Services|250| |Daemons|250|
-The average Log Analytics data usage for a machine using Change Tracking and Inventory is approximately 40 MB per month, depending on your environment. With the Usage and Estimated Costs feature of the Log Analytics workspace, you can view the data ingested by Change Tracking and Inventory in a usage chart. Use this data view to evaluate your data usage and determine how it affects your bill. See [Understand your usage and estimate costs](../../azure-monitor/usage-estimated-costs.md).
+The average Log Analytics data usage for a machine using Change Tracking and Inventory is approximately 40 MB per month, depending on your environment. With the Usage and Estimated Costs feature of the Log Analytics workspace, you can view the data ingested by Change Tracking and Inventory in a usage chart. Use this data view to evaluate your data usage and determine how it affects your bill. See [Understand your usage and estimate costs](../../azure-monitor/cost-usage.md#usage-and-estimated-costs).
### Windows services data
automation Region Mappings Monitoring Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/region-mappings-monitoring-agent.md
Title: Supported regions for Change tracking and inventory using Azure Monitoring Agent (Preview)
+ Title: Supported regions for Change tracking and inventory using Azure Monitoring Agent
description: This article describes the supported region mappings between an Automation account and monitoring agent workspace as it relates to certain features of Azure Automation. Last updated 12/14/2022
-# Supported regions for Change tracking and inventory Azure Monitoring Agent (Preview)
+# Supported regions for Change tracking and inventory Azure Monitoring Agent
This article provides the supported regions for change tracking and inventory using Azure Monitoring Agent.
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/overview.md
Configuration Management in Azure Automation is supported by two capabilities:
### Change Tracking and Inventory
-Change Tracking and Inventory combines functions to allow you to track Linux and Windows virtual machine and server infrastructure changes. The service supports change tracking across services, daemons, software, registry, and files in your environment to help you diagnose unwanted changes and raise alerts. Inventory support allows you to query in-guest resources for visibility into installed applications and other configuration items. For details of this feature, see [Change Tracking and Inventory](change-tracking/overview.md).
+[Change Tracking and Inventory](change-tracking/overview.md) combines functions to allow you to track Linux and Windows virtual machine and server infrastructure changes. The service supports change tracking across services, daemons, software, registry, and files in your environment to help you diagnose unwanted changes and raise alerts. Inventory support allows you to query in-guest resources for visibility into installed applications and other configuration items. Change Tracking & Inventory is now supported with the Azure Monitoring Agent version. [Learn more](change-tracking/overview-monitoring-agent.md).
### Azure Automation State Configuration
Azure Automation supports management throughout the lifecycle of your infrastruc
- Subscription management. - Start-stop resources to save cost. * **Monitoring & integrate** with 1st party (through Azure Monitor) or 3rd party external systems.
- - Ensure resource creation\deletion operations is captured to SQL.
+ - Ensure resource creation\deletion operations are captured to SQL.
- Send resource usage data to web API.
- - Send monitoring data to ServiceNow, Event Hub, New Relic and so on.
+ - Send monitoring data to ServiceNow, Event Hubs, New Relic and so on.
- Collect and store information about Azure resources. - Perform SQL monitoring checks & reporting. - Check website availability.
Azure Automation supports management throughout the lifecycle of your infrastruc
* **Find changes** - Identify and isolate machine changes that can cause misconfiguration and improve operational compliance. Remediate or escalate them to management systems.
-Depending on your requirements, one or more of the following Azure services integrate with or compliment Azure Automation to help fullfil them:
+Depending on your requirements, one or more of the following Azure services integrate with or complement Azure Automation to help fulfill them:
* [Azure Arc-enabled servers](../azure-arc/servers/overview.md) enables simplified onboarding of hybrid machines to Update Management, Change Tracking and Inventory, and the Hybrid Runbook Worker role. * [Azure Alerts action groups](../azure-monitor/alerts/action-groups.md) can initiate an Automation runbook when an alert is raised.
automation Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/onboarding.md
Failed to configure automation account for diagnostic logging
#### Cause
-This error can be caused if the pricing tier doesn't match the subscription's billing model. For more information, see [Monitoring usage and estimated costs in Azure Monitor](../../azure-monitor//usage-estimated-costs.md).
+This error can be caused if the pricing tier doesn't match the subscription's billing model. For more information, see [Monitoring usage and estimated costs in Azure Monitor](../../azure-monitor/cost-usage.md#usage-and-estimated-costs).
#### Resolution
automation Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new.md
This page is updated monthly, so revisit it regularly. If you're looking for ite
## October 2023 +
+### General Availability: Change Tracking using Azure Monitoring Agent
+
+Azure Automation announces General Availability of Change Tracking using Azure Monitoring Agent. [Learn more](change-tracking/guidance-migration-log-analytics-monitoring-agent.md).
+ ### Retirement of Run As accounts **Type: Retirement**
azure-app-configuration Concept Snapshots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-snapshots.md
For stores that use HMAC authentication, both the "read snapshot" operation (to
## Billing considerations and limits
-The storage quota for snapshots is detailed in the "storage per resource section" of the [App Configuration pricing page](https://azure.microsoft.com/pricing/details/app-configuration/) There's no extra charge for snapshots before the included snapshot storage quota is exhausted.
- App Configuration has two tiers, Free and Standard. Check the following details for snapshot quotas in each tier. * **Free tier**: This tier has a snapshot storage quota of 10 MB. One can create as many snapshots as possible as long as the total storage size of all active and archived snapshots is less than 10 MB.
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/overview.md
Title: Azure Arc resource bridge (preview) overview description: Learn how to use Azure Arc resource bridge (preview) to support VM self-servicing on Azure Stack HCI, VMware, and System Center Virtual Machine Manager. Previously updated : 10/26/2023 Last updated : 10/31/2023
While Azure has a number of redundancy features at every level of failure, if a
The following private cloud environments and their versions are officially supported for Arc resource bridge:
-* VMware vSphere version 6.7, 7.0, 8.0
+* VMware vSphere version 7.0, 8.0
* Azure Stack HCI * SCVMM
Arc resource bridge typically releases a new version on a monthly cadence, at th
* Learn more about [how Azure Arc-enabled VMware vSphere extends Azure's governance and management capabilities to VMware vSphere infrastructure](../vmware-vsphere/overview.md). * Learn more about [provisioning and managing on-premises Windows and Linux VMs running on Azure Stack HCI clusters](/azure-stack/hci/manage/azure-arc-enabled-virtual-machines).
-* Review the [system requirements](system-requirements.md) for deploying and managing Arc resource bridge.
+* Review the [system requirements](system-requirements.md) for deploying and managing Arc resource bridge.
azure-arc Prepare Extended Security Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prepare-extended-security-updates.md
We recommend you deploy your machines to Azure Arc in preparation for when the r
There are several at-scale onboarding options for Azure Arc-enabled servers, including running a [Custom Task Sequence](onboard-configuration-manager-custom-task.md) through Configuration Manager and deploying a [Scheduled Task through Group Policy](onboard-group-policy-powershell.md). > [!NOTE]
-> Delivery of ESUs through Azure Arc to virtual machines running on Virtual Desktop Infrastructure (VDI) is not supported. VDI systems should use Multiple Activation Keys (MAK) to apply ESUs. See [Access your Multiple Activation Key from the Microsoft 365 Admin Center](/windows-server/get-started/extended-security-updates-deploy) to learn more.
+> Delivery of ESUs through Azure Arc to virtual machines running on Virtual Desktop Infrastructure (VDI) is not recommended. VDI systems should use Multiple Activation Keys (MAK) to apply ESUs. See [Access your Multiple Activation Key from the Microsoft 365 Admin Center](/windows-server/get-started/extended-security-updates-deploy) to learn more.
> ### Networking
azure-arc Azure Arc Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/azure-arc-agent.md
Title: Azure Arc agent description: Learn about Azure Arc agent Previously updated : 10/23/2023 Last updated : 10/31/2023
# Azure Arc agent
-The Azure Connected Machine agent enables you to manage your Windows and Linux machines hosted outside of Azure on your corporate network or other cloud providers.
+When you [enable guest management](enable-guest-management-at-scale.md) on VMware VMs, the Azure Arc agent is installed on the VMs. The Azure Connected Machine agent enables you to manage your Windows and Linux machines hosted outside of Azure on your corporate network or other cloud providers. This article provides an architectural overview of the Azure Connected Machine agent.
## Agent components
azure-arc Browse And Enable Vcenter Resources In Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/browse-and-enable-vcenter-resources-in-azure.md
Title: Enable your VMware vCenter resources in Azure description: Learn how to browse your vCenter inventory and represent a subset of your VMware vCenter resources in Azure to enable self-service. Previously updated : 11/06/2023 Last updated : 10/31/2023
In this section, you will enable resource pools, networks, and other non-VM reso
For information on the capabilities enabled by a guest agent, see [Manage access to VMware resources through Azure RBAC](setup-and-manage-self-service-access.md).
+>[!NOTE]
+>Moving VMware vCenter resources between Resource Groups and Subscriptions is currently not supported.
+
## Next steps -- [Manage access to VMware resources through Azure RBAC](setup-and-manage-self-service-access.md).
+[Manage access to VMware resources through Azure RBAC](setup-and-manage-self-service-access.md).
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/overview.md
Title: What is Azure Arc-enabled VMware vSphere (preview)? description: Azure Arc-enabled VMware vSphere extends Azure governance and management capabilities to VMware vSphere infrastructure and delivers a consistent management experience across both platforms. Previously updated : 08/21/2023 Last updated : 10/31/2023
You have the flexibility to start with either option, and incorporate the other
## Supported VMware vSphere versions
-Azure Arc-enabled VMware vSphere (preview) currently works with vCenter Server versions 6.7, 7, and 8.
+Azure Arc-enabled VMware vSphere (preview) currently works with vCenter Server versions 7 and 8.
+ > [!NOTE] > Azure Arc-enabled VMware vSphere (preview) supports vCenters with a maximum of 9500 VMs. If your vCenter has more than 9500 VMs, we don't recommend you to use Arc-enabled VMware vSphere with it at this point.
azure-arc Switch To New Preview Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/switch-to-new-preview-version.md
If you're an existing **Azure Arc-enabled VMware** customer, for VMs that are Az
5. Once the resources are re-enabled, the VMs are automatically switched to the new preview version. The VM resources will now be represented as **Machine - Azure Arc (VMware)**. :::image type="content" source="media/switch-to-new-preview-version/new-vm-browse-view-inline.png" alt-text="Screenshot of New VM browse view." lightbox="media/switch-to-new-preview-version/new-vm-browse-view-expanded.png":::
-
+ ## Next steps [Quickstart: Connect VMware vCenter Server to Azure Arc by using the helper script](/azure/azure-arc/vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script).
azure-functions Dotnet Isolated In Process Differences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-in-process-differences.md
Use the following table to compare feature and functional differences between th
| Logging | [ILogger&lt;T&gt;]/[ILogger] obtained from [FunctionContext](/dotnet/api/microsoft.azure.functions.worker.functioncontext) or via [dependency injection](dotnet-isolated-process-guide.md#dependency-injection)| [ILogger] passed to the function<br/>[ILogger&lt;T&gt;] via [dependency injection](functions-dotnet-dependency-injection.md) | | Application Insights dependencies | [Supported](./dotnet-isolated-process-guide.md#application-insights) | [Supported](functions-monitoring.md#dependencies) | | Cancellation tokens | [Supported](dotnet-isolated-process-guide.md#cancellation-tokens) | [Supported](functions-dotnet-class-library.md#cancellation-tokens) |
-| Cold start times<sup>2</sup> | [Configurable optimizations (preview)](./dotnet-isolated-process-guide.md#performance-optimizations) | Optimized |
+| Cold start times<sup>2</sup> | [Configurable optimizations](./dotnet-isolated-process-guide.md#performance-optimizations) | Optimized |
| ReadyToRun | [Supported](dotnet-isolated-process-guide.md#readytorun) | [Supported](functions-dotnet-class-library.md#readytorun) | <sup>1</sup> When you need to interact with a service using parameters determined at runtime, using the corresponding service SDKs directly is recommended over using imperative bindings. The SDKs are less verbose, cover more scenarios, and have advantages for error handling and debugging purposes. This recommendation applies to both models.
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md
The following example performs clean-up actions if a cancellation request has be
This section outlines options you can enable to improve performance around [cold start](./event-driven-scaling.md#cold-start).
-### Placeholders (preview)
+In general, your app should use the latest versions of its core dependencies. At a minimum, you should update your project as follows:
-Placeholders are a platform capability that improves cold start. Normally, you do not have to be aware of them, but during the preview period for placeholders for .NET Isolated, they require some opt-in configuration. Placeholders require .NET 6 or later. To enable placeholders:
+- Upgrade [Microsoft.Azure.Functions.Worker] to version 1.19.0 or later.
+- Upgrade [Microsoft.Azure.Functions.Worker.Sdk] to version 1.15.1 or later.
+- Add a framework reference to `Microsoft.AspNetCore.App`, unless your app targets .NET Framework.
-- Set the `WEBSITE_USE_PLACEHOLDER_DOTNETISOLATED` application setting to "1"-- Ensure that the `netFrameworkVersion` property of the function app matches your project's target framework, which must be .NET 6 or later.-- Ensure that the function app is configured to use a 64-bit process.-- Update your project file:
- - Upgrade [Microsoft.Azure.Functions.Worker] to version 1.19.0 or later
- - Upgrade [Microsoft.Azure.Functions.Worker.Sdk] to version 1.14.1 or later
- - Add a framework reference to `Microsoft.AspNetCore.App`
- - Set the property `FunctionsEnableWorkerIndexing` to "True".
- - Set the property `FunctionsAutoRegisterGeneratedMetadataProvider` to "True"
-
-> [!NOTE]
-> Setting `FunctionsEnableWorkerIndexing` to "True" may cause an issue when debugging locally using version 4.0.5274 or earlier of the [Azure Functions Core Tools](./functions-run-local.md). The issue manifests with the debugger not being able to attach. If you encounter this issue, remove the `FunctionsEnableWorkerIndexing` property during local testing.
-
-The following CLI commands will set the application setting, update the `netFrameworkVersion` property, and make the app run as 64-bit. Replace `<groupName>` with the name of the resource group, and replace `<appName>` with the name of your function app. Replace `<framework>` with the appropriate version string, such as "v6.0" or "v7.0", according to your target .NET version.
-
-```azurecli
-az functionapp config appsettings set -g <groupName> -n <appName> --settings 'WEBSITE_USE_PLACEHOLDER_DOTNETISOLATED=1'
-az functionapp config set -g <groupName> -n <appName> --net-framework-version <framework>
-az functionapp config set -g <groupName> -n <appName> --use-32bit-worker-process false
-```
-
-The following example shows a project file with the appropriate changes in place:
+The following example shows this configuration in the context of a project file:
```xml
-<Project Sdk="Microsoft.NET.Sdk">
- <PropertyGroup>
- <TargetFramework>net6.0</TargetFramework>
- <AzureFunctionsVersion>v4</AzureFunctionsVersion>
- <OutputType>Exe</OutputType>
- <ImplicitUsings>enable</ImplicitUsings>
- <Nullable>enable</Nullable>
- <FunctionsEnableWorkerIndexing>True</FunctionsEnableWorkerIndexing>
- <FunctionsAutoRegisterGeneratedMetadataProvider>True</FunctionsAutoRegisterGeneratedMetadataProvider>
- </PropertyGroup>
<ItemGroup> <FrameworkReference Include="Microsoft.AspNetCore.App" /> <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.19.0" />
- <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.14.1" />
- </ItemGroup>
- <ItemGroup>
- <None Update="host.json">
- <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
- </None>
- <None Update="local.settings.json">
- <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
- <CopyToPublishDirectory>Never</CopyToPublishDirectory>
- </None>
- </ItemGroup>
- <ItemGroup>
- <Using Include="System.Threading.ExecutionContext" Alias="ExecutionContext" />
+ <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.15.1" />
</ItemGroup>
-</Project>
```
-### Optimized executor (preview)
+### Placeholders
-The function executor is a component of the platform that causes invocations to run. By default, it makes use of reflection, but a newer version is available in preview which removes this performance overhead. Normally, you do not have to be aware of this component, but during the preview period of the new version, it requires some opt-in configuration.
+Placeholders are a platform capability that improves cold start for apps targeting .NET 6 or later. The feature requires some opt-in configuration. To enable placeholders:
-To enable the optimized executor, you must update your project file:
+- **Update your project as detailed in the preceding section.**
+- Additionally, when using version 1.15.1 or earlier of `Microsoft.Azure.Functions.Worker.Sdk`, you must add two properties to the project file (see the example after this list):
+  - Set the property `FunctionsEnableWorkerIndexing` to "True".
+  - Set the property `FunctionsAutoRegisterGeneratedMetadataProvider` to "True".
+- Set the `WEBSITE_USE_PLACEHOLDER_DOTNETISOLATED` application setting to "1".
+- Ensure that the `netFrameworkVersion` property of the function app matches your project's target framework, which must be .NET 6 or later.
+- Ensure that the function app is configured to use a 64-bit process.
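For reference, here's a minimal sketch of how those two opt-in properties might look in a project file; the property names come from the list above, and the rest of the project file is omitted:

```xml
<PropertyGroup>
  <!-- Opt-in properties needed only with Microsoft.Azure.Functions.Worker.Sdk 1.15.1 or earlier -->
  <FunctionsEnableWorkerIndexing>True</FunctionsEnableWorkerIndexing>
  <FunctionsAutoRegisterGeneratedMetadataProvider>True</FunctionsAutoRegisterGeneratedMetadataProvider>
</PropertyGroup>
```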
-- Upgrade [Microsoft.Azure.Functions.Worker] to version 1.19.0 or later-- Upgrade [Microsoft.Azure.Functions.Worker.Sdk] to version 1.14.1 or later-- Set the property `FunctionsEnableExecutorSourceGen` to "True"
+> [!IMPORTANT]
+> Setting the `WEBSITE_USE_PLACEHOLDER_DOTNETISOLATED` application setting to "1" requires all other aspects of the configuration to be set correctly. Any deviation can cause startup failures.
-The following example shows a project file with the appropriate changes in place:
+The following CLI commands will set the application setting, update the `netFrameworkVersion` property, and make the app run as 64-bit. Replace `<groupName>` with the name of the resource group, and replace `<appName>` with the name of your function app. Replace `<framework>` with the appropriate version string, such as "v6.0", "v7.0", or "v8.0", according to your target .NET version.
-```xml
-<Project Sdk="Microsoft.NET.Sdk">
- <PropertyGroup>
- <TargetFramework>net6.0</TargetFramework>
- <AzureFunctionsVersion>v4</AzureFunctionsVersion>
- <OutputType>Exe</OutputType>
- <ImplicitUsings>enable</ImplicitUsings>
- <Nullable>enable</Nullable>
- <FunctionsEnableExecutorSourceGen>True</FunctionsEnableExecutorSourceGen>
- </PropertyGroup>
- <ItemGroup>
- <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.19.0" />
- <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.14.1" />
- </ItemGroup>
- <ItemGroup>
- <None Update="host.json">
- <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
- </None>
- <None Update="local.settings.json">
- <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
- <CopyToPublishDirectory>Never</CopyToPublishDirectory>
- </None>
- </ItemGroup>
- <ItemGroup>
- <Using Include="System.Threading.ExecutionContext" Alias="ExecutionContext" />
- </ItemGroup>
-</Project>
+```azurecli
+az functionapp config appsettings set -g <groupName> -n <appName> --settings 'WEBSITE_USE_PLACEHOLDER_DOTNETISOLATED=1'
+az functionapp config set -g <groupName> -n <appName> --net-framework-version <framework>
+az functionapp config set -g <groupName> -n <appName> --use-32bit-worker-process false
```
+### Optimized executor
+
+The function executor is a component of the platform that causes invocations to run. An optimized version of this component is available, and in version 1.15.1 or earlier of the SDK, it requires opt-in configuration. To enable the optimized executor, you must update your project file:
+
+- **Update your project as detailed in the preceding section.**
+- Additionally, set the property `FunctionsEnableExecutorSourceGen` to "True", as shown in the example below.
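For example, a minimal sketch of that property in a project file (required only with the older SDK versions noted above):

```xml
<PropertyGroup>
  <!-- Opt in to the optimized function executor -->
  <FunctionsEnableExecutorSourceGen>True</FunctionsEnableExecutorSourceGen>
</PropertyGroup>
```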
+ ### ReadyToRun You can compile your function app as [ReadyToRun binaries](/dotnet/core/deploying/ready-to-run). ReadyToRun is a form of ahead-of-time compilation that can improve startup performance to help reduce the effect of cold starts when running in a [Consumption plan](consumption-plan.md). ReadyToRun is available in .NET 6 and later versions and requires [version 4.0 or later](functions-versions.md) of the Azure Functions runtime.
ReadyToRun requires you to build the project against the runtime architecture of
| Linux | True | N/A (not supported) | | Linux | False | `linux-x64` |
-<sup>1</sup> Only 64-bit apps are eligible for some other performance optimizations such as [placeholders](#placeholders-preview).
+<sup>1</sup> Only 64-bit apps are eligible for some other performance optimizations.
To check if your Windows app is 32-bit or 64-bit, you can run the following CLI command, substituting `<group_name>` with the name of your resource group and `<app_name>` with the name of your application. An output of "true" indicates that the app is 32-bit, and "false" indicates 64-bit.
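For example, a sketch that reads the `use32BitWorkerProcess` site configuration property (the group and app names are placeholders):

```azurecli
az functionapp config show -g <group_name> -n <app_name> --query use32BitWorkerProcess
```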
azure-maps Migrate From Bing Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-app.md
If migrating an existing web application, check to see if it's using an open-sou
* [Leaflet] – Lightweight 2D map control for the web. [Leaflet code samples] \| [Leaflet plugin] * [OpenLayers] - A 2D map control for the web that supports projections. <!--[OpenLayers code samples] \|--> [OpenLayers plugin]
-If developing using a JavaScript framework, one of the following open-source projects may be useful:
+If developing using a JavaScript framework, one of the following open-source projects can be useful:
* [ng-azure-maps] - Angular 10 wrapper around Azure maps. * [AzureMapsControl.Components] - An Azure Maps Blazor component.
Azure Maps more [open-source modules for the web SDK] that extend its capabiliti
The following are some of the key differences between the Bing Maps and Azure Maps Web SDKs to be aware of: * In addition to providing a hosted endpoint for accessing the Azure Maps Web SDK, an npm package is available for embedding the Web SDK into apps if preferred. For more information, see [Use the Azure Maps map control]. This package also includes TypeScript definitions.
-* Bing Maps provides two hosted branches of their SDK; Release and Experimental. The Experimental branch may receive multiple updates a day when new development is taking place. Azure Maps only hosts a release branch, however experimental features are created as custom modules in the open-source Azure Maps code samples project. Bing Maps used to have a frozen branch as well that was updated less frequently, thus reducing the risk of breaking changes due to a release. In Azure Maps, you can use the npm module and point to any previous minor version release.
+* Bing Maps provides two hosted branches of its SDK: Release and Experimental. The Experimental branch can receive multiple updates a day when new development is taking place. Azure Maps only hosts a release branch; however, experimental features are created as custom modules in the open-source Azure Maps code samples project. Bing Maps used to have a frozen branch as well that was updated less frequently, thus reducing the risk of breaking changes due to a release. In Azure Maps, you can use the npm module and point to any previous minor version release.
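For example, a hypothetical pin to a specific release of the `azure-maps-control` npm package (the version is a placeholder):

```bash
npm install azure-maps-control@<version>
```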
> [!TIP] > Azure Maps publishes both minified and unminified versions of the SDK. Simply remove `.min` from the file names. The unminified version is useful when debugging issues but be sure to use the minified version in production to take advantage of the smaller file size.
The following code shows how to load a map with the same view in Azure Maps alon
Running this code in a browser displays a map that looks like the following image:
-![Azure Maps map](media/migrate-bing-maps-web-app/azure-maps-load-map.jpg)
For more information on how to set up and use the Azure Maps map control in a web app, see [Use the Azure Maps map control].
map = new atlas.Map('myMap', {
Here's an example of Azure Maps with the language set to "fr" and the user region set to `fr-FR`.
-![Localized Azure Maps map](media/migrate-bing-maps-web-app/bing-maps-localized-map.jpg)
+![Localized Azure Maps map](media/migrate-bing-maps-web-app/azure-maps-localized-map.jpg)
### Setting the map view
map.setStyle({
}); ```
-![Azure Maps set map view](media/migrate-bing-maps-web-app/azure-maps-set-map-view.jpg)
**More resources**
map.markers.add(new atlas.HtmlMarker({
})); ```
-![Azure Maps add marker](media/migrate-bing-maps-web-app/azure-maps-add-pushpin.jpg)
**After: Azure Maps using a Symbol Layer**
When using a Symbol layer, the data must be added to a data source, and the data
</html> ```
-![Azure Maps add symbol layer](media/migrate-bing-maps-web-app/azure-maps-add-pushpin.jpg)
**More resources**
When using a Symbol layer, the data must be added to a data source, and the data
Custom images can be used to represent points on a map. The following image is used in the examples below, which display a point on the map at (latitude: 51.5, longitude: -0.2) and offset the position of the marker so that the point of the pushpin icon aligns with the correct position on the map.
-| ![Azure Maps add puspin](media/migrate-bing-maps-web-app/yellow-pushpin.png)|
+| ![Azure Maps add pushpin.](media/migrate-bing-maps-web-app/yellow-pushpin.png)|
|:--:| | yellow-pushpin.png |
layer.add(pushpin);
map.layers.insert(layer); ```
-![Bing Maps add custom puspin](media/migrate-bing-maps-web-app/bing-maps-add-custom-pushpin.jpg)
+![Bing Maps add custom pushpin](media/migrate-bing-maps-web-app/bing-maps-add-custom-pushpin.jpg)
**After: Azure Maps using HTML Markers**
map.markers.add(new atlas.HtmlMarker({
})); ```
-![Azure Maps add custom marker](media/migrate-bing-maps-web-app/azure-maps-add-custom-marker.jpg)
**After: Azure Maps using a Symbol Layer** Symbol layers in Azure Maps support custom images as well, but the image needs to be loaded into the map resources first and assigned a unique ID. The symbol layer can then reference this ID. The symbol can be offset to align to the correct point on the image by using the icon `offset` option. In Azure Maps, an `anchor` option is used to specify the relative position of the symbol relative to the position coordinate using one of nine defined reference points; "center", "top", "bottom", "left", "right", "top-left", "top-right", "bottom-left", "bottom-right". The content is anchored and set to "bottom" by default that is the bottom center of the HTML content. To make it easier to migrate code from Bing Maps, set the anchor to "top-left", and then use the `offset` option with the same offset used in Bing Maps. The offsets in Azure Maps move in the opposite direction of Bing Maps, so multiply them by minus one.
-```javascript
+```html
<!DOCTYPE html> <html> <head>
map.layers.add(new atlas.layer.LineLayer(datasource, null, {
})); ```
-![Azure Maps line](media/migrate-bing-maps-web-app/azure-maps-line.jpg)
**More resources**
layer.add(polygon);
map.layers.insert(layer); ```
-![Bing Maps polyogn](media/migrate-bing-maps-web-app/azure-maps-polygon.jpg)
+![Bing Maps polygon](media/migrate-bing-maps-web-app/bing-maps-polygon.jpg)
**After: Azure Maps**
map.layers.add(new atlas.layer.LineLayer(datasource, null, {
})); ```
-![Azure Maps polyogn](media/migrate-bing-maps-web-app/azure-maps-polygon.jpg)
**More resources**
map.events.add('click', marker, function () {
}); ```
-![Azure Maps popup](media/migrate-bing-maps-web-app/azure-maps-popup.jpg)
> [!NOTE] > To do the same thing with a symbol, bubble, line or polygon layer, pass the layer into the maps event code instead of a marker.
The `DataSource` class has the following helper function for accessing additiona
| Function | Return type | Description | |-|--|--|
-| `getClusterChildren(clusterId: number)` | `Promise<Feature<Geometry, any> | Shape>` | Retrieves the children of the given cluster on the next zoom level. These children may be a combination of shapes and subclusters. The subclusters are features with properties matching cluster properties. |
+| `getClusterChildren(clusterId: number)` | `Promise<Feature<Geometry, any> | Shape>` | Retrieves the children of the given cluster on the next zoom level. These children can be a combination of shapes and subclusters. The subclusters are features with properties matching cluster properties. |
| `getClusterExpansionZoom(clusterId: number)` | `Promise<number>` | Calculates a zoom level that the cluster starts expanding or break apart. | | `getClusterLeaves(clusterId: number, limit: number, offset: number)` | `Promise<Feature<Geometry, any> | Shape>` | Retrieves all points in a cluster. Set the `limit` to return a subset of the points and use the `offset` to page through the points. |
GeoJSON data can be directly imported in Azure Maps using the `importDataFromUrl
</html> ```
-![Azure Maps clustering](media/migrate-bing-maps-web-app/azure-maps-clustering.jpg)
**More resources**
In Azure Maps, georeferenced images can be overlaid using the `atlas.layer.Image
</html> ```
-![Azure Maps ground overlay](media/migrate-bing-maps-web-app/azure-maps-ground-overlay.jpg)
**More resources**
In Azure Maps, GeoJSON is the main data format used in the web SDK, more spatial
</html> ```
-![Azure Maps kml](media/migrate-bing-maps-web-app/azure-maps-kml.jpg)
**More resources**
In Azure Maps, the drawing tools module needs to be loaded by loading the JavaSc
</html> ```
-![Azure Maps drawing tools](media/migrate-bing-maps-web-app/azure-maps-drawing-tools.jpg)
> [!TIP] > In Azure Maps layers the drawing tools provide multiple ways that users can draw shapes. For example, when drawing a polygon the user can click to add each point, or hold the left mouse button down and drag the mouse to draw a path. This can be modified using the `interactionType` option of the `DrawingManager`.
Learn more about migrating from Bing Maps to Azure Maps.
[atlas.io.read function]: /javascript/api/azure-maps-spatial-io/atlas.io#read-stringarraybufferblob--spatialdatareadoptions- [atlas.layer.ImageLayer.getCoordinatesFromEdges]: /javascript/api/azure-maps-control/atlas.layer.imagelayer#getcoordinatesfromedges-number--number--number--number--number- [atlas.Shape]: /javascript/api/azure-maps-control/atlas.shape
-[Azure AD]: azure-maps-authentication.md#azure-ad-authentication
[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account [Azure Maps Glossary]: glossary.md [Azure Maps React Component]: https://github.com/WiredSolutions/react-azure-maps
azure-maps Migrate From Google Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-web-app.md
Also:
> [!div class="checklist"] > * How to accomplish common mapping tasks using the Azure Maps Web SDK. > * Best practices to improve performance and user experience.
-> * Tips on how to make your application using more advanced features available in Azure Maps.
+> * Tips on using more advanced features available in Azure Maps.
If migrating an existing web application, check to see if it's using an open-source map control library. Examples of open-source map control library are: Cesium, Leaflet, and OpenLayers. You can still migrate your application, even if it uses an open-source map control library, and you don't want to use the Azure Maps Web SDK. In such case, connect your application to the Azure Maps [Render] services ([road tiles] | [satellite tiles]). The following points detail on how to use Azure Maps in some commonly used open-source map control libraries.
If migrating an existing web application, check to see if it's using an open-sou
* Leaflet – Lightweight 2D map control for the web. [Leaflet code sample] \| [Leaflet documentation]. * OpenLayers - A 2D map control for the web that supports projections. [OpenLayers documentation].
-If developing using a JavaScript framework, one of the following open-source projects may be useful:
+If developing using a JavaScript framework, one of the following open-source projects can be useful:
* [ng-azure-maps] - Angular 10 wrapper around Azure Maps. * [AzureMapsControl.Components] - An Azure Maps Blazor component.
For more information on supported languages, see [Localization support in Azure
Here's an example of Azure Maps with the language set to "fr" and the user region set to "fr-FR".
-![Azure Maps localization](media/migrate-google-maps-web-app/azure-maps-localization.jpg)
### Setting the map view
map.setStyle({
}); ```
-![Azure Maps set view](media/migrate-google-maps-web-app/azure-maps-set-view.jpg)
**More resources:**
map.markers.add(new atlas.HtmlMarker({
})); ```
-![Azure Maps HTML marker](media/migrate-google-maps-web-app/azure-maps-html-marker.jpg)
**After: Azure Maps using a Symbol Layer**
For a Symbol layer, add the data to a data source. Attach the data source to the
</html> ```
-![Azure Maps symbol layer](media/migrate-google-maps-web-app/azure-maps-symbol-layer.jpg)
**More resources:**
For a Symbol layer, add the data to a data source. Attach the data source to the
### Adding a custom marker
-You may use Custom images to represent points on a map. The following map uses a custom image to display a point on the map. The point is displayed at latitude: 51.5 and longitude: -0.2. The anchor offsets the position of the marker, so that the point of the pushpin icon aligns with the correct position on the map.
+You can use custom images to represent points on a map. The following map uses a custom image to display a point on the map. The point is displayed at latitude: 51.5 and longitude: -0.2. The anchor offsets the position of the marker, so that the point of the pushpin icon aligns with the correct position on the map.
<center>
map.markers.add(new atlas.HtmlMarker({
})); ```
-![Azure Maps custom HTML marker](media/migrate-google-maps-web-app/azure-maps-custom-html-marker.jpg)
**After: Azure Maps using a Symbol Layer**
Symbol layers in Azure Maps support custom images as well. First, load the image
</html> ```
-![Azure Maps custom icon symbol layer](media/migrate-google-maps-web-app/azure-maps-custom-icon-symbol-layer.jpg)</
> [!TIP] > To render advanced custom points, use multiple rendering layers together. For example, let's say you want to have multiple pushpins that have the same icon on different colored circles. Instead of creating a separate image for each color overlay, add a symbol layer on top of a bubble layer. Have the pushpins reference the same data source. This approach is more efficient than creating and maintaining many different images.
map.layers.add(new atlas.layer.LineLayer(datasource, null, {
})); ```
-![Azure Maps polyline](media/migrate-google-maps-web-app/azure-maps-polyline.jpg)
**More resources:**
map.layers.add(new atlas.layer.LineLayer(datasource, null, {
``` ![Azure Maps polygon](media/migrate-google-maps-web-app/azure-maps-polygon.jpg) **More resources:**
map.events.add('click', marker, function () {
}); ```
-![Azure Maps popup](media/migrate-google-maps-web-app/azure-maps-popup.jpg)
> [!NOTE] > You can do the same thing with a symbol, bubble, line or polygon layer by passing the chosen layer to the maps event code instead of a marker.
GeoJSON is the base data type in Azure Maps. Import it into a data source using
</html> ```
-![Azure Maps GeoJSON](media/migrate-google-maps-web-app/azure-maps-geojson.jpg)
**More resources:**
GeoJSON is the base data type in Azure Maps. Import it into a data source using
### Marker clustering
-When visualizing many data points on the map, points may overlap each other. Overlapping makes the map look cluttered, and the map becomes difficult to read and use. Clustering point data is the process of combining data points that are near each other and representing them on the map as a single clustered data point. As the user zooms into the map, the clusters break apart into their individual data points. Cluster data points to improve user experience and map performance.
+When lots of data points appear on the map, points can overlap, making the map look cluttered and difficult to read and use. Clustering point data is the process of combining data points that are near each other and representing them on the map as a single clustered data point. As the user zooms into the map, the clusters break apart into their individual data points. Clustering data points improves the user experience and map performance.
In the following examples, the code loads a GeoJSON feed of earthquake data from the past week and adds it to the map. Clusters are rendered as scaled and colored circles. The scale and color of the circles depends on the number of points they contain.
The `DataSource` class has the following helper function for accessing additiona
| Method | Return type | Description | |--|-|-|
-| `getClusterChildren(clusterId: number)` | Promise&lt;Array&lt;Feature&lt;Geometry, any&gt; \| Shape&gt;&gt; | Retrieves the children of the given cluster on the next zoom level. These children may be a combination of shapes and subclusters. The subclusters are features with properties matching ClusteredProperties. |
+| `getClusterChildren(clusterId: number)` | Promise&lt;Array&lt;Feature&lt;Geometry, any&gt; \| Shape&gt;&gt; | Retrieves the children of the given cluster on the next zoom level. These children can be a combination of shapes and subclusters. The subclusters are features with properties matching ClusteredProperties. |
| `getClusterExpansionZoom(clusterId: number)` | Promise&lt;number&gt; | Calculates a zoom level at which the cluster starts expanding or break apart. | | `getClusterLeaves(clusterId: number, limit: number, offset: number)` | Promise&lt;Array&lt;Feature&lt;Geometry, any&gt; \| Shape&gt;&gt; | Retrieves all points in a cluster. Set the `limit` to return a subset of the points, and use the `offset` to page through the points. |
Directly import GeoJSON data using the `importDataFromUrl` function on the `Data
map.layers.add([ //Create a bubble layer for rendering clustered data points. new atlas.layer.BubbleLayer(datasource, null, {
- //Scale the size of the clustered bubble based on the number of points inthe cluster.
+ //Scale the size of the clustered bubble based on the number of points in the cluster.
radius: [ 'step', ['get', 'point_count'],
Directly import GeoJSON data using the `importDataFromUrl` function on the `Data
``` ![Azure Maps clustering](media/migrate-google-maps-web-app/azure-maps-clustering.jpg) **More resources:**
Load the GeoJSON data into a data source and connect the data source to a heat m
</html> ```
-![Azure Maps heat map](media/migrate-google-maps-web-app/azure-maps-heatmap.jpg)
**More resources:**
map.overlayMapTypes.insertAt(0, new google.maps.ImageMapType({
Add a tile layer to the map similarly to any other layer. Use a formatted URL that has x, y, zoom placeholders (`{x}`, `{y}`, `{z}`) to tell the layer where to access the tiles. Azure Maps tile layers also support `{quadkey}`, `{bbox-epsg-3857}`, and `{subdomain}` placeholders. > [!TIP]
-> In Azure Maps layers can easily be rendered below other layers, including base map layers. Often it is desirable to render tile layers below the map labels so that they are easy to read. The `map.layers.add` method takes in a second parameter which is the id of the layer in which to insert the new layer below. To insert a tile layer below the map labels, use this code: `map.layers.add(myTileLayer, "labels");`
+> In Azure Maps, layers can easily be rendered beneath other layers, including base map layers. Often it is desirable to render tile layers below the map labels so that they are easy to read. The `map.layers.add` method takes a second parameter, which is the ID of the layer below which to insert the new layer. To insert a tile layer below the map labels, use this code: `map.layers.add(myTileLayer, "labels");`
```javascript //Create a tile layer and add it to the map below the label layer.
map.layers.add(new atlas.layer.TileLayer({
}), 'labels'); ```
-![Azure Maps tile layer](media/migrate-google-maps-web-app/azure-maps-tile-layer.jpg)
> [!TIP] > Tile requests can be captured using the `transformRequest` option of the map. This will allow you to modify or add headers to the request if desired.
map.setTraffic({
}); ```
-![Azure Maps traffic](media/migrate-google-maps-web-app/azure-maps-traffic.jpg)
If you select one of the traffic icons in Azure Maps, more information is displayed in a popup.
-![Azure Maps traffic incident](media/migrate-google-maps-web-app/azure-maps-traffic-incident.jpg)
**More resources:**
Use the `atlas.layer.ImageLayer` class to overlay georeferenced images. This cla
</html> ```
-![Azure Maps image overlay](media/migrate-google-maps-web-app/azure-maps-image-overlay.jpg)
**More resources:**
Both Azure and Google Maps can import and render KML, KMZ and GeoRSS data on the
#### Before: Google Maps
-```javascript
+```html
<!DOCTYPE html> <html> <head>
In Azure Maps, GeoJSON is the main data format used in the web SDK, more spatial
</html> ```
-![Azure Maps KML](media/migrate-google-maps-web-app/azure-maps-kml.png)
**More resources:**
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
The tables below provide a comparison of Azure Monitor Agent with the legacy the
| **Data sent to** | | | | | | | Azure Monitor Logs | ✓ | ✓ | | | | Azure Monitor Metrics<sup>1</sup> | ✓ (Public preview) | | ✓ (Public preview) |
-| | Azure Storage | | | ✓ |
-| | Event Hubs | | | ✓ |
+| | Azure Storage | ✓ (Preview) | | ✓ |
+| | Event Hubs | ✓ (Preview) | | ✓ |
| **Services and features supported** | | | | | | | Microsoft Sentinel | ✓ ([View scope](./azure-monitor-agent-migration.md#migrate-additional-services-and-features)) | ✓ | | | | VM Insights | ✓ | ✓ | |
The tables below provide a comparison of Azure Monitor Agent with the legacy the
| **Data sent to** | | | | | | | | Azure Monitor Logs | ✓ | ✓ | | | | | Azure Monitor Metrics<sup>1</sup> | ✓ (Public preview) | | | ✓ (Public preview) |
-| | Azure Storage | | | ✓ | |
-| | Event Hubs | | | ✓ | |
+| | Azure Storage | ✓ (Preview) | | ✓ | |
+| | Event Hubs | ✓ (Preview) | | ✓ | |
| **Services and features supported** | | | | | | | | Microsoft Sentinel | ✓ ([View scope](./azure-monitor-agent-migration.md#migrate-additional-services-and-features)) | ✓ | | | | VM Insights | ✓ | ✓ | |
azure-monitor Azure Monitor Agent Extension Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md
We strongly recommend that you always update to the latest version, or opt in to the
## Version details | Release Date | Release notes | Windows | Linux | |:|:|:|:|
-| October 2023| **Linux** <ul><li>Add support for Process metrics counters for Log Analytics upload and Azure Monitor Metrics<li>Use rsyslog omfwd TCP for improved syslog reliability</li><li>Support Palo Alto CEF logs where hostname is followed by 2 spaces</li><li>Bug and reliability improvements</li></ui> |None|1.28.0|
+| October 2023| **Windows** <ul><li>Minimize CPU spikes when resetting an Event Log subscription</li><li>Enable multiple IIS subscriptions to use same filter</li><li>Cleanup files and folders for inactive tenants in multi-tenant mode</li><li>AMA installer will not install unnecessary certs</li><li>AMA emits Telemetry table locally</li><li>Update Metric Extension to v2.2023.721.1630</li><li>Update AzureSecurityPack to v4.29.0.4</li><li>Update AzureWatson to v1.0.99</li></ul>**Linux**<ul><li> Add support for Process metrics counters for Log Analytics upload and Azure Monitor Metrics</li><li>Use rsyslog omfwd TCP for improved syslog reliability</li><li>Support Palo Alto CEF logs where hostname is followed by 2 spaces</li><li>Bug and reliability improvements</li></ul> |1.20.0|1.28.11|
| September 2023| **Windows** <ul><li>Fix issue with high CPU usage due to excessive Windows Event Logs subscription reset</li><li>Reduce fluentbit resource usage by limiting tracked files older than 3 days and limiting logging to errors only</li><li>Fix race-condition where resource_id is unavailable when agent is restarted</li><li>Fix race-condition when vm-extension provision agent (aka GuestAgent) is issuing a disable-vm-extension command to AMA.</li><li>Update MetricExtension version to 2.2023.721.1630</li><li>Update Troubleshooter to v1.5.14 </li></ul>|1.20.0| None | | August 2023| **Windows** <ul><li>AMA: Allow prefixes in the tag names to handle regression</li><li>Updating package version for AzSecPack 4.28 release</li></ul>**Linux**<ul><li> Coming soon</li></ul>|1.19.0| Coming Soon | | July 2023| **Windows** <ul><li>Fix crash when Event Log subscription callback throws errors.</li><li>MetricExtension updated to 2.2023.609.2051</li></ul> |1.18.0|None|
azure-monitor Azure Monitor Agent Send Data To Event Hubs And Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-send-data-to-event-hubs-and-storage.md
+
+ Title: Send data to Event Hubs and Storage (Preview)
+description: This article describes how to use Azure Monitor Agent to upload data to Azure Storage and Event Hubs.
+++ Last updated : 10/09/2023+++
+# Send data to Event Hubs and Storage (Preview)
+
+This article describes how to use the Azure Monitor Agent (AMA) to upload data to Azure Storage and Event Hubs. This feature is in preview.
+
+The Azure Monitor Agent is the new, consolidated telemetry agent for collecting data from IaaS resources like virtual machines. By using the upload capability in this preview, you can upload the logs<sup>[1](#FN1)</sup> that you send to Log Analytics workspaces to Event Hubs and Storage as well. Both data destinations use data collection rules to configure collection setup for the agents.
+
+> [!NOTE]
+> This functionality replaces the Windows diagnostics extension (WAD) and Linux diagnostics extension (LAD). For more information, see [Compare Azure Monitor Agent to legacy agents](./agents-overview.md#compare-to-legacy-agents).
+
+**Footnotes**
+
+<a name="FN1">1</a>: Not all data types are supported; refer to [What's supported](#whats-supported) for specifics.
+
+## What's supported
+
+### Data types
+
+- Windows:
+  - Windows Event Logs – to Event Hubs and Storage
+  - Performance counters – to Event Hubs and Storage
+  - IIS logs – to Storage blob
+  - Custom logs – to Storage blob
+
+- Linux:
+  - Syslog – to Event Hubs and Storage
+  - Performance counters – to Event Hubs and Storage
+  - Custom logs / log files – to Event Hubs and Storage
+
+### Operating systems
+
+- Environments that are supported by the Azure Monitor Agent on Windows and Linux
+- This feature is supported only for Azure VMs. There are no plans to bring it to on-premises or Azure Arc scenarios.
+
+## What's not supported
+
+### Data types
+
+- Windows:
+ - ETW Logs
+ - Windows Crash Dumps (not planned and won't be supported)
+ - Application Logs (not planned and won't be supported)
+ - .NET event source logs (not planned and won't be supported)
+
+## Prerequisites
+
+A [user-assigned managed identity](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) associated with the following resources (a CLI sketch for creating and attaching the identity follows this list):
+
+- [Storage account](../../storage/common/storage-account-create.md)
+- [Event Hubs namespace and event hub](../../event-hubs/event-hubs-create.md)
+- [Virtual machine](../../virtual-machines/overview.md)
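As a sketch, you can create the identity and attach it to the VM with the Azure CLI; the resource names here are placeholders:

```azurecli
# Create the user-assigned managed identity (names are placeholders).
az identity create -g <groupName> -n <identityName>

# Attach the identity to the virtual machine.
az vm identity assign -g <groupName> -n <vmName> \
  --identities "$(az identity show -g <groupName> -n <identityName> --query id -o tsv)"
```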
+
+## Create a data collection rule
+
+Create a data collection rule for collecting events and sending them to Storage and Event Hubs.
+
+1. In the Azure portal's search box, type in *template* and then select **Deploy a custom template**.
+
+ :::image type="content" source="../logs/media/tutorial-workspace-transformations-api/deploy-custom-template.png" lightbox="../logs/media/tutorial-workspace-transformations-api/deploy-custom-template.png" alt-text="Screenshot that shows the Azure portal with template entered in the search box and Deploy a custom template highlighted in the search results.":::
+
+1. Select **Build your own template in the editor**.
+
+ :::image type="content" source="../logs/media/tutorial-workspace-transformations-api/build-custom-template.png" lightbox="../logs/media/tutorial-workspace-transformations-api/build-custom-template.png" alt-text="Screenshot that shows portal screen to build template in the editor.":::
+
+1. Paste this Azure Resource Manager template into the editor:
+
+ ### [Windows](#tab/windows)
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]",
+ "metadata": {
+ "description": "Location for all resources."
+ }
+ },
+ "dataCollectionRulesName": {
+ "defaultValue": "[concat(resourceGroup().name, 'DCR')]",
+ "type": "String"
+ },
+ "storageAccountName": {
+ "defaultValue": "[concat(resourceGroup().name, 'sa')]",
+ "type": "String"
+ },
+ "eventHubNamespaceName": {
+ "defaultValue": "[concat(resourceGroup().name, 'eh')]",
+ "type": "String"
+ },
+ "eventHubInstanceName": {
+ "defaultValue": "[concat(resourceGroup().name, 'ehins')]",
+ "type": "String"
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Insights/dataCollectionRules",
+ "apiVersion": "2022-06-01",
+ "name": "[parameters('dataCollectionRulesName')]",
+ "location": "[parameters('location')]",
+ "kind": "AgentDirectToStore",
+ "properties": {
+ "dataSources": {
+ "performanceCounters": [
+ {
+ "streams": [
+ "Microsoft-Perf"
+ ],
+ "samplingFrequencyInSeconds": 10,
+ "counterSpecifiers": [
+ "\\Process(_Total)\\Working Set - Private",
+ "\\Memory\\% Committed Bytes In Use",
+ "\\LogicalDisk(_Total)\\% Free Space",
+ "\\Network Interface(*)\\Bytes Total/sec"
+ ],
+ "name": "perfCounterDataSource10"
+ }
+ ],
+ "windowsEventLogs": [
+ {
+ "streams": [
+ "Microsoft-Event"
+ ],
+ "xPathQueries": [
+ "Application!*[System[(Level=2)]]",
+ "System!*[System[(Level=2)]]"
+ ],
+ "name": "eventLogsDataSource"
+ }
+ ],
+ "iisLogs": [
+ {
+ "streams": [
+ "Microsoft-W3CIISLog"
+ ],
+ "logDirectories": [
+ "C:\\inetpub\\logs\\LogFiles\\W3SVC1\\"
+ ],
+ "name": "myIisLogsDataSource"
+ }
+ ],
+ "logFiles": [
+ {
+ "streams": [
+ "Custom-Text-logs"
+ ],
+ "filePatterns": [
+ "C:\\JavaLogs\\*.log"
+ ],
+ "format": "text",
+ "settings": {
+ "text": {
+ "recordStartTimestampFormat": "ISO 8601"
+ }
+ },
+ "name": "myTextLogs"
+ }
+ ]
+ },
+ "destinations": {
+ "eventHubsDirect": [
+ {
+ "eventHubResourceId": "[resourceId('Microsoft.EventHub/namespaces/eventhubs', parameters('eventHubNamespaceName'), parameters('eventHubInstanceName'))]",
+ "name": "myEh1"
+ }
+ ],
+ "storageBlobsDirect": [
+ {
+ "storageAccountResourceId": "[resourceId('Microsoft.Storage/storageAccounts/', parameters('storageAccountName'))]",
+ "name": "blobNamedPerf",
+ "containerName": "PerfBlob"
+ },
+ {
+ "storageAccountResourceId": "[resourceId('Microsoft.Storage/storageAccounts/', parameters('storageAccountName'))]",
+ "name": "blobNamedWin",
+ "containerName": "WinEventBlob"
+ },
+ {
+ "storageAccountResourceId": "[resourceId('Microsoft.Storage/storageAccounts/', parameters('storageAccountName'))]",
+ "name": "blobNamedIIS",
+ "containerName": "IISBlob"
+ },
+ {
+ "storageAccountResourceId": "[resourceId('Microsoft.Storage/storageAccounts/', parameters('storageAccountName'))]",
+ "name": "blobNamedTextLogs",
+ "containerName": "TxtLogBlob"
+ }
+ ],
+ "storageTablesDirect": [
+ {
+ "storageAccountResourceId": "[resourceId('Microsoft.Storage/storageAccounts/', parameters('storageAccountName'))]",
+ "name": "tableNamedPerf",
+ "tableName": "PerfTable"
+ },
+ {
+ "storageAccountResourceId": "[resourceId('Microsoft.Storage/storageAccounts/', parameters('storageAccountName'))]",
+ "name": "tableNamedWin",
+ "tableName": "WinTable"
+ },
+ {
+ "storageAccountResourceId": "[resourceId('Microsoft.Storage/storageAccounts/', parameters('storageAccountName'))]",
+ "name": "tableUnnamed"
+ }
+ ]
+ },
+ "dataFlows": [
+ {
+ "streams": [
+ "Microsoft-Perf"
+ ],
+ "destinations": [
+ "myEh1",
+ "blobNamedPerf",
+ "tableNamedPerf",
+ "tableUnnamed"
+ ]
+ },
+ {
+ "streams": [
+ "Microsoft-WindowsEvent"
+ ],
+ "destinations": [
+ "myEh1",
+ "blobNamedWin",
+ "tableNamedWin",
+ "tableUnnamed"
+ ]
+ },
+ {
+ "streams": [
+ "Microsoft-W3CIISLog"
+ ],
+ "destinations": [
+ "blobNamedIIS"
+ ]
+ },
+ {
+ "streams": [
+ "Custom-Text-logs"
+ ],
+ "destinations": [
+ "blobNamedTextLogs"
+ ]
+ }
+ ]
+ }
+ }
+ ]
+ }
+ ```
+
+ ### [Linux](#tab/linux)
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]",
+ "metadata": {
+ "description": "Location for all resources."
+ }
+ },
+ "dataCollectionRulesName": {
+ "defaultValue": "[concat(resourceGroup().name, 'DCR')]",
+ "type": "String"
+ },
+ "storageAccountName": {
+ "defaultValue": "[concat(resourceGroup().name, 'sa')]",
+ "type": "String"
+ },
+ "eventHubNamespaceName": {
+ "defaultValue": "[concat(resourceGroup().name, 'eh')]",
+ "type": "String"
+ },
+ "eventHubInstanceName": {
+ "defaultValue": "[concat(resourceGroup().name, 'ehins')]",
+ "type": "String"
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Insights/dataCollectionRules",
+ "apiVersion": "2022-06-01",
+ "name": "[parameters('dataCollectionRulesName')]",
+ "location": "[parameters('location')]",
+ "kind": "AgentDirectToStore",
+ "properties": {
+ "dataSources": {
+ "performanceCounters": [
+ {
+ "streams": [
+ "Microsoft-Perf"
+ ],
+ "samplingFrequencyInSeconds": 10,
+ "counterSpecifiers": [
+ "Processor(*)\\% Processor Time",
+ "Processor(*)\\% Idle Time",
+ "Processor(*)\\% User Time",
+ "Processor(*)\\% Nice Time",
+ "Processor(*)\\% Privileged Time",
+ "Processor(*)\\% IO Wait Time",
+ "Processor(*)\\% Interrupt Time",
+ "Processor(*)\\% DPC Time",
+ "Memory(*)\\Available MBytes Memory",
+ "Memory(*)\\% Available Memory",
+ "Memory(*)\\Used Memory MBytes",
+ "Memory(*)\\% Used Memory",
+ "Memory(*)\\Pages/sec",
+ "Memory(*)\\Page Reads/sec",
+ "Memory(*)\\Page Writes/sec",
+ "Memory(*)\\Available MBytes Swap",
+ "Memory(*)\\% Available Swap Space",
+ "Memory(*)\\Used MBytes Swap Space",
+ "Memory(*)\\% Used Swap Space",
+ "Logical Disk(*)\\% Free Inodes",
+ "Logical Disk(*)\\% Used Inodes",
+ "Logical Disk(*)\\Free Megabytes",
+ "Logical Disk(*)\\% Free Space",
+ "Logical Disk(*)\\% Used Space",
+ "Logical Disk(*)\\Logical Disk Bytes/sec",
+ "Logical Disk(*)\\Disk Read Bytes/sec",
+ "Logical Disk(*)\\Disk Write Bytes/sec",
+ "Logical Disk(*)\\Disk Transfers/sec",
+ "Logical Disk(*)\\Disk Reads/sec",
+ "Logical Disk(*)\\Disk Writes/sec",
+ "Network(*)\\Total Bytes Transmitted",
+ "Network(*)\\Total Bytes Received",
+ "Network(*)\\Total Bytes",
+ "Network(*)\\Total Packets Transmitted",
+ "Network(*)\\Total Packets Received",
+ "Network(*)\\Total Rx Errors",
+ "Network(*)\\Total Tx Errors",
+ "Network(*)\\Total Collisions"
+ ],
+ "name": "perfCounterDataSource10"
+ }
+ ],
+ "syslog": [
+ {
+ "streams": [
+ "Microsoft-Syslog"
+ ],
+ "facilityNames": [
+ "auth",
+ "authpriv",
+ "cron",
+ "daemon",
+ "mark",
+ "kern",
+ "local0",
+ "local1",
+ "local2",
+ "local3",
+ "local4",
+ "local5",
+ "local6",
+ "local7",
+ "lpr",
+ "mail",
+ "news",
+ "syslog",
+ "user",
+ "UUCP"
+ ],
+ "logLevels": [
+ "Debug",
+ "Info",
+ "Notice",
+ "Warning",
+ "Error",
+ "Critical",
+ "Alert",
+ "Emergency"
+ ],
+ "name": "syslogDataSource"
+ }
+ ],
+ "logFiles": [
+ {
+ "streams": [
+ "Custom-Text-logs"
+ ],
+ "filePatterns": [
+ "/var/log/messages"
+ ],
+ "format": "text",
+ "settings": {
+ "text": {
+ "recordStartTimestampFormat": "ISO 8601"
+ }
+ },
+ "name": "myTextLogs"
+ }
+ ]
+ },
+ "destinations": {
+ "eventHubsDirect": [
+ {
+ "eventHubResourceId": "[resourceId('Microsoft.EventHub/namespaces/eventhubs', parameters('eventHubNamespaceName'), parameters('eventHubInstanceName'))]",
+ "name": "myEh1"
+ }
+ ],
+ "storageBlobsDirect": [
+ {
+ "storageAccountResourceId": "[resourceId('Microsoft.Storage/storageAccounts/', parameters('storageAccountName'))]",
+ "name": "blobNamedPerf",
+ "containerName": "PerfBlob"
+ },
+ {
+ "storageAccountResourceId": "[resourceId('Microsoft.Storage/storageAccounts/', parameters('storageAccountName'))]",
+ "name": "blobNamedLinux",
+ "containerName": "SyslogBlob"
+ },
+ {
+ "storageAccountResourceId": "[resourceId('Microsoft.Storage/storageAccounts/', parameters('storageAccountName'))]",
+ "name": "blobNamedTextLogs",
+ "containerName": "TxtLogBlob"
+ }
+ ],
+ "storageTablesDirect": [
+ {
+ "storageAccountResourceId": "[resourceId('Microsoft.Storage/storageAccounts/', parameters('storageAccountName'))]",
+ "name": "tableNamedPerf",
+ "tableName": "PerfTable"
+ },
+ {
+ "storageAccountResourceId": "[resourceId('Microsoft.Storage/storageAccounts/', parameters('storageAccountName'))]",
+ "name": "tableNamedLinux",
+ "tableName": "LinuxTable"
+ },
+ {
+ "storageAccountResourceId": "[resourceId('Microsoft.Storage/storageAccounts/', parameters('storageAccountName'))]",
+ "name": "tableUnnamed"
+ }
+ ]
+ },
+ "dataFlows": [
+ {
+ "streams": [
+ "Microsoft-Perf"
+ ],
+ "destinations": [
+ "myEh1",
+ "blobNamedPerf",
+ "tableNamedPerf",
+ "tableUnnamed"
+ ]
+ },
+ {
+ "streams": [
+ "Microsoft-Syslog"
+ ],
+ "destinations": [
+ "myEh1",
+ "blobNamedLinux",
+ "tableNamedLinux",
+ "tableUnnamed"
+ ]
+ },
+ {
+ "streams": [
+ "Custom-Text-logs"
+ ],
+ "destinations": [
+ "blobNamedTextLogs"
+ ]
+ }
+ ]
+ }
+ }
+ ]
+ }
+ ```
+
+
+
+1. Update the following values in the Azure Resource Manager template. The preceding example templates show these values in context.
+
+ **Event hub**
+
+ | Value | Description |
+ |:|:|
+ | `dataSources` | Define it per your requirements. The supported types for direct upload to Event Hubs for Windows are `performanceCounters` and `windowsEventLogs` and for Linux, they're `performanceCounters` and `syslog`. |
+ | `destinations` | Use `eventHubsDirect` for direct upload to the event hub. |
+ | `eventHubResourceId` | Resource ID of the event hub instance.<br><br>NOTE: It isn't the event hub namespace resource ID. |
+ | `dataFlows` | Under `dataFlows`, include the destination name. |
+
+ **Storage table**
+
+ | Value | Description |
+ |:|:|
+ | `dataSources` | Define it per your requirements. The supported types for direct upload to storage Table for Windows are `performanceCounters`, `windowsEventLogs` and for Linux, they're `performanceCounters` and `syslog`. |
+ | `destinations` | Use `storageTablesDirect` for direct upload to table storage. |
+ | `storageAccountResourceId` | Resource ID of the storage account. |
+ | `tableName` | The name of the table that the JSON blob with event data is uploaded to. |
+ | `dataFlows` | Under `dataFlows`, include the destination name. |
+
+ **Storage blob**
+
+ | Value | Description |
+ |:|:|
+ | `dataSources` | Define it per your requirements. The supported types for direct upload to storage blob for Windows are `performanceCounters`, `windowsEventLogs`, `iisLogs`, `logFiles` and for Linux, they're `performanceCounters`, `syslog` and `logFiles`. |
+ | `destinations` | Use `storageBlobsDirect` for direct upload to blob storage. |
+ | `storageAccountResourceId` | The resource ID of the storage account. |
+ | `containerName` | The name of the container that the JSON blob with event data is uploaded to. |
+ | `dataFlows` | Under `dataFlows`, include the destination name. |
+
+1. Select **Save**.
+
+## Create the DCR association and deploy the Azure Monitor Agent
+
+Use a custom template deployment to create the DCR association and deploy the Azure Monitor Agent.
+
+1. In the Azure portal's search box, type in *template* and then select **Deploy a custom template**.
+
+ :::image type="content" source="../logs/media/tutorial-workspace-transformations-api/deploy-custom-template.png" lightbox="../logs/media/tutorial-workspace-transformations-api/deploy-custom-template.png" alt-text="Screenshot that shows the Azure portal with template entered in the search box and Deploy a custom template highlighted in the search results.":::
+
+1. Select **Build your own template in the editor**.
+
+ :::image type="content" source="../logs/media/tutorial-workspace-transformations-api/build-custom-template.png" lightbox="../logs/media/tutorial-workspace-transformations-api/build-custom-template.png" alt-text="Screenshot that shows portal screen to build template in the editor.":::
+
+1. Paste this Azure Resource Manager template into the editor:
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "vmName": {
+ "defaultValue": "[concat(resourceGroup().name, 'vm')]",
+ "type": "String"
+ },
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]",
+ "metadata": {
+ "description": "Location for all resources."
+ }
+ },
+ "dataCollectionRulesName": {
+ "defaultValue": "[concat(resourceGroup().name, 'DCR')]",
+ "type": "String",
+ "metadata": {
+ "description": "Data Collection Rule Name"
+ }
+ },
+ "dcraName": {
+ "type": "string",
+ "defaultValue": "[concat(uniquestring(resourceGroup().id), 'DCRLink')]",
+ "metadata": {
+ "description": "Name of the association."
+ }
+ },
+ "identityName": {
+ "type": "string",
+ "defaultValue": "[concat(resourceGroup().name, 'UAI')]",
+ "metadata": {
+ "description": "Managed Identity"
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Compute/virtualMachines/providers/dataCollectionRuleAssociations",
+ "name": "[concat(parameters('vmName'),'/microsoft.insights/', parameters('dcraName'))]",
+ "apiVersion": "2021-04-01",
+ "properties": {
+ "description": "Association of data collection rule. Deleting this association will break the data collection for this virtual machine.",
+ "dataCollectionRuleId": "[resourceID('Microsoft.Insights/dataCollectionRules',parameters('dataCollectionRulesName'))]"
+ }
+ },
+ {
+ "type": "Microsoft.Compute/virtualMachines/extensions",
+ "name": "[concat(parameters('vmName'), '/AMAExtension')]",
+ "apiVersion": "2020-06-01",
+ "location": "[parameters('location')]",
+ "dependsOn": [
+ "[resourceId('Microsoft.Compute/virtualMachines/providers/dataCollectionRuleAssociations', parameters('vmName'), 'Microsoft.Insights', parameters('dcraName'))]"
+ ],
+ "properties": {
+ "publisher": "Microsoft.Azure.Monitor",
+ "type": "AzureMonitorWindowsAgent",
+ "typeHandlerVersion": "1.0",
+ "autoUpgradeMinorVersion": true,
+ "settings": {
+ "authentication": {
+ "managedIdentity": {
+ "identifier-type": "mi_res_id",
+ "identifier-value": "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/',parameters('identityName'))]"
+ }
+ }
+ }
+ }
+ }
+ ]
+ }
+ ```
+
+1. Select **Save**.
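If you'd rather script this step than use the portal, a deployment sketch with the Azure CLI might look like the following; the template file name and resource group are placeholders:

```azurecli
az deployment group create \
  --resource-group <groupName> \
  --template-file dcr-association-and-agent.json
```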
+
+## Troubleshooting
+
+Use the following section to troubleshoot sending data to Event Hubs and Storage.
+
+### Data not found in storage account blob storage
+
+- Check that the built-in role `Storage Blob Data Contributor` is assigned to the managed identity on the storage account, as shown in the example below.
+- Check that the managed identity is assigned to the VM.
+- Check that the AMA extension settings include the managed identity parameter.
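A sketch of creating that role assignment with the Azure CLI; the principal ID and scope values are placeholders, and the same pattern applies to the table storage and event hub roles in the following sections:

```azurecli
az role assignment create \
  --assignee <identityPrincipalId> \
  --role "Storage Blob Data Contributor" \
  --scope "/subscriptions/<subscriptionId>/resourceGroups/<groupName>/providers/Microsoft.Storage/storageAccounts/<storageAccountName>"
```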
+
+### Data not found in storage account table storage
+
+- Check that the built-in role `Storage Table Data Contributor` is assigned to the managed identity on the storage account.
+- Check that the managed identity is assigned to the VM.
+- Check that the AMA extension settings include the managed identity parameter.
+
+### Data not flowing to event hub
+
+- Check that the built-in role `Azure Event Hubs Data Sender` is assigned to the managed identity on the event hub.
+- Check that the managed identity is assigned to the VM.
+- Check that the AMA extension settings include the managed identity parameter.
+
+## AMA and WAD/LAD Convergence
+
+### Will the Azure Monitor Agent support data upload to Application Insights?
+
+No, this support isn't a part of the roadmap. Application Insights is now powered by Log Analytics workspaces.
+
+### Will the Azure Monitor Agent support Windows Crash Dumps as a data type to upload?
+
+No, this support isn't a part of the roadmap. The Azure Monitor Agent is meant for telemetry logs, not large file types.
+
+### Does this mean the Linux (LAD) and Windows (WAD) Diagnostic Extensions are no longer supported/retired?
+
+No, not until Azure formally announces the deprecation of these agents, which would start a three-year clock until they're no longer supported.
+
+### How do I configure AMA for Event Hubs and Storage data destinations?
+
+Today, you configure these destinations by using the DCR API.
+
+### Will you still be actively developing on WAD and LAD?
+
+WAD and LAD will only receive security patches going forward. Most engineering investment has shifted to the Azure Monitor Agent. We highly recommend migrating to the Azure Monitor Agent to take advantage of its capabilities.
+
+## See also
+
+- For more information on creating a data collection rule, see [Collect events and performance counters from virtual machines with Azure Monitor Agent](./data-collection-rule-azure-monitor-agent.md).
azure-monitor Data Collection Text Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-text-log.md
Title: Collect text logs with Azure Monitor Agent
-description: Configure collection of filed-based text logs using a data collection rule on virtual machines with the Azure Monitor Agent.
+ Title: Collect logs from a text or JSON file with Azure Monitor Agent
+description: Configure a data collection rule to collect log data from a text or JSON file on a virtual machine using Azure Monitor Agent.
Previously updated : 12/11/2022 Last updated : 10/31/2023 -+
-# Collect text logs with Azure Monitor Agent
+# Collect logs from a text or JSON file with Azure Monitor Agent
-Many applications log information to text files instead of standard logging services such as Windows Event log or Syslog. This article explains how to collect text logs from monitored machines using [Azure Monitor Agent](azure-monitor-agent-overview.md) by creating a [data collection rule (DCR)](../essentials/data-collection-rule-overview.md).
+Many applications log information to text or JSON files instead of standard logging services such as Windows Event log or Syslog. This article explains how to collect log data from text and JSON files on monitored machines using [Azure Monitor Agent](azure-monitor-agent-overview.md) by creating a [data collection rule (DCR)](../essentials/data-collection-rule-overview.md).
## Prerequisites To complete this procedure, you need:
To complete this procedure, you need:
- [Permissions to create Data Collection Rule objects](../essentials/data-collection-rule-overview.md#permissions) in the workspace. -- A VM, Virtual Machine Scale Set, or Arc-enabled on-premises server that writes logs to a text file.
+- A VM, Virtual Machine Scale Set, or Arc-enabled on-premises server that writes logs to a text or JSON file.
- Text file requirements and best practices:
+ Text and JSON file requirements and best practices:
- Do store files on the local drive of the machine on which Azure Monitor Agent is running and in the directory that is being monitored. - Do delineate the end of a record with an end of line. - Do use ASCII or UTF-8 encoding. Other formats such as UTF-16 aren't supported.
To complete this procedure, you need:
## Create a custom table
-This step will create a new custom table, which is any table name that ends in \_CL. Currently a direct REST call to the table management endpoint is used to create a table. The script at the end of this section is the input to the REST call.
+The table created in the script has two columns:
-The table created in the script has two columns TimeGenerated: datetime and RawData: string, which is the default schema for a custom text log. If you know your final schema, then you can add columns in the script before creating the table. If you don't, columns can always be added in the log analytics table UI.
+- `TimeGenerated` (datetime)
+- `RawData` (string)
-The easiest way to make the REST call is from an Azure Cloud PowerShell command line (CLI). To open the shell, go to the Azure portal, press the Cloud Shell button, and select PowerShell. If this is your first-time using Azure Cloud PowerShell, you will need to walk through the one-time configuration wizard.
-
+This is the default table schema for log data collected from text and JSON files. If you know your final schema, you can add columns in the script before creating the table. If you don't, you can [add columns using the Log Analytics table UI](../logs/create-custom-table.md#add-or-delete-a-custom-column).
+
+The easiest way to make the REST call is from an Azure Cloud Shell PowerShell session. To open the shell, go to the Azure portal, select the Cloud Shell button, and select PowerShell. If this is your first time using Azure Cloud Shell, you'll need to walk through the one-time configuration wizard.
-Copy and paste the following script in to PowerShell to create the table in your workspace. Make sure to replace the {subscription}, {resource group}, {workspace name}, and {table name} in the script. Make sure that there are no extra blanks at the beginning or end of the parameters
+Copy and paste this script into PowerShell to create the table in your workspace. Replace the `{subscription}`, `{resourcegroup}`, `{WorkspaceName}`, and `{TableName}` placeholders with your own values, and make sure there are no extra blanks at the beginning or end of the parameters:
```code $tableParams = @'
$tableParams = @'
Invoke-AzRestMethod -Path "/subscriptions/{subscription}/resourcegroups/{resourcegroup}/providers/microsoft.operationalinsights/workspaces/{WorkspaceName}/tables/{TableName}_CL?api-version=2021-12-01-preview" -Method PUT -payload $tableParams ```
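For reference, a minimal sketch of the `$tableParams` payload with the default two-column schema might look like the following; the table name `MyTable_CL` is a placeholder.

```powershell
# Minimal table definition with the default TimeGenerated/RawData schema.
$tableParams = @'
{
    "properties": {
        "schema": {
            "name": "MyTable_CL",
            "columns": [
                { "name": "TimeGenerated", "type": "datetime" },
                { "name": "RawData", "type": "string" }
            ]
        }
    }
}
'@
```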
-Press return to execute the code. You should see a 200 response, and details about the table you just created will show up. To validate that the table was created go to your workspace and select Tables on the left blade. You should see your table in the list.
+You should receive a 200 response and details about the table you just created.
+ > [!Note] > The column names are case sensitive. For example `Rawdata` will not correctly collect the event data. It must be `RawData`. -
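To confirm that the table exists, you can also issue a GET request against the same endpoint, using the same placeholder values:

```powershell
# Retrieve the table definition; a 200 response confirms the table exists.
Invoke-AzRestMethod -Path "/subscriptions/{subscription}/resourcegroups/{resourcegroup}/providers/microsoft.operationalinsights/workspaces/{WorkspaceName}/tables/{TableName}_CL?api-version=2021-12-01-preview" -Method GET
```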
-## Create data collection rule to collect text logs
+## Create a data collection rule to collect data from a text or JSON file
The data collection rule defines:
To create the data collection rule in the Azure portal:
- **Region** specifies where the DCR will be created. The virtual machines and their associations can be in any subscription or resource group in the tenant. - **Platform Type** specifies the type of resources this rule can apply to. The **Custom** option allows for both Windows and Linux types.
- - **Data Collection Endpoint** specifies the data collection endpoint used to collect data. This data collection endpoint must be in the same region as the Log Analytics workspace. For more information, see [How to set up data collection endpoints based on your deployment](../essentials/data-collection-endpoint-overview.md#how-to-set-up-data-collection-endpoints-based-on-your-deployment).
+ - **Data Collection Endpoint** specifies the data collection endpoint to which Azure Monitor Agent sends collected data. This data collection endpoint must be in the same region as the Log Analytics workspace. For more information, see [How to set up data collection endpoints based on your deployment](../essentials/data-collection-endpoint-overview.md#how-to-set-up-data-collection-endpoints-based-on-your-deployment).
:::image type="content" source="media/data-collection-rule-azure-monitor-agent/data-collection-rule-basics-updated.png" lightbox="media/data-collection-rule-azure-monitor-agent/data-collection-rule-basics-updated.png" alt-text="Screenshot that shows the Basics tab of the Data Collection Rule screen.":::
To create the data collection rule in the Azure portal:
:::image type="content" source="media/data-collection-rule-azure-monitor-agent/data-collection-rule-virtual-machines-with-endpoint.png" lightbox="media/data-collection-rule-azure-monitor-agent/data-collection-rule-virtual-machines-with-endpoint.png" alt-text="Screenshot that shows the Resources tab of the Data Collection Rule screen."::: 1. On the **Collect and deliver** tab, select **Add data source** to add a data source and set a destination.
-1. Select **Custom Text Logs**.
-
- :::image type="content" source="media/data-collection-text-log/custom-text-log-data-collection-rule.png" lightbox="media/data-collection-text-log/custom-text-log-data-collection-rule.png" alt-text="Screenshot that shows the Add data source screen for a data collection rule in Azure portal.":::
-
+1. From the **Data source type** dropdown, select **Custom Text Logs** or **JSON Logs**.
1. Specify the following information: - **File Pattern** - Identifies where the log files are located on the local disk. You can enter multiple file patterns separated by commas (on Linux, AMA version 1.26 or higher is required to collect from a comma-separated list of file patterns).
To create the data collection rule in the Azure portal:
> Multiple log files of the same type commonly exist in the same directory. For example, a machine might create a new file every day to prevent the log file from growing too large. To collect log data in this scenario, you can use a file wildcard. Use the format `C:\directoryA\directoryB\*MyLog.txt` for Windows and `/var/*.log` for Linux. There is no support for directory wildcards.
- - **Table name** - The name of the destination table you created in your Log Analytics Workspace. For more information, see [Prerequisites](#prerequisites).
+ - **Table name** - The name of the destination table you created in your Log Analytics Workspace. For more information, see [Create a custom table](#create-a-custom-table).
- **Record delimiter** - Will be used in the future to allow delimiters other than the currently supported end of line (`\r\n`). - **Transform** - Add an [ingestion-time transformation](../essentials/data-collection-transformations.md) or leave as **source** if you don't need to transform the collected data. 1. On the **Destination** tab, add one or more destinations for the data source. You can select multiple destinations of the same or different types. For instance, you can select multiple Log Analytics workspaces, which is also known as multihoming. <!-- convertborder later -->
- :::image type="content" source="media/data-collection-rule-azure-monitor-agent/data-collection-rule-destination.png" lightbox="media/data-collection-rule-azure-monitor-agent/data-collection-rule-destination.png" alt-text="Screenshot that shows the destination tabe of the Add data source screen for a data collection rule in Azure portal." border="false":::
+ :::image type="content" source="media/data-collection-rule-azure-monitor-agent/data-collection-rule-destination.png" lightbox="media/data-collection-rule-azure-monitor-agent/data-collection-rule-destination.png" alt-text="Screenshot that shows the destination tab of the Add data source screen for a data collection rule in Azure portal." border="false":::
1. Select **Review + create** to review the details of the data collection rule and association with the set of virtual machines. 1. Select **Create** to create the data collection rule.
To create the data collection rule in the Azure portal:
1. Paste this Resource Manager template into the editor:
- ```json
- {
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "dataCollectionRuleName": {
- "type": "string",
- "metadata": {
- "description": "Specifies the name of the Data Collection Rule to create."
- }
- },
- "location": {
- "type": "string",
- "metadata": {
- "description": "Specifies the location in which to create the Data Collection Rule."
- }
- },
- "workspaceName": {
- "type": "string",
- "metadata": {
- "description": "Name of the Log Analytics workspace to use."
- }
- },
- "workspaceResourceId": {
- "type": "string",
- "metadata": {
- "description": "Specifies the Azure resource ID of the Log Analytics workspace to use."
+ - To collect data from a text file, use this template:
+
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "dataCollectionRuleName": {
+ "type": "string",
+ "metadata": {
+ "description": "Specifies the name of the Data Collection Rule to create."
+ }
+ },
+ "location": {
+ "type": "string",
+ "metadata": {
+ "description": "Specifies the location in which to create the Data Collection Rule."
+ }
+ },
+ "workspaceName": {
+ "type": "string",
+ "metadata": {
+ "description": "Name of the Log Analytics workspace to use."
+ }
+ },
+ "workspaceResourceId": {
+ "type": "string",
+ "metadata": {
+ "description": "Specifies the Azure resource ID of the Log Analytics workspace to use."
+ }
+ },
+ "endpointResourceId": {
+ "type": "string",
+ "metadata": {
+ "description": "Specifies the Azure resource ID of the Data Collection Endpoint to use."
+ }
} },
- "endpointResourceId": {
- "type": "string",
- "metadata": {
- "description": "Specifies the Azure resource ID of the Data Collection Endpoint to use."
- }
- }
- },
- "resources": [
- {
- "type": "Microsoft.Insights/dataCollectionRules",
- "name": "[parameters('dataCollectionRuleName')]",
- "location": "[parameters('location')]",
- "apiVersion": "2021-09-01-preview",
- "properties": {
- "dataCollectionEndpointId": "[parameters('endpointResourceId')]",
- "streamDeclarations": {
- "Custom-MyLogFileFormat": {
- "columns": [
+ "resources": [
+ {
+ "type": "Microsoft.Insights/dataCollectionRules",
+ "name": "[parameters('dataCollectionRuleName')]",
+ "location": "[parameters('location')]",
+ "apiVersion": "2021-09-01-preview",
+ "properties": {
+ "dataCollectionEndpointId": "[parameters('endpointResourceId')]",
+ "streamDeclarations": {
+ "Custom-MyLogFileFormat": {
+ "columns": [
+ {
+ "name": "TimeGenerated",
+ "type": "datetime"
+ },
+ {
+ "name": "RawData",
+ "type": "string"
+ }
+ ]
+ }
+ },
+ "dataSources": {
+ "logFiles": [
{
- "name": "TimeGenerated",
- "type": "datetime"
+ "streams": [
+ "Custom-MyLogFileFormat"
+ ],
+ "filePatterns": [
+ "C:\\JavaLogs\\*.log"
+ ],
+ "format": "text",
+ "settings": {
+ "text": {
+ "recordStartTimestampFormat": "ISO 8601"
+ }
+ },
+ "name": "myLogFileFormat-Windows"
}, {
- "name": "RawData",
- "type": "string"
+ "streams": [
+ "Custom-MyLogFileFormat"
+ ],
+ "filePatterns": [
+ "//var//*.log"
+ ],
+ "format": "text",
+ "settings": {
+ "text": {
+ "recordStartTimestampFormat": "ISO 8601"
+ }
+ },
+ "name": "myLogFileFormat-Linux"
+ }
+ ]
+ },
+ "destinations": {
+ "logAnalytics": [
+ {
+ "workspaceResourceId": "[parameters('workspaceResourceId')]",
+ "name": "[parameters('workspaceName')]"
} ]
- }
- },
- "dataSources": {
- "logFiles": [
+ },
+ "dataFlows": [
{ "streams": [ "Custom-MyLogFileFormat" ],
- "filePatterns": [
- "C:\\JavaLogs\\*.log"
+ "destinations": [
+ "[parameters('workspaceName')]"
],
- "format": "text",
- "settings": {
- "text": {
- "recordStartTimestampFormat": "ISO 8601"
+ "transformKql": "source",
+ "outputStream": "Custom-MyTable_CL"
+ }
+ ]
+ }
+ }
+ ],
+ "outputs": {
+ "dataCollectionRuleId": {
+ "type": "string",
+ "value": "[resourceId('Microsoft.Insights/dataCollectionRules', parameters('dataCollectionRuleName'))]"
+ }
+ }
+ }
+ ```
+
+ - To collect data from a JSON file, use this template:
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "resources": [
+ {
+ "type": "Microsoft.Insights/dataCollectionRules",
+ "name": `DataCollectionRuleName`,
+ "location": `location` ,
+ "apiVersion": "2021-09-01-preview",
+ "properties": {
+ "dataCollectionEndpointId": `endpointResourceId` ,
+ "streamDeclarations": {
+ "Custom-JSONLog": {
+ "columns": [
+ {
+ "name": "TimeGenerated",
+ "type": "datetime"
+ },
+ {
+ "name": "RawData",
+ "type": "string"
}
- },
- "name": "myLogFileFormat-Windows"
- },
+ ]
+ }
+ },
+ "dataSources": {
+ "logFiles": [
+ {
+ "streams": [
+ "Custom-JSONLog"
+ ],
+ "filePatterns": [
+ "C:\\JavaLogs\\*.log"
+ ],
+ "format": "json",
+ "settings": {
+ },
+ "name": "myLogFileFormat "
+ }
+ ]
+ },
+ "destinations": {
+ "logAnalytics": [
+ {
+ "workspaceResourceId": `workspaceResourceId` ,
+ "name": "`workspaceName`"
+ }
+ ]
+ },
+ "dataFlows": [
{ "streams": [
- "Custom-MyLogFileFormat"
+ "Custom-JSONLog"
],
- "filePatterns": [
- "//var//*.log"
+ "destinations": [
+ "`workspaceName`"
],
- "format": "text",
- "settings": {
- "text": {
- "recordStartTimestampFormat": "ISO 8601"
- }
- },
- "name": "myLogFileFormat-Linux"
- }
- ]
- },
- "destinations": {
- "logAnalytics": [
- {
- "workspaceResourceId": "[parameters('workspaceResourceId')]",
- "name": "[parameters('workspaceName')]"
+ "transformKql": "source",
+ "outputStream": "`Table-Name_CL`"
} ]
- },
- "dataFlows": [
- {
- "streams": [
- "Custom-MyLogFileFormat"
- ],
- "destinations": [
- "[parameters('workspaceName')]"
- ],
- "transformKql": "source",
- "outputStream": "Custom-MyTable_CL"
- }
- ]
+ }
+ }
+ ],
+ "outputs": {
+ "dataCollectionRuleId": {
+ "type": "string",
+ "value": "[resourceId('Microsoft.Insights/dataCollectionRules', `dataCollectionRuleName`"
}
- }
- ],
- "outputs": {
- "dataCollectionRuleId": {
- "type": "string",
- "value": "[resourceId('Microsoft.Insights/dataCollectionRules', parameters('dataCollectionRuleName'))]"
} }
- }
- ```
+ ```
+ 1. Update the following values in the Resource Manager template:
To create the data collection rule in the Azure portal:
- `filePatterns`: Specifies the location and file pattern of the log files to collect. This defines a separate pattern for Windows and Linux agents. - `transformKql`: Specifies a [transformation](../logs/../essentials//data-collection-transformations.md) to apply to the incoming data before it's sent to the workspace.
- See [Structure of a data collection rule in Azure Monitor (preview)](../essentials/data-collection-rule-structure.md#custom-logs) if you want to modify the text log DCR.
+ See [Structure of a data collection rule in Azure Monitor](../essentials/data-collection-rule-structure.md#custom-logs) if you want to modify the data collection rule.
> [!IMPORTANT] > Custom data collection rules have a prefix of *Custom-*; for example, *Custom-rulename*. The *Custom-rulename* in the stream declaration must match the *Custom-rulename* name in the Log Analytics workspace.
To create the data collection rule in the Azure portal:
1. Copy the **Resource ID** for the data collection rule. You'll use this in the next step.
-1. Create a data collection association that associates the data collection rule to the agents with the log file to be collected. You can associate the same data collection rule with multiple agents:
+1. Associate the data collection rule to the virtual machine you want to collect data from. You can associate the same data collection rule with multiple machines:
1. From the **Monitor** menu in the Azure portal, select **Data Collection Rules** and select the rule that you created.
To create the data collection rule in the Azure portal:
:::image type="content" source="media/data-collection-text-log/add-resources.png" lightbox="media/data-collection-text-log/add-resources.png" alt-text="Screenshot that shows the Data Collection Rules pane in the portal with resources for the data collection rule.":::
- 1. Select either individual agents to associate the data collection rule, or select a resource group to create an association for all agents in that resource group. Select **Apply**.
+ 1. Select either individual virtual machines to associate the data collection rule, or select a resource group to create an association for all virtual machines in that resource group. Select **Apply**.
:::image type="content" source="media/data-collection-text-log/select-resources.png" lightbox="media/data-collection-text-log/select-resources.png" alt-text="Screenshot that shows the Resources pane in the portal to add resources to the data collection rule.":::
The column names used here are for example only. The column names for your log w
``` - ## Troubleshoot
-Use the following steps to troubleshoot collection of text logs.
+Use the following steps to troubleshoot collection of logs from text and JSON files.
-## Troubleshooting Tool
-Use the [Azure monitor troubleshooter tool](use-azure-monitor-agent-troubleshooter.md) to look for common issues and share results with Microsoft.
+## Use the Azure Monitor Agent Troubleshooter
+Use the [Azure Monitor Agent Troubleshooter](use-azure-monitor-agent-troubleshooter.md) to look for common issues and share results with Microsoft.
-### Check if any custom logs have been received
-Start by checking if any records have been collected for your custom log table by running the following query in Log Analytics. If records aren't returned, check the other sections for possible causes. This query looks for entires in the last two days, but you can modify for another time range. It can take 5-7 minutes for new data from your tables to be uploaded. Only new data will be uploaded any log file last written to prior to the DCR rules being created won't be uploaded.
+### Check if you've ingested data to your custom table
+Start by checking if any records have been ingested into your custom log table by running the following query in Log Analytics:
``` kusto
-<YourCustomLog>_CL
+<YourCustomTable>_CL
| where TimeGenerated > ago(48h) | order by TimeGenerated desc ```
+If records aren't returned, check the other sections for possible causes. This query looks for entries in the last two days, but you can modify it for another time range. It can take 5-7 minutes for new data to appear in your table. The Azure Monitor Agent only collects data written to the text or JSON file after you associate the data collection rule with the virtual machine.
+ ### Verify that you created a custom table You must [create a custom log table](../logs/create-custom-table.md#create-a-custom-table) in your Log Analytics workspace before you can send data to it.
This file pattern should correspond to the logs on the agent machine.
:::image type="content" source="media/data-collection-text-log/text-log-files.png" lightbox="media/data-collection-text-log/text-log-files.png" alt-text="Screenshot of text log files on agent machine." border="false":::
-### Verify that the text logs are being populated
-The agent will only collect new content written to the log file being collected. If you're experimenting with the text logs collection feature, you can use the following script to generate sample logs.
+### Verify that logs are being populated
+The agent will only collect new content written to the log file being collected. If you're experimenting with collecting logs from a text or JSON file, you can use the following script to generate sample logs.
```powershell # This script writes a new log entry at the specified interval indefinitely.
azure-monitor Java Get Started Supplemental https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-get-started-supplemental.md
Title: Application Insights with containers description: This article shows you how to set-up Application Insights Previously updated : 10/10/2023 Last updated : 10/30/2023 ms.devlang: java
For more information, see [Use Application Insights Java In-Process Agent in Azu
### Docker entry point
-If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.4.17.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
+If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.4.18.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
```
-ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.4.17.jar", "-jar", "<myapp.jar>"]
+ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.4.18.jar", "-jar", "<myapp.jar>"]
```
-If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.17.jar"` somewhere before `-jar`, for example:
+If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.18.jar"` somewhere before `-jar`, for example:
```
-ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.4.17.jar" -jar <myapp.jar>
+ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.4.18.jar" -jar <myapp.jar>
```
FROM ...
COPY target/*.jar app.jar
-COPY agent/applicationinsights-agent-3.4.17.jar applicationinsights-agent-3.4.17.jar
+COPY agent/applicationinsights-agent-3.4.18.jar applicationinsights-agent-3.4.18.jar
COPY agent/applicationinsights.json applicationinsights.json ENV APPLICATIONINSIGHTS_CONNECTION_STRING="CONNECTION-STRING"
-ENTRYPOINT["java", "-javaagent:applicationinsights-agent-3.4.17.jar", "-jar", "app.jar"]
+ENTRYPOINT["java", "-javaagent:applicationinsights-agent-3.4.18.jar", "-jar", "app.jar"]
```
-In this example we have copied the `applicationinsights-agent-3.4.17.jar` and `applicationinsights.json` files from an `agent` folder (you can choose any folder of your machine). These two files have to be in the same folder in the Docker container.
+In this example, we copy the `applicationinsights-agent-3.4.18.jar` and `applicationinsights.json` files from an `agent` folder (you can choose any folder on your machine). These two files must be in the same folder in the Docker container.
### Third-party container images
For information on setting up the Application Insights Java agent, see [Enabling
If you installed Tomcat via `apt-get` or `yum`, you should have a file `/etc/tomcat8/tomcat8.conf`. Add this line to the end of that file: ```
-JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.17.jar"
+JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.18.jar"
``` #### Tomcat installed via download and unzip
JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.17.jar"
If you installed Tomcat via download and unzip from [https://tomcat.apache.org](https://tomcat.apache.org), you should have a file `<tomcat>/bin/catalina.sh`. Create a new file in the same directory named `<tomcat>/bin/setenv.sh` with the following content: ```
-CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.17.jar"
+CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.18.jar"
```
-If the file `<tomcat>/bin/setenv.sh` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.17.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.sh` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.18.jar` to `CATALINA_OPTS`.
### Tomcat 8 (Windows)
If the file `<tomcat>/bin/setenv.sh` already exists, modify that file and add `-
Locate the file `<tomcat>/bin/catalina.bat`. Create a new file in the same directory named `<tomcat>/bin/setenv.bat` with the following content: ```
-set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.17.jar
+set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.18.jar
``` Quotes aren't necessary, but if you want to include them, the proper placement is: ```
-set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.17.jar"
+set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.18.jar"
```
-If the file `<tomcat>/bin/setenv.bat` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.17.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.bat` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.18.jar` to `CATALINA_OPTS`.
#### Run Tomcat as a Windows service
-Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.4.17.jar` to the `Java Options` under the `Java` tab.
+Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.4.18.jar` to the `Java Options` under the `Java` tab.
### JBoss EAP 7 #### Standalone server
-Add `-javaagent:path/to/applicationinsights-agent-3.4.17.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
+Add `-javaagent:path/to/applicationinsights-agent-3.4.18.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
```java ...
- JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.4.17.jar -Xms1303m -Xmx1303m ..."
+ JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.4.18.jar -Xms1303m -Xmx1303m ..."
... ``` #### Domain server
-Add `-javaagent:path/to/applicationinsights-agent-3.4.17.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.4.18.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
```xml ...
Add `-javaagent:path/to/applicationinsights-agent-3.4.17.jar` to the existing `j
<jvm-options> <option value="-server"/> <!--Add Java agent jar file here-->
- <option value="-javaagent:path/to/applicationinsights-agent-3.4.17.jar"/>
+ <option value="-javaagent:path/to/applicationinsights-agent-3.4.18.jar"/>
<option value="-XX:MetaspaceSize=96m"/> <option value="-XX:MaxMetaspaceSize=256m"/> </jvm-options>
Add these lines to `start.ini`:
``` --exec--javaagent:path/to/applicationinsights-agent-3.4.17.jar
+-javaagent:path/to/applicationinsights-agent-3.4.18.jar
``` ### Payara 5
-Add `-javaagent:path/to/applicationinsights-agent-3.4.17.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.4.18.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
```xml ... <java-config ...> <!--Edit the JVM options here--> <jvm-options>
- -javaagent:path/to/applicationinsights-agent-3.4.17.jar>
+ -javaagent:path/to/applicationinsights-agent-3.4.18.jar
</jvm-options> ... </java-config>
Add `-javaagent:path/to/applicationinsights-agent-3.4.17.jar` to the existing `j
1. In `Generic JVM arguments`, add the following JVM argument: ```
- -javaagent:path/to/applicationinsights-agent-3.4.17.jar
+ -javaagent:path/to/applicationinsights-agent-3.4.18.jar
``` 1. Save and restart the application server.
Add `-javaagent:path/to/applicationinsights-agent-3.4.17.jar` to the existing `j
Create a new file `jvm.options` in the server directory (for example, `<openliberty>/usr/servers/defaultServer`), and add this line: ```--javaagent:path/to/applicationinsights-agent-3.4.17.jar
+-javaagent:path/to/applicationinsights-agent-3.4.18.jar
``` ### Others
azure-monitor Java Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-spring-boot.md
Title: Configure Azure Monitor Application Insights for Spring Boot description: How to configure Azure Monitor Application Insights for Spring Boot applications Previously updated : 09/18/2023 Last updated : 10/30/2023 ms.devlang: java
There are two options for enabling Application Insights Java with Spring Boot: J
## Enabling with JVM argument
-Add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.17.jar"` somewhere before `-jar`, for example:
+Add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.18.jar"` somewhere before `-jar`, for example:
```
-java -javaagent:"path/to/applicationinsights-agent-3.4.17.jar" -jar <myapp.jar>
+java -javaagent:"path/to/applicationinsights-agent-3.4.18.jar" -jar <myapp.jar>
``` ### Spring Boot via Docker entry point
-If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.4.17.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
+If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.4.18.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
```
-ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.4.17.jar", "-jar", "<myapp.jar>"]
+ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.4.18.jar", "-jar", "<myapp.jar>"]
```
-If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.17.jar"` somewhere before `-jar`, for example:
+If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.18.jar"` somewhere before `-jar`, for example:
```
-ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.4.17.jar" -jar <myapp.jar>
+ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.4.18.jar" -jar <myapp.jar>
``` ### Configuration
To enable Application Insights Java programmatically, you must add the following
<dependency> <groupId>com.microsoft.azure</groupId> <artifactId>applicationinsights-runtime-attach</artifactId>
- <version>3.4.17</version>
+ <version>3.4.18</version>
</dependency> ```
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
Title: Configuration options - Azure Monitor Application Insights for Java description: This article shows you how to configure Azure Monitor Application Insights for Java. Previously updated : 10/10/2023 Last updated : 10/30/2023 ms.devlang: java
More information and configuration options are provided in the following section
## Configuration file path
-By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.4.17.jar`.
+By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.4.18.jar`.
You can specify your own configuration file path by using one of these two options: * `APPLICATIONINSIGHTS_CONFIGURATION_FILE` environment variable * `applicationinsights.configuration.file` Java system property
-If you specify a relative path, it's resolved relative to the directory where `applicationinsights-agent-3.4.17.jar` is located.
+If you specify a relative path, it's resolved relative to the directory where `applicationinsights-agent-3.4.18.jar` is located.
Alternatively, instead of using a configuration file, you can specify the entire _content_ of the JSON configuration via the environment variable `APPLICATIONINSIGHTS_CONFIGURATION_CONTENT`.
Or you can set the connection string by using the Java system property `applicat
You can also set the connection string by specifying a file to load the connection string from.
-If you specify a relative path, it's resolved relative to the directory where `applicationinsights-agent-3.4.17.jar` is located.
+If you specify a relative path, it's resolved relative to the directory where `applicationinsights-agent-3.4.18.jar` is located.
```json {
and add `applicationinsights-core` to your application:
<dependency> <groupId>com.microsoft.azure</groupId> <artifactId>applicationinsights-core</artifactId>
- <version>3.4.17</version>
+ <version>3.4.18</version>
</dependency> ```
In the preceding configuration example:
* `level` can be one of `OFF`, `ERROR`, `WARN`, `INFO`, `DEBUG`, or `TRACE`. * `path` can be an absolute or relative path. Relative paths are resolved against the directory where
-`applicationinsights-agent-3.4.17.jar` is located.
+`applicationinsights-agent-3.4.18.jar` is located.
Starting from version 3.0.2, you can also set the self-diagnostics `level` by using the environment variable `APPLICATIONINSIGHTS_SELF_DIAGNOSTICS_LEVEL`. It then takes precedence over the self-diagnostics level specified in the JSON configuration.
azure-monitor Java Standalone Upgrade From 2X https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-upgrade-from-2x.md
Title: Upgrading from 2.x - Azure Monitor Application Insights Java description: Upgrading from Azure Monitor Application Insights Java 2.x Previously updated : 09/18/2023 Last updated : 10/30/2023 ms.devlang: java
There are typically no code changes when upgrading to 3.x. The 3.x SDK dependenc
Add the 3.x Java agent to your JVM command-line args, for example ```--javaagent:path/to/applicationinsights-agent-3.4.17.jar
+-javaagent:path/to/applicationinsights-agent-3.4.18.jar
``` If you're using the Application Insights 2.x Java agent, just replace your existing `-javaagent:...` with the aforementioned example.
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
Title: Enable Azure Monitor OpenTelemetry for .NET, Java, Node.js, and Python applications description: This article provides guidance on how to enable Azure Monitor on applications by using OpenTelemetry. Previously updated : 10/10/2023 Last updated : 10/30/2023 ms.devlang: csharp, javascript, typescript, python
dotnet add package Azure.Monitor.OpenTelemetry.Exporter
#### [Java](#tab/java)
-Download the [applicationinsights-agent-3.4.17.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.4.17/applicationinsights-agent-3.4.17.jar) file.
+Download the [applicationinsights-agent-3.4.18.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.4.18/applicationinsights-agent-3.4.18.jar) file.
> [!WARNING] >
var loggerFactory = LoggerFactory.Create(builder =>
Java autoinstrumentation is enabled through configuration changes; no code changes are required.
-Point the JVM to the jar file by adding `-javaagent:"path/to/applicationinsights-agent-3.4.17.jar"` to your application's JVM args.
+Point the JVM to the jar file by adding `-javaagent:"path/to/applicationinsights-agent-3.4.18.jar"` to your application's JVM args.
> [!TIP] > For scenario-specific guidance, see [Get Started (Supplemental)](./java-get-started-supplemental.md).
To paste your Connection String, select from the following options:
B. Set via Configuration File - Java Only (Recommended)
- Create a configuration file named `applicationinsights.json`, and place it in the same directory as `applicationinsights-agent-3.4.17.jar` with the following content:
+ Create a configuration file named `applicationinsights.json`, and place it in the same directory as `applicationinsights-agent-3.4.18.jar` with the following content:
```json {
azure-monitor Pre Aggregated Metrics Log Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/pre-aggregated-metrics-log-metrics.md
There are several [ways of sending custom metrics from the Application Insights
## Custom metrics dimensions and pre-aggregation
-All metrics that you send by using [trackMetric](./api-custom-events-metrics.md#trackmetric) or [GetMetric and TrackValue](./api-custom-events-metrics.md#getmetric) API calls are automatically stored in both logs and metrics stores. Although the log-based version of your custom metric always retains all dimensions, the pre-aggregated version of the metric is stored by default with no dimensions. You can turn on collection of dimensions of custom metrics on the [usage and estimated cost](../usage-estimated-costs.md#usage-and-estimated-costs) tab by selecting the **Enable alerting on custom metric dimensions** checkbox.
+All metrics that you send by using [trackMetric](./api-custom-events-metrics.md#trackmetric) or [GetMetric and TrackValue](./api-custom-events-metrics.md#getmetric) API calls are automatically stored in both logs and metrics stores. Although the log-based version of your custom metric always retains all dimensions, the pre-aggregated version of the metric is stored by default with no dimensions. You can turn on collection of dimensions of custom metrics on the [usage and estimated cost](../cost-usage.md#usage-and-estimated-costs) tab by selecting the **Enable alerting on custom metric dimensions** checkbox.
:::image type="content" source="./media/pre-aggregated-metrics-log-metrics/001-cost.png" lightbox="./media/pre-aggregated-metrics-log-metrics/001-cost.png" alt-text="Screenshot that shows usage and estimated costs.":::
azure-monitor Best Practices Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-alerts.md
Security is one of the most important aspects of any architecture. Azure Monitor
## Cost optimization
-Cost optimization refers to ways to reduce unnecessary expenses and improve operational efficiencies. You can significantly reduce your cost for Azure Monitor by understanding your different configuration options and opportunities to reduce the amount of data that it collects. See [Azure Monitor cost and usage](usage-estimated-costs.md) to understand the different ways that Azure Monitor charges and how to view your monthly bill.
+Cost optimization refers to ways to reduce unnecessary expenses and improve operational efficiencies. You can significantly reduce your cost for Azure Monitor by understanding your different configuration options and opportunities to reduce the amount of data that it collects. See [Azure Monitor cost and usage](cost-usage.md) to understand the different ways that Azure Monitor charges and how to view your monthly bill.
> [!NOTE] > See [Optimize costs in Azure Monitor](best-practices-cost.md) for cost optimization recommendations across all features of Azure Monitor.
azure-monitor Best Practices Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-containers.md
Security is one of the most important aspects of any architecture. Azure Monitor
## Cost optimization
-Cost optimization refers to ways to reduce unnecessary expenses and improve operational efficiencies. You can significantly reduce your cost for Azure Monitor by understanding your different configuration options and opportunities to reduce the amount of data that it collects. See [Azure Monitor cost and usage](usage-estimated-costs.md) to understand the different ways that Azure Monitor charges and how to view your monthly bill.
+Cost optimization refers to ways to reduce unnecessary expenses and improve operational efficiencies. You can significantly reduce your cost for Azure Monitor by understanding your different configuration options and opportunities to reduce the amount of data that it collects. See [Azure Monitor cost and usage](cost-usage.md) to understand the different ways that Azure Monitor charges and how to view your monthly bill.
> [!NOTE] > See [Optimize costs in Azure Monitor](best-practices-cost.md) for cost optimization recommendations across all features of Azure Monitor.
azure-monitor Best Practices Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-cost.md
# Cost optimization in Azure Monitor
-Cost optimization refers to ways to reduce unnecessary expenses and improve operational efficiencies. You can significantly reduce your cost for Azure Monitor by understanding your different configuration options and opportunities to reduce the amount of data that it collects. Before you use this article, you should see [Azure Monitor cost and usage](usage-estimated-costs.md) to understand the different ways that Azure Monitor charges and how to view your monthly bill.
+Cost optimization refers to ways to reduce unnecessary expenses and improve operational efficiencies. You can significantly reduce your cost for Azure Monitor by understanding your different configuration options and opportunities to reduce the amount of data that it collects. Before you use this article, you should see [Azure Monitor cost and usage](cost-usage.md) to understand the different ways that Azure Monitor charges and how to view your monthly bill.
This article describes [Cost optimization](/azure/architecture/framework/cost/) for Azure Monitor as part of the [Azure Well-Architected Framework](/azure/architecture/framework/). This is a set of guiding tenets that can be used to improve the quality of a workload. The framework consists of five pillars of architectural excellence:
azure-monitor Best Practices Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-logs.md
Security is one of the most important aspects of any architecture. Azure Monitor
## Cost optimization
-Cost optimization refers to ways to reduce unnecessary expenses and improve operational efficiencies. You can significantly reduce your cost for Azure Monitor by understanding your different configuration options and opportunities to reduce the amount of data that it collects. See [Azure Monitor cost and usage](usage-estimated-costs.md) to understand the different ways that Azure Monitor charges and how to view your monthly bill.
+Cost optimization refers to ways to reduce unnecessary expenses and improve operational efficiencies. You can significantly reduce your cost for Azure Monitor by understanding your different configuration options and opportunities to reduce the amount of data that it collects. See [Azure Monitor cost and usage](cost-usage.md) to understand the different ways that Azure Monitor charges and how to view your monthly bill.
> [!NOTE] > See [Optimize costs in Azure Monitor](best-practices-cost.md) for cost optimization recommendations across all features of Azure Monitor.
azure-monitor Best Practices Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-plan.md
This article is part of the scenario [Recommendations for configuring Azure Moni
If you're not already familiar with monitoring concepts, start with the [Cloud monitoring guide](/azure/cloud-adoption-framework/manage/monitor), which is part of the [Microsoft Cloud Adoption Framework for Azure](/azure/cloud-adoption-framework/). That guide defines high-level concepts of monitoring and provides guidance for defining requirements for your monitoring environment and supporting processes. This article refers to sections of that guide that are relevant to particular planning steps. ## Understand Azure Monitor costs
-Minimizing costs is a core goal of your monitoring strategy. Some data collection and features in Azure Monitor have no cost. However, others have costs based on their particular configuration, amount of data collected, or frequency that they're run. The articles in this scenario identify any recommendations that include a cost, but you should be familiar with Azure Monitor pricing as you design your implementation for cost optimization. See the following pages for details and guidance on Azure Monitor pricing:
+A core goal of your monitoring strategy will be minimizing costs. Some data collection and features in Azure Monitor have no cost, while others have costs based on their particular configuration, amount of data collected, or frequency that they're run. The articles in this scenario identify any recommendations that include a cost, but you should be familiar with Azure Monitor pricing as you design your implementation for cost optimization. See the following for details and guidance on Azure Monitor pricing:
- [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/)-- [Monitor usage and estimated costs in Azure Monitor](usage-estimated-costs.md)
+- [Azure Monitor cost and usage](cost-usage.md)
+- [Cost optimization in Azure Monitor](best-practices-cost.md)
## Define strategy Before you design and implement any monitoring solution, you should establish a monitoring strategy so that you understand the goals and requirements of your plan. The strategy defines your particular requirements, the configuration that best meets those requirements, and processes to use the monitoring environment to maximize your applications' performance and reliability. The configuration options that you choose for Azure Monitor should be consistent with your strategy.
-See [Cloud monitoring guide: Formulate a monitoring strategy](/azure/cloud-adoption-framework/strategy/monitoring-strategy) for many factors that you should consider when developing a monitoring strategy. You should also refer to [Monitoring strategy for cloud deployment models](/azure/cloud-adoption-framework/manage/monitor/cloud-models-monitor-overview) for assistance with comparing completely cloud based monitoring with a hybrid model.
+See [Cloud monitoring guide: Formulate a monitoring strategy](/azure/cloud-adoption-framework/strategy/monitoring-strategy) for a number of factors that you should consider when developing a monitoring strategy. You should also refer to [Monitoring strategy for cloud deployment models](/azure/cloud-adoption-framework/manage/monitor/cloud-models-monitor-overview), which assists in comparing completely cloud-based monitoring with a hybrid model.
## Gather required information Before you determine the details of your implementation, you should gather information required to define those details. The following sections described information typically required for a complete implementation of Azure Monitor. ### What needs to be monitored?
- You don't need to necessarily configure complete monitoring for all of your cloud resources but instead focus on your critical applications and the components they depend on. This focus will not only reduce your monitoring costs but also reduce the complexity of your monitoring environment. See [Cloud monitoring guide: Collect the right data](/azure/cloud-adoption-framework/manage/monitor/data-collection) for guidance on defining the data that you require.
+ You won't necessarily configure complete monitoring for all of your cloud resources but instead focus on your critical applications and the components they depend on. This not only reduces your monitoring costs but also reduces the complexity of your monitoring environment. See [Cloud monitoring guide: Collect the right data](/azure/cloud-adoption-framework/manage/monitor/data-collection) for guidance on defining the data that you require.
### Who needs to have access and be notified
-As you configure your monitoring environment, you need to determine the folllowing:
--- Which users should have access to monitoring data-- Which users need to be notified when an issue is detected-
-These users may be application and resource owners, or you may have a centralized monitoring team. This information determines how you configure permissions for data access and notifications for alerts. You may also require custom workbooks to present particular sets of information to different users.
+As you configure your monitoring environment, you need to determine which users should have access to monitoring data and which users need to be notified when an issue is detected. These may be application and resource owners, or you may have a centralized monitoring team. This information determines how you configure permissions for data access and notifications for alerts. You may also require custom workbooks to present particular sets of information to different users.
### Service level agreements Your organization may have SLAs that define your commitments for performance and uptime of your applications. These SLAs may determine how you need to configure time sensitive features of Azure Monitor such as alerts. You also need to understand [data latency in Azure Monitor](logs/data-ingestion-time.md) since this affects the responsiveness of monitoring scenarios and your ability to meet SLAs. ## Identify monitoring services and products
-Azure Monitor is designed to address Health and Status monitoring. A complete monitoring solution typically involves multiple Azure services and potentially other products. Other monitoring objectives, which may require more solutions, are described in the Cloud Monitoring Guide in [primary monitoring objectives](/azure/cloud-adoption-framework/strategy/monitoring-strategy#formulate-monitoring-requirements).
+Azure Monitor is designed to address Health and Status monitoring. A complete monitoring solution typically involves multiple Azure services and potentially other products. Other monitoring objectives, which may require additional solutions, are described in the Cloud Monitoring Guide in [primary monitoring objectives](/azure/cloud-adoption-framework/strategy/monitoring-strategy#formulate-monitoring-requirements).
The following sections describe other services and products that you may use with Azure Monitor. This scenario currently doesn't include guidance on implementing these solutions so you should refer to their documentation.
azure-monitor Best Practices Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-vm.md
Security is one of the most important aspects of any architecture. Azure Monitor
## Cost optimization
-Cost optimization refers to ways to reduce unnecessary expenses and improve operational efficiencies. You can significantly reduce your cost for Azure Monitor by understanding your different configuration options and opportunities to reduce the amount of data that it collects. See [Azure Monitor cost and usage](usage-estimated-costs.md) to understand the different ways that Azure Monitor charges and how to view your monthly bill.
+Cost optimization refers to ways to reduce unnecessary expenses and improve operational efficiencies. You can significantly reduce your cost for Azure Monitor by understanding your different configuration options and opportunities to reduce the amount of data that it collects. See [Azure Monitor cost and usage](cost-usage.md) to understand the different ways that Azure Monitor charges and how to view your monthly bill.
> [!NOTE] > See [Optimize costs in Azure Monitor](best-practices-cost.md) for cost optimization recommendations across all features of Azure Monitor.
azure-monitor Container Insights Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-cost.md
Last updated 03/02/2023 + # Understand monitoring costs for Container insights This article provides pricing guidance for Container insights to help you understand how to:
-* Estimate costs up front before you enable Container insights.
* Measure costs after Container insights has been enabled for one or more containers. * Control the collection of data and make cost reductions.
This article provides pricing guidance for Container insights to help you unders
The Azure Monitor pricing model is primarily based on the amount of data ingested in gigabytes per day into your Log Analytics workspace. The cost of a Log Analytics workspace isn't based only on the volume of data collected; it also depends on the plan selected and how long you choose to store data generated from your clusters. >[!NOTE]
->All sizes and pricing are for sample estimation only. See the Azure Monitor [pricing](https://azure.microsoft.com/pricing/details/monitor/) page for the most recent pricing based on your Azure Monitor Log Analytics pricing model and Azure region.
+> See [Estimate Azure Monitor costs](../cost-estimate.md#log-data-ingestion) to estimate your costs for Container insights before you enable it.
The following types of data collected from a Kubernetes cluster with Container insights influence cost and can be customized based on your usage:
The following types of data collected from a Kubernetes cluster with Container i
- Active scraping of Prometheus metrics - [Resource log collection](../../aks/monitor-aks.md#resource-logs) of Kubernetes main node logs in your Azure Kubernetes Service (AKS) cluster to analyze log data generated by main components, such as `kube-apiserver` and `kube-controller-manager`.
-## Estimating costs to monitor your AKS cluster
-
-The following estimation is based on an AKS cluster with the following sizing example. The estimate applies only for metrics and inventory data collected. For container logs like stdout, stderr, and environmental variables, the estimate varies based on the log sizes generated by the workload. They're excluded from our estimation.
-
-If you enabled monitoring of an AKS cluster configured as follows:
--- Three nodes-- Two disks per node-- One network interface per node-- 20 pods (one container in each pod = 20 containers in total)-- Two Kubernetes namespaces-- Five Kubernetes services (includes kube-system pods, services, and namespace)-- Collection frequency = 60 secs (default)-
-You can see the tables and volume of data generated per hour in the assigned Log Analytics workspace. For more information about each of these tables, see [Azure Monitor Logs tables](../../aks/monitor-aks-reference.md#azure-monitor-logs-tables).
-
-|Table | Size estimate (MB/hour) |
-|||
-|Perf | 12.9 |
-|InsightsMetrics | 11.3 |
-|KubePodInventory | 1.5 |
-|KubeNodeInventory | 0.75 |
-|KubeServices | 0.13 |
-|ContainerInventory | 3.6 |
-|KubeHealth | 0.1 |
-|KubeMonAgentEvents |0.005 |
-
-Total = 31 MB/hour = 23.1 GB/month (one month = 31 days)
-
-By using the default [pricing](https://azure.microsoft.com/pricing/details/monitor/) for Log Analytics, which is a pay-as-you-go model, you can estimate the Azure Monitor cost per month. After a capacity reservation is included, the price would be higher per month depending on the reservation selected.
## Control ingestion to reduce cost
You must be on the ContainerLogV2 schema to configure Basic Logs. For more infor
### Prometheus metrics scraping
-If you use [Prometheus metric scraping](container-insights-prometheus.md), make sure that you limit the number of metrics you collect from your cluster:
+> [!NOTE]
+> This section describes [collection of Prometheus metrics in your Log Analytics workspace](container-insights-prometheus-logs.md). This information does not apply if you're using [Managed Prometheus to scrape your Prometheus metrics](prometheus-metrics-enable.md).
+
+If you [collect Prometheus metrics in your Log Analytics workspace](container-insights-prometheus-logs.md), make sure that you limit the number of metrics you collect from your cluster:
- Ensure that scraping frequency is optimally set. The default is 60 seconds. You can increase the frequency to 15 seconds, but you must ensure that the metrics you're scraping are published at that frequency. Otherwise, many duplicate metrics will be scraped and sent to your Log Analytics workspace at intervals that add to data ingestion and retention costs but are of less value. - Container insights supports exclusion and inclusion lists by metric name. For example, if you're scraping **kubedns** metrics in your cluster, hundreds of them might get scraped by default. But you're most likely only interested in a subset of the metrics. Confirm that you specified a list of metrics to scrape, or exclude others except for a few to save on data ingestion volume. It's easy to enable scraping and not use many of those metrics, which will only add charges to your Log Analytics bill.
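To check which scraped Prometheus metrics contribute the most volume, you can query the `InsightsMetrics` table, where Container insights writes Prometheus data under the `prometheus` namespace. This sketch rests on that assumption; adjust the time range to suit your analysis:

```kusto
// Estimated billed volume (MB) per scraped Prometheus metric over the past day.
InsightsMetrics
| where TimeGenerated > ago(1d)
| where Namespace == "prometheus"
| summarize BilledMB = sum(_BilledSize) / (1024 * 1024) by Name
| sort by BilledMB desc
```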
If you use [Prometheus metric scraping](container-insights-prometheus.md), make
Container insights includes a predefined set of metrics and inventory items collected that are written as log data in your Log Analytics workspace. All metrics in the following table are collected every one minute.

| Type | Metrics |
|:|:|
| Node metrics | `cpuUsageNanoCores`<br>`cpuCapacityNanoCores`<br>`cpuAllocatableNanoCores`<br>`memoryRssBytes`<br>`memoryWorkingSetBytes`<br>`memoryCapacityBytes`<br>`memoryAllocatableBytes`<br>`restartTimeEpoch`<br>`used` (disk)<br>`free` (disk)<br>`used_percent` (disk)<br>`io_time` (diskio)<br>`writes` (diskio)<br>`reads` (diskio)<br>`write_bytes` (diskio)<br>`write_time` (diskio)<br>`iops_in_progress` (diskio)<br>`read_bytes` (diskio)<br>`read_time` (diskio)<br>`err_in` (net)<br>`err_out` (net)<br>`bytes_recv` (net)<br>`bytes_sent` (net)<br>`Kubelet_docker_operations` (kubelet) |
The following list is the cluster inventory data collected by default:
## Next steps
To help you understand what the costs are likely to be based on recent usage patterns from data collected with Container insights, see [Analyze usage in a Log Analytics workspace](../logs/analyze-usage.md).
azure-monitor Cost Estimate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/cost-estimate.md
+
+ Title: Estimate Azure Monitor costs
+description: Guidance on using the Azure Monitor pricing calculator to estimate Azure Monitor billable usage.
+++ Last updated : 10/27/2023+
+# Estimate Azure Monitor costs
+
+Your Azure Monitor cost will vary significantly based on your expected utilization and configuration. Use the [Azure Monitor Pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=monitor) to get cost estimates for different features of Azure Monitor based on your particular environment.
+
+Since Azure Monitor has [multiple types of charges](cost-usage.md#pricing-model), its calculator has multiple categories. See the sections below for an explanation of these categories and guidance for providing estimates. See [Azure Monitor Pricing](https://azure.microsoft.com/pricing/details/monitor/) for current pricing details.
+
+Some of the values required by the calculator might be difficult to provide if you're just getting started with Azure Monitor. For example, you might have no idea of the volume of analytics logs generated from the different Azure resources that you intend to monitor. A common strategy is to enable monitoring for a small group of resources and use the observed data volumes with the calculator to determine your costs for a full environment. See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for queries and other methods to measure the billable data in your Log Analytics workspace.
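For example, a query like the following against the `Usage` table in your pilot workspace yields the average daily billable volume you can feed into the calculator. The 31-day window is an assumption you can adjust:

```kusto
// Average daily billable ingestion (GB) over the past 31 days, for use in the pricing calculator.
Usage
| where TimeGenerated > ago(31d)
| where IsBillable == true
| summarize DailyGB = sum(Quantity) / 1000 by bin(TimeGenerated, 1d)
| summarize AvgDailyGB = avg(DailyGB)
```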
+++
+## Log data ingestion
+This section includes the ingestion and retention of data in your Log Analytics workspaces. This includes features such as Container insights and Application Insights, in addition to resource logs collected from your Azure resources and from agents installed on your virtual machines. This is typically where the bulk of monitoring costs is incurred.
+
+| Category | Description |
+|:|:|
+| Estimate Data Volume For Monitoring VMs | Data collected from your virtual machines either by using VM insights or by creating a DCR to collect events and performance data. The data collected from each VM will vary significantly depending on your particular collection settings and the workloads running on your virtual machines, so you should validate these estimates in your own environment. |
+| Estimate Data Volume Using Container Insights | Data collected from your Kubernetes clusters. The estimate is based on the number of clusters and their configuration. This estimate applies only for metrics and inventory data collected. Container logs (stdout, stderr, and environmental variables) vary significantly based on the log sizes generated by the workload, and they're excluded from this estimate. You should include their volume in the *Analytics Logs* category. |
+| Estimate Data Volume Based On Application Activity | Data collected from your workspace-based applications using Application Insights. The data collected from each application will vary significantly depending on your particular collection settings and applications, so you should validate these estimates in your own environment. |
+| Analytics Logs | Resource logs collected from Azure resources and any other data aside from those listed above sent to Log Analytics tables not configured for [basic logs](logs/basic-logs-configure.md). This can be difficult to estimate so you should enable monitoring for a small group of resources and use the observed data volumes to extrapolate for a full environment. |
+| Basic Logs | Resource logs collected from Azure resources and any other data aside from those listed above sent to Log Analytics tables configured for [basic logs](logs/basic-logs-configure.md). This can be difficult to estimate so you should enable monitoring for a small group of resources and use the observed data volumes to extrapolate for a full environment. |
+| Interactive Data Retention | [Interactive retention](logs/data-retention-archive.md) setting for your Log Analytics workspace. |
+| Data Archive | [Archive](logs/data-retention-archive.md) setting for your Log Analytics workspace. |
+| Basic Logs Search Queries | Estimated number and scanned data of the queries that you expect to run using tables configured for [basic logs](logs/basic-logs-configure.md). |
+| Search Jobs | Estimated number and scanned data of the [search jobs](logs/search-jobs.md) that you expect to run against [archived data](logs/data-retention-archive.md). |
+| Platform logs | Resource logs collected from Azure resources to an Event Hub, Storage account, or a partner. This doesn't include logs sent to your Log Analytics workspace, which are included in the **Analytics Logs** and **Basic Logs** categories. This can be difficult to estimate so you should enable monitoring for a small group of resources and use the observed data volumes to extrapolate for a full environment. |
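To validate the per-VM estimates against real data, a query along these lines shows billable volume by computer. Note that `union withsource *` scans every table in the workspace, so this sketch assumes your workspace is small enough for that to be practical; you might restrict it to specific tables instead:

```kusto
// Billable bytes per computer over the past day (can be expensive on large workspaces).
union withsource = SourceTable *
| where TimeGenerated > ago(1d)
| where _IsBillable == true
| summarize BillableBytes = sum(_BilledSize) by Computer
| sort by BillableBytes desc
```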
+
+## Managed Prometheus
+This section includes charges for the ingestion and query of Prometheus metrics by your Kubernetes clusters.
+
+| Category | Description |
+|:|:|
+| Metric Sample Ingestion | Number and frequency of the Prometheus metrics collected by your AKS nodes. See [Default Prometheus metrics configuration in Azure Monitor](containers/prometheus-metrics-scrape-default.md). |
+| Query Samples Processed | Number of query samples processed, which can be estimated from the dashboards and alerting rules that use them. |
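As a rough illustration of the ingestion estimate: a cluster exposing 1,000 active time series scraped every 60 seconds ingests 1,000 samples per minute, or about 43.2 million samples per 30-day month (1,000 × 60 × 24 × 30). The time series count and scrape interval here are hypothetical values; substitute your own.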
++
+## Application Insights
+This section includes charges from [classic Application Insights resources](app/convert-classic-resource.md). Workspace-based Application Insights resources are included in the Log Data Ingestion category.
+
+| Category | Description |
+|:|:|
+| Data ingestion | Volume of data that you expect from your classic Application Insights resources. This can be difficult to estimate so you should enable monitoring for a small group of resources and use the observed data volumes to extrapolate for a full environment. |
+| Data Retention | [Data retention setting](logs/data-retention-archive.md#set-data-retention-for-classic-application-insights-resources) for your classic Application Insights resources. |
+| Multi-step Web Test | Number of legacy [multi-step web tests](/previous-versions/azure/azure-monitor/app/availability-multistep) that you expect to run. |
++
+## Alert rules
+This section includes charges for alert rules.
+
+| Category | Description |
+|:|:|
+| Metric Signals Monitored | Number of [metric alert rules](alerts/alerts-types.md#metric-alerts) and their time series. |
+| Log Signals Monitored | Number of [log alert rules](alerts/alerts-types.md#log-alerts) and their frequency. |
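For example, a single metric alert rule that monitors one metric split by a dimension with five values monitors five time series, and log alert rules that run at a higher frequency are billed on more expensive meters than the same rules running every 15 minutes (see the per-frequency alert meters listed later in this update).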
+
+## ITSM connector - ticket creation/update
+This section includes charges for ITSM events, which are sent in response to alerts being triggered.
+
+| Category | Description |
+|:|:|
+| Ticket creation/update | Estimate the number of ITSM events that will be sent beyond the number included for free. |
++
+## Notifications
+This section includes charges for notifications, which are sent in response to alerts being triggered.
+
+| Category | Description |
+|:|:|
+| Emails, webhooks, and push notifications | Estimate the number of different types of notifications that will be sent beyond the number included for free. |
+++
+## Next steps
+
+- See [Azure Monitor Logs pricing details](logs/cost-logs.md) for details on how charges are calculated for data in a Log Analytics workspace and different configuration options to reduce your charges.
+- See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for details on analyzing the data in your workspace to determine the source of any higher-than-expected usage and opportunities to reduce the amount of data collected.
+- See [Azure Monitor best practices - Cost management](best-practices-cost.md) for best practices on configuring and managing Azure Monitor to minimize your charges.
azure-monitor Cost Meters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/cost-meters.md
+
+ Title: Azure Monitor billing meter names
+description: Reference of Azure Monitor billing meter names.
+++ Last updated : 09/20/2023+
+# Azure Monitor billing meter names
+
+This article contains a reference of the billing meter names used by Azure Monitor in [Azure Cost Management + Billing](cost-usage.md#azure-cost-management--billing). Use this information to interpret your monthly charges for Azure Monitor.
+
+## Log data ingestion
+The following table lists the meters used to bill for data ingestion in your Log Analytics workspaces and whether the meter is regional. Regional meters have a different billing meter (`MeterId` in the exported usage report) for each region. Note that Basic Logs ingestion can be used when the workspace's pricing tier is Pay-as-you-go or any commitment tier.
++
+| Pricing tier |ServiceName | MeterName | Regional Meter? |
+| -- | -- | -- | -- |
+| (any) | Azure Monitor | Basic Logs Data Ingestion | yes |
+| Pay-as-you-go | Log Analytics | Pay-as-you-go Data Ingestion | yes |
+| 100 GB/day Commitment Tier | Azure Monitor | 100 GB Commitment Tier Capacity Reservation | yes |
+| 200 GB/day Commitment Tier | Azure Monitor | 200 GB Commitment Tier Capacity Reservation | yes |
+| 300 GB/day Commitment Tier | Azure Monitor | 300 GB Commitment Tier Capacity Reservation | yes |
+| 400 GB/day Commitment Tier | Azure Monitor | 400 GB Commitment Tier Capacity Reservation | yes |
+| 500 GB/day Commitment Tier | Azure Monitor | 500 GB Commitment Tier Capacity Reservation | yes |
+| 1000 GB/day Commitment Tier | Azure Monitor | 1000 GB Commitment Tier Capacity Reservation | yes |
+| 2000 GB/day Commitment Tier | Azure Monitor | 2000 GB Commitment Tier Capacity Reservation | yes |
+| 5000 GB/day Commitment Tier | Azure Monitor | 5000 GB Commitment Tier Capacity Reservation | yes |
+| Per Node (legacy tier) | Insight and Analytics | Standard Node | no |
+| Per Node (legacy tier) | Insight and Analytics | Standard Data Overage per Node | no |
+| Per Node (legacy tier) | Insight and Analytics | Standard Data Included per Node | no |
+| Standalone (legacy tier) | Log Analytics | Pay-as-you-go Data Analyzed | no |
+| Standard (legacy tier) | Log Analytics | Standard Data Analyzed | no |
+| Premium (legacy tier) | Log Analytics | Premium Data Analyzed | no |
++
+The *Standard Data Included per Node* meter is used both for the Log Analytics [Per Node tier](logs/cost-logs.md#per-node-pricing-tier) data allowance and for the [Defender for Servers data allowance](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud), for workspaces in any pricing tier.
++
+## Other Azure Monitor logs meters
+
+| ServiceName | MeterName | Regional Meter? |
+| -- | | |
+| Log Analytics | Pay-as-you-go Data Retention | yes |
+| Insight and Analytics | Standard Data Retention | no |
+| Azure Monitor | Data Archive | yes |
+| Azure Monitor | Search Queries Scanned | yes |
+| Azure Monitor | Search Jobs Scanned | yes |
+| Azure Monitor | Data Restore | yes |
+| Azure Monitor | Log Analytics data export Data Exported | yes |
+| Azure Monitor | Platform Logs Data Processed | yes |
+
+*Pay-as-you-go Data Retention* is used for workspaces in all modern pricing tiers (Pay-as-you-go and Commitment Tiers). *Standard Data Retention* is used for workspaces in the legacy Per Node and Standalone pricing tiers.
+
+## Azure Monitor metrics meters
+
+| ServiceName | MeterName | Regional Meter? |
+| -- | | |
+| Azure Monitor | Metrics ingestion Metric samples | yes |
+| Azure Monitor | Prometheus Metrics Queries Metric samples | yes |
+| Azure Monitor | Native Metric Queries API Calls | yes |
+
+## Azure Monitor alerts meters
+
+| ServiceName | MeterName | Regional Meter? |
+| -- | | |
+| Azure Monitor | Alerts Metric Monitored | no |
+| Azure Monitor | Alerts Dynamic Threshold | no |
+| Azure Monitor | Alerts System Log Monitored at 1 Minute Frequency | no |
+| Azure Monitor | Alerts System Log Monitored at 10 Minute Frequency | no |
+| Azure Monitor | Alerts System Log Monitored at 15 Minute Frequency | no |
+| Azure Monitor | Alerts System Log Monitored at 5 Minute Frequency | no |
+| Azure Monitor | Alerts Resource Monitored at 1 Minute Frequency | no |
+| Azure Monitor | Alerts Resource Monitored at 10 Minute Frequency | no |
+| Azure Monitor | Alerts Resource Monitored at 15 Minute Frequency | no |
+| Azure Monitor | Alerts Resource Monitored at 5 Minute Frequency | no |
+
+## Azure Monitor web test meters
+
+| ServiceName | MeterName | Regional Meter? |
+| -- | | |
+| Azure Monitor | Standard Web Test Execution | yes |
+| Application Insights | Multi-step Web Test | no |
+
+## Legacy classic Application Insights meters
+
+| ServiceName | MeterName | Regional Meter? |
+| -- | | |
+| Application Insights | Enterprise Node | no |
+| Application Insights | Enterprise Overage Data | no |
++
+### Legacy Application Insights meters
+
+Most Application Insights usage for both classic and workspace-based resources is reported on meters with **Log Analytics** for **Meter Category** because there's a single log back-end for all Azure Monitor components. Only Application Insights resources on legacy pricing tiers and multiple-step web tests are reported with **Application Insights** for **Meter Category**. The usage is shown in the **Consumed Quantity** column. The unit for each entry is shown in the **Unit of Measure** column. For more information, see [Understand your Microsoft Azure bill](../cost-management-billing/understand/review-individual-bill.md).
+
+To separate costs from your Log Analytics and classic Application Insights usage, [create a filter](../cost-management-billing/costs/group-filter.md) on **Resource type**. To see all Application Insights costs, filter **Resource type** to **microsoft.insights/components**. For Log Analytics costs, filter **Resource type** to **microsoft.operationalinsights/workspaces**. (Workspace-based Application Insights is all billed to the Log Analytics workspace resource.)
azure-monitor Cost Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/cost-usage.md
+
+ Title: Azure Monitor cost and usage
+description: Overview of how Azure Monitor is billed and how to analyze billable usage.
+++ Last updated : 10/20/2023+
+# Azure Monitor cost and usage
+This article describes the different ways that Azure Monitor charges for usage and how to evaluate charges on your Azure bill.
++
+## Pricing model
+Azure Monitor uses a consumption-based pricing (pay-as-you-go) billing model where you only pay for what you use. Features of Azure Monitor that are enabled by default do not incur any charge, including collection and alerting on the [Activity log](essentials/activity-log.md) and collection and analysis of [platform metrics](essentials/metrics-supported.md).
+
+Several other features don't have a direct cost, but you instead pay for the ingestion and retention of data that they collect. The following table describes the different types of usage that are charged in Azure Monitor. Detailed current pricing for each is provided in [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
++
+| Type | Description |
+|:|:|
+| Logs | Ingestion, retention, and export of data in [Log Analytics workspaces](logs/log-analytics-workspace-overview.md) and [legacy Application insights resources](app/convert-classic-resource.md). This typically accounts for the bulk of Azure Monitor charges for most customers. There is no charge for querying this data except in the case of [Basic Logs](logs/basic-logs-configure.md) or [Archived Logs](logs/data-retention-archive.md).<br><br>Charges for Logs can vary significantly based on the configuration that you choose. See [Azure Monitor Logs pricing details](logs/cost-logs.md) for details on how charges for Logs data are calculated and the different pricing tiers available. |
+| Platform Logs | Processing of [diagnostic and auditing information](essentials/resource-logs.md) is charged for [certain services](essentials/resource-logs-categories.md#costs) when sent to destinations other than a Log Analytics workspace. There's no direct charge when this data is sent to a Log Analytics workspace, but there is a charge for the workspace data ingestion and collection. |
+| Metrics | There is no charge for [standard metrics](essentials/metrics-supported.md) collected from Azure resources. There is a cost for collecting [custom metrics](essentials/metrics-custom-overview.md) and for retrieving metrics from the [REST API](essentials/rest-api-walkthrough.md#retrieve-metric-values). |
+| Prometheus Metrics | Pricing for [Azure Monitor managed service for Prometheus](essentials/prometheus-metrics-overview.md) is based on [data samples ingested](essentials/prometheus-metrics-enable.md) and [query samples processed](essentials/azure-monitor-workspace-manage.md#link-a-grafana-workspace). Data is retained for 18 months at no extra charge. |
+| Alerts | Alerts are charged based on the type and number of [signals](alerts/alerts-overview.md) used by the alert rule, its frequency, and the type of [notification](alerts/action-groups.md) used in response. For [log alerts](alerts/alerts-unified-log.md) configured for [at scale monitoring](alerts/alerts-unified-log.md#split-by-alert-dimensions), the cost will also depend on the number of time series created by the dimensions resulting from your query. |
+| Web tests | There is a cost for [standard web tests](app/availability-standard-tests.md) and [multi-step web tests](app/availability-multistep.md) in Application Insights. Multi-step web tests have been deprecated. |
++
+### Data transfer charges
+Sending data to Azure Monitor can incur data bandwidth charges. As described in the [Azure Bandwidth pricing page](https://azure.microsoft.com/pricing/details/bandwidth/), data transfer between Azure services located in two regions is charged as outbound data transfer at the normal rate. Inbound data transfer is free. Data transfer charges for Azure Monitor, though, are typically very small compared to the costs for data ingestion and retention. You should focus on your ingested data volume to control your costs.
+
+> [!NOTE]
+> Data sent to a different region using [Diagnostic Settings](essentials/diagnostic-settings.md) does not incur data transfer charges.
+
+## View Azure Monitor usage and charges
+There are two primary tools to view and analyze your Azure Monitor billing and estimated charges. Each is described in detail in the following sections.
+
+| Tool | Description |
+|:|:|
+| [Azure Cost Management + Billing](#azure-cost-management--billing) | The primary tool that you use to analyze your usage and costs. It gives you multiple options to analyze your monthly charges for different Azure Monitor features and their projected cost over time. |
+| [Usage and Estimated Costs](#usage-and-estimated-costs) | Provides a listing of monthly charges for different Azure Monitor features. This is particularly useful for Log Analytics workspaces where it helps you to select your pricing tier by showing how your cost would change at different pricing tiers. |
++
+## Azure Cost Management + Billing
+To get started analyzing your Azure Monitor charges, open [Cost Management + Billing](../cost-management-billing/costs/quick-acm-cost-analysis.md?toc=/azure/billing/TOC.json) in the Azure portal. This tool includes several built-in dashboards for deep cost analysis like cost by resource and invoice details. Select **Cost Management** and then **Cost analysis**. Select your subscription or another [scope](../cost-management-billing/costs/understand-work-scopes.md).
+
+>[!NOTE]
+>You might need additional access to use Cost Management data. See [Assign access to Cost Management data](../cost-management-billing/costs/assign-access-acm-data.md).
+++
+To limit the view to Azure Monitor charges, [create a filter](../cost-management-billing/costs/group-filter.md) for the following **Service names**. See [Azure Monitor billing meter names](cost-meters.md) for the different charges that are included in each service.
+
+- Azure Monitor
+- Log Analytics
+- Insight and Analytics
+- Application Insights
+
+Other services such as Microsoft Defender for Cloud and Microsoft Sentinel also bill their usage against Log Analytics workspace resources, so you might want to add them to your filter. See [Common cost analysis uses](../cost-management-billing/costs/cost-analysis-common-uses.md) for details on using this view.
++
+>[!NOTE]
+>Alternatively, you can go to the **Overview** page of a Log Analytics workspace or Application Insights resource and click **View Cost** in the upper right corner of the **Essentials** section. This will launch the **Cost Analysis** from Azure Cost Management + Billing already scoped to the workspace or application.
+> :::image type="content" source="logs/media/view-bill/view-cost-option.png" lightbox="logs/media/view-bill/view-cost-option.png" alt-text="Screenshot of option to view cost for Log Analytics workspace.":::
+
+### Automated mails and alerts
+Rather than manually analyzing your costs in the Azure portal, you can automate delivery of information using the following methods.
+
+ - **Daily cost analysis emails.** Once you've configured your Cost Analysis view, you should click **Subscribe** at the top of the screen to receive regular email updates from Cost Analysis.
+ - **Budget alerts.** To be notified if there are significant increases in your spending, create a [budget alert](../cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending.md) for a single workspace or group of workspaces.
+
+### Export usage details
+
+To gain deeper understanding of your usage and costs, create exports using **Cost Analysis**. See [Tutorial: Create and manage exported data](../cost-management-billing/costs/tutorial-export-acm-data.md) to learn how to automatically create a daily export you can use for regular analysis.
+
+These exports are in CSV format and will contain a list of daily usage (billed quantity and cost) by resource, billing meter, and several other fields such as [AdditionalInfo](../cost-management-billing/automate/understand-usage-details-fields.md#list-of-fields-and-descriptions). You can use Microsoft Excel to do rich analyses of your usage that aren't possible in the **Cost Analysis** experiences in the portal.
+
+The usage export has both the number of units of usage and their cost. Consequently, you can use this export to see the amount of benefits you are receiving from various offers such as the [Defender for Servers data allowance](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud) and the [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5, and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/).
+
+For example, usage from Log Analytics can be found by first filtering on the **Meter Category** column to show
+
+1. **Log Analytics** (for Pay-as-you-go data ingestion and interactive Data Retention)
+2. **Insight and Analytics** (used by some of the legacy pricing tiers)
+3. **Azure Monitor** (used by most other Log Analytics features such as Commitment Tiers, Basic Logs ingestion, Data Archive, Search Queries, and Search Jobs)
+
+Add a filter on the **Instance ID** column for **contains workspace** or **contains cluster**. The usage is shown in the **Consumed Quantity** column. The unit for each entry is shown in the **Unit of Measure** column.
+
+> [!NOTE]
+> See [Azure Monitor billing meter names](cost-meters.md) for a reference of the billing meter names used by Azure Monitor in Azure Cost Management + Billing.
++
+## Usage and estimated costs
+You can get additional usage details about Log Analytics workspaces and Application Insights resources from the **Usage and Estimated Costs** option for each.
+
+### Log Analytics workspace
+To learn about your usage trends and choose the most cost-effective [commitment tier](logs/cost-logs.md#commitment-tiers) for your Log Analytics workspace, select **Usage and Estimated Costs** from the **Log Analytics workspace** menu in the Azure portal.
++
+This view includes the following:
+
+A. Estimated monthly charges based on usage from the past 31 days using the current pricing tier.<br>
+B. Estimated monthly charges using different commitment tiers.<br>
+C. Billable data ingestion by solution from the past 31 days.
+
+To explore the data in more detail, click on the icon in the upper-right corner of either chart to work with the query in Log Analytics.
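If you prefer to run the analysis yourself, the following is a sketch of a similar query (not necessarily the exact query behind the chart) over the `Usage` table:

```kusto
// Billable ingestion (GB) by solution over the past 31 days, largest first.
Usage
| where TimeGenerated > ago(31d)
| where IsBillable == true
| summarize BillableGB = sum(Quantity) / 1000 by Solution
| sort by BillableGB desc
```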
++
+### Application insights
+To learn about your usage trends for your classic Application Insights resource, select **Usage and Estimated Costs** from the **Applications** menu in the Azure portal.
++
+This view includes the following:
+
+A. Estimated monthly charges based on usage from the past month.<br>
+B. Billable data ingestion by table from the past month.
+
+To investigate your Application Insights usage more deeply, open the **Metrics** page, add the metric named *Data point volume*, and then select the *Apply splitting* option to split the data by "Telemetry item type".
++
+## View data allocation benefits
+
+To view data allocation benefits from sources such as [Microsoft Defender for Servers](https://azure.microsoft.com/pricing/details/defender-for-cloud/), [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5 and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/), or the [Sentinel Free Trial](https://azure.microsoft.com/pricing/details/microsoft-sentinel/), you need to [export your usage details](#export-usage-details).
+
+1. Open the exported usage spreadsheet and filter the *Instance ID* column to your workspace. To select all of your workspaces in the spreadsheet, filter the *Instance ID* column to "contains /workspaces/".
+2. Filter the *ResourceRate* column to show only rows where this rate is equal to zero. These rows show the data allocations from the various sources.
+
+> [!NOTE]
+> The Defender for Servers data allocation of 500 MB/server/day appears in rows with the meter name "Data Included per Node" and the meter category "Insight and Analytics" (the name of a legacy offer still used with this meter). If the workspace is in the legacy Per Node Log Analytics pricing tier, this meter also includes the data allocations from that Log Analytics pricing tier.
++
+## Operations Management Suite subscription entitlements
+
+Customers who purchased Microsoft Operations Management Suite E1 and E2 are eligible for per-node data ingestion entitlements for Log Analytics and Application Insights. Each Application Insights node includes up to 200 MB of data ingested per day (separate from Log Analytics data ingestion), with 90-day data retention at no extra cost.
+
+To receive these entitlements, the Log Analytics workspaces or Application Insights resources in a subscription must use the Per-Node (OMS) pricing tier. This entitlement isn't visible in the estimated costs shown in the Usage and estimated cost pane.
+
+Depending on the number of nodes of the suite that your organization purchased, moving some subscriptions into a Per GB (pay-as-you-go) pricing tier might be advantageous, but this requires careful consideration.
++
+Also, if a subscription moved to the new Azure monitoring pricing model introduced in April 2018, the Per GB tier is the only tier available. Moving a subscription to the new Azure monitoring pricing model isn't advisable if you have an Operations Management Suite subscription.
+
+> [!TIP]
+> If your organization has Microsoft Operations Management Suite E1 or E2, it's usually best to keep your Log Analytics workspaces in the Per-Node (OMS) pricing tier and your Application Insights resources in the Enterprise pricing tier.
+>
+
+## Next steps
+
+- See [Azure Monitor Logs pricing details](logs/cost-logs.md) for details on how charges are calculated for data in a Log Analytics workspace and different configuration options to reduce your charges.
+- See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for details on analyzing the data in your workspace to determine the source of any higher-than-expected usage and opportunities to reduce the amount of data collected.
+- See [Set daily cap on Log Analytics workspace](logs/daily-cap.md) to control your costs by setting a daily limit on the amount of data that might be ingested in a workspace.
+- See [Azure Monitor best practices - Cost management](best-practices-cost.md) for best practices on configuring and managing Azure Monitor to minimize your charges.
azure-monitor Metrics Custom Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-custom-overview.md
For details on when billing is enabled for custom metrics and metrics queries, c
Custom metrics are retained for the [same amount of time as platform metrics](../essentials/data-platform-metrics.md#retention-of-metrics). > [!NOTE]
-> Metrics sent to Azure Monitor via the Application Insights SDK are billed as ingested log data. They incur additional metrics charges only if the Application Insights feature [Enable alerting on custom metric dimensions](../app/pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-pre-aggregation) has been selected. This checkbox sends data to the Azure Monitor metrics database by using the custom metrics API to allow the more complex alerting. Learn more about the [Application Insights pricing model](../usage-estimated-costs.md) and [prices in your region](https://azure.microsoft.com/pricing/details/monitor/).
+> Metrics sent to Azure Monitor via the Application Insights SDK are billed as ingested log data. They incur additional metrics charges only if the Application Insights feature [Enable alerting on custom metric dimensions](../app/pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-pre-aggregation) has been selected. This checkbox sends data to the Azure Monitor metrics database by using the custom metrics API to allow the more complex alerting. Learn more about the [Application Insights pricing model](../cost-usage.md) and [prices in your region](https://azure.microsoft.com/pricing/details/monitor/).
## How to send custom metrics
azure-monitor Analyze Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/analyze-usage.md
See the **Usage** tab for a breakdown of ingestion by solution and table. This i
Select **Additional Queries** for prebuilt queries that help you further understand your data patterns. ### Usage and estimated costs
-The **Data ingestion per solution** chart on the [Usage and estimated costs](../usage-estimated-costs.md#usage-and-estimated-costs) page for each workspace shows the total volume of data sent and how much is being sent by each solution over the previous 31 days. This information helps you determine trends such as whether any increase is from overall data usage or usage by a particular solution.
+The **Data ingestion per solution** chart on the [Usage and estimated costs](../cost-usage.md#usage-and-estimated-costs) page for each workspace shows the total volume of data sent and how much is being sent by each solution over the previous 31 days. This information helps you determine trends such as whether any increase is from overall data usage or usage by a particular solution.
## Querying data volumes from the Usage table
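A sketch of such a query follows, using the documented pattern of filtering `Usage` records on `StartTime` for accurate daily accounting; the one-month window is an assumption:

```kusto
// Daily billable ingestion (GB) by solution over the past month.
Usage
| where TimeGenerated > ago(32d)
| where StartTime > startofday(ago(31d))
| where IsBillable == true
| summarize BillableGB = sum(Quantity) / 1000 by bin(StartTime, 1d), Solution
| render columnchart
```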
W3CIISLog
## Next steps - See [Azure Monitor Logs pricing details](cost-logs.md) for information on how charges are calculated for data in a Log Analytics workspace and different configuration options to reduce your charges.-- See [Azure Monitor cost and usage](../usage-estimated-costs.md) for a description of the different types of Azure Monitor charges and how to analyze them on your Azure bill.
+- See [Azure Monitor cost and usage](../cost-usage.md) for a description of the different types of Azure Monitor charges and how to analyze them on your Azure bill.
- See [Azure Monitor best practices - Cost management](../best-practices-cost.md) for best practices on configuring and managing Azure Monitor to minimize your charges. - See [Data collection transformations in Azure Monitor (preview)](../essentials/data-collection-transformations.md) for information on using transformations to reduce the amount of data you collected in a Log Analytics workspace by filtering unwanted records and columns.
azure-monitor Azure Monitor Data Explorer Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/azure-monitor-data-explorer-proxy.md
Perf | where ObjectName == "Memory" and (CounterName == "Available MBytes Memory
## Create an alert based on a cross-service query
-To create a new alert rule based on a cross-service query, follow the steps in [Create a new alert rule](../alerts/alerts-create-new-alert-rule.md), selecting your Log Analytics workspace on the Scope tab.
+To create a new alert rule based on a cross-service query, follow the steps in [Create a new alert rule](../alerts/alerts-create-new-alert-rule.md), selecting your Log Analytics workspace on the **Scope** tab.
## Limitations-
+### General cross-service query limitations
* Database names are case sensitive.
* Identifying the Timestamp column in the cluster isn't supported. The Log Analytics Query API won't pass the time filter.
* Cross-service queries support data retrieval only.
* [Private Link](../logs/private-link-security.md) (private endpoints) and [IP restrictions](/azure/data-explorer/security-network-restrict-public-access) do not support cross-service queries.
* `mv-expand` is limited to 2000 records.
-* Azure Resource Graph cross-queries do not support these operators: `smv-apply()`, `rand()`, `arg_max()`, `arg_min()`, `avg()`, `avg_if()`, `countif()`, `sumif()`, `percentile()`, `percentiles()`, `percentilew()`, `percentilesw()`, `stdev()`, `stdevif()`, `stdevp()`, `variance()`, `variancep()`, `varianceif()`.
+
+### Azure Resource Graph cross-service query limitations
+When you query Azure Resource Graph data from Azure Monitor:
+* The query returns the first 1000 records only.
+* Azure Monitor doesn't return Azure Resource Graph query errors.
+* The Log Analytics query editor marks valid Azure Resource Graph queries as syntax errors.
+* These operators aren't supported: `smv-apply()`, `rand()`, `arg_max()`, `arg_min()`, `avg()`, `avg_if()`, `countif()`, `sumif()`, `percentile()`, `percentiles()`, `percentilew()`, `percentilesw()`, `stdev()`, `stdevif()`, `stdevp()`, `variance()`, `variancep()`, `varianceif()`.
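For reference, a minimal cross-service query against Azure Resource Graph from Log Analytics uses the `arg("")` pattern. This sketch assumes you have read access to the subscription's resources; remember the 1000-record cap noted above:

```kusto
// List up to 10 virtual machines from Azure Resource Graph (results capped at 1000 records).
arg("").Resources
| where type =~ "microsoft.compute/virtualmachines"
| project name, resourceGroup, location
| take 10
```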
## Next steps * [Write queries](/azure/data-explorer/write-queries)
azure-monitor Change Pricing Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/change-pricing-tier.md
Last updated 03/25/2022
Each Log Analytics workspace in Azure Monitor can have a different [pricing tier](cost-logs.md#commitment-tiers). This article describes how to change the pricing tier for a workspace and how to track these changes. > [!NOTE]
-> This article describes how to change the commitment tier for a Log Analytics workspace once you determine which commitment tier you want to use. See [Azure Monitor Logs pricing details](cost-logs.md) for details on how commitment tiers work and [Azure Monitor cost and usage](../usage-estimated-costs.md#log-analytics-workspace) for recommendations on the most cost effective commitment based on your observed Azure Monitor usage.
+> This article describes how to change the commitment tier for a Log Analytics workspace once you determine which commitment tier you want to use. See [Azure Monitor Logs pricing details](cost-logs.md) for details on how commitment tiers work and [Azure Monitor cost and usage](../cost-usage.md#log-analytics-workspace) for recommendations on the most cost-effective commitment based on your observed Azure Monitor usage.
## Permissions required To change the pricing tier for a workspace, you must be assigned to one of the following roles:
Changes to a workspace's pricing tier are recorded in the [Activity Log](../esse
## Next steps - See [Azure Monitor Logs pricing details](cost-logs.md) for details on how charges are calculated for data in a Log Analytics workspace and different configuration options to reduce your charges.-- See [Azure Monitor cost and usage](../usage-estimated-costs.md) for a description of the different types of Azure Monitor charges and how to analyze them on your Azure bill.
+- See [Azure Monitor cost and usage](../cost-usage.md) for a description of the different types of Azure Monitor charges and how to analyze them on your Azure bill.
azure-monitor Cost Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cost-logs.md
Billing for the commitment tiers is done per workspace on a daily basis. If the
Azure Commitment Discounts, such as discounts received from [Microsoft Enterprise Agreements](https://www.microsoft.com/licensing/licensing-programs/enterprise), are applied to Azure Monitor Logs commitment-tier pricing just as they are to pay-as-you-go pricing. Discounts are applied whether the usage is being billed per workspace or per dedicated cluster. > [!TIP]
-> The **Usage and estimated costs** menu item for each Log Analytics workspace shows an estimate of what your data ingestion charges would be at each commitment level to help you choose the optimal commitment tier for your data ingestion patterns. Review this information periodically to determine if you can reduce your charges by moving to another tier. For information on this view, see [Usage and estimated costs](../usage-estimated-costs.md#usage-and-estimated-costs). To review your actual charges, use [Azure Cost Management = Billing](../usage-estimated-costs.md#azure-cost-management--billing).
+> The **Usage and estimated costs** menu item for each Log Analytics workspace shows an estimate of what your data ingestion charges would be at each commitment level to help you choose the optimal commitment tier for your data ingestion patterns. Review this information periodically to determine if you can reduce your charges by moving to another tier. For information on this view, see [Usage and estimated costs](../cost-usage.md#usage-and-estimated-costs). To review your actual charges, use [Azure Cost Management + Billing](../cost-usage.md#azure-cost-management--billing).
## Dedicated clusters
This query isn't an exact replication of how usage is calculated, but it provide
## Next steps -- See [Azure Monitor cost and usage](../usage-estimated-costs.md) for a description of the different types of Azure Monitor charges and how to analyze them on your Azure bill.
+- See [Azure Monitor cost and usage](../cost-usage.md) for a description of the different types of Azure Monitor charges and how to analyze them on your Azure bill.
- See [Analyze usage in Log Analytics workspace](analyze-usage.md) for details on analyzing the data in your workspace to determine the source of any higher-than-expected usage and opportunities to reduce your amount of data collected. - See [Set daily cap on Log Analytics workspace](daily-cap.md) to control your costs by configuring a maximum volume that might be ingested in a workspace each day. - See [Azure Monitor best practices - Cost management](../best-practices-cost.md) for best practices on configuring and managing Azure Monitor to minimize your charges.
azure-monitor Daily Cap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/daily-cap.md
The maximum cap for an Application Insights classic resource is 1,000 GB/day unl
We've removed the restriction on some subscription types that have credit that couldn't be used for Application Insights. Previously, if the subscription has a spending limit, the daily cap dialog has instructions to remove the spending limit and enable the daily cap to be raised beyond 32.3 MB/day. ## Determine your daily cap
-To help you determine an appropriate daily cap for your workspace, see [Azure Monitor cost and usage](../usage-estimated-costs.md) to understand your data ingestion trends. You can also review [Analyze usage in Log Analytics workspace](analyze-usage.md) which provides methods to analyze your workspace usage in more detail.
+To help you determine an appropriate daily cap for your workspace, see [Azure Monitor cost and usage](../cost-usage.md) to understand your data ingestion trends. You can also review [Analyze usage in Log Analytics workspace](analyze-usage.md) which provides methods to analyze your workspace usage in more detail.
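As a starting point, a query like the following sketches your recent daily ingestion so you can pick a cap above normal peaks; the 31-day window is an assumption you can adjust:

```kusto
// Billable ingestion (GB) per day over the past 31 days, rendered as a time chart.
Usage
| where TimeGenerated > ago(31d)
| where IsBillable == true
| summarize IngestedGB = sum(Quantity) / 1000 by bin(TimeGenerated, 1d)
| render timechart
```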
azure-monitor Get Started Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/get-started-queries.md
description: This article provides a tutorial for getting started writing log qu
Previously updated : 10/20/2021 Last updated : 10/31/2023
azure-monitor Migrate Splunk To Azure Monitor Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/migrate-splunk-to-azure-monitor-logs.md
The benefits of migrating to Azure Monitor include:
Your current usage in Splunk will help you decide which [pricing tier](../logs/change-pricing-tier.md) to select in Azure Monitor and estimate your future costs: - [Follow Splunk guidance](https://docs.splunk.com/Documentation/Splunk/latest/Admin/AboutSplunksLicenseUsageReportView) to view your usage report.-- [Estimate Azure Monitor usage and costs](../usage-estimated-costs.md#estimate-azure-monitor-usage-and-costs) using the [Pricing Calculator](https://azure.microsoft.com/pricing/calculator/?service=monitor).
+- Estimate [Azure Monitor costs](../cost-estimate.md) using the [Pricing Calculator](https://azure.microsoft.com/pricing/calculator/?service=monitor).
## 2. Set up a Log Analytics workspace
azure-monitor Workspace Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/workspace-design.md
Each Log Analytics workspace resides in a [particular Azure region](https://azur
- **If you have requirements for keeping data in a particular geography:** Create a separate workspace for each region with such requirements. - **If you don't have requirements for keeping data in a particular geography:** Use a single workspace for all regions.
-Also consider potential [bandwidth charges](https://azure.microsoft.com/pricing/details/bandwidth/) that might apply when you're sending data to a workspace from a resource in another region. These charges are usually minor relative to data ingestion costs for most customers. These charges typically result from sending data to the workspace from a virtual machine. Monitoring data from other Azure resources by using [diagnostic settings](../essentials/diagnostic-settings.md) doesn't [incur egress charges](../usage-estimated-costs.md#data-transfer-charges).
+Also consider potential [bandwidth charges](https://azure.microsoft.com/pricing/details/bandwidth/) that might apply when you're sending data to a workspace from a resource in another region. These charges are usually minor relative to data ingestion costs for most customers. These charges typically result from sending data to the workspace from a virtual machine. Monitoring data from other Azure resources by using [diagnostic settings](../essentials/diagnostic-settings.md) doesn't [incur egress charges](../cost-usage.md#data-transfer-charges).
Use the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator) to estimate the cost and determine which regions you need. Consider workspaces in multiple regions if bandwidth charges are significant.
You might have a requirement to segregate data or define boundaries based on own
- **If you don't require data segregation:** Use a single workspace for all data owners. ### Split billing
-You might need to split billing between different parties or perform charge back to a customer or internal business unit. You can use [Azure Cost Management + Billing](../usage-estimated-costs.md#azure-cost-management--billing) to view charges by workspace. You can also use a log query to view [billable data volume by Azure resource, resource group, or subscription](analyze-usage.md#data-volume-by-azure-resource-resource-group-or-subscription). This approach might be sufficient for your billing requirements.
+You might need to split billing between different parties or perform charge back to a customer or internal business unit. You can use [Azure Cost Management + Billing](../cost-usage.md#azure-cost-management--billing) to view charges by workspace. You can also use a log query to view [billable data volume by Azure resource, resource group, or subscription](analyze-usage.md#data-volume-by-azure-resource-resource-group-or-subscription). This approach might be sufficient for your billing requirements.
- **If you don't need to split billing or perform charge back:** Use a single workspace for all cost owners.-- **If you need to split billing or perform charge back:** Consider whether [Azure Cost Management + Billing](../usage-estimated-costs.md#azure-cost-management--billing) or a log query provides cost reporting that's granular enough for your requirements. If not, use a separate workspace for each cost owner.
+- **If you need to split billing or perform charge back:** Consider whether [Azure Cost Management + Billing](../cost-usage.md#azure-cost-management--billing) or a log query provides cost reporting that's granular enough for your requirements. If not, use a separate workspace for each cost owner.
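A hedged sketch of the kind of log query mentioned above, attributing billable volume to each Azure resource, follows. Scanning all tables with `union withsource *` can be expensive on large workspaces, so consider restricting it to specific tables:

```kusto
// Billable bytes per Azure resource over the past day.
union withsource = SourceTable *
| where TimeGenerated > ago(1d)
| where _IsBillable == true
| summarize BillableBytes = sum(_BilledSize) by _ResourceId
| sort by BillableBytes desc
```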
### Data retention and archive You can configure default [data retention and archive settings](data-retention-archive.md) for a workspace or [configure different settings for each table](data-retention-archive.md#configure-retention-and-archive-at-the-table-level). You might require different settings for different sets of data in a particular table. If so, you need to separate that data into different workspaces, each with unique retention settings.
azure-monitor Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/service-limits.md
This article lists limits in different areas of Azure Monitor.
## Next steps - [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/)-- [Monitoring usage and estimated costs in Azure Monitor](./usage-estimated-costs.md)
+- [Azure Monitor cost and usage](cost-usage.md)
azure-monitor Usage Estimated Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/usage-estimated-costs.md
- Title: Azure Monitor cost and usage
-description: Overview of how Azure Monitor is billed and how to estimate and analyze billable usage.
--- Previously updated : 08/06/2023--
-# Azure Monitor cost and usage
-
-This article describes the different ways that Azure Monitor charges for usage. It also explains how to evaluate charges on your Azure bill and how to estimate charges to monitor your entire environment.
--
-## Pricing model
-
-Azure Monitor uses consumption-based pricing, which is also known as pay-as-you-go pricing. With this billing model, you only pay for what you use. Features of Azure Monitor that are enabled by default don't incur any charge. These features include collection and alerting on the [Activity log](essentials/activity-log.md) and collection and analysis of [platform metrics](essentials/metrics-supported.md).
-
-Several other features don't have a direct cost, but instead you pay for the ingestion and retention of data that they collect. The following table describes the different types of usage that are charged in Azure Monitor. Detailed pricing for each type is provided in [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
-
-| Type | Description |
-|:|:|
-| Logs | Ingestion, retention, and export of data in Log Analytics workspaces and legacy Application Insights resources. For most customers, this category typically incurs the bulk of Azure Monitor charges. There's no charge for querying this data except in the case of [Basic Logs](logs/basic-logs-configure.md) or [Archived Logs](logs/data-retention-archive.md).<br><br>Charges for logs can vary significantly on the configuration that you choose. For information on how charges for logs data are calculated and the different pricing tiers available, see [Azure Monitor logs pricing details](logs/cost-logs.md). |
-| Platform logs | Processing of [diagnostic and auditing information](essentials/resource-logs.md) is charged for [certain services](essentials/resource-logs-categories.md#costs) when sent to destinations other than a Log Analytics workspace. There's no direct charge when this data is sent to a Log Analytics workspace, but there's a charge for the workspace data ingestion and collection. |
-| Metrics | There's no charge for [standard metrics](essentials/metrics-supported.md) collected from Azure resources. There's a cost for collecting [custom metrics](essentials/metrics-custom-overview.md) and for retrieving metrics from the [REST API](essentials/rest-api-walkthrough.md#retrieve-metric-values). |
-| Prometheus Metrics | Pricing for [Azure Monitor managed service for Prometheus](essentials/prometheus-metrics-overview.md) is based on [data samples ingested](essentials/prometheus-metrics-enable.md) and [query samples processed](essentials/azure-monitor-workspace-manage.md#link-a-grafana-workspace). Data is retained for 18 months at no extra charge. |
-| Alerts | Charges are based on the type and number of signals used by the alert rule, its frequency, and the type of [notification](alerts/action-groups.md) used in response. For [Log alerts](alerts/alerts-types.md#log-alerts) configured for [at-scale monitoring](alerts/alerts-types.md#splitting-by-dimensions-in-log-alert-rules), the cost also depends on the number of time series created by the dimensions resulting from your query. |
-| Web tests | There's a cost for [standard web tests](app/availability-standard-tests.md) and [multistep web tests](/previous-versions/azure/azure-monitor/app/availability-multistep) in Application Insights. Multistep web tests have been deprecated.
-
-## Data transfer charges
-
-Sending data to Azure Monitor can incur data bandwidth charges. As described in [Bandwidth pricing](https://azure.microsoft.com/pricing/details/bandwidth/), data transfer between Azure services located in two regions is charged as outbound data transfer at the normal rate. Data sent to a different region via [Diagnostic settings](essentials/diagnostic-settings.md) doesn't incur data transfer charges. Inbound data transfer is free.
-
-Data transfer charges are typically small compared to the costs for data ingestion and retention. Focus on your ingested data volume to control costs for Log Analytics.
-
-## Estimate Azure Monitor usage and costs
-
-If you're new to Azure Monitor, use the [Azure Monitor pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=monitor) to estimate your costs. In the **Search** box, enter **Azure Monitor**, and then select the **Azure Monitor** tile. The pricing calculator helps you estimate your likely costs based on your expected utilization.
-
-The bulk of your costs typically come from data ingestion and retention for your Log Analytics workspaces and Application Insights resources. It's difficult to give accurate estimates for data volumes that you can expect because they'll vary significantly based on your configuration.
-
-A common strategy is to enable monitoring for a small group of resources and use the observed data volumes with the calculator to determine your costs for a full environment.
-
-See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for queries and other methods to measure the billable data in your Log Analytics workspace.
-
-Use the following basic guidance for common resources:
--- **Virtual machines**: With typical monitoring enabled, a virtual machine generates from 1 GB to 3 GB of data per month. This range is highly dependent on the configuration of your agents.-- **Application Insights**: For different methods to estimate data from your applications, see the following section.-- **Container insights**: For guidance on estimating data for your Azure Kubernetes Service (AKS) cluster, see [Estimating costs to monitor your AKS cluster](containers/container-insights-cost.md#estimating-costs-to-monitor-your-aks-cluster).-
-The [Azure Monitor pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=monitor) includes data volume estimation calculators for these three cases.
-
->[!NOTE]
->The billable data volume is calculated by using a customer-friendly, cost-effective method. The billed data volume is defined as the size of the data that will be stored, excluding a set of standard columns and any JSON wrapper that was part of the data received for ingestion. This billable data volume is substantially smaller than the size of the entire JSON-packaged event, often less than 50%.
->
->It's essential to understand this calculation of billed data size when you estimate costs and compare them with other pricing models. For more information on pricing, see [Azure Monitor Logs pricing details](logs/cost-logs.md#data-size-calculation).
-
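To see that billed-size definition concretely, every ingested record carries the standard `_BilledSize` (bytes) and `_IsBillable` columns, which you can query directly. A minimal sketch follows, reusing `$workspaceId` from the earlier example; the table name is only an illustration:

```powershell
# Sketch: billed size in GB for a single table over the last day, using the
# standard _BilledSize (bytes) and _IsBillable columns present on every record.
$query = @'
AzureDiagnostics
| where TimeGenerated > ago(1d)
| where _IsBillable == true
| summarize BilledGB = sum(_BilledSize) / 1e9
'@

(Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query).Results
```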
-## Estimate application usage
-
-There are two methods you can use to estimate the amount of data from an application monitored with Application Insights.
-
-### Learn from what similar applications collect
-
-In the Azure Monitor pricing calculator for Application Insights, enable **Estimate data volume based on application activity**. You use this option to provide inputs about your application. The calculator then tells you the median and 90th percentile amount of data collected by similar applications. These applications span the range of Application Insights configuration, so you can still use options such as [sampling](app/sampling.md) to reduce the volume of data you ingest for your application below the median level.
-
-### Data collection when you use sampling
-
-With the ASP.NET SDK's [adaptive sampling](app/sampling.md#adaptive-sampling), the data volume is adjusted automatically to keep within a specified maximum rate of traffic for default Application Insights monitoring.
-
-If the application produces a low amount of telemetry, such as when debugging or because of low usage, items won't be dropped by the sampling processor as long as the volume is below the configured events-per-second threshold.
-
-For a high-volume application, with the default threshold of five events per second, adaptive sampling limits the number of daily events to 432,000. If you consider a typical average event size of 1 KB, this size corresponds to 13.4 GB of telemetry per 31-day month per node hosting your application because the sampling is done locally to each node.
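The arithmetic behind those figures is easy to reproduce; this illustrative sketch simply restates the numbers from the paragraph above (decimal GB):

```powershell
# Reproduce the adaptive-sampling volume estimate from the paragraph above.
$eventsPerSecond = 5        # default adaptive sampling threshold
$avgEventSizeKB  = 1        # typical average event size
$daysPerMonth    = 31

$eventsPerDay = $eventsPerSecond * 60 * 60 * 24                       # 432,000 events/day
$monthlyGB    = $eventsPerDay * $avgEventSizeKB * $daysPerMonth / 1e6 # KB -> GB (decimal)

'{0:N0} events/day ~ {1:N1} GB per node per 31-day month' -f $eventsPerDay, $monthlyGB
# -> 432,000 events/day ~ 13.4 GB per node per 31-day month
```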
-
-For SDKs that don't support adaptive sampling, you can employ [ingestion sampling](app/sampling.md#ingestion-sampling). This technique samples when the data is received by Application Insights based on a percentage of data to retain. Or you can use [fixed-rate sampling for ASP.NET, ASP.NET Core, and Java websites](app/sampling.md#fixed-rate-sampling) to reduce the traffic sent from your web server and web browsers.
-
-## View Azure Monitor usage and charges
-
-There are two primary tools to view and analyze your Azure Monitor billing and estimated charges:
-- [Azure Cost Management + Billing](#azure-cost-management--billing) is the primary tool you'll use to analyze your usage and costs. It gives you multiple options to analyze your monthly charges for different Azure Monitor features and their projected cost over time.
-- [Usage and estimated costs](#usage-and-estimated-costs) helps optimize log data ingestion costs by estimating what the data ingestion costs would be for Log Analytics in each of the available pricing tiers.
-
-## Azure Cost Management + Billing
-
-Azure Cost Management + Billing includes several built-in dashboards for deep cost analysis like cost by resource and invoice details. To get started analyzing your Azure Monitor charges, open [Cost Management + Billing](../cost-management-billing/costs/quick-acm-cost-analysis.md?toc=/azure/billing/TOC.json) in the Azure portal. Select **Cost Management** > **Cost analysis**. Select your subscription or another [scope](../cost-management-billing/costs/understand-work-scopes.md).
-
->[!NOTE]
->You might need additional access to cost management data. See [Assign access to cost management data](../cost-management-billing/costs/assign-access-acm-data.md).
-
-To create a view of just your Azure Monitor charges, [create a filter](../cost-management-billing/costs/group-filter.md) for the following service names:
-- Azure Monitor
-- Log Analytics
-- Insight and Analytics
-- Application Insights
-
->[!NOTE]
->Usage for Azure Monitor Logs (Log Analytics) can be billed with the **Log Analytics** service (for Pay-as-you-go Data Ingestion and Data Retention), with the **Azure Monitor** service (for Commitment Tiers, Basic Logs, Search, Search Jobs, Data Archive, and Data Export), or with the **Insight and Analytics** service when using the legacy Per Node pricing tier. Except for a small set of legacy resources, classic Application Insights data ingestion and retention are billed as the **Log Analytics** service. Note that when you change your workspace from a Pay-as-you-go pricing tier to a Commitment Tier, the costs on your bill appear to shift from Log Analytics to Azure Monitor, reflecting the service associated with each pricing tier.
-
-[Classic Application Insights](app/convert-classic-resource.md) usage is billed using Log Analytics data ingestion and retention meters. In the context of billing, the Application Insights service only includes usage for multistep web tests and some older Application Insights resources that still use legacy classic-mode Application Insights pricing tiers.
-
-Other services such as Microsoft Defender for Cloud and Microsoft Sentinel also bill their usage against Log Analytics workspace resources, so you might want to add them to your filter.
-
-### Cost analysis
-
-To get the most useful view for understanding your cost trends in the **Cost analysis** view:
-
-1. Select the date range you want to investigate.
-2. Select a **Granularity** of **Daily** or **Monthly** (not **Accumulated**).
-3. Set the chart type to **Column (stacked)** in the top right above the chart.
-4. Set **Group by** to **Meter**.
-
-See [Common cost analysis uses](../cost-management-billing/costs/cost-analysis-common-uses.md) for more information on how to use this Cost analysis view.
-
-![Screenshot that shows Cost Management with cost information.](./media/usage-estimated-costs/010.png)
-
->[!NOTE]
->Alternatively, you can go to the overview page of a Log Analytics workspace or Application Insights resource and select **View Cost** in the upper-right corner of the **Essentials** section. This option opens **Cost Analysis** from Azure Cost Management + Billing already scoped to the workspace or application.
->
-> :::image type="content" source="logs/media/view-bill/view-cost-option.png" lightbox="logs/media/view-bill/view-cost-option.png" alt-text="Screenshot of option to view cost for a Log Analytics workspace.":::
-
-### Get daily cost analysis emails
-
-After you configure your Cost analysis view, we strongly recommend subscribing to regular email updates from Cost analysis. The **Subscribe** option is located in the list of options just above the main chart.
-
-### Create cost alerts
-
-To be notified if there are significant increases in your spending, you can set up [cost alerts](../cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending.md) (specifically a budget alert) for a single workspace or group of workspaces.
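Budgets can also be created programmatically. One possible scripted approach is sketched below; it assumes the Az.Consumption module, and the budget name, amount, and email address are illustrative only:

```powershell
# Sketch: create a monthly cost budget with an email notification at 90% of spend.
# Assumes the Az.Consumption module; the budget is scoped to the current subscription context.
Import-Module Az.Consumption

New-AzConsumptionBudget -Name 'monitoring-budget' `
    -Amount 1000 `
    -Category Cost `
    -TimeGrain Monthly `
    -StartDate (Get-Date -Day 1).Date `
    -NotificationKey 'NinetyPercent' `
    -NotificationEnabled `
    -NotificationThreshold 90 `
    -ContactEmail 'alerts@contoso.com'
```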
-
-### Export usage details
-
-To gain more understanding of your usage and costs, create exports using Cost Analysis in Azure Cost Management + Billing. See [Tutorial: Create and manage exported data](../cost-management-billing/costs/tutorial-export-acm-data.md) to learn how to automatically create a daily export you can use for regular analysis.
-
-These exports are in CSV format and contain a list of daily usage (billed quantity and cost) by resource, billing meter, and a few more fields such as [AdditionalInfo](../cost-management-billing/automate/understand-usage-details-fields.md#list-of-fields-and-descriptions). You can use Microsoft Excel to do rich analyses of your usage that aren't possible in the Cost analysis experience in the portal.
-
-The usage export has both the cost for your usage, and the number of units of usage. Consequently, you can use this export to see the amount of benefits you are receiving from various offers such as the [Defender for Servers data allowance](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud) and the [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5, and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/).
-
-For instance, usage from Log Analytics can be found by first filtering on the **Meter Category** column to show
-
-1. **Log Analytics** (for Pay-as-you-go data ingestion and interactive Data Retention),
-2. **Insight and Analytics** (used by some of the legacy pricing tiers), and
-3. **Azure Monitor** (used by most other Log Analytics features such as Commitment Tiers, Basic Logs ingesting, Data Archive, Search Queries, Search Jobs, etc.)
-
-Add a filter on the **Instance ID** column for **contains workspace** or **contains cluster**. The usage is shown in the **Consumed Quantity** column. The unit for each entry is shown in the **Unit of Measure** column.
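If you prefer to script this filtering instead of using Excel, a minimal sketch follows. The property names are assumptions based on the display names above and may differ in your export's schema:

```powershell
# Sketch: filter a usage-details export for Log Analytics-related usage.
# Column names mirror the display names above; adjust them to your export schema.
$usage = Import-Csv -Path '.\usage-export.csv'   # placeholder path

$logUsage = $usage | Where-Object {
    $_.MeterCategory -in @('Log Analytics', 'Insight and Analytics', 'Azure Monitor') -and
    ($_.InstanceId -like '*workspace*' -or $_.InstanceId -like '*cluster*')
}

$logUsage | Select-Object MeterCategory, MeterName, ConsumedQuantity, UnitOfMeasure |
    Format-Table -AutoSize
```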
-
-### Azure Monitor billing meter names
-
-Below is a list of the Azure Monitor billing meter names that you'll see in the Azure Cost Management + Billing user experience and in your usage exports.
-
-The following meters are used to bill for log data ingestion. The table also notes whether each meter is regional (that is, there's a different billing meter, identified by `MeterId` in the usage export, for each region). Basic Logs ingestion can be used when the workspace's pricing tier is Pay-as-you-go or any commitment tier.
--
-| Pricing tier |ServiceName | MeterName | Regional Meter? |
-| -- | -- | -- | -- |
-| (any) | Azure Monitor | Basic Logs Data Ingestion | yes |
-| Pay-as-you-go | Log Analytics | Pay-as-you-go Data Ingestion | yes |
-| 100 GB/day Commitment Tier | Azure Monitor | 100 GB Commitment Tier Capacity Reservation | yes |
-| 200 GB/day Commitment Tier | Azure Monitor | 200 GB Commitment Tier Capacity Reservation | yes |
-| 300 GB/day Commitment Tier | Azure Monitor | 300 GB Commitment Tier Capacity Reservation | yes |
-| 400 GB/day Commitment Tier | Azure Monitor | 400 GB Commitment Tier Capacity Reservation | yes |
-| 500 GB/day Commitment Tier | Azure Monitor | 500 GB Commitment Tier Capacity Reservation | yes |
-| 1000 GB/day Commitment Tier | Azure Monitor | 1000 GB Commitment Tier Capacity Reservation | yes |
-| 2000 GB/day Commitment Tier | Azure Monitor | 2000 GB Commitment Tier Capacity Reservation | yes |
-| 5000 GB/day Commitment Tier | Azure Monitor | 5000 GB Commitment Tier Capacity Reservation | yes |
-| Per Node (legacy tier) | Insight and Analytics | Standard Node | no |
-| Per Node (legacy tier) | Insight and Analytics | Standard Data Overage per Node | no |
-| Per Node (legacy tier) | Insight and Analytics | Standard Data Included per Node | no |
-| Standalone (legacy tier) | Log Analytics | Pay-as-you-go Data Analyzed | no |
-| Standard (legacy tier) | Log Analytics | Standard Data Analyzed | no |
-| Premium (legacy tier) | Log Analytics | Premium Data Analyzed | no |
--
-The "Standard Data Included per Node" meter is used both for the Log Analytics [Per Node tier](logs/cost-logs.md#per-node-pricing-tier) data allowance, and also the [Defender for Servers data allowance](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud), for workspaces in any pricing tier.
-
-Other Azure Monitor logs meters:
-
-| ServiceName | MeterName | Regional Meter? |
-| -- | -- | -- |
-| Log Analytics | Pay-as-you-go Data Retention | yes |
-| Insight and Analytics | Standard Data Retention | no |
-| Azure Monitor | Data Archive | yes |
-| Azure Monitor | Search Queries Scanned | yes |
-| Azure Monitor | Search Jobs Scanned | yes |
-| Azure Monitor | Data Restore | yes |
-| Azure Monitor | Log Analytics data export Data Exported | yes |
-| Azure Monitor | Platform Logs Data Processed | yes |
-
-"Pay-as-you-go Data Retention" is used for workspaces in all modern pricing tiers (Pay-as-you-go and Commitment Tiers). "Standard Data Retention" is used for workspaces in the legacy Per Node and Standalone pricing tiers.
-
-Azure Monitor metrics meters:
-
-| ServiceName | MeterName | Regional Meter? |
-| -- | -- | -- |
-| Azure Monitor | Metrics ingestion Metric samples | yes |
-| Azure Monitor | Prometheus Metrics Queries Metric samples | yes |
-| Azure Monitor | Native Metric Queries API Calls | yes |
-
-Azure Monitor alerts meters:
-
-| ServiceName | MeterName | Regional Meter? |
-| -- | -- | -- |
-| Azure Monitor | Alerts Metric Monitored | no |
-| Azure Monitor | Alerts Dynamic Threshold | no |
-| Azure Monitor | Alerts System Log Monitored at 1 Minute Frequency | no |
-| Azure Monitor | Alerts System Log Monitored at 10 Minute Frequency | no |
-| Azure Monitor | Alerts System Log Monitored at 15 Minute Frequency | no |
-| Azure Monitor | Alerts System Log Monitored at 5 Minute Frequency | no |
-| Azure Monitor | Alerts Resource Monitored at 1 Minute Frequency | no |
-| Azure Monitor | Alerts Resource Monitored at 10 Minute Frequency | no |
-| Azure Monitor | Alerts Resource Monitored at 15 Minute Frequency | no |
-| Azure Monitor | Alerts Resource Monitored at 5 Minute Frequency | no |
-
-Azure Monitor web test meters:
-
-| ServiceName | MeterName | Regional Meter? |
-| -- | -- | -- |
-| Azure Monitor | Standard Web Test Execution | yes |
-| Application Insights | Multi-step Web Test | no |
-
-Legacy classic Application Insights meters:
-
-| ServiceName | MeterName | Regional Meter? |
-| -- | -- | -- |
-| Application Insights | Enterprise Node | no |
-| Application Insights | Enterprise Overage Data | no |
--
-### Legacy Application Insights meters
-
-Most Application Insights usage for both classic and workspace-based resources is reported on meters with **Log Analytics** for **Meter Category** because there's a single log back-end for all Azure Monitor components. Only Application Insights resources on legacy pricing tiers and multiple-step web tests are reported with **Application Insights** for **Meter Category**. The usage is shown in the **Consumed Quantity** column. The unit for each entry is shown in the **Unit of Measure** column. For more information, see [Understand your Microsoft Azure bill](../cost-management-billing/understand/review-individual-bill.md).
-
-To separate costs from your Log Analytics and classic Application Insights usage, [create a filter](../cost-management-billing/costs/group-filter.md) on **Resource type**. To see all Application Insights costs, filter **Resource type** to **microsoft.insights/components**. For Log Analytics costs, filter **Resource type** to **microsoft.operationalinsights/workspaces**. (Workspace-based Application Insights usage is all billed to the Log Analytics workspace resource.)
-
-## Usage and estimated costs
-
-You can get more usage details about Log Analytics workspaces and Application Insights resources from the **Usage and estimated costs** option for each.
-
-### Log Analytics workspace
-
-To learn about your usage trends and choose the most cost-effective pricing tier (Pay-as-you-go or a [commitment tier](logs/cost-logs.md#commitment-tiers)) for your Log Analytics workspace, select **Usage and estimated costs** from the **Log Analytics workspace** menu in the Azure portal.
-
-> [!NOTE]
->
-> **Usage and estimated costs** does *not* show your actual billed usage. It calculates what your data ingestion charges would have been for the last 31 days of usage if your workspace had been in each of the available pricing tiers. You can use these estimated costs to select the lowest cost tier based on your workspace's data ingestion.
--
-This view includes:
-- Estimated monthly charges based on usage from the past 31 days by using the current pricing tier.
-- Estimated monthly charges by using different commitment tiers.
-- Billable data ingestion by table from the past 31 days.
-
-To explore the data in more detail, select the icon in the upper-right corner of either chart to work with the query in Log Analytics.
--
-### Application Insights
-
-To learn about your usage trends for your classic Application Insights resource, select **Usage and estimated costs** from the **Applications** menu in the Azure portal.
--
-This view includes:
-- Estimated monthly charges based on usage from the past month.
-- Billable data ingestion by table from the past month.
-
-To investigate your Application Insights usage more deeply, open the **Metrics** page and add the metric named **Data point volume**. Then select the **Apply splitting** option to split the data by **Telemetry item type**.
-
-## View data allocation benefits
-
-To view data allocation benefits from sources such as [Microsoft Defender for Servers](https://azure.microsoft.com/pricing/details/defender-for-cloud/), [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5, and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/), or the [Sentinel Free Trial](https://azure.microsoft.com/pricing/details/microsoft-sentinel/), you need to export your usage details as described above.
-
-Open the exported usage spreadsheet and filter the **Instance ID** column to your workspace. (To select all your workspaces in the spreadsheet, filter the **Instance ID** column to **contains /workspaces/**.) Next, filter the **ResourceRate** column to show only rows where this rate is equal to zero. Now you'll see the data allocations from these various sources.
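The same filtering can be scripted. As with the earlier export sketch, the column names here are assumptions that may need adjusting to your export's schema:

```powershell
# Sketch: find zero-rated rows (data allocation benefits) for your workspaces.
$usage = Import-Csv -Path '.\usage-export.csv'   # placeholder path

$benefitRows = $usage | Where-Object {
    $_.InstanceId -like '*/workspaces/*' -and
    [double]$_.ResourceRate -eq 0
}

$benefitRows | Select-Object MeterCategory, MeterName, ConsumedQuantity, UnitOfMeasure |
    Format-Table -AutoSize
```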
-
-> [!NOTE]
-> Data allocations from the Defender for Servers 500-MB/server/day benefit appear in rows with the meter name **Data Included per Node** and the meter category **Insight and Analytics**. (This name is for a legacy offer still used with this meter.) If the workspace is in the legacy Per-Node Log Analytics pricing tier, this meter also includes the data allocations from this Log Analytics pricing tier.
-
-## Operations Management Suite subscription entitlements
-
-Customers who purchased Operations Management Suite E1 and E2 are eligible for per-node data ingestion entitlements for Log Analytics and Application Insights. Each Application Insights node includes up to 200 MB of data ingested per day (separate from Log Analytics data ingestion), with 90-day data retention at no extra cost.
-
-To receive these entitlements for Log Analytics workspaces or Application Insights resources in a subscription, they must use the Per Node (Operations Management Suite) pricing tier. This entitlement isn't visible in the estimated costs shown in the **Usage and estimated cost** pane.
-
-Depending on the number of nodes of the suite that your organization purchased, moving some subscriptions into a Per GB (pay-as-you-go) pricing tier might be advantageous. This move requires careful consideration.
-
-Also, if a subscription has moved to the new Azure monitoring pricing model introduced in April 2018, the Per GB tier is the only tier available. Moving a subscription to the new Azure monitoring pricing model isn't advisable if you have an Operations Management Suite subscription.
-
-> [!TIP]
-> If your organization has Operations Management Suite E1 or E2, it's usually best to keep your Log Analytics workspaces in the Per Node (Operations Management Suite) pricing tier and your Application Insights resources in the Enterprise pricing tier.
-
-## Next steps
-- For details on how charges are calculated for data in a Log Analytics workspace and different configuration options to reduce your charges, see [Azure Monitor Logs pricing details](logs/cost-logs.md).
-- For details on how to analyze the data in your workspace to determine the source of any higher-than-expected usage and opportunities to reduce your amount of data collected, see [Analyze usage in Log Analytics workspace](logs/analyze-usage.md).
-- To control your costs by setting a daily limit on the amount of data that can be ingested in a workspace, see [Set daily cap on Log Analytics workspace](logs/daily-cap.md).
-- For best practices on how to configure and manage Azure Monitor to minimize your charges, see [Azure Monitor best practices - Cost management](best-practices-cost.md).
-
-
-
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
Containers|[Migrate from ContainerLog to ContainerLogV2](containers/container-in
Containers|[Configure remote write for Azure managed service for Prometheus using Azure Active Directory workload identity (preview)](containers/prometheus-remote-write-azure-workload-identity.md)|New article Configure remote write for Azure Monitor managed service …|
Essentials|[Migrate from diagnostic settings storage retention to Azure Storage lifecycle management](essentials/migrate-to-azure-storage-lifecycle-policy.md)|Added CLI and template tabs showing storage lifecycle setting.|
General|[Plan your alerts and automated actions](alerts/alerts-plan.md)|Add alerts best practices article|
-General|[Azure Monitor cost and usage](usage-estimated-costs.md)|Updated information about the Cost Analysis usage report which contains both the cost for your usage, and the number of units of usage. You can use this export to see the amount of benefit you're receiving from various offers such as the [Defender for Servers data allowance](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud) and the [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5, and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/). |
+General|[Azure Monitor cost and usage](cost-usage.md)|Updated information about the Cost Analysis usage report which contains both the cost for your usage, and the number of units of usage. You can use this export to see the amount of benefit you're receiving from various offers such as the [Defender for Servers data allowance](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud) and the [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5, and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/). |
Logs|[Send log data to Azure Monitor by using the HTTP Data Collector API (deprecated)](logs/data-collector-api.md)|Added deprecation notice.|
Logs|[Azure Monitor Logs overview](logs/data-platform-logs.md)|Added code samples for the Azure Monitor Ingestion client module for Go.|
Logs|[Set a table's log data plan to Basic or Analytics](logs/basic-logs-configure.md)|Added new Virtual Network Manager, Dev Center, and Communication Services tables that now support Basic logs.|
Logs|[Set a table's log data plan to Basic or Analytics](logs/basic-logs-configu
|Subservice| Article | Description |
||||
-General|[Azure Monitor cost and usage](usage-estimated-costs.md)|Added section detailing billing meter names.|
+General|[Azure Monitor cost and usage](cost-usage.md)|Added section detailing billing meter names.|
Application-Insights|[Add, modify, and filter OpenTelemetry](app/opentelemetry-add-modify.md)|A caution has been added about using community libraries with additional information on how to request we include them in our distro.|
Application-Insights|[Add, modify, and filter OpenTelemetry](app/opentelemetry-add-modify.md)|Support and feedback options are now available across all of our OpenTelemetry pages.|
Application-Insights|[How many Application Insights resources should I deploy?](app/create-workspace-resource.md#how-many-application-insights-resources-should-i-deploy)|We added an important warning about additional network costs when monitoring across regions.|
Application-Insights|[Data Collection Basics of Azure Monitor Application Insigh
Application-Insights|[Enable a framework extension for Application Insights JavaScript SDK](app/javascript-framework-extensions.md)|The "Explore your data" section has been improved.|
Application-Insights|[Sampling overrides (preview) - Azure Monitor Application Insights for Java](app/java-standalone-sampling-overrides.md)|We've documented steps for troubleshooting sampling.|
Logs|[Set a table's log data plan to Basic or Analytics](logs/basic-logs-configure.md)|Additional Azure tables now support low-cost basic logs, including tables for the Bare Metal Machines, Managed Lustre, Nexus Clusters, and Nexus Storage Appliances services. |
-Logs|[Create and manage a dedicated cluster in Azure Monitor Logs](logs/logs-dedicated-clusters.md)|The minimum ingestion commitment for a dedicated cluster is now 100 GB per day (previously 500 GB). |
Logs|[Query Basic Logs in Azure Monitor](logs/basic-logs-query.md)|Basic log queries are now billable.|
Logs|[Restore logs in Azure Monitor](logs/restore.md)|Restored logs are now billable.|
Logs|[Run search jobs in Azure Monitor](logs/search-jobs.md)|Search jobs are now billable.|
Azure Monitor Workbooks documentation previously resided on an external GitHub r
| Article | Description |
|:|:|
-| [Azure Monitor cost and usage](usage-estimated-costs.md) | Added standard web tests to table.<br>Added explanation of billable GB calculation. |
+| [Azure Monitor cost and usage](cost-usage.md) | Added standard web tests to table.<br>Added explanation of billable GB calculation. |
| [Azure Monitor overview](overview.md) | Updated overview diagram. |

### Agents
azure-portal Capture Browser Trace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/capture-browser-trace.md
If you're troubleshooting an issue with the Azure portal, and you need to contact Microsoft support, you may want to first capture some additional information. For example, it can be helpful to share a browser trace, a step recording, and console output. This information can provide important details about what exactly is happening in the portal when your issue occurs.
-> [!IMPORTANT]
-> Microsoft support uses these traces for troubleshooting purposes only. Please be mindful who you share your traces with, as they may contain sensitive information about your environment.
+> [!WARNING]
+> Browser traces often contain sensitive information and might include authentication tokens linked to your identity. Please remove any sensitive information before sharing traces with others. Microsoft support uses these traces for troubleshooting purposes only.
You can capture this information in any [supported browser](azure-portal-supported-browsers-devices.md): Microsoft Edge, Google Chrome, Safari (on Mac), or Firefox. Steps for each browser are shown below.
azure-vmware Deploy Vsan Stretched Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-vsan-stretched-clusters.md
It should be noted that these types of failures, although rare, fall outside the
Azure VMware Solution stretched clusters are available in the following regions:

-- UK South (on AV36)
+- UK South (on AV36, and AV36P)
- West Europe (on AV36, and AV36P)
- Germany West Central (on AV36)
- Australia East (on AV36P)
azure-vmware Request Host Quota Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/request-host-quota-azure-vmware-solution.md
In this how-to, you'll request host quota/capacity for [Azure VMware Solution](i
If you have an existing Azure VMware Solution private cloud and want more hosts allocated, you'll follow the same process.

>[!IMPORTANT]
->It can take up to five business days to allocate the hosts, depending on the number requested. Therefore, request what you need for provisioning to avoid the delays associated with making additional quota increase requests.
+> It can take up to five business days to allocate the hosts, depending on the number requested. Therefore, request what you need for provisioning to avoid the delays associated with making additional quota increase requests.
## Eligibility criteria
You'll need an Azure account in an Azure subscription that adheres to one of the
- Any other details, including Availability Zone requirements for integrating with other Azure services (e.g. Azure NetApp Files, Azure Blob Storage)

>[!NOTE]
- >Azure VMware Solution requires a minimum of three hosts and recommends redundancy of N+1 hosts.
+ > - Azure VMware Solution requires a minimum of three hosts and recommends redundancy of N+1 hosts.
+ > - **New** The unused quota expires after 30 days. A new request will need to be submitted for any additional quota.
1. Select **Review + Create** to submit the request.
Access the Azure portal using the **Admin On Behalf Of** (AOBO) procedure from P
- Is intended to host multiple customers?

>[!NOTE]
- >Azure VMware Solution requires a minimum of three hosts and recommends redundancy of N+1 hosts.
+ > - Azure VMware Solution requires a minimum of three hosts and recommends redundancy of N+1 hosts.
+ > - **New** The unused quota expires after 30 days. A new request will need to be submitted for any additional quota.
1. Select **Review + Create** to submit the request.
bastion Bastion Vm Copy Paste https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-vm-copy-paste.md
description: Learn how to copy and paste to and from a Windows VM using Bastion.
Previously updated : 09/20/2022 Last updated : 10/31/2023 # Customer intent: I want to copy and paste to and from VMs using Azure Bastion.
-# Copy and paste to a Windows virtual machine: Azure Bastion
+# Windows VMs - copy and paste via Bastion
This article helps you copy and paste text to and from virtual machines when using Azure Bastion.
This article helps you copy and paste text to and from virtual machines when usi
Before you proceed, make sure you have the following items.
-* A VNet with [Azure Bastion](./tutorial-create-host-portal.md) deployed.
-* A Windows VM deployed to your VNet.
+* A virtual network with [Azure Bastion](./tutorial-create-host-portal.md) deployed.
+* A Windows virtual machine deployed to your virtual network.
## <a name="configure"></a> Configure the bastion host
-By default, Azure Bastion is automatically enabled to allow copy and paste for all sessions connected through the bastion resource. You don't need to configure anything additional. This applies to both the Basic and the Standard SKU tier. If you want to disable this feature, you can disable it for web-based clients on the configuration page of your Bastion resource.
+By default, Azure Bastion is automatically enabled to allow copy and paste for all sessions connected through the bastion resource. You don't need to configure anything extra. This applies to both the Basic and the Standard SKU tier. If you want to disable this feature, you can disable it for web-based clients on the configuration page of your Bastion resource.
1. To view or change your configuration, in the portal, go to your Bastion resource. 1. Go to the **Configuration** page. * To enable, select the **Copy and paste** checkbox if it isn't already selected. * To disable, clear the checkbox. Disable is only available with the Standard SKU. You can upgrade the SKU if necessary.
-1. **Apply** changes. The bastion host will update.
-
- :::image type="content" source="./media/bastion-vm-copy-paste/configure.png" alt-text="Screenshot that shows the configuration page." lightbox="./media/bastion-vm-copy-paste/configure.png":::
+1. **Apply** changes. The bastion host updates.
## <a name="to"></a> Copy and paste
For browsers that support the advanced Clipboard API access, you can copy and pa
> [!NOTE]
> Only text copy/paste is currently supported.
->
### <a name="advanced"></a> Advanced Clipboard API browsers
-1. Connect to your VM.
-1. For direct copy and paste, your browser may prompt you for clipboard access when the Bastion session is being initialized. **Allow** the web page to access the clipboard.
-
- :::image type="content" source="./media/bastion-vm-copy-paste/copy-paste.png" alt-text="Screenshot that shows allow clipboard access." lightbox="./media/bastion-vm-copy-paste/copy-paste.png":::
+1. Connect to your virtual machine.
+1. For direct copy and paste, your browser might prompt you for clipboard access when the Bastion session is being initialized. **Allow** the web page to access the clipboard.
1. You can now use keyboard shortcuts as usual to copy and paste. If you're working from a Mac, the keyboard shortcut to paste is **SHIFT-CTRL-V**.

### <a name="other"></a>Non-advanced Clipboard API browsers
-To copy text from your local computer to a VM, use the following steps.
+To copy text from your local computer to a virtual machine, use the following steps.
-1. Connect to your VM.
+1. Connect to your virtual machine.
1. Copy the text/content from the local device into your local clipboard.
-1. On the VM, launch the Bastion clipboard access tool palette by selecting the two arrows. The arrows are located on the left center of the session.
-
- :::image type="content" source="./media/bastion-vm-copy-paste/left.png" alt-text="Screenshot that shows the launch arrows for the clipboard access tool palette." lightbox="./media/bastion-vm-copy-paste/left.png":::
-1. Copy the text from your local computer. Typically, the copied text automatically shows on the Bastion clipboard access tool palette. If doesn't show up on the tool palette, then paste the text in the text area on the tool palette. Once the text is in the text area, you can paste it to the remote session. In this example, we copied text to the Bastion clipboard tool palette, then pasted it to the VM Notepad app.
+1. On the virtual machine, you'll see two arrows on the left side of the session screen about halfway down. Launch the Bastion **Clipboard** access tool palette by selecting the two arrows.
+1. Copy the text from your local computer. Typically, the copied text automatically shows on the Bastion clipboard access tool palette. If it doesn't show up on the tool palette, then paste the text in the text area on the tool palette. Once the text is in the text area, you can paste it to the remote session.
- :::image type="content" source="./media/bastion-vm-copy-paste/clipboard-paste.png" alt-text="Screenshot shows a clipboard for text copied in Bastion." lightbox="./media/bastion-vm-copy-paste/clipboard-paste.png":::
+ :::image type="content" source="./media/bastion-vm-copy-paste/clipboard-copy.png" alt-text="Screenshot shows the clipboard for text copied in Bastion." lightbox="./media/bastion-vm-copy-paste/clipboard-copy.png":::
-1. If you want to copy the text from the VM to your local computer, copy the text to the clipboard access tool. Once your text is in the text area on the palette, paste it to your local computer.
+1. If you want to copy the text from the virtual machine to your local computer, copy the text to the clipboard access tool. Once your text is in the text area on the palette, paste it to your local computer.
## Next steps
-For more VM features, see [About VM connections and features](vm-about.md).
+For more virtual machine features, see [About VM connections and features](vm-about.md).
bastion Configuration Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/configuration-settings.md
You can configure this setting using the following methods:
## <a name="instance"></a>Instances and host scaling
-An instance is an optimized Azure VM that is created when you configure Azure Bastion. It's fully managed by Azure and runs all of the processes needed for Azure Bastion. An instance is also referred to as a scale unit. You connect to client VMs via an Azure Bastion instance. When you configure Azure Bastion using the Basic SKU, two instances are created. If you use the Bastion Standard SKU, you can specify the number of instances. This is called **host scaling**.
+An instance is an optimized Azure VM that is created when you configure Azure Bastion. It's fully managed by Azure and runs all of the processes needed for Azure Bastion. An instance is also referred to as a scale unit. You connect to client VMs via an Azure Bastion instance. When you configure Azure Bastion using the Basic SKU, two instances are created. If you use the Standard SKU, you can specify the number of instances (with a minimum of two instances). This is called **host scaling**.
Each instance can support 20 concurrent RDP connections and 40 concurrent SSH connections for medium workloads (see [Azure subscription limits and quotas](../azure-resource-manager/management/azure-subscription-service-limits.md) for more information). The number of connections per instances depends on what actions you're taking when connected to the client VM. For example, if you're doing something data intensive, it creates a larger load for the instance to process. Once the concurrent sessions are exceeded, another scale unit (instance) is required.
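As a rough sizing sketch (the session counts are hypothetical; the per-instance limits are the ones quoted above), you can estimate how many scale units a Standard SKU deployment needs:

```powershell
# Sketch: estimate Bastion scale units from expected concurrent sessions (medium workloads).
$concurrentRdp = 75
$concurrentSsh = 120

$forRdp = [math]::Ceiling($concurrentRdp / 20)   # 20 concurrent RDP sessions per instance
$forSsh = [math]::Ceiling($concurrentSsh / 40)   # 40 concurrent SSH sessions per instance

# Standard SKU deployments start at two instances.
$instances = [math]::Max(2, [math]::Max($forRdp, $forSsh))
$instances   # -> 4
```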
bastion Quickstart Developer Sku https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/quickstart-developer-sku.md
-# Quickstart: Deploy Bastion using the Developer SKU (Preview)
+# Quickstart: Deploy Azure Bastion - Developer SKU (Preview)
In this quickstart, you'll learn how to deploy Azure Bastion using the Developer SKU. After Bastion is deployed, you can connect to virtual machines (VM) in the virtual network via Bastion using the private IP address of the VM. The VMs you connect to don't need a public IP address, client software, agent, or a special configuration. For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md)
In this quickstart, you'll learn how to deploy Azure Bastion using the Developer
[!INCLUDE [regions](../../includes/bastion-developer-sku-regions.md)]
+> [!NOTE]
+> VNet peering isn't currently supported for the Developer SKU.
+
## About the Developer SKU

The Bastion Developer SKU is a new [lower-cost](https://azure.microsoft.com/pricing/details/azure-bastion/), lightweight SKU. This SKU is ideal for Dev/Test users who want to securely connect to their VMs if they don't need additional features or scaling. With the Developer SKU, you can connect to one Azure VM at a time directly through the virtual machine connect page.
-When you deploy Bastion using the Developer SKU, the deployment requirements are different than when you deploy using other SKUs. Typically when you create a bastion host, a host is deployed to the AzureBastionSubnet in your virtual network. The Bastion host is dedicated for your use. When using the Developer SKU, a bastion host isn't deployed to your virtual network and you don't need a AzureBastionSubnet. However, the Developer SKU bastion host isn't a dedicated resource and is, instead, part of a shared pool.
+When you deploy Bastion using the Developer SKU, the deployment requirements are different than when you deploy using other SKUs. Typically when you create a bastion host, a host is deployed to the AzureBastionSubnet in your virtual network. The Bastion host is dedicated for your use. When using the Developer SKU, a bastion host isn't deployed to your virtual network and you don't need an AzureBastionSubnet. However, the Developer SKU bastion host isn't a dedicated resource and is, instead, part of a shared pool.
Because the Developer SKU bastion resource isn't dedicated, the features for the Developer SKU are limited. See the Bastion configuration settings [SKU](configuration-settings.md) section for features by SKU. For more information about pricing, see the [Pricing](https://azure.microsoft.com/pricing/details/azure-bastion/) page. You can always upgrade the Developer SKU to a higher SKU if you need more features. See [Upgrade a SKU](upgrade-sku.md).
When you're done using the virtual network and the virtual machines, delete the
## Next steps
-In this quickstart, you deployed Bastion using the Developer SKKU, and then connected to a virtual machine securely via Bastion. Next, you can configure more features and work with VM connections.
+In this quickstart, you deployed Bastion using the Developer SKU, and then connected to a virtual machine securely via Bastion. Next, you can configure more features and work with VM connections.
> [!div class="nextstepaction"]
> [Upgrade SKUs](upgrade-sku.md)
chaos-studio Chaos Studio Fault Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-fault-library.md
Currently, the Windows agent doesn't reduce memory pressure when other applicati
|-|-|
| Capability name | NetworkLatency-1.1 |
| Target type | Microsoft-Agent |
-| Supported OS types | Windows, Linux. |
+| Supported OS types | Windows, Linux (outbound traffic only) |
| Description | Increases network latency for a specified port range and network block. At least one destinationFilter or inboundDestinationFilter array must be provided. |
| Prerequisites | Agent (for Windows) must run as administrator. If the agent is installed as a VM extension, it runs as administrator by default. |
| Urn | urn:csci:microsoft:agent:networkLatency/1.1 |
The parameters **destinationFilters** and **inboundDestinationFilters** use the
### Limitations

* The agent-based network faults currently only support IPv4 addresses.
+* When running in a Linux environment, the agent-based network latency fault can only affect **outbound** traffic, not inbound traffic. The fault can affect **both inbound and outbound** traffic on Windows environments (via the `inboundDestinationFilters` and `destinationFilters` parameters).
## Network disconnect
Currently, only virtual machine scale sets configured with the **Uniform** orche
| Capability name | DisableCertificate-1.0 |
| Target type | Microsoft-KeyVault |
| Description | By using certificate properties, the fault disables the certificate for a specific duration (provided by the user). It enables the certificate after this fault duration. |
-| Prerequisites | For OneCert certificates, the domain must be registered with OneCert before you attempt to run the fault. |
+| Prerequisites | None. |
| Urn | urn:csci:microsoft:keyvault:disableCertificate/1.0 |
| Fault type | Continuous. |
| Parameters (key, value) | |
Currently, only virtual machine scale sets configured with the **Uniform** orche
| Capability name | IncrementCertificateVersion-1.0 |
| Target type | Microsoft-KeyVault |
| Description | Generates a new certificate version and thumbprint by using the Key Vault Certificate client library. Current working certificate is upgraded to this version. |
-| Prerequisites | For OneCert certificates, the domain must be registered with OneCert before you attempt to run the fault. |
+| Prerequisites | None. |
| Urn | urn:csci:microsoft:keyvault:incrementCertificateVersion/1.0 |
| Fault type | Discrete. |
| Parameters (key, value) | |
Currently, only virtual machine scale sets configured with the **Uniform** orche
| Capability name | UpdateCertificatePolicy-1.0 |
| Target type | Microsoft-KeyVault |
| Description | Certificate policies (for example, certificate validity period, certificate type, key size, or key type) are updated based on user input and reverted after the fault duration. |
-| Prerequisites | For OneCert certificates, the domain must be registered with OneCert before you attempt to run the fault. |
+| Prerequisites | None. |
| Urn | urn:csci:microsoft:keyvault:updateCertificatePolicy/1.0 |
| Fault type | Continuous. |
| Parameters (key, value) | |
chaos-studio Chaos Studio Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-limitations.md
During the public preview of Azure Chaos Studio, there are a few limitations and
## Limitations

- **Supported regions** - The target resources must be in [one of the regions supported by the Azure Chaos Studio Preview](https://azure.microsoft.com/global-infrastructure/services/?products=chaos-studio).
-- **Resource Move not supported** - Azure Chaos Studio tracked resources (for example, Experiments) currently do NOT support Resource Move. Experiments can be easily copied (by copying Experiment JSON) for use in other subscriptions, resource groups, or regions. Experiments can also already target resources across regions. Extension resources (Targets and Capabilities) do support Resource Move.
+- **Resource Move not supported** - Azure Chaos Studio tracked resources (for example, Experiments) currently do not support Resource Move. Experiments can be easily copied (by copying Experiment JSON) for use in other subscriptions, resource groups, or regions. Experiments can also already target resources across regions. Extension resources (Targets and Capabilities) do support Resource Move.
- **VMs require network access to Chaos studio** - For agent-based faults, the virtual machine must have outbound network access to the Chaos Studio agent service:
  - Regional endpoints to allowlist are listed in [Permissions and security in Azure Chaos Studio](chaos-studio-permissions-security.md#network-security).
  - If you're sending telemetry data to Application Insights, the IPs in [IP addresses used by Azure Monitor](../azure-monitor/app/ip-addresses.md) are also required.
- **Supported VM operating systems** - If you run an experiment that makes use of the Chaos Studio agent, the virtual machine must run one of the following operating systems:
- - Windows Server 2019, Windows Server 2016, Windows Server 2012, and Windows Server 2012 R2
- - Red Hat Enterprise Linux 8.2, SUSE Enterprise Linux 15 SP2, CentOS 8.2, Debian 10 Buster (with unzip installation required), Oracle Linux 7.8, Ubuntu Server 16.04 LTS, and Ubuntu Server 18.04 LTS
-- **Hardened Linux untested** - The Chaos Studio agent isn't tested against custom Linux distributions or hardened Linux distributions (for example, FIPS or SELinux).
+ - Windows Server 2019, Windows Server 2016, and Windows Server 2012 R2
+ - Red Hat Enterprise Linux 8, Red Hat Enterprise Linux 8.2, openSUSE Leap 15.2, CentOS 8, Debian 10 Buster (with unzip installation required), Oracle Linux 8.3, and Ubuntu Server 18.04 LTS
+- **Hardened Linux untested** - The Chaos Studio agent isn't currently tested against custom Linux distributions or hardened Linux distributions (for example, FIPS or SELinux).
- **Supported browsers** - The Chaos Studio portal experience has only been tested on the following browsers:
  * **Windows:** Microsoft Edge, Google Chrome, and Firefox
  * **MacOS:** Safari, Google Chrome, and Firefox
During the public preview of Azure Chaos Studio, there are a few limitations and
- **Agent Service Tags** - Currently, we don't have service tags available for our agent-based faults.

## Known issues
-When you pick target resources for an agent-based fault in the experiment designer, it's possible to select virtual machines or virtual machine scale sets with an operating system not supported by the fault selected.
+- When selecting target resources for an agent-based fault in the experiment designer, it's possible to select virtual machines or virtual machine scale sets with an operating system not supported by the fault selected.
+- When running in a Linux environment, the agent-based network latency fault (NetworkLatency-1.1) can only affect **outbound** traffic, not inbound traffic. The fault can affect **both inbound and outbound** traffic on Windows environments (via the `inboundDestinationFilters` and `destinationFilters` parameters).
## Next steps

Get started creating and running chaos experiments to improve application resilience with Chaos Studio by using the following links:
chaos-studio Chaos Studio Private Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-private-networking.md
Azure [Virtual Network](../virtual-network/virtual-networks-overview.md) is the
Virtual network injection allows an Azure Chaos Studio Preview resource provider to inject containerized workloads into your virtual network so that resources without public endpoints can be accessed via a private IP address on the virtual network. After you've configured virtual network injection for a resource in a virtual network and enabled the resource as a target, you can use it in multiple experiments. An experiment can target a mix of private and nonprivate resources if the private resources are configured according to the instructions in this article.
+We're also excited to share that Chaos Studio now supports running **agent-based experiments** using private endpoints! Chaos Studio supports Private Link for **both** service-direct and agent-based experiments. If you would like to use Private Link for the agent service, reach out to your CSA or the Chaos Studio help team for instructions on how to get onboarded. For Private Link with service-direct faults, read the following sections for instructions on how to use them.
+
## Resource type support

Currently, you can only enable certain resource types for Chaos Studio virtual network injection:
communications-gateway Monitor Azure Communications Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/monitor-azure-communications-gateway.md
Start with the article [Monitoring Azure resources with Azure Monitor](../azure-
The following sections build on this article by describing the specific data gathered for Azure Communications Gateway. These sections also provide examples for configuring data collection and analyzing this data with Azure tools.

> [!TIP]
-> To understand costs associated with Azure Monitor, see [Usage and estimated costs](../azure-monitor/usage-estimated-costs.md).
+> To understand costs associated with Azure Monitor, see [Azure Monitor cost and usage](../azure-monitor/cost-usage.md).
## Azure Monitor data for Azure Communications Gateway
communications-gateway Reliability Communications Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/reliability-communications-gateway.md
Management regions contain the infrastructure used for the ordering, monitoring
## Availability zone support
-Azure availability zones have a minimum of three physically separate groups of datacenters within each Azure region. Datacenters within each zone are equipped with independent power, cooling, and networking infrastructure. If a local zone fails, regional services, capacity, and high availability are supported by the other zones in the region. Failures can range from software and hardware failures to events such as earthquakes, floods, and fires. Tolerance to failures is achieved with redundancy and logical isolation of Azure services. For more detailed information on availability zones in Azure, see [Regions and availability zones](/azure/availability-zones/az-overview).
### Zone down experience for service regions
During a zone-wide outage, calls handled by the affected zone are terminated, wi
## Disaster recovery: fallback to other regions
+
+
+
This section describes the behavior of Azure Communications Gateway during a region-wide outage.

### Disaster recovery: cross-region failover for service regions
container-registry Intro Connected Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/intro-connected-registry.md
Scenarios for a connected registry include:
The following image shows a typical deployment model for the connected registry.

### Deployment
-Each connected registry is a resource you manage using a cloud-based Azure container registry. The top parent in the connected registry hierarchy is an Azure container registry in an Azure cloud or in a private deployment of [Azure Stack Hub](/azure-stack/operator/azure-stack-overview).
+Each connected registry is a resource you manage using a cloud-based Azure container registry. The top parent in the connected registry hierarchy is an Azure container registry in an Azure cloud.
Use Azure tools to install the connected registry on a server or device on your premises, or an environment that supports container workloads on-premises such as [Azure IoT Edge](../iot-edge/tutorial-nested-iot-edge.md).
A connected registry can work in one of two modes: *ReadWrite* or *ReadOnly*
The ReadWrite mode is useful when a local development environment is in place. The images are pushed to the local connected registry and from there synchronized to the cloud.

-- **ReadOnly mode** - When the connected registry is in ReadOnly mode, clients may only pull (read) artifacts. This configuration is used for nested IoT Edge scenarios, or other scenarios where clients need to pull a container image to operate.
+- **ReadOnly mode** - When the connected registry is in ReadOnly mode, clients can only pull (read) artifacts. This configuration is used for nested IoT Edge scenarios, or other scenarios where clients need to pull a container image to operate.
### Registry hierarchy
cosmos-db Tune Connection Configurations Net Sdk V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/tune-connection-configurations-net-sdk-v3.md
Direct mode can be customized through the *CosmosClientOptions* passed to the *C
| Configuration option | Default | Recommended | Details |
| :--- | :--- | :--- | :--- |
| EnableTcpConnectionEndpointRediscovery | true | true | This represents the flag to enable detection of connections closing from the server. |
-| IdleTcpConnectionTimeout | By default, idle connections are kept open indefinitely. | 20h-24h | This represents the amount of idle time after which unused connections are closed. Recommended values are between 20 minutes and 24 hours. |
+| IdleTcpConnectionTimeout | By default, idle connections are kept open indefinitely. | 20m-24h | This represents the amount of idle time after which unused connections are closed. Recommended values are between 20 minutes and 24 hours. |
| MaxRequestsPerTcpConnection | 30 | 30 | This represents the number of requests allowed simultaneously over a single TCP connection. When more requests are in flight simultaneously, the direct/TCP client opens extra connections. Don't set this value lower than four requests per connection or higher than 50-100 requests per connection. Applications with a high degree of parallelism per connection, with large requests or responses, or with tight latency requirements might get better performance with 8-16 requests per connection. |
| MaxTcpConnectionsPerEndpoint | 65535 | 65535 | This represents the maximum number of TCP connections that may be opened to each Cosmos DB back-end. Together with MaxRequestsPerTcpConnection, this setting limits the number of requests that are simultaneously sent to a single Cosmos DB back-end(MaxRequestsPerTcpConnection x MaxTcpConnectionPerEndpoint). Value must be greater than or equal to 16. |
| OpenTcpConnectionTimeout | 5 seconds | >= 5 seconds | This represents the amount of time allowed for trying to establish a connection. When the time elapses, the attempt is canceled and an error is returned. Longer timeouts delay retries and failures. |
The Gateway mode can be customized through the *CosmosClientOptions* passed to t
To learn more about performance tips for .NET SDK, see [Performance tips for Azure Cosmos DB NET SDK v3](performance-tips-dotnet-sdk-v3.md).

* If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Partial Document Update Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/partial-document-update-getting-started.md
Support for Partial Document Update (Patch API) in the [Azure Cosmos DB JavaScri
); ``` -- ## [Python (Preview)](#tab/python) Support for Partial Document Update (Patch API) in the [Azure Cosmos DB Python SDK](nosql/sdk-python.md) is available in Preview starting with version *4.4.0b2*. You can download it from the [pip Registry](https://pypi.org/project/azure-cosmos/4.4.0b2/).
cost-management-billing Analyze Cost Data Azure Cost Management Power Bi Template App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/analyze-cost-data-azure-cost-management-power-bi-template-app.md
To install the app:
1. Select the app that you installed. 1. On the Getting started page, select **Connect your data**. :::image type="content" source="./media/analyze-cost-data-azure-cost-management-power-bi-template-app/connect-your-data.png" alt-text="Screenshot highlighting the Connect your data link." lightbox="./media/analyze-cost-data-azure-cost-management-power-bi-template-app/connect-your-data.png" :::
-1. In the dialog that appears, enter your EA enrollment number for **BillingProfileIdOrEnrollmentNumber**. Specify the number of months of data to get. Leave the default **Scope** value of **Enrollment Number**, then select **Next**.
- >[!NOTE]
- > The default value for Scope is `Enrollment Number`. Do not change the value, otherwise the initial data connection will fail.
+1. In the dialog that appears, enter your EA enrollment number for **BillingProfileIdOrEnrollmentNumber**. Specify the number of months of data to get. Enter "Enrollment Number" for **Scope**, then select **Next**.
:::image type="content" source="./media/analyze-cost-data-azure-cost-management-power-bi-template-app/ea-number.png" alt-text="Screenshot showing where you enter your E A enrollment information." lightbox="./media/analyze-cost-data-azure-cost-management-power-bi-template-app/ea-number.png" ::: 1. The next installation step connects to your EA enrollment and requires an [Enterprise Administrator](../manage/understand-ea-roles.md) account. Leave all the default values. Select **Sign in and continue**.
cost-management-billing Subscription Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/subscription-transfer.md
tags: billing
Previously updated : 09/22/2023 Last updated : 10/31/2023
Dev/Test products aren't shown in the following table. Transfers for Dev/Test pr
| MPA | MPA | • For details, see [Transfer a customer's Azure subscriptions and/or Reservations (under an Azure plan) to a different CSP](/partner-center/transfer-azure-subscriptions-under-azure-plan). | | MOSP (PAYG) | MOSP (PAYG) | • If you're changing the billing owner of the subscription, see [Transfer billing ownership of an Azure subscription to another account](billing-subscription-transfer.md).<br><br> • Reservations don't automatically transfer so you must open a [billing support ticket](https://azure.microsoft.com/support/create-ticket/) to transfer them. | | MOSP (PAYG) | MCA - individual | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation transfers are supported. |
-| MOSP (PAYG) | EA | • If you're transferring the subscription to the EA enrollment, see [Transfer a subscription to an EA](mosp-ea-transfer.md#transfer-the-subscription-to-the-ea).<br><br> • If you're changing billing ownership, see [Change Azure subscription or account ownership](ea-portal-administration.md#change-azure-subscription-or-account-ownership). |
+| MOSP (PAYG) | EA | • If you're transferring the admin account to the EA enrollment, see [Transfer a subscription to an EA](mosp-ea-transfer.md#transfer-the-subscription-to-the-ea).<br><br> • If you're transferring subscriptions to the EA enrollment, you must create a [billing support ticket](https://azure.microsoft.com/support/create-ticket/). |
| MOSP (PAYG) | MCA - Enterprise | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation transfers are supported. | ## Perform resource transfers
cost-management-billing Prepare Buy Reservation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/prepare-buy-reservation.md
Previously updated : 08/21/2023 Last updated : 10/31/2023
Notifications are sent to the following users:
- Customers with Microsoft Customer Agreement (Azure Plan) - Notifications are sent to the reservation owners and the reservation administrator. - Cloud Solution Provider and new commerce partners
- - Emails are sent to the partner notification contact.
+ - Partner Center Action Center emails are sent to partners. For more information about how partners can update their transactional notifications, see [Action Center preferences](/partner-center/action-center-overview#preferences).
- Individual subscription customers with pay-as-you-go rates - Emails are sent to users who are set up as account administrators, reservation owners, and the reservation administrator.
defender-for-cloud Defender For Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-introduction.md
In this article, you learned about Microsoft Defender for Storage.
+
defender-for-cloud Defender For Storage Malware Scan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-malware-scan.md
Malware Scanning doesn't block access or change permissions to the uploaded blob
- Unsupported storage accounts: Legacy v1 storage accounts aren't supported by malware scanning. - Unsupported service: Azure Files isn't supported by malware scanning.
+- Unsupported regions: Australia Central 2, France South, Germany North, Germany West Central, Jio India West, Korea South, Switzerland West.
+ * These regions are supported by Defender for Storage but not by malware scanning. Learn more about [availability for Defender for Storage](/azure/defender-for-cloud/defender-for-storage-introduction).
- Unsupported blob types: [Append and Page blobs](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs) aren't supported for Malware Scanning. - Unsupported encryption: Client-side encrypted blobs aren't supported as they can't be decrypted before scanning by the service. However, data encrypted at rest by Customer Managed Key (CMK) is supported. - Unsupported index tag results: Index tag scan result isn't supported in storage accounts with Hierarchical namespace enabled (Azure Data Lake Storage Gen2).
Malware Scanning doesn't block access or change permissions to the uploaded blob
### Throughput capacity and blob size limit - **Scan throughput rate limit:** Malware Scanning can process up to 2 GB per minute for each storage account. If the rate of file upload momentarily exceeds this threshold for a storage account, the system attempts to scan the files in excess of the rate limit. If the rate of file upload consistently exceeds this threshold, some blobs won't be scanned.- - **Blob scan limit:** Malware Scanning can process up to 2,000 files per minute for each storage account. If the rate of file upload momentarily exceeds this threshold for a storage account, the system attempts to scan the files in excess of the rate limit. If the rate of file upload consistently exceeds this threshold, some blobs won't be scanned.- - **Blob size limit:** The maximum size limit for a single blob to be scanned is 2 GB. Blobs that are larger than the limit won't be scanned. ### Blob uploads and index tag updates
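Because blobs over the 2-GB size limit are stored but never scanned, an upload pipeline can flag them ahead of time. A minimal sketch (the helper below is illustrative only and assumes binary gigabytes):

```python
import os

MAX_SCANNABLE_BYTES = 2 * 1024**3  # 2-GB blob size limit noted above (assumed binary GB)

def partition_by_scan_eligibility(paths):
    """Split local file paths into those within the malware-scanning size limit and those over it."""
    scannable, oversized = [], []
    for path in paths:
        (scannable if os.path.getsize(path) <= MAX_SCANNABLE_BYTES else oversized).append(path)
    return scannable, oversized
```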
Despite the scanning process, access to uploaded data remains unaffected, and th
## Next steps Learn more on how to [set up response for malware scanning](defender-for-storage-configure-malware-scan.md#setting-up-response-to-malware-scanning) results.++
defender-for-iot Concept Sentinel Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/concept-sentinel-integration.md
Together with the new responsibilities, SOC teams deal with new challenges, including:
- **Siloed or inefficient communication and processes** between OT and SOC organizations. -- **Limited technology and tools**, such as lack of visibility or automated security remediation for OT networks. You'll need to evaluate and link information across data sources for OT networks, and integrations with existing SOC solutions may be costly.
+- **Limited technology and tools**, such as lack of visibility or automated security remediation for OT networks. You need to evaluate and link information across data sources for OT networks, and integrations with existing SOC solutions might be costly.
-However, without OT telemetry, context and integration with existing SOC tools and workflows, OT security and operational threats may be handled incorrectly, or even go unnoticed.
+However, without OT data, context and integration with existing SOC tools and workflows, OT security and operational threats might be handled incorrectly, or even go unnoticed.
## Integrate Defender for IoT and Microsoft Sentinel
-Microsoft Sentinel is a scalable cloud service for security information event management (SIEM) security orchestration automated response (SOAR). SOC teams can use the integration between Microsoft Defender for Iot and Microsoft Sentinel to collect data across networks, detect and investigate threats, and respond to incidents.
+Microsoft Sentinel is a scalable cloud service for security information and event management (SIEM) and security orchestration, automation, and response (SOAR). SOC teams can use the integration between Microsoft Defender for IoT and Microsoft Sentinel to collect data across networks, detect and investigate threats, and respond to incidents.
In Microsoft Sentinel, the Defender for IoT data connector and solution brings out-of-the-box security content to SOC teams, helping them to view, analyze, and respond to OT security alerts, and understand the generated incidents in the broader organizational threat context.
-Install the Defender for IoT data connector alone to stream your OT network alerts to Microsoft Sentinel. Then, also install the **Microsoft Defender for IoT** solution the extra value of IoT/OT-specific analytics rules, workbooks, and SOAR playbooks, as well as incident mappings to [MITRE ATT&CK for ICS techniques](https://attack.mitre.org/techniques/ics/).
+Install the Defender for IoT data connector alone to stream your OT network alerts to Microsoft Sentinel. Then, also install the **Microsoft Defender for IoT** solution for the extra value of IoT/OT-specific analytics rules, workbooks, and SOAR playbooks, and also incident mappings to [MITRE ATT&CK for ICS techniques](https://attack.mitre.org/techniques/ics/).
+
+Integrating Defender for IoT with Microsoft Sentinel also helps you ingest more data from Microsoft Sentinel's other partner integrations. For more information, see [Integrations with Microsoft and partner services](integrate-overview.md).
+
+> [!NOTE]
+> Some features of Microsoft Sentinel might incur a fee. For more information, see [Plan costs and understand Microsoft Sentinel pricing and billing](/azure/sentinel/billing).
### Integrated detection and response
After you've configured the Defender for IoT data connector and have IoT/OT alerts streaming into Microsoft Sentinel, use one of the following methods to create incidents based on those alerts:
|Method |Description | ||| |**Use the default data connector rule** | Use the default, **Create incidents based on all alerts generated in Microsoft Defender for IOT** analytics rule provided with the data connector. This rule creates a separate incident in Microsoft Sentinel for each alert streamed from Defender for IoT. |
-|**Use out-of-the-box solution rules** | Enable some or all of the [out-of-the-box analytics rules](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-unifiedmicrosoftsocforot?tab=Overview) provided with the **Microsoft Defender for IoT** solution.<br><br> These analytics rules help to reduce alert fatigue by creating incidents only in specific situations. For example, you might choose to create incidents for excessive login attempts, but for multiple scans detected in the network. |
+|**Use out-of-the-box solution rules** | Enable some or all of the [out-of-the-box analytics rules](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-unifiedmicrosoftsocforot?tab=Overview) provided with the **Microsoft Defender for IoT** solution.<br><br> These analytics rules help to reduce alert fatigue by creating incidents only in specific situations. For example, you might choose to create incidents for excessive sign-in attempts, but not for multiple scans detected in the network. |
|**Create custom rules** | Create custom analytics rules to create incidents based only on your specific needs. You can use the out-of-the-box analytics rules as a starting point, or create rules from scratch. <br><br>Add the following filter to prevent duplicate incidents for the same alert ID: `| where TimeGenerated <= ProcessingEndTime + 60m` | Regardless of the method you choose to create alerts, only one incident should be created for each Defender for IoT alert ID.
Playbooks are collections of automated remediation actions that can be run from Microsoft Sentinel as a routine.
For example, use SOAR playbooks to: -- Open an asset ticket in ServiceNow when a new asset is detected, such as a new engineering workstation. This alert can be an unauthorized device that can be used by adversaries to reprogram PLCs.
+- Open an asset ticket in ServiceNow when a new asset is detected, such as a new engineering workstation (see the sketch after this list). This alert can be an unauthorized device that might be used by adversaries to reprogram PLCs.
-- Send an email to relevant stakeholders when suspicious activity is detected, for example unplanned PLC reprogramming. The mail may be sent to OT personnel, such as a control engineer responsible on the related production line.
+- Send an email to relevant stakeholders when suspicious activity is detected, for example unplanned PLC reprogramming. The mail might be sent to OT personnel, such as a control engineer responsible for the related production line.
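As an illustration only, the ServiceNow example above might reduce to a call like the following sketch. The instance name, credentials, and field values are hypothetical, and production playbooks are typically built as Logic Apps rather than scripts:

```python
import requests

def open_asset_ticket(instance: str, user: str, password: str, device_name: str) -> str:
    """Open a ServiceNow incident for a newly detected, unauthorized OT asset."""
    response = requests.post(
        f"https://{instance}.service-now.com/api/now/table/incident",  # ServiceNow Table API
        auth=(user, password),
        json={
            "short_description": f"New OT asset detected: {device_name}",
            "description": "Possible unauthorized engineering workstation; review before it can reprogram PLCs.",
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["result"]["sys_id"]  # Ticket ID for follow-up
```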
## Comparing Defender for IoT events, alerts, and incidents This section clarifies the differences between Defender for IoT events, alerts, and incidents in Microsoft Sentinel. Use the listed queries to view a full list of the current events, alerts, and incidents for your OT networks.
-You'll typically see more Defender for IoT *events* in Microsoft Sentinel than *alerts*, and more Defender for IoT *alerts* than *incidents*.
+You typically see more Defender for IoT *events* in Microsoft Sentinel than *alerts*, and more Defender for IoT *alerts* than *incidents*.
### Defender for IoT events in Microsoft Sentinel
After you've installed the Microsoft Defender for IoT solution and deployed the
### Defender for IoT incidents in Microsoft Sentinel
-Microsoft Sentinel creates incidents based on your analytics rules. You might have several alerts grouped in the same incident, or you may have analytics rules configured to *not* create incidents for specific alert types.
+Microsoft Sentinel creates incidents based on your analytics rules. You might have several alerts grouped in the same incident, or you might have analytics rules configured to *not* create incidents for specific alert types.
To view incidents in Microsoft Sentinel, run the following query: ```kql
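// SecurityIncident is Microsoft Sentinel's built-in incidents table;
// a minimal query that lists all incidents in the workspace.
SecurityIncident
```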
For more information, see:
- [Tutorial: Connect Microsoft Defender for IoT with Microsoft Sentinel](../../sentinel/iot-solution.md) - [Detect threats out-of-the-box with Defender for IoT data](../../sentinel/iot-advanced-threat-monitoring.md#detect-threats-out-of-the-box-with-defender-for-iot-data) - [Create custom analytics rules to detect threats](../../sentinel/detect-threats-custom.md)-- [Tutorial Use playbooks with automation rules in Microsoft Sentinel](../../sentinel/tutorial-respond-threats-playbook.md)
+- [Tutorial: Use playbooks with automation rules in Microsoft Sentinel](../../sentinel/tutorial-respond-threats-playbook.md)
defender-for-iot How To Forward Alert Information To Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-forward-alert-information-to-partners.md
This article describes how to configure your OT sensor or on-premises management console to forward alert information to partner services.
## Prerequisites -- Depending on where you want to create your forwarding alert rules, you'll need to have either an [OT network sensor or on-premises management console installed](how-to-install-software.md), with access as an **Admin** user.
+- Depending on where you want to create your forwarding alert rules, you need to have either an [OT network sensor or on-premises management console installed](how-to-install-software.md), with access as an **Admin** user.
For more information, see [Install OT agentless monitoring software](how-to-install-software.md) and [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md). -- You'll also need to define SMTP settings on the OT sensor or on-premises management console.
+- You also need to define SMTP settings on the OT sensor or on-premises management console.
For more information, see [Configure SMTP mail server settings on an OT sensor](how-to-manage-individual-sensors.md#configure-smtp-mail-server-settings) and [Configure SMTP mail server settings on the on-premises management console](how-to-manage-the-on-premises-management-console.md#configure-smtp-mail-server-settings).
This article describes how to configure your OT sensor or on-premises management console to forward alert information to partner services.
|Name |Description | |||
- |**Minimal alert level** | Select the minimum [alert severity level](alert-engine-messages.md#alert-severities) you want to forward. <br><br> For example, if you select **Minor**, minor alerts and any alert above this severity level will be forwarded. |
+ |**Minimal alert level** | Select the minimum [alert severity level](alert-engine-messages.md#alert-severities) you want to forward. <br><br> For example, if you select **Minor**, minor alerts and any alert above this severity level are forwarded. |
|**Any protocol detected** | Toggle on to forward alerts from all protocol traffic or toggle off and select the specific protocols you want to include. | |**Traffic detected by any engine** | Toggle on to forward alerts from all [analytics engines](architecture.md#defender-for-iot-analytics-engines), or toggle off and select the specific engines you want to include. | |**Actions** | Select the type of server you want to forward alerts to, and then define any other required information for that server type. <br><br>To add multiple servers to the same rule, select **+ Add server** and add more details. <br><br>For more information, see [Configure alert forwarding rule actions](#configure-alert-forwarding-rule-actions). |
To edit or delete an existing rule:
|Name |Description | |||
- |**Minimal alert level** | At the top-right of the dialog, use the dropdown list to select the minimum [alert severity level](alert-engine-messages.md#alert-severities) that you want to forward. <br><br>For example, if you select **Minor**, minor alerts and any alert above this severity level will be forwarded. |
+ |**Minimal alert level** | At the top-right of the dialog, use the dropdown list to select the minimum [alert severity level](alert-engine-messages.md#alert-severities) that you want to forward. <br><br>For example, if you select **Minor**, minor alerts and any alert above this severity level are forwarded. |
|**Protocols** | Select **All** to forward alerts from all protocol traffic, or select **Specific** to add specific protocols only. |
- |**Engines**** | Select **All** to forward alerts triggered by all sensor analytics engines, or select **Specific** to add specific engines only. |
+ |**Engines** | Select **All** to forward alerts triggered by all sensor analytics engines, or select **Specific** to add specific engines only. |
|**System Notifications** | Select the **Report System Notifications** option to notify about disconnected sensors or remote backup failures. | |**Alert Notifications** | Select the **Report Alert Notifications** option to notify about an alert's date and time, title, severity, source and destination name and IP address, suspicious traffic, and the engine that detected the event. | |**Actions** | Select **Add** to add an action to apply and enter any parameters values needed for the selected action. Repeat as needed to add multiple actions. <br><br>For more information, see [Configure alert forwarding rule actions](#configure-alert-forwarding-rule-actions). |
The following sections describe the syslog output syntax for each format.
| Name | Description | |--|--|
-| Priority | User.Alert |
+| Priority | `User.Alert` |
| Date and Time | Date and time that the syslog server machine received the information. | | Hostname | Sensor IP |
-| Message | Sensor name: The name of the appliance. <br /> Alert time: The time that the alert was detected: Can vary from the time of the syslog server machine, and depends on the time-zone configuration of the forwarding rule. <br /> Alert Title:  The title of the alert. <br /> Alert message: The message of the alert. <br /> Alert severity: The severity of the alert: **Warning**, **Minor**, **Major**, or **Critical**. <br /> Alert type: **Protocol Violation**, **Policy Violation**, **Malware**, **Anomaly**, or **Operational**. <br /> Protocol: The protocol of the alert. <br /> **Source_MAC**: IP address, name, vendor, or OS of the source device. <br /> Destination_MAC: IP address, name, vendor, or OS of the destination. If data is missing, the value will be **N/A**. <br /> alert_group: The alert group associated with the alert. |
+| Message | Sensor name: The name of the appliance. <br /> Alert time: The time that the alert was detected: Can vary from the time of the syslog server machine, and depends on the time-zone configuration of the forwarding rule. <br /> Alert Title:  The title of the alert. <br /> Alert message: The message of the alert. <br /> Alert severity: The severity of the alert: **Warning**, **Minor**, **Major**, or **Critical**. <br /> Alert type: **Protocol Violation**, **Policy Violation**, **Malware**, **Anomaly**, or **Operational**. <br /> Protocol: The protocol of the alert. <br /> **Source_MAC**: IP address, name, vendor, or OS of the source device. <br /> Destination_MAC: IP address, name, vendor, or OS of the destination. If data is missing, the value is **N/A**. <br /> alert_group: The alert group associated with the alert. |
#### Syslog CEF output fields | Name | Description | |--|--|
-| Priority | User.Alert |
+| Priority | `User.Alert` |
| Date and time | Date and time that the sensor sent the information, in UTC format | | Hostname | Sensor hostname |
-| Message | CEF:0 <br />Microsoft Defender for IoT/CyberX <br />Sensor name <br />Sensor version <br />Microsoft Defender for IoT Alert <br />Alert title <br />Integer indication of severity. 1=**Warning**, 4=**Minor**, 8=**Major**, or 10=**Critical**.<br />msg= The message of the alert. <br />protocol= The protocol of the alert. <br />severity= **Warning**, **Minor**, **Major**, or **Critical**. <br />type= **Protocol Violation**, **Policy Violation**, **Malware**, **Anomaly**, or **Operational**. <br />UUID= UUID of the alert (Optional) <br /> start= The time that the alert was detected. <br />Might vary from the time of the syslog server machine, and depends on the time-zone configuration of the forwarding rule. <br />src_ip= IP address of the source device. (Optional) <br />src_mac= MAC address of the source device. (Optional) <br />dst_ip= IP address of the destination device. (Optional)<br />dst_mac= MAC address of the destination device. (Optional)<br />cat= The alert group associated with the alert. |
+| Message | *CEF:0* <br />Microsoft Defender for IoT/CyberX <br />Sensor name <br />Sensor version <br />Microsoft Defender for IoT Alert <br />Alert title <br />Integer indication of severity. 1=**Warning**, 4=**Minor**, 8=**Major**, or 10=**Critical**.<br />msg= The message of the alert. <br />protocol= The protocol of the alert. <br />severity= **Warning**, **Minor**, **Major**, or **Critical**. <br />type= **Protocol Violation**, **Policy Violation**, **Malware**, **Anomaly**, or **Operational**. <br />UUID= UUID of the alert (Optional) <br /> start= The time that the alert was detected. <br />Might vary from the time of the syslog server machine, and depends on the time-zone configuration of the forwarding rule. <br />src_ip= IP address of the source device. (Optional) <br />src_mac= MAC address of the source device. (Optional) <br />dst_ip= IP address of the destination device. (Optional)<br />dst_mac= MAC address of the destination device. (Optional)<br />cat= The alert group associated with the alert. |
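Those CEF header fields are pipe-delimited, followed by a space-separated `key=value` extension. As a hedged sketch of consuming such a record (the sample line and helper are illustrative only):

```python
import re

def parse_cef(line: str) -> dict:
    """Split a CEF record into its pipe-delimited header fields plus extension pairs."""
    parts = line.split("|", 7)
    fields = dict(zip(
        ["cef_version", "vendor", "sensor_name", "sensor_version",
         "event_type", "alert_title", "severity_id"],
        parts,
    ))
    extension = parts[7] if len(parts) > 7 else ""
    # Extension values can contain spaces, so split on the next `key=` boundary.
    fields.update(re.findall(r"(\w+)=(.*?)(?=\s+\w+=|$)", extension))
    return fields

sample = ("CEF:0|Microsoft Defender for IoT/CyberX|Sensor01|22.3|"
          "Microsoft Defender for IoT Alert|Unauthorized PLC stop|8|"
          "msg=PLC stop command detected protocol=MODBUS severity=Major")
print(parse_cef(sample)["alert_title"], parse_cef(sample)["protocol"])
```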
#### Syslog LEEF output fields | Name | Description | |--|--|
-| Priority | User.Alert |
+| Priority | `User.Alert` |
| Date and time | Date and time that the sensor sent the information, in UTC format | | Hostname | Sensor IP |
-| Message | Sensor name: The name of the Microsoft Defender for IoT appliance. <br />LEEF:1.0 <br />Microsoft Defender for IoT <br />Sensor <br />Sensor version <br />Microsoft Defender for IoT Alert <br /> Title:  The title of the alert. <br />msg: The message of the alert. <br />protocol: The protocol of the alert.<br />severity: **Warning**, **Minor**, **Major**, or **Critical**. <br />type: The type of the alert: **Protocol Violation**, **Policy Violation**, **Malware**, **Anomaly**, or **Operational**. <br />start: The time of the alert. It may be different from the time of the syslog server machine, and depends on the time-zone configuration. <br />src_ip: IP address of the source device.<br />dst_ip: IP address of the destination device. <br />cat: The alert group associated with the alert. |
+| Message | Sensor name: The name of the Microsoft Defender for IoT appliance. <br />*LEEF:1.0* <br />Microsoft Defender for IoT <br />Sensor <br />Sensor version <br />Microsoft Defender for IoT Alert <br /> Title:  The title of the alert. <br />msg: The message of the alert. <br />protocol: The protocol of the alert.<br />severity: **Warning**, **Minor**, **Major**, or **Critical**. <br />type: The type of the alert: **Protocol Violation**, **Policy Violation**, **Malware**, **Anomaly**, or **Operational**. <br />start: The time of the alert. It might be different from the time of the syslog server machine, and depends on the time-zone configuration. <br />src_ip: IP address of the source device.<br />dst_ip: IP address of the destination device. <br />cat: The alert group associated with the alert. |
### Webhook server action
In the **Actions** area, enter the following details:
|**Hostname / Port** | Enter the NetWitness server's hostname and port. | |**Time zone** | Enter the time zone you want to use in the time stamp for the alert detection at the SIEM. |
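For the **Webhook server action** described earlier, any HTTP endpoint that accepts POST requests can receive the forwarded alerts. A minimal receiver sketch (the port is a placeholder, and the payload schema isn't shown here, so the handler just logs the raw body):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class AlertWebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        print(json.loads(self.rfile.read(length)))  # Inspect the forwarded alert payload.
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), AlertWebhookHandler).serve_forever()
```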
-### Other partner server integrations
+## Configure forwarding rules for partner integrations
-You may be integrating Defender for IoT with a partner service to send alert or device inventory information to another security or device management system, or to communicate with partner-side firewalls.
+You might be integrating Defender for IoT with a partner service to send alert or device inventory information to another security or device management system, or to communicate with partner-side firewalls.
[Partner integrations](integrate-overview.md) can help to bridge previously siloed security solutions, enhance device visibility, and accelerate system-wide response to more rapidly mitigate risks.
-In such cases, use the **Actions** area to enter credentials and other information required to communicate with integrated partner services.
+In such cases, use supported **Actions** to enter credentials and other information required to communicate with integrated partner services.
For more information, see: -- [Integrate Qradar with Microsoft Defender for IoT](tutorial-qradar.md)-- [Integrate Splunk with Microsoft Defender for IoT](tutorial-splunk.md)-- [Integrate CyberArk with Microsoft Defender for IoT](tutorial-cyberark.md) - [Integrate Fortinet with Microsoft Defender for IoT](tutorial-fortinet.md)-- [Integrate ClearPass with Microsoft Defender for IoT](tutorial-clearpass.md)-- [Integrate Forescout with Microsoft Defender for IoT](tutorial-forescout.md)-- [Integrate Palo-Alto with Microsoft Defender for IoT](tutorial-palo-alto.md)
+- [Integrate Qradar with Microsoft Defender for IoT](tutorial-qradar.md)
-## Configure alert groups in partner services
+### Configure alert groups in partner services
When you configure forwarding rules to send alert data to Syslog servers, QRadar, and ArcSight, *alert groups* are automatically applied and are available in those partner servers.
-*Alert groups* help SOC teams using those partner solutions to manage alerts based on enterprise security policies and business priorities. For example, alerts about new detections are organized into a *discovery* group, and will include any alerts about new devices, VLANs, user accounts, MAC addresses, and more.
+*Alert groups* help SOC teams using those partner solutions to manage alerts based on enterprise security policies and business priorities. For example, alerts about new detections are organized into a *discovery* group, which includes any alerts about new devices, VLANs, user accounts, MAC addresses, and more.
Alert groups appear in partner services with the following prefixes:
defender-for-iot Integrate Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/integrate-overview.md
Title: Integrations with partner services - Microsoft Defender for IoT
-description: Learn about supported integrations with Microsoft Defender for IoT.
Previously updated : 08/02/2022
+ Title: Integrate with partner services | Microsoft Defender for IoT
+description: Learn about supported integrations across your organization's security stack with Microsoft Defender for IoT.
Last updated : 09/06/2023 # Integrations with Microsoft and partner services
-Integrate Microsoft Defender for Iot with partner services to view partner data in Defender for IoT, or to view Defender for IoT data in a partner service.
+Integrate Microsoft Defender for IoT with partner services to view data from across your security stack in Defender for IoT, or to view Defender for IoT data in one of your security ecosystem integrations.
+
+> [!IMPORTANT]
+> Defender for IoT is refreshing its security stack integrations to improve the overall robustness, scalability, and ease of maintenance of various security solutions.
+>
+> If you're integrating your security solution with cloud-based systems, we recommend that you use data connectors through [Microsoft Sentinel](concept-sentinel-integration.md). For on-premises integrations, we recommend that you either configure your OT sensor to [forward syslog events](how-to-forward-alert-information-to-partners.md), or use [Defender for IoT APIs](references-work-with-defender-for-iot-apis.md).
+>
+> The legacy [Aruba ClearPass](#aruba-clearpass), [Palo Alto Panorama](#palo-alto), and [Splunk](#splunk) integrations are supported through October 2024 using sensor version 23.1.3, and won't be supported in upcoming major software versions. For customers using legacy integration methods, we recommend moving your integrations to the standard cloud or on-premises methods.
## Aruba ClearPass |Name |Description |Support scope |Supported by |Learn more | ||||||
-|**Aruba ClearPass** | Share Defender for IoT data with ClearPass Security Exchange and update the ClearPass Policy Manager Endpoint Database with Defender for IoT data. | - OT networks<br>- Locally managed sensors and on-premises management consoles | Microsoft | [Integrate ClearPass with Microsoft Defender for IoT](tutorial-clearpass.md) |
+| **Aruba ClearPass** (cloud) | View Defender for IoT data together with Aruba ClearPass data, using Microsoft Sentinel to create custom dashboards, custom alerts, and improve your investigation ability.<br><br> Connect to [Microsoft Sentinel](concept-sentinel-integration.md), and install the [Aruba ClearPass data connector](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-arubaclearpass?tab=Overview). | - OT networks <br>- Cloud-connected or locally managed OT sensors | Microsoft | [Microsoft Sentinel documentation](/azure/sentinel/data-connectors/aruba-clearpass) |
+| **Aruba ClearPass** (on-premises) | View Defender for IoT data together with Aruba ClearPass data by doing one of the following:<br><br>- Configure your sensor to send syslog files directly to ClearPass. <br>- Use Defender for IoT's built-in APIs. | - OT networks <br>- Cloud-connected or locally managed OT sensors | Microsoft | [Forward on-premises OT alert information](how-to-forward-alert-information-to-partners.md) <br><br>[Defender for IoT API reference](references-work-with-defender-for-iot-apis.md)|
+|**Aruba ClearPass** (legacy) | Share Defender for IoT data directly with ClearPass Security Exchange and update the ClearPass Policy Manager Endpoint Database with Defender for IoT data. | - OT networks<br>- Locally managed sensors and on-premises management consoles | Microsoft | [Integrate ClearPass with Microsoft Defender for IoT](tutorial-clearpass.md) |
+ ## Axonius
Integrate Microsoft Defender for IoT with partner services to view partner data in Defender for IoT, or to view Defender for IoT data in a partner service.
|Name |Description |Support scope |Supported by |Learn more | ||||||
-|**Defender for IoT data connector in Microsoft Sentinel** | Displays Defender for IoT cloud data in Microsoft Sentinel, supporting end-to-end SOC investigations for Defender for IoT alerts. | - OT and Enterprise IoT networks <br>- Cloud-connected sensors | Microsoft | [Integrate Microsoft Sentinel and Microsoft Defender for IoT](../../sentinel/iot-solution.md?tabs=use-out-of-the-box-analytics-rules-recommended) |
-|**Microsoft Sentinel** | Send Defender for IoT alerts from on-premises resources to Microsoft Sentinel. | - OT networks <br>- Locally managed sensors and on-premises management consoles | Microsoft | [Connect on-premises OT network sensors to Microsoft Sentinel](integrations/on-premises-sentinel.md) |
+|**Defender for IoT data connector in Microsoft Sentinel** (cloud) | Displays Defender for IoT cloud data in Microsoft Sentinel, supporting end-to-end SOC investigations for Defender for IoT alerts. <br><br>Connects to other partner services, allowing you to synchronize your data between Defender for IoT and supported partner systems, across Microsoft Sentinel. | - OT and Enterprise IoT networks <br>- Cloud-connected sensors | Microsoft | - [OT threat monitoring in enterprise SOCs](concept-sentinel-integration.md) <br>- [Tutorial: Connect Microsoft Defender for IoT with Microsoft Sentinel](iot-solution.md) <br>- [Tutorial: Investigate and detect threats for IoT devices](iot-advanced-threat-monitoring.md) |
+| **Microsoft Sentinel** (on-premises) | View Defender for IoT data together with Microsoft Sentinel data by configuring your sensor to send syslog files directly to Microsoft Sentinel.| - OT networks <br>- Cloud-connected or locally managed OT sensors | Microsoft | [Forward on-premises OT alert information](how-to-forward-alert-information-to-partners.md) |
+|**Microsoft Sentinel** (legacy) | Send Defender for IoT alerts from on-premises resources to Microsoft Sentinel. | - OT networks <br>- Locally managed sensors and on-premises management consoles | Microsoft | [Connect on-premises OT network sensors to Microsoft Sentinel](integrations/on-premises-sentinel.md) |
## Palo Alto |Name |Description |Support scope |Supported by |Learn more | ||||||
-|**Palo Alto** | Use Defender for IoT data to block critical threats with Palo Alto firewalls, either with automatic blocking or with blocking recommendations. | - OT networks<br>- Locally managed sensors and on-premises management consoles | Microsoft | [Integrate Palo-Alto with Microsoft Defender for IoT](tutorial-palo-alto.md) |
+| **Palo Alto Panorama** (cloud) | View Defender for IoT data together with Panorama data. Use Microsoft Sentinel solutions, which include out-of-the-box workbooks, hunting queries, automation playbooks, and analytics rules, or create custom dashboards, alerts, and more. <br><br> Connect to [Microsoft Sentinel](concept-sentinel-integration.md), and install one or more of the following solutions: <br>- [Palo Alto PAN-OS Solution](/azure/sentinel/data-connectors/palo-alto-networks-firewall) <br>- [Palo Alto Networks Cortex Data Lake Solution](/azure/sentinel/data-connectors/palo-alto-networks-cortex-data-lake-cdl) <br>- [Palo Alto Prisma Cloud CSPM solution](/azure/sentinel/data-connectors/palo-alto-prisma-cloud-cspm-using-azure-function) | - OT networks <br>- Cloud-connected or locally managed OT sensors | Microsoft |Microsoft Sentinel documentation: <br>- [Palo Alto PAN-OS Solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-paloaltopanos?tab=Overview) <br>- [Palo Alto Networks Cortex Data Lake Solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-paloaltocdl?tab=Overview) <br>- [Palo Alto Prisma Cloud CSPM solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-paloaltoprisma?tab=Overview) |
+| **Palo Alto Panorama** (on-premises) | View Defender for IoT data together with Panorama data by configuring your sensor to send syslog files directly to Palo Alto Panorama.| - OT networks <br>- Cloud-connected or locally managed OT sensors | Microsoft | [Forward on-premises OT alert information](how-to-forward-alert-information-to-partners.md) |
+|**Palo Alto** (legacy) | Use Defender for IoT data to block critical threats with Palo Alto firewalls, either with automatic blocking or with blocking recommendations. | - OT networks<br>- Locally managed sensors and on-premises management consoles | Microsoft | [Integrate Palo-Alto with Microsoft Defender for IoT](tutorial-palo-alto.md) |
## RSA NetWitness
Integrate Microsoft Defender for IoT with partner services to view partner data in Defender for IoT, or to view Defender for IoT data in a partner service.
|Name |Description |Support scope |Supported by |Learn more | ||||||
-| **Splunk** | Send Defender for IoT alerts to Splunk | - OT networks <br>- Cloud connected sensors | Microsoft | [Stream Defender for IoT cloud alerts to a partner SIEM](integrations/send-cloud-data-to-partners.md) |
-|**Splunk** | Send Defender for IoT alerts to Splunk | - OT networks<br>- Locally managed sensors and on-premises management consoles | Microsoft | [Integrate Splunk with Microsoft Defender for IoT](tutorial-splunk.md) |
+| **Splunk** (cloud) | Send Defender for IoT alerts to Splunk using one of the following methods: <br><br>- Via the [OT Security Add-on for Splunk](https://apps.splunk.com/app/5151), which widens your capacity to ingest and monitor OT assets and provides OT vulnerability management reports that help you comply with and audit for NERC CIP. <br><br>- Via a SIEM that supports Event Hubs, such as Microsoft Sentinel | - OT networks <br>- Cloud-connected or locally managed OT sensors | Microsoft and Splunk |- Splunk documentation on [The OT Security Add-on for Splunk](https://splunk.github.io/ot-security-solution/integrationguide/) and [installing add-ins](https://docs.splunk.com/Documentation/AddOns/released/Overview/Distributedinstall) <br>- [Stream Defender for IoT cloud alerts to a partner SIEM](integrations/send-cloud-data-to-partners.md) |
+| **Splunk** (on-premises) | View Defender for IoT data together with Splunk data by configuring your sensor to send syslog files directly to Splunk.| - OT networks <br>- Cloud-connected or locally managed OT sensors | Microsoft | [Forward on-premises OT alert information](how-to-forward-alert-information-to-partners.md) |
+|**Splunk** (on-premises, legacy integration) | Send Defender for IoT alerts to Splunk | - OT networks<br>- Locally managed sensors and on-premises management consoles | Microsoft | [Integrate Splunk with Microsoft Defender for IoT](tutorial-splunk.md) |
## Next steps
-> [!div class="nextstepaction"]
-> [Stream Defender for IoT cloud alerts to a partner SIEM](integrations/send-cloud-data-to-partners.md)
+For more information, see:
+
+- [Stream Defender for IoT cloud alerts to a partner SIEM](integrations/send-cloud-data-to-partners.md)
defender-for-iot On Premises Sentinel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/integrations/on-premises-sentinel.md
Title: How to connect on-premises Defender for IoT resources to Microsoft Sentinel
-description: Learn how to stream data into Microsoft Sentinel from an on-premises and locally-managed Microsoft Defender for IoT OT network sensor or an on-premises management console.
+ Title: Connect Defender for IoT on-premises resources to Microsoft Sentinel (legacy)
+description: This article describes the legacy method for connecting your OT sensor or on-premises management console to Microsoft Sentinel.
Previously updated : 12/26/2022 Last updated : 08/17/2023
+#CustomerIntent: As an admin user for my locally-managed OT sensor, I want to learn how to connect my sensor to Microsoft Sentinel so that I can view alerts generated together with other Microsoft Sentinel data.
-# Connect on-premises OT network sensors to Microsoft Sentinel
+# Connect OT network sensors or on-premises management consoles to Microsoft Sentinel (legacy)
-You can [stream Microsoft Defender for IoT data into Microsoft Sentinel](../iot-solution.md) via the Azure portal, for any data coming from cloud-connected OT network sensors.
+This article describes the legacy method for connecting your OT sensor or on-premises management console to Microsoft Sentinel. Stream data into Microsoft Sentinel whenever you want to use Microsoft Sentinel's advanced threat hunting, security analytics, and automation features when responding to security incidents and threats across your network.
-However, if you're working either in a hybrid environment, or completely on-premises, you might want to stream data in from your locally-managed sensors to Microsoft Sentinel. To do this, create forwarding rules on either your OT network sensor, or for multiple sensors from an on-premises management console.
-
-Stream data into Microsoft Sentinel whenever you want to use Microsoft Sentinel's advanced threat hunting, security analytics, and automation features when responding to security incidents and threats across your network. For more information, see [Microsoft Sentinel documentation](../../../sentinel/index.yml).
+> [!IMPORTANT]
+> If you're using a cloud-connected sensor, we recommend that you connect Defender for IoT data using the Microsoft Sentinel solution instead of the legacy integration method. For more information, see:
+>
+> - [OT threat monitoring in enterprise SOCs](../concept-sentinel-integration.md)
+> - [Tutorial: Connect Microsoft Defender for IoT with Microsoft Sentinel](../iot-solution.md)
+> - [Tutorial: Investigate and detect threats for IoT devices](../iot-advanced-threat-monitoring.md)
## Prerequisites
defender-for-iot Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes-archive.md
Title: What's new archive for Microsoft Defender for IoT for organizations
-description: Learn about the features and enhancements released for Microsoft Defender for IoT for organizations more than 6 months ago.
+description: Learn about the features and enhancements released for Microsoft Defender for IoT for organizations more than six months ago.
Last updated 08/07/2022
Noted features listed below are in PREVIEW. The [Azure Preview Supplemental Term
The new **IoT OT Threat Monitoring with Defender for IoT solution** is available and provides enhanced capabilities for Microsoft Defender for IoT integration with Microsoft Sentinel. The **IoT OT Threat Monitoring with Defender for IoT solution** is a set of bundled content, including analytics rules, workbooks, and playbooks, configured specifically for Defender for IoT data. This solution currently supports only Operational Networks (OT/ICS).
-For information on integrating with Microsoft Sentinel, see [Tutorial: Integrate Defender for Iot and Sentinel](../../sentinel/iot-solution.md?tabs=use-out-of-the-box-analytics-rules-recommended)
+For information on integrating with Microsoft Sentinel, see [Tutorial: Integrate Defender for IoT and Sentinel](../../sentinel/iot-solution.md?tabs=use-out-of-the-box-analytics-rules-recommended).
### Apache Log4j vulnerability
This new functionality is available on the following alerts:
The following feature enhancements are available with version 10.5.3 of Microsoft Defender for IoT. -- The on-premises management console, has a new API to support our ServiceNow integration. For more information, see [Integration API reference for on-premises management consoles (Public preview)](api/management-integration-apis.md#integration-api-reference-for-on-premises-management-consoles-public-preview).
+- The on-premises management console has a new API to support our ServiceNow integration. For more information, see [Integration API reference for on-premises management consoles (Public preview)](api/management-integration-apis.md#integration-api-reference-for-on-premises-management-consoles-public-preview).
- Enhancements have been made to the network traffic analysis of multiple OT and ICS protocol dissectors.
Certificate and password recovery enhancements were made for this release.
This version lets you: -- Upload SSL certificates directly to the sensors and on-premises management consoles.
+- Upload TLS/SSL certificates directly to the sensors and on-premises management consoles.
- Perform validation between the on-premises management console and connected sensors, and between a management console and a High Availability management console. Validation is based on expiration dates, root CA authenticity, and Certificate Revocation Lists. If validation fails, the session won't continue. For upgrades: -- There's no change in SSL certificate or validation functionality during the upgrade.-- After upgrading, sensor and on-premises management console administrative users can replace SSL certificates, or activate SSL certificate validation from the System Settings, SSL Certificate window.
+- There's no change in TLS/SSL certificate or validation functionality during the upgrade.
+- After you update your sensors and on-premises management consoles, administrative users can replace TLS/SSL certificates, or activate TLS/SSL certificate validation from the System Settings, TLS/SSL Certificate window.
For Fresh Installations: -- During first-time sign-in, users are required to either use an SSL Certificate (recommended) or a locally generated self-signed certificate (not recommended)
+- During first-time sign-in, users are required to either use a TLS/SSL certificate (recommended) or a locally generated self-signed certificate (not recommended).
- Certificate validation is turned on by default for fresh installations. #### Password recovery
defender-for-iot Tutorial Clearpass https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-clearpass.md
Title: Integrate ClearPass with Microsoft Defender for IoT
-description: In this tutorial, you learn how to integrate Microsoft Defender for IoT with ClearPass.
- Previously updated : 02/07/2022
+description: In this tutorial, you learn how to integrate Microsoft Defender for IoT with ClearPass using Defender for IoT's legacy, on-premises integration.
+ Last updated : 09/06/2023 # Integrate ClearPass with Microsoft Defender for IoT
-This article helps you learn how to integrate ClearPass Policy Manager (CPPM) with Microsoft Defender for IoT.
-The Defender for IoT platform delivers continuous ICS threat monitoring and device discovery, combining a deep embedded understanding of industrial protocols, devices, and applications with ICS-specific behavioral anomaly detection, threat intelligence, risk analytics, and automated threat modeling.
+This article describes how to integrate Aruba ClearPass with Microsoft Defender for IoT, so that you can view both ClearPass and Defender for IoT information in a single place.
-Defender for IoT detects, discovers, and classifies OT and ICS endpoints, and share information directly with ClearPass using the ClearPass Security Exchange framework and the OpenAPI.
+Viewing both Defender for IoT and ClearPass information together provides SOC analysts with multidimensional visibility into the specialized OT protocols and devices deployed in industrial environments, along with ICS-aware behavioral analytics to rapidly detect suspicious or anomalous behavior.
-Defender for IoT automatically updates the ClearPass Policy Manager Endpoint Database with endpoint classification data and several custom security attributes.
+## Cloud-based integrations
-The integration allows for the following:
+> [!TIP]
+> Cloud-based security integrations provide several benefits over on-premises solutions, such as centralized, simpler sensor management and centralized security monitoring.
+>
+> Other benefits include real-time monitoring, efficient resource use, increased scalability and robustness, improved protection against security threats, simplified maintenance and updates, and seamless integration with third-party solutions.
+>
-- Viewing ICS and SCADA security threats identified by Defender for IoT security engines.
+If you're integrating a cloud-connected OT sensor with Aruba ClearPass, we recommend that you connect to [Microsoft Sentinel](concept-sentinel-integration.md), and then install the [Aruba ClearPass data connector](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-arubaclearpass?tab=Overview).
-- Viewing device inventory information discovered by the Defender for IoT sensor. The sensor delivers centralized visibility of all network devices and endpoints across the IT and OT infrastructure. From here, a centralized endpoint and edge security policy can be defined and administered in the ClearPass system.
+Microsoft Sentinel is a scalable cloud service for security information and event management (SIEM) and security orchestration, automation, and response (SOAR). SOC teams can use the integration between Microsoft Defender for IoT and Microsoft Sentinel to collect data across networks, detect and investigate threats, and respond to incidents.
-In this article, you learn how to:
+In Microsoft Sentinel, the Defender for IoT data connector and solution brings out-of-the-box security content to SOC teams, helping them to view, analyze, and respond to OT security alerts, and understand the generated incidents in the broader organizational threat context.
-> [!div class="checklist"]
->
-> - Create a ClearPass API user
-> - Create a ClearPass operator profile
-> - Create a ClearPass OAuth API client
-> - Configure Defender for IoT to integrate with ClearPass
-> - Define the ClearPass forwarding rule
-> - Monitor ClearPass and Defender for IoT communication
+For more information, see:
-## Prerequisites
+- [Tutorial: Connect Microsoft Defender for IoT with Microsoft Sentinel](iot-solution.md)
+- [Tutorial: Investigate and detect threats for IoT devices](iot-advanced-threat-monitoring.md)
+- [Microsoft Sentinel documentation](/azure/sentinel/data-connectors/aruba-clearpass).
-Before you begin, make sure that you have the following prerequisites:
+## On-premises integrations
-### Aruba ClearPass requirements
+If you're working with an air-gapped, locally managed OT sensor, you need an on-premises solution to view Defender for IoT and ClearPass information in the same place.
-CPPM runs on hardware appliances with pre-installed software or as a Virtual Machine under the following hypervisors. Hypervisors that run on a client computer such as VMware Player aren't supported.
+In such cases, we recommend that you configure your OT sensor to send syslog files directly to ClearPass, or use Defender for IoT's built-in API.
-- VMware ESXi 5.5, 6.0, 6.5, 6.6 or higher.
+For more information, see:
-- Microsoft Hyper-V Server 2012 R2 or 2016 R2.
+- [Forward on-premises OT alert information](how-to-forward-alert-information-to-partners.md)
+- [Defender for IoT API reference](references-work-with-defender-for-iot-apis.md)
-- Hyper-V on Microsoft Windows Server 2012 R2 or 2016 R2. -- KVM on CentOS 7.5 or later.
+## On-premises integration (legacy)
-### Defender for IoT requirements
+This section describes how to integrate Defender for IoT and ClearPass Policy Manager (CPPM) using the legacy, on-premises integration.
-- Defender for IoT version 2.5.1 or higher.
+> [!IMPORTANT]
+> The legacy Aruba ClearPass integration is supported through October 2024 using sensor version 23.1.3, and won't be supported in upcoming major software versions. For customers using the legacy integration, we recommend moving to one of the following methods:
+>
+> - If you're integrating your security solution with cloud-based systems, we recommend that you use data connectors through [Microsoft Sentinel](#cloud-based-integrations).
+> - For on-premises integrations, we recommend that you either configure your OT sensor to [forward syslog events, or use Defender for IoT APIs](#on-premises-integrations).
+>
+
+### Prerequisites
-- Access to a Defender for IoT OT sensor as an Admin user. For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md).
+Before you begin, make sure that you have the following prerequisites:
-## Create a ClearPass API user
+|Prerequisite |Description |
+|||
+|**Aruba ClearPass requirements** | CPPM runs on hardware appliances with pre-installed software or as a Virtual Machine under the following hypervisors. <br>- VMware ESXi 5.5, 6.0, 6.5, 6.6 or higher. <br>- Microsoft Hyper-V Server 2012 R2 or 2016 R2. <br>- Hyper-V on Microsoft Windows Server 2012 R2 or 2016 R2. <br>- KVM on CentOS 7.5 or later. <br><br>Hypervisors that run on a client computer such as VMware Player aren't supported. |
+|**Defender for IoT requirements** | - Defender for IoT version 2.5.1 or higher. <br>- Access to a Defender for IoT OT sensor as an [Admin user](roles-on-premises.md). |
+
+### Create a ClearPass API user
As part of the communications channel between the two products, Defender for IoT uses many APIs (both TIPS and REST). Access to the TIPS APIs is validated with username and password credentials. This user ID needs to have minimum levels of access. Don't use a Super Administrator profile; instead, use API Administrator as shown below.
As part of the communications channel between the two products, Defender for IoT
1. Select **Add**.
-## Create a ClearPass operator profile
+### Create a ClearPass operator profile
Defender for IoT uses the REST API as part of the integration. REST APIs are authenticated under an OAuth framework. To sync with Defender for IoT, you need to create an API Client.
In order to secure access to the REST API for the API Client, create a restricted access operator profile:
| **API Services** | Set to **Allow Access** | | **Policy Manager** | Set the following: <br />- **Dictionaries**: **Attributes** set to **Read, Write, Delete**<br />- **Dictionaries**: **Fingerprints** set to **Read, Write, Delete**<br />- **Identity**: **Endpoints** set to **Read, Write, Delete** |
-## Create a ClearPass OAuth API client
+### Create a ClearPass OAuth API client
1. In the main window, select **Administrator** > **API Services** > **API Clients**.
In order to secure access to the REST API for the API Client, create a restricted access operator profile.
- CPPM OAuth2 API Client Secret
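With the client ID and secret recorded, a REST client exchanges them for a bearer token before calling the CPPM APIs. A hedged sketch (the `/api/oauth` path follows ClearPass REST conventions and should be treated as an assumption; the host and credentials are placeholders):

```python
import requests

def get_cppm_token(host: str, client_id: str, client_secret: str) -> str:
    """Exchange an OAuth2 API client's credentials for a CPPM access token."""
    response = requests.post(
        f"https://{host}/api/oauth",  # Assumed CPPM OAuth2 token endpoint
        json={
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["access_token"]
```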
-## Configure Defender for IoT to integrate with ClearPass
+### Configure Defender for IoT to integrate with ClearPass
To enable viewing the device inventory in ClearPass, you need to set up Defender for IoT-ClearPass sync. When the sync configuration is complete, the Defender for IoT platform updates the ClearPass Policy Manager EndpointDb as it discovers new endpoints.
To enable viewing the device inventory in ClearPass, you need to set up Defender
1. Select **Save**.
-## Define a ClearPass forwarding rule
+### Define a ClearPass forwarding rule
To enable viewing the alerts discovered by Defender for IoT in Aruba, you need to set the forwarding rule. This rule defines which information about the ICS and SCADA security threats identified by Defender for IoT security engines is sent to ClearPass.
-Forwarding alert rules run only on alerts triggered after the forwarding rule is created. Alerts already in the system from before the forwarding rule was created aren't affected by the rule.
-
-**To define a ClearPass forwarding rule on the Defender for IoT sensor**:
-
-1. Sign in to the sensor, and select **Forwarding**.
-
-1. Select **+ Create new rule**.
-
-1. In the **Add forwarding rule** pane, define the rule parameters:
-
- :::image type="content" source="media/tutorial-clearpass/create-rule.png" alt-text="Screenshot of how to create a Forwarding Rule." lightbox="media/tutorial-clearpass/create-rule.png":::
-
- | Parameter | Description |
- |--|--|
- | **Rule name** | The forwarding rule name. |
- | **Minimal alert level** | The minimal security level incident to forward. For example, if Minor is selected, minor alerts and any alert above this severity level will be forwarded. |
- | **Any protocol detected** | Toggle off to select the protocols you want to include in the rule. |
- | **Traffic detected by any engine** | Toggle off to select the traffic you want to include in the rule. |
-
-1. In the **Actions** area, define the following values:
-
- | Parameter | Description |
- |--|--|
- | **Server** | Select ClearPass. |
- | **Host** | Define the ClearPass server IP to send alert information. |
- | **Port** | Define the ClearPass port to send alert information. |
-
-1. Configure which alert information you want to forward:
-
- | Parameter | Description |
- |--|--|
- | **Report illegal function codes** | Protocol violations - Illegal field value violating ICS protocol specification (potential exploit). |
- | **Report unauthorized PLC programming / firmware updates** | Unauthorized PLC changes. |
- | **Report unauthorized PLC stop** | PLC stop (downtime). |
- | **Report malware related alerts** | Industrial malware attempts, such as TRITON, NotPetya. |
- | **Report unauthorized scanning** | Unauthorized scanning (potential reconnaissance) |
-
-1. Select **Save**.
+For more information, see [On-premises integrations](#on-premises-integrations).
-## Monitor ClearPass and Defender for IoT communication
+### Monitor ClearPass and Defender for IoT communication
Once the sync has started, endpoint data is populated directly into the Policy Manager EndpointDb. You can view the last update time from the integration configuration screen.
Once the sync has started, endpoint data is populated directly into the Policy M
:::image type="content" source="media/tutorial-clearpass/last-sync.png" alt-text="Screenshot of the view the time and date of your last sync." lightbox="media/tutorial-clearpass/last-sync.png":::
-If Sync isn't working, or shows an error, then it's likely you've missed capturing some of the information. Recheck the data recorded.
+If the sync isn't working, or shows an error, then it's likely you've missed capturing some of the information. Recheck the data recorded.
Additionally, you can view the API calls between Defender for IoT and ClearPass from **Guest** > **Administration** > **Support** > **Application Log**.
defender-for-iot Tutorial Fortinet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-fortinet.md
The FortiGate firewall can be used to block suspicious traffic.
Forwarding alert rules run only on alerts triggered after the forwarding rule is created. Alerts already in the system from before the forwarding rule was created aren't affected by the rule.
-**To set a forwarding rule to block malware-related alerts**:
+When creating your forwarding rule:
-1. Sign in to the Microsoft Defender for IoT sensor, and select **Forwarding**.
+1. In the **Actions** area, select **FortiGate**.
-1. Select **+ Create new rule**.
-
-1. In the **Add forwarding rule** pane, define the rule parameters:
-
- :::image type="content" source="media/tutorial-fortinet/forward-rule.png" alt-text="Screenshot of the Forwarding window option in a sensor." lightbox="media/tutorial-fortinet/forward-rule.png":::
-
- | Parameter | Description |
- |--|--|
- | **Rule name** | The forwarding rule name. |
- | **Minimal alert level** | The minimal security level incident to forward. For example, if Minor is selected, minor alerts and any alert above this severity level will be forwarded. |
- | **Any protocol detected** | Toggle off to select the protocols you want to include in the rule. |
- | **Traffic detected by any engine** | Toggle off to select the traffic you want to include in the rule. |
+1. Define the server IP address where you want to send the data.
-1. In the **Actions** area, define the following values:
+1. Enter an API key created in FortiGate.
- | Parameter | Description |
- |--|--|
- | **Server** | Select FortiGage. |
- | **Host** | Define the ClearPass server IP to send alert information. |
- | **API key** | Enter the [API key](#create-an-api-key-in-fortinet) that you created in FortiGate. |
- | **Incoming Interface** | Enter the incoming firewall interface port. |
- | **Outgoing Interface** | Enter the outgoing firewall interface port. |
+1. Enter the incoming and outgoing firewall interface ports.
-1. Configure which alert information you want to forward:
+1. Select the specific alert details to forward. We recommend selecting one or more of the following:
- | Parameter | Description |
- |--|--|
- | **Block illegal function codes** | Protocol violations - Illegal field value violating ICS protocol specification (potential exploit) |
- | **Block unauthorized PLC programming / firmware updates** | Unauthorized PLC changes. |
- | **Block unauthorized PLC stop** | PLC stop (downtime). |
- | **Block malware related alerts** | Blocking of the industrial malware attempts (TRITON, NotPetya, etc.). |
- | **Block unauthorized scanning** | Unauthorized scanning (potential reconnaissance) |
+ - **Block illegal function codes**: Protocol violations - Illegal field value violating ICS protocol specification (potential exploit)
+ - **Block unauthorized PLC programming / firmware updates**: Unauthorized PLC changes
+ - **Block unauthorized PLC stop**: PLC stop (downtime)
+ - **Block malware related alerts**: Blocking of industrial malware attempts, such as TRITON or NotPetya
+ - **Block unauthorized scanning**: Unauthorized scanning (potential reconnaissance)
-1. Select **Save**.
+For more information, see [Forward on-premises OT alert information](how-to-forward-alert-information-to-partners.md).
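Before saving the rule, you can optionally confirm that the API key works with a direct REST call. The following is a minimal PowerShell sketch, assuming a FortiOS REST API that accepts bearer-token authentication; the host and key values are placeholders:

```powershell
# A minimal verification sketch. Placeholder values: replace the host and API key.
$fortiGateHost = "fortigate.contoso.com"
$apiKey        = "<api-key-created-in-fortigate>"

# List firewall address objects; a successful response means the key has REST API access
Invoke-RestMethod -Method Get -Uri "https://$fortiGateHost/api/v2/cmdb/firewall/address" `
    -Headers @{ Authorization = "Bearer $apiKey" }
```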
## Block the source of suspicious alerts
defender-for-iot Tutorial Palo Alto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-palo-alto.md
Title: Integrate Palo Alto with Microsoft Defender for IoT description: Defender for IoT has integrated its continuous ICS threat monitoring platform with Palo Alto's next-generation firewalls to enable blocking of critical threats, faster and more efficiently. Previously updated : 01/01/2023 Last updated : 09/06/2023
-# Integrate Palo-Alto with Microsoft Defender for IoT
+# Integrate Palo Alto with Microsoft Defender for IoT
-This article helps you learn how to integrate and use Palo Alto with Microsoft Defender for IoT.
+This article describes how to integrate Palo Alto with Microsoft Defender for IoT, in order to view both Palo Alto and Defender for IoT information in a single place, or use Defender for IoT data to configure blocking actions in Palo Alto.
-Defender for IoT has integrated its continuous ICS threat monitoring platform with Palo Alto's next-generation firewalls to enable blocking of critical threats, faster and more efficiently.
+Viewing both Defender for IoT and Palo Alto information together provides SOC analysts with multidimensional visibility so that they can block critical threats faster.
-The following integration types are available:
+## Cloud-based integrations
-- Automatic blocking option: Direct Defender for IoT to Palo Alto integration.
-
-- Send recommendations for blocking to the central management system: Defender for IoT to Panorama integration.
-
-In this article, you learn how to:
-
-> [!div class="checklist"]
+> [!TIP]
+> Cloud-based security integrations provide several benefits over on-premises solutions, such as centralized, simpler sensor management and centralized security monitoring.
+>
+> Other benefits include real-time monitoring, efficient resource use, increased scalability and robustness, improved protection against security threats, simplified maintenance and updates, and seamless integration with third-party solutions.
>
-> - Configure immediate blocking by a specified Palo Alto firewall
-> - Create Panorama blocking policies in Defender for IoT
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-
-## Prerequisites
-
-Before you begin, make sure that you have the following prerequisites:
-
-- Confirmation by the Panorama Administrator to allow automatic blocking.
-- Access to a Defender for IoT OT sensor as an Admin user. For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md).
-
-## Configure immediate blocking by a specified Palo Alto firewall
-
-In cases, such as malware-related alerts, you can enable automatic blocking. Defender for IoT forwarding rules are utilized to send a blocking command directly to a specific Palo Alto firewall.
-
-Forwarding alert rules run only on alerts triggered after the forwarding rule is created. Alerts already in the system from before the forwarding rule was created aren't affected by the rule.
-
-When Defender for IoT identifies a critical threat, it sends an alert that includes an option of blocking the infected source. Selecting **Block Source** in the alertΓÇÖs details activates the forwarding rule, which sends the blocking command to the specified Palo Alto firewall.
-
-**To configure immediate blocking**:
-
-1. Sign in to the sensor, and select **Forwarding**.
-
-1. Select **Create new rule**.
-
-1. In the **Add forwarding rule** pane, define the rule parameters:
- :::image type="content" source="media/tutorial-palo-alto/forwarding-rule.png" alt-text="Screenshot of creating the rules for your forwarding rule." lightbox="media/tutorial-palo-alto/forwarding-rule.png":::
+If you're integrating a cloud-connected OT sensor with Palo Alto we recommend that you connect Defender for IoT to [Microsoft Sentinel](concept-sentinel-integration.md).
- | Parameter | Description |
- |--|--|
- | **Rule name** | The forwarding rule name. |
- | **Minimal alert level** | The minimal security level incident to forward. For example, if Minor is selected, minor alerts and any alert above this severity level will be forwarded. |
- | **Any protocol detected** | Toggle off to select the protocols you want to include in the rule. |
- | **Traffic detected by any engine** | Toggle off to select the traffic you want to include in the rule. |
+Install one or more of the following solutions to view both Palo Alto and Defender for IoT data in Microsoft Sentinel.
-1. In the **Actions** area, set the following parameters:
+|Microsoft Sentinel solution |Learn more |
+|||
+|[Palo Alto PAN-OS Solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-paloaltopanos?tab=Overview) | [Palo Alto Networks (Firewall) connector for Microsoft Sentinel](/azure/sentinel/data-connectors/palo-alto-networks-firewall) |
+|[Palo Alto Networks Cortex Data Lake Solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-paloaltocdl?tab=Overview) | [Palo Alto Networks Cortex Data Lake (CDL) connector for Microsoft Sentinel](/azure/sentinel/data-connectors/palo-alto-networks-cortex-data-lake-cdl) |
+|[Palo Alto Prisma Cloud CSPM solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-paloaltoprisma?tab=Overview) | [Palo Alto Prisma Cloud CSPM (using Azure Function) connector for Microsoft Sentinel](/azure/sentinel/data-connectors/palo-alto-prisma-cloud-cspm-using-azure-function) |
- | Parameter | Description |
- |--|--|
- | **Server** | Select Palo Alto NGFW. |
- | **Host** | Enter the NGFW server IP address. |
- | **Port** | Enter the NGFW server port. |
- | **Username** | Enter the NGFW server username. |
- | **Password** | Enter the NGFW server password. |
+Microsoft Sentinel is a scalable cloud service for security information and event management (SIEM) and security orchestration, automation, and response (SOAR). SOC teams can use the integration between Microsoft Defender for IoT and Microsoft Sentinel to collect data across networks, detect and investigate threats, and respond to incidents.
-1. Configure the following options to allow blocking of the suspicious sources by the Palo Alto firewall:
+In Microsoft Sentinel, the Defender for IoT data connector and solution brings out-of-the-box security content to SOC teams, helping them to view, analyze, and respond to OT security alerts, and understand the generated incidents in the broader organizational threat context.
- | Parameter | Description |
- |--|--|
- | **Block illegal function codes** | Protocol violations - Illegal field value violating ICS protocol specification (potential exploit). |
- | **Block unauthorized PLC programming / firmware updates** | Unauthorized PLC changes. |
- | **Block unauthorized PLC stop** | PLC stop (downtime). |
- | **Block malware related alerts** | Blocking of industrial malware attempts (TRITON, NotPetya, etc.). <br><br> You can select the option of **Automatic blocking**. <br> In that case, the blocking is executed automatically and immediately. |
- | **Block unauthorized scanning** | Unauthorized scanning (potential reconnaissance). |
+For more information, see:
-1. Select **Save**.
+- [Tutorial: Connect Microsoft Defender for IoT with Microsoft Sentinel](iot-solution.md)
+- [Tutorial: Investigate and detect threats for IoT devices](iot-advanced-threat-monitoring.md)
-You'll then need to block any suspicious source.
+## On-premises integrations
-**To block a suspicious source**:
+If you're working with an air-gapped, locally managed OT sensor, you'll need an on-premises solution to view Defender for IoT and Palo Alto information in the same place.
-1. Navigate to the **Alerts** page, and select the alert related to the Palo Alto integration.
+In such cases, we recommend that you configure your OT sensor to send syslog files directly to Palo Alto, or use Defender for IoT's built-in API.
-1. To automatically block the suspicious source, select **Block Source**.
+For more information, see:
-1. In the **Please Confirm** dialog box, select **OK**.
+- [Forward on-premises OT alert information](how-to-forward-alert-information-to-partners.md)
+- [Defender for IoT API reference](references-work-with-defender-for-iot-apis.md)
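For example, the following PowerShell sketch pulls alerts from a locally managed OT sensor through the built-in API, so they can be correlated with Palo Alto data in your own tooling. The sensor address and access token are placeholder assumptions; see the API reference above for the full authentication details:

```powershell
# A minimal sketch. Placeholder values: replace the sensor address and access token.
$sensorHost  = "https://sensor.contoso.local"
$accessToken = "<sensor-api-access-token>"

# Retrieve the sensor's alerts; the token goes in the Authorization header
$alerts = Invoke-RestMethod -Method Get -Uri "$sensorHost/api/v1/alerts" `
    -Headers @{ Authorization = $accessToken }

# Inspect the first few records before wiring them into your own tooling
$alerts | Select-Object -First 5
```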
-The suspicious source is now blocked by the Palo Alto firewall.
+## On-premises integration (legacy)
-## Create Panorama blocking policies in Defender for IoT
+This section describes how to integrate and use Palo Alto with Microsoft Defender for IoT using the legacy, on-premises integration, which automatically creates new policies in the Palo Alto Networks NMS and Panorama.
-Defender for IoT and Palo Alto Network's integration automatically creates new policies in the Palo Alto Network's NMS and Panorama.
+> [!IMPORTANT]
+> The legacy Palo Alto Panorama integration is supported through October 2024 using sensor version 23.1.3, and won't be supported in upcoming major software versions. For customers using the legacy integration, we recommend moving to one of the following methods:
+>
+> - If you're integrating your security solution with cloud-based systems, we recommend that you use data connectors through [Microsoft Sentinel](#cloud-based-integrations).
+> - For on-premises integrations, we recommend that you either configure your OT sensor to [forward syslog events, or use Defender for IoT APIs](#on-premises-integrations).
+>
-This table shows which incidents this integration is intended for:
+The following table shows which incidents this integration is intended for:
| Incident type | Description |
|--|--|
This table shows which incidents this integration is intended for:
|**Protocol Violation** | A packet structure or field value that violates the protocol specification. This alert can represent a misconfigured application, or a malicious attempt to compromise the device. For example, causing a buffer overflow condition in the target device. |
|**PLC Stop** | A command that causes the device to stop functioning, thereby risking the physical process that is being controlled by the PLC. |
|**Industrial malware found in the ICS network** | Malware that manipulates ICS devices using their native protocols, such as TRITON and Industroyer. Defender for IoT also detects IT malware that has moved laterally into the ICS and SCADA environment. For example, Conficker, WannaCry, and NotPetya. |
-|**Scanning malware** | Reconnaissance tools that collect data about system configuration in a pre-attack phase. For example, the Havex Trojan scans industrial networks for devices using OPC, which is a standard protocol used by Windows-based SCADA systems to communicate with ICS devices. |
+|**Scanning malware** | Reconnaissance tools that collect data about system configuration in a preattack phase. For example, the Havex Trojan scans industrial networks for devices using OPC, which is a standard protocol used by Windows-based SCADA systems to communicate with ICS devices. |
-When Defender for IoT detects a pre-configured use case, the **Block Source** button is added to the alert. Then, when the Defender for IoT user selects the **Block Source** button, Defender for IoT creates policies on Panorama by sending the predefined forwarding rule.
+When Defender for IoT detects a preconfigured use case, the **Block Source** button is added to the alert. Then, when the Defender for IoT user selects the **Block Source** button, Defender for IoT creates policies on Panorama by sending the predefined forwarding rule.
The policy is applied only when the Panorama administrator pushes it to the relevant NGFW in the network.
-In IT networks, there may be dynamic IP addresses. Therefore, for those subnets, the policy must be based on FQDN (DNS name) and not the IP address. Defender for IoT performs reverse lookup and matches devices with dynamic IP address to their FQDN (DNS name) every configured number of hours.
+In IT networks, there might be dynamic IP addresses. Therefore, for those subnets, the policy must be based on FQDN (DNS name) and not the IP address. Defender for IoT performs reverse lookup and matches devices with dynamic IP address to their FQDN (DNS name) every configured number of hours.
+
+In addition, Defender for IoT sends an email to the relevant Panorama user to notify them that a new policy created by Defender for IoT is waiting for approval. The figure below presents the Defender for IoT and Panorama integration architecture:
++
+### Prerequisites
-In addition, Defender for IoT sends an email to the relevant Panorama user to notify that a new policy created by Defender for IoT is waiting for the approval. The figure below presents the Defender for IoT and Panorama integration architecture.
+Before you begin, make sure that you have the following prerequisites:
+
+- Confirmation by the Panorama Administrator to allow automatic blocking.
+- Access to a Defender for IoT OT sensor as an [Admin user](roles-on-premises.md).
+### Configure DNS lookup
The first step in creating Panorama blocking policies in Defender for IoT is to configure DNS lookup.
The first step in creating Panorama blocking policies in Defender for IoT is to
1. Select **Save**.
-## Block suspicious traffic with the Palo Alto firewall
+When you're done, continue by creating forwarding rules as needed:
+
+- [Configure immediate blocking by a specified Palo Alto firewall](#configure-immediate-blocking-by-a-specified-palo-alto-firewall)
+- [Block suspicious traffic with the Palo Alto firewall](#block-suspicious-traffic-with-the-palo-alto-firewall)
-Suspicious traffic needs to be blocked with the Palo Alto firewall. You can block suspicious traffic through the use forwarding rules in Defender for IoT.
+### Configure immediate blocking by a specified Palo Alto firewall
-Forwarding alert rules run only on alerts triggered after the forwarding rule is created. Alerts already in the system from before the forwarding rule was created aren't affected by the rule.
+Configure automatic blocking in cases such as malware-related alerts, by configuring a Defender for IoT forwarding rule to send a blocking command directly to a specific Palo Alto firewall.
-1. Sign in to the sensor, and select **Forwarding**.
+When Defender for IoT identifies a critical threat, it sends an alert that includes an option of blocking the infected source. Selecting **Block Source** in the alert's details activates the forwarding rule, which sends the blocking command to the specified Palo Alto firewall.
-1. Select **Create new rule**.
+When creating your forwarding rule:
-1. In the **Add forwarding rule** pane, define the rule parameters:
+1. In the **Actions** area, define the server, host, port, and credentials for the Palo Alto NGFW.
- :::image type="content" source="media/tutorial-palo-alto/edit.png" alt-text="Screenshot of creating the rules for your Palo Alto Panorama forwarding rule." lightbox="media/tutorial-palo-alto/forwarding-rule.png":::
+1. Configure the following options to allow blocking of the suspicious sources by the Palo Alto firewall:
| Parameter | Description |
|--|--|
- | **Rule name** | The forwarding rule name. |
- | **Minimal alert level** | The minimal security level incident to forward. For example, if Minor is selected, minor alerts and any alert above this severity level will be forwarded. |
- | **Any protocol detected** | Toggle off to select the protocols you want to include in the rule. |
- | **Traffic detected by any engine** | Toggle off to select the traffic you want to include in the rule. |
+ | **Block illegal function codes** | Protocol violations - Illegal field value violating ICS protocol specification (potential exploit). |
+ | **Block unauthorized PLC programming / firmware updates** | Unauthorized PLC changes. |
+ | **Block unauthorized PLC stop** | PLC stop (downtime). |
+ | **Block malware related alerts** | Blocking of industrial malware attempts (TRITON, NotPetya, etc.). <br><br> You can select the option of **Automatic blocking**. <br> In that case, the blocking is executed automatically and immediately. |
+ | **Block unauthorized scanning** | Unauthorized scanning (potential reconnaissance). |
-1. In the **Actions** area, set the following parameters:
+For more information, see [Forward on-premises OT alert information](how-to-forward-alert-information-to-partners.md).
- | Parameter | Description |
- |--|--|
- | **Server** | Select Palo Alto NGFW. |
- | **Host** | Enter the NGFW server IP address. |
- | **Port** | Enter the NGFW server port. |
- | **Username** | Enter the NGFW server username. |
- | **Password** | Enter the NGFW server password. |
- | **Report Addresses** | Define how the blocking is executed, as follows: <br><br> - **By IP Address**: Always creates blocking policies on Panorama based on the IP address. <br> - **By FQDN or IP Address**: Creates blocking policies on Panorama based on FQDN if it exists, otherwise by the IP Address. |
- | **Email** | Set the email address for the policy notification email. |
+### Block suspicious traffic with the Palo Alto firewall
+
+Configure a Defender for IoT forwarding rule to block suspicious traffic with the Palo Alto firewall.
+
+When creating your forwarding rule:
+
+1. In the **Actions** area, define the server, host, port, and credentials for the Palo Alto NGFW.
+
+1. Define how the blocking is executed, as follows:
+
+ - **By IP Address**: Always creates blocking policies on Panorama based on the IP address.
+ - **By FQDN or IP Address**: Creates blocking policies on Panorama based on FQDN if it exists, otherwise by the IP Address.
+
+1. In the **Email** field, enter the email address for the policy notification email.
> [!NOTE]
> Make sure you have configured a Mail Server in Defender for IoT. If no email address is entered, Defender for IoT does not send a notification email.
Forwarding alert rules run only on alerts triggered after the forwarding rule is
| **Block malware related alerts** | Blocking of industrial malware attempts (TRITON, NotPetya, etc.). <br><br> You can select the option of **Automatic blocking**. <br> In that case, the blocking is executed automatically and immediately. |
| **Block unauthorized scanning** | Unauthorized scanning (potential reconnaissance). |
-1. Select **Save**.
+For more information, see [Forward on-premises OT alert information](how-to-forward-alert-information-to-partners.md).
-You'll then need to block any suspicious source.
+### Block specific suspicious sources
-**To block a suspicious source**:
+After you've created your forwarding rule, use the following steps to block specific, suspicious sources:
-1. Navigate to the **Alerts** page, and select the alert related to the Palo Alto integration.
+1. In the OT sensor's **Alerts** page, locate and select the alert related to the Palo Alto integration.
1. To automatically block the suspicious source, select **Block Source**.
-1. Select **OK**.
+1. In the **Please Confirm** dialog box, select **OK**.
+
+The suspicious source is now blocked by the Palo Alto firewall.
## Next step
defender-for-iot Tutorial Qradar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-qradar.md
A **QID** is a QRadar event identifier. Since all Defender for IoT reports are t
Create a forwarding rule from your on-premises management console to forward alerts to QRadar.
-Forwarding alert rules run only on alerts triggered after the forwarding rule is created. Alerts already in the system from before the forwarding rule was created aren't affected by the rule.
+Forwarding alert rules run only on alerts triggered after the forwarding rule is created. The rule doesn't affect any alerts already in the system from before the forwarding rule was created.
-**To create a QRadar forwarding rule**:
+The following code is an example of a payload sent to QRadar:
-1. Sign in to the on-premises management console and select **Forwarding**.
-
-1. Select the **+** to create a new rule.
-
-1. In the **Create Forwarding Rule** pane, define the following values:
-
- | Parameter | Description |
- |--|--|
- | **Name** | Enter a meaningful name for the forwarding rule. |
- | **Warning** | From the drop-down menu, select the minimal security level incident to forward. <br> For example, if **Minor** is selected, minor alerts and any alert above this severity level will be forwarded.|
- | **Protocols** | To select a specific protocol, select **Specific**, and select the protocol for which this rule is applied. <br> By default, all the protocols are selected. |
- | **Engines** | To select a specific security engine for which this rule is applied, select **Specific**, and select the engine. <br> By default, all the security engines are involved. |
- | **System Notifications** | Forward the sensor's *online* and *offline* status. |
- | **Alert Notifications** | Forward the sensor's alerts. |
+```sample payload
+<9>May 5 12:29:23 sensor_Agent LEEF:1.0|CyberX|CyberX platform|2.5.0|CyberX platform Alert|devTime=May 05 2019 15:28:54 devTimeFormat=MMM dd yyyy HH:mm:ss sev=2 cat=XSense Alerts title=Device is Suspected to be Disconnected (Unresponsive) score=81 reporter=192.168.219.50 rta=0 alertId=6 engine=Operational senderName=sensor Agent UUID=5-1557059334000 site=Site zone=Zone actions=handle dst=192.168.2.2 dstName=192.168.2.2 msg=Device 192.168.2.2 is suspected to be disconnected (unresponsive).
+```
-1. In the **Actions** area, select **Add**, and then select **Qradar**. For example:
+When configuring the forwarding rule:
- :::image type="content" source="media/tutorial-qradar/create.png" alt-text="Screenshot of the Create a Forwarding Rule window." lightbox="media/tutorial-qradar/create.png":::
+1. In the **Actions** area, select **Qradar**.
-1. Define the QRadar **Host**, **Port**, and **Timezone**. You can also choose to **Enable Encryption** and then **CONFIGURE ENCRYPTION**, and you can choose to **Manage alerts externally**.
+1. Enter details for the QRadar host, port, and timezone.
-1. Select **SAVE**.
+1. Optionally, select **Enable Encryption** and then **CONFIGURE ENCRYPTION**, and/or select **Manage alerts externally**.
-The following is an example of a payload sent to QRadar:
+For more information, see [Forward on-premises OT alert information](how-to-forward-alert-information-to-partners.md).
-```sample payload
-<9>May 5 12:29:23 sensor_Agent LEEF:1.0|CyberX|CyberX platform|2.5.0|CyberX platform Alert|devTime=May 05 2019 15:28:54 devTimeFormat=MMM dd yyyy HH:mm:ss sev=2 cat=XSense Alerts title=Device is Suspected to be Disconnected (Unresponsive) score=81 reporter=192.168.219.50 rta=0 alertId=6 engine=Operational senderName=sensor Agent UUID=5-1557059334000 site=Site zone=Zone actions=handle dst=192.168.2.2 dstName=192.168.2.2 msg=Device 192.168.2.2 is suspected to be disconnected (unresponsive).
-```
## Map notifications to QRadar
For example:
| Parameter | Description |
|--|--|
- | **New Property** | Choose from the list below: <br><br> - Sensor Alert Description <br> - Sensor Alert ID <br> - Sensor Alert Score <br> - Sensor Alert Title <br> - Sensor Destination Name <br> - Sensor Direct Redirect <br> - Sensor Sender IP <br> - Sensor Sender Name <br> - Sensor Alert Engine <br> - Sensor Source Device Name |
+ | **New Property** | One of the following: <br><br> - Sensor Alert Description <br> - Sensor Alert ID <br> - Sensor Alert Score <br> - Sensor Alert Title <br> - Sensor Destination Name <br> - Sensor Direct Redirect <br> - Sensor Sender IP <br> - Sensor Sender Name <br> - Sensor Alert Engine <br> - Sensor Source Device Name |
| **Optimize Parsing** | Check on. |
| **Field Type** | `AlphaNumeric` |
| **Enabled** | Check on. |
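QRadar extracts each custom property from the LEEF payload with a regular expression. As a rough illustration only, and not the exact expression QRadar generates, the following PowerShell sketch tests a `title=` extraction against a fragment of the sample payload shown earlier:

```powershell
# Illustrative only: a fragment of the sample payload shown earlier in this article
$payload = 'sev=2 title=Device is Suspected to be Disconnected (Unresponsive) score=81 alertId=6'

# Capture everything after 'title=' up to the next 'key=' token
if ($payload -match 'title=(.+?)(?=\s\w+=|$)') {
    $Matches[1]   # Device is Suspected to be Disconnected (Unresponsive)
}
```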
defender-for-iot Tutorial Splunk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-splunk.md
Title: Integrate Splunk with Microsoft Defender for IoT
-description: In this tutorial, learn how to integrate Splunk with Microsoft Defender for IoT.
- Previously updated : 02/07/2022
+description: This article describes how to integrate Splunk with Microsoft Defender for IoT for multidimensional visibility across OT protocols and IIoT devices.
+ Last updated : 09/06/2023 # Integrate Splunk with Microsoft Defender for IoT
-This article helps you learn how to integrate, and use Splunk with Microsoft Defender for IoT.
+This article describes how to integrate Splunk with Microsoft Defender for IoT, in order to view both Splunk and Defender for IoT information in a single place.
-Defender for IoT mitigates IIoT, ICS, and SCADA risk with patented, ICS-aware self-learning engines that deliver immediate insights about ICS devices, vulnerabilities, and threats in less than an hour and without relying on agents, rules or signatures, specialized skills, or prior knowledge of the environment.
+Viewing both Defender for IoT and Splunk information together provides SOC analysts with multidimensional visibility into the specialized OT protocols and IIoT devices deployed in industrial environments, along with ICS-aware behavioral analytics to rapidly detect suspicious or anomalous behavior.
-To address a lack of visibility into the security and resiliency of OT networks, Defender for IoT developed the Defender for IoT, IIoT, and ICS threat monitoring application for Splunk, a native integration between Defender for IoT and Splunk that enables a unified approach to IT and OT security.
+## Cloud-based integrations
-The application provides SOC analysts with multidimensional visibility into the specialized OT protocols and IIoT devices deployed in industrial environments, along with ICS-aware behavioral analytics to rapidly detect suspicious or anomalous behavior. The application also enables both IT, and OT incident response from within one corporate SOC. This is an important evolution given the ongoing convergence of IT and OT to support new IIoT initiatives, such as smart machines and real-time intelligence.
+> [!TIP]
+> Cloud-based security integrations provide several benefits over on-premises solutions, such as centralized, simpler sensor management and centralized security monitoring.
+>
+> Other benefits include real-time monitoring, efficient resource use, increased scalability and robustness, improved protection against security threats, simplified maintenance and updates, and seamless integration with third-party solutions.
+>
-The Splunk application can be installed locally ('Splunk Enterprise') or run on a cloud ('Splunk Cloud'). The Splunk integration along with Defender for IoT supports 'Splunk Enterprise' only.
+If you're integrating a cloud-connected OT sensor with Splunk, we recommend that you use Splunk's own [OT Security Add-on for Splunk](https://apps.splunk.com/app/5151). For more information, see:
-> [!NOTE]
-> Microsoft Defender for IoT was formally known as [CyberX](https://blogs.microsoft.com/blog/2020/06/22/microsoft-acquires-cyberx-to-accelerate-and-secure-customers-iot-deployments/). References to CyberX refer to Defender for IoT.
+- [The Splunk documentation on installing add-ins](https://docs.splunk.com/Documentation/AddOns/released/Overview/Distributedinstall)
+- [The Splunk documentation on the OT Security Add-on for Splunk](https://splunk.github.io/ot-security-solution/integrationguide/)
-In this article, you learn how to:
-> [!div class="checklist"]
->
-> - Download the Defender for IoT application in Splunk
-> - Send Defender for IoT alerts to Splunk
+## On-premises integrations
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+If you're working with an air-gapped, locally managed OT sensor, you need an on-premises solution to view Defender for IoT and Splunk information in the same place.
+
+In such cases, we recommend that you configure your OT sensor to send syslog files directly to Splunk, or use Defender for IoT's built-in API.
+
+For more information, see:
+
+- [Forward on-premises OT alert information](how-to-forward-alert-information-to-partners.md)
+- [Defender for IoT API reference](references-work-with-defender-for-iot-apis.md)
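Before configuring the sensor, you can optionally confirm that your Splunk instance is listening for forwarded syslog by pushing a hand-crafted test message. The following is a minimal PowerShell sketch, assuming Splunk has a UDP syslog input enabled on port 514; the host value is a placeholder:

```powershell
# A minimal connectivity sketch. Placeholder value: replace the Splunk host.
# Assumes: Splunk is configured with a UDP syslog input on port 514.
$splunkHost = "splunk.contoso.com"
$message    = "<14>$(Get-Date -Format 'MMM dd HH:mm:ss') sensor-test Defender for IoT forwarding test"

$udpClient = New-Object System.Net.Sockets.UdpClient
$bytes     = [System.Text.Encoding]::ASCII.GetBytes($message)
$udpClient.Send($bytes, $bytes.Length, $splunkHost, 514) | Out-Null
$udpClient.Close()
```

If the test message appears in a Splunk search, the input is ready to receive forwarded alerts.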
-## Prerequisites
-Before you begin, make sure that you have the following prerequisites:
-### Version requirements
+## On-premises integration (legacy)
-The following versions are required for the application to run.
+This section describes how to integrate Defender for IoT and Splunk using the legacy, on-premises integration.
-- Defender for IoT version 2.4 and above.
+> [!IMPORTANT]
+> The legacy Splunk integration is supported through October 2024 using sensor version 23.1.3, and won't be supported in upcoming major software versions. For customers using the legacy integration, we recommend moving to one of the following methods:
+>
+> - If you're integrating your security solution with cloud-based systems, we recommend that you use the [OT Security Add-on for Splunk](#cloud-based-integrations).
+> - For on-premises integrations, we recommend that you either configure your OT sensor to [forward syslog events, or use Defender for IoT APIs](#on-premises-integrations).
-- Splunkbase version 11 and above.
+Microsoft Defender for IoT was formerly known as [CyberX](https://blogs.microsoft.com/blog/2020/06/22/microsoft-acquires-cyberx-to-accelerate-and-secure-customers-iot-deployments/). References to CyberX refer to Defender for IoT.
-- Splunk Enterprise version 7.2 and above.
+### Prerequisites
-### Permission requirements
+Before you begin, make sure that you have the following prerequisites:
-Make sure you have:
+|Prerequisites |Description |
+|||
+|**Version requirements** | The following versions are required for the application to run: <br>- Defender for IoT version 2.4 and above. <br>- Splunkbase version 11 and above. <br>- Splunk Enterprise version 7.2 and above. |
+|**Permission requirements** | Make sure you have: <br>- Access to a Defender for IoT OT sensor as an [Admin user](roles-on-premises.md). <br>- Splunk user with an *Admin* level user role. |
-- Access to a Defender for IoT OT sensor as an Admin user. For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md).
-- Splunk user with an *Admin* level user role.
+> [!NOTE]
+> The Splunk application can be installed locally ('Splunk Enterprise') or run on a cloud ('Splunk Cloud'). The Splunk integration with Defender for IoT supports 'Splunk Enterprise' only.
+>
-## Download the Defender for IoT application in Splunk
+### Download the Defender for IoT application in Splunk
-To access the Defender for IoT application within Splunk, you need to download the application form the Splunkbase application store.
+To access the Defender for IoT application within Splunk, you need to download the application from the Splunkbase application store.
**To access the Defender for IoT application in Splunk**:
To access the Defender for IoT application within Splunk, you need to download t
1. Select the **LOGIN TO DOWNLOAD** button.
-## Send Defender for IoT alerts to Splunk
-
-The Defender for IoT alerts provide information about an extensive range of security events. These events include:
-
-- Deviations from the learned baseline network activity.
-
-- Malware detections.
-
-- Detections based on suspicious operational changes.
-
-- Network anomalies.
-
-- Protocol deviations from protocol specifications.
-
-You can also configure Defender for IoT to send alerts to the Splunk server, where alert information is displayed in the Splunk Enterprise dashboard.
--
-To send alert information to the Splunk servers from Defender for IoT, you need to create a Forwarding Rule.
-
-Forwarding alert rules run only on alerts triggered after the forwarding rule is created. Alerts already in the system from before the forwarding rule was created aren't affected by the rule.
-
-**To create the forwarding rule**:
-
-1. Sign in to the sensor, and select **Forwarding**.
-
-1. Select **Create new rule**.
-
-1. In the **Add forwarding rule** pane, define the rule parameters:
-
- :::image type="content" source="media/tutorial-splunk/forwarding-rule.png" alt-text="Screenshot of creating the rules for your forwarding rule." lightbox="media/tutorial-splunk/forwarding-rule.png":::
-
- | Parameter | Description |
- |--|--|
- | **Rule name** | The forwarding rule name. |
- | **Minimal alert level** | The minimal security level incident to forward. For example, if Minor is selected, minor alerts and any alert above this severity level will be forwarded. |
- | **Any protocol detected** | Toggle off to select the protocols you want to include in the rule. |
- | **Traffic detected by any engine** | Toggle off to select the traffic you want to include in the rule. |
-
-1. In the **Actions** area, define the following values:
-
- | Parameter | Description |
- |--|--|
- | **Server** | Select Splunk Server. |
- | **Host** | Enter the Splunk server address. |
- | **Port** | Enter 8089. |
- | **Username** | Enter the Splunk server username. |
- | **Password** | Enter the Splunk server password. |
-
-1. Select **Save**.
- ## Next steps > [!div class="nextstepaction"]
defender-for-iot Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md
Title: What's new in Microsoft Defender for IoT
-description: This article describes features available in Microsoft Defender for IoT, across both OT and Enterprise IoT networks, and both on-premises and in the Azure portal.
+description: This article describes new features available in Microsoft Defender for IoT, including both OT and Enterprise IoT networks, and both on-premises and in the Azure portal.
Previously updated : 09/14/2023 Last updated : 10/23/2023
Features released earlier than nine months ago are described in the [What's new
> Noted features listed below are in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. >
+## November 2023
+
+|Service area |Updates |
+|||
+| **OT networks** | [Updated security stack integration guidance](#updated-security-stack-integration-guidance)|
+
+### Updated security stack integration guidance
+
+Defender for IoT is refreshing its security stack integrations to improve the overall robustness, scalability, and ease of maintenance of various security solutions.
+
+If you're integrating your security solution with cloud-based systems, we recommend that you use data connectors through [Microsoft Sentinel](concept-sentinel-integration.md). For on-premises integrations, we recommend that you either configure your OT sensor to [forward syslog events](how-to-forward-alert-information-to-partners.md), or use [Defender for IoT APIs](references-work-with-defender-for-iot-apis.md).
+
+The legacy Aruba ClearPass, Palo Alto Panorama, and Splunk integrations are supported through October 2024 using sensor version 23.1.3, and won't be supported in upcoming major software versions.
+
+For customers using legacy integration methods, we recommend moving your integrations to newly recommended methods. For more information, see:
+
+- [Integrate ClearPass with Microsoft Defender for IoT](tutorial-clearpass.md)
+- [Integrate Palo Alto with Microsoft Defender for IoT](tutorial-palo-alto.md)
+- [Integrate Splunk with Microsoft Defender for IoT](tutorial-splunk.md)
+- [Integrations with Microsoft and partner services](integrate-overview.md)
+ ## September 2023 |Service area |Updates |
For more information, see [Enrich Windows workstation and server data with a loc
### Automatically resolved OS notifications
-After updating your OT sensor to version 22.3.8, no new device notifications for **Operating system changes** are generated. Existing **Operating system changes** notifications are automatically resolved if they aren't dismissed or otherwise handled within 14 days.
+After you've updated your OT sensor to version 22.3.8, no new device notifications for **Operating system changes** are generated. Existing **Operating system changes** notifications are automatically resolved if they aren't dismissed or otherwise handled within 14 days.
For more information, see [Device notification responses](how-to-work-with-the-sensor-device-map.md#device-notification-responses)
For more information, see [Malware engine alerts](alert-engine-messages.md#malwa
Starting in version 22.3.6, selected notifications on the OT sensor's **Device map** page are now automatically resolved if they aren't dismissed or otherwise handled within 14 days.
-After updating your sensor version, the **Inactive devices** and **New OT devices** notifications no longer appear. While any **Inactive devices** notifications that are left over from before the update are automatically dismissed, you may still have legacy **New OT devices** notifications to handle. Handle these notifications as needed to remove them from your sensor.
+After you've updated your sensor version, the **Inactive devices** and **New OT devices** notifications no longer appear. While any **Inactive devices** notifications that are left over from before the update are automatically dismissed, you might still have legacy **New OT devices** notifications to handle. Handle these notifications as needed to remove them from your sensor.
For more information, see [Manage device notifications](how-to-work-with-the-sensor-device-map.md#manage-device-notifications). ### New Microsoft Sentinel incident experience for Defender for IoT
-Microsoft Sentinel's new [incident experience](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/the-new-incident-experience-is-here/ba-p/3717042) includes specific features for Defender for IoT customers. When investigating OT/IoT-related incidents, SOC analysts can now use the following enhancements on incident details pages:
+Microsoft Sentinel's new [incident experience](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/the-new-incident-experience-is-here/ba-p/3717042) includes specific features for Defender for IoT customers. SOC analysts who are investigating OT/IoT-related incidents can now use the following enhancements on incident details pages:
- **View related sites, zones, sensors, and device importance** to better understand an incident's business impact and physical location.
OT network sensors connect to Azure to provide alert and device data and sensor
For OT sensors with software versions 22.x and higher, Defender for IoT now supports increased security when adding outbound allow rules for connections to Azure. Now you can define your outbound allow rules to connect to Azure without using wildcards.
-When defining outbound allow rules to connect to Azure, you need to enable HTTPS traffic to each of the required endpoints on port 443. Outbound allow rules are defined once for all OT sensors onboarded to the same subscription.
+When defining outbound *allow* rules to connect to Azure, you need to enable HTTPS traffic to each of the required endpoints on port 443. Outbound *allow* rules are defined once for all OT sensors onboarded to the same subscription.
For supported sensor versions, download the full list of required secure endpoints from the following locations in the Azure portal:

-- **A successful sensor registration page**: After onboarding a new OT sensor, version 22.x, the successful registration page now provides instructions for next steps, including a link to the endpoints you'll need to add as secure outbound allow rules on your network. Select the **Download endpoint details** link to download the JSON file.
+- **A successful sensor registration page**: After onboarding a new OT sensor with version 22.x, the successful registration page now provides instructions for next steps, including a link to the endpoints you'll need to add as secure outbound allow rules on your network. Select the **Download endpoint details** link to download the JSON file.
For example:
The Enterprise IoT integration with Microsoft Defender for Endpoint is now in Ge
### Same passwords for cyberx_host and cyberx users
-During OT monitoring software installations and updates, the **cyberx** user is assigned a random password. When updating from version 10.x.x to version 22.1.7, the **cyberx_host** password is assigned with an identical password to the **cyberx** user.
+During OT monitoring software installations and updates, the **cyberx** user is assigned a random password. When you update from version 10.x.x to version 22.1.7, the **cyberx_host** password is assigned with an identical password to the **cyberx** user.
For more information, see [Install OT agentless monitoring software](how-to-install-software.md) and [Update Defender for IoT OT monitoring software](update-ot-software.md).
For more information, see [Install OT agentless monitoring software](how-to-inst
Starting in OT sensor versions 22.2.4, you can now take the following actions from the sensor console's **Device inventory** page:

-- **Merge duplicate devices**. You may need to merge devices if the sensor has discovered separate network entities that are associated with a single, unique device. Examples of this scenario might include a PLC with four network cards, a laptop with both WiFi and a physical network card, or a single workstation with multiple network cards.
+- **Merge duplicate devices**. You might need to merge devices if the sensor has discovered separate network entities that are associated with a single, unique device. Examples of this scenario might include a PLC with four network cards, a laptop with both WiFi and a physical network card, or a single workstation with multiple network cards.
- **Delete single devices**. Now, you can delete a single device that hasn't communicated for at least 10 minutes.
Update your **IoT OT Threat Monitoring with Defender for IoT** solution to use t
For more information, see:

-- [Tutorial: Integrate Defender for Iot and Sentinel](../../sentinel/iot-solution.md?tabs=use-out-of-the-box-analytics-rules-recommended)
+- [Tutorial: Integrate Defender for IoT and Sentinel](../../sentinel/iot-solution.md?tabs=use-out-of-the-box-analytics-rules-recommended)
- [View and manage alerts on the Defender for IoT portal (Preview)](how-to-manage-cloud-alerts.md)
- [View alerts on your sensor](how-to-view-alerts.md)
For more information, see [Create custom alert rules on an OT sensor](how-to-acc
### CLI command updates
-The Defender for Iot sensor software installation is now containerized. With the now-containerized sensor, you can use the *cyberx_host* user to investigate issues with other containers or the operating system, or to send files via FTP.
+The Defender for IoT sensor software installation is now containerized. With the now-containerized sensor, you can use the *cyberx_host* user to investigate issues with other containers or the operating system, or to send files via FTP.
This *cyberx_host* user is available by default and connects to the host machine. If you need to, recover the password for the *cyberx_host* user from the **Sites and sensors** page in Defender for IoT.
For more information, see [Defender for IoT installation](how-to-install-softwar
To use all of Defender for IoT's latest features, make sure to update your sensor software versions to 22.1.x.
-If you're on a legacy version, you may need to run a series of updates in order to get to the latest version. You'll also need to update your firewall rules and reactivate your sensor with a new activation file.
+If you're on a legacy version, you might need to run a series of updates in order to get to the latest version. You'll also need to update your firewall rules and reactivate your sensor with a new activation file.
After you've upgraded to version 22.1.x, the new upgrade log can be found at the following path, accessed via SSH and the *cyberx_host* user: `/opt/sensor/logs/legacy-upgrade.log`.
deployment-environments How To Configure Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-catalog.md
Title: Add and configure a catalog
+ Title: Add and configure a catalog hosted in a GitHub or Azure DevOps repository
description: Learn how to add a catalog in your Azure Deployment Environments dev center to provide environment templates for your developers. Catalogs are repositories stored in GitHub or Azure DevOps. Previously updated : 04/25/2023 Last updated : 10/23/2023
Learn how to add and configure a [catalog](./concept-environments-key-concepts.m
You can use a catalog to provide your development teams with a curated set of infrastructure as code (IaC) templates called [environment definitions](./concept-environments-key-concepts.md#environment-definitions). Your catalog is encrypted; Azure Deployment Environments supports encryption at rest with platform-managed encryption keys, which are managed by Microsoft for Azure Services.
-For more information about environment definitions, see [Add and configure an environment definition](./configure-environment-definition.md).
+Deployment Environments supports catalogs hosted in Azure Repos (the repository service in Azure, commonly referred to as Azure DevOps) and catalogs hosted in GitHub. Azure DevOps supports authentication by assigning permissions to a managed identity. Azure DevOps and GitHub both support the use of PATs for authentication. To further secure your templates, the catalog is encrypted; Azure Deployment Environments supports encryption at rest with platform-managed encryption keys, which Microsoft manages for Azure services.
A catalog is a repository that's hosted in [GitHub](https://github.com) or [Azure DevOps](https://dev.azure.com/). - To learn how to host a repository in GitHub, see [Get started with GitHub](https://docs.github.com/get-started). - To learn how to host a Git repository in an Azure DevOps project, see [Azure Repos](https://azure.microsoft.com/services/devops/repos/).
-We offer a [sample catalog](https://aka.ms/deployment-environments/SampleCatalog) that you can use as your repository. You also can use your own private repository, or you can fork and customize the environment definitions in the sample catalog.
+Microsoft offers a [sample catalog](https://aka.ms/deployment-environments/SampleCatalog) that you can use as your repository. You also can use your own private repository, or you can fork and customize the environment definitions in the sample catalog.
In this article, you learn how to: > [!div class="checklist"] >
+> - Configure a managed identity for the dev center.
> - Add a catalog. > - Update a catalog. > - Delete a catalog.
+## Configure a managed identity for the dev center
+
After you create a dev center, before you can attach a catalog, you must configure a [managed identity](concept-environments-key-concepts.md#identities) for the dev center. You can attach either a system-assigned managed identity (system-assigned MSI) or a user-assigned managed identity (user-assigned MSI). You then assign roles to the managed identity to allow the dev center to create environment types in your subscription and read the Azure DevOps project that contains the catalog repo.
+
+If your dev center doesn't have an MSI attached, follow the steps in this article to create and attach one: [Configure a managed identity](how-to-configure-managed-identity.md).
++
+To learn more about managed identities, see: [What are managed identities for Azure resources?](/entra/identity/managed-identities-azure-resources/overview)
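For example, after the identity is attached, you can grant it rights over the subscription so that the dev center can create environment types. The following Az PowerShell sketch uses placeholder values for the principal ID and subscription, and assumes the Contributor role meets your needs:

```powershell
# A minimal sketch. Placeholder values: replace the object ID and subscription ID.
Connect-AzAccount

$principalId    = "<dev-center-managed-identity-object-id>"
$subscriptionId = "<subscription-id>"

# Allow the dev center identity to deploy resources in the subscription
New-AzRoleAssignment -ObjectId $principalId `
    -RoleDefinitionName "Contributor" `
    -Scope "/subscriptions/$subscriptionId"
```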
+ ## Add a catalog
-In Azure Deployment Environments, catalogs help you provide a set of curated IaC templates for your development teams to create environments. You can attach either a GitHub repository or an Azure DevOps repository as a catalog.
+You can add a catalog from an Azure DevOps repository or a GitHub repository. You can choose to authenticate by assigning permissions to an MSI, also called a managed identity, or by using a PAT, which you store in a key vault.
+
+Select the tab for the type of repository and authentication you want to use.
+
+## [Azure DevOps repo with MSI](#tab/DevOpsRepoMSI/)
+
+To add a catalog, you complete these tasks:
+
+- Configure a managed identity for the dev center.
+- Assign roles for the dev center managed identity.
+- Assign permissions in Azure DevOps for the dev center managed identity.
+- Add your repository as a catalog.
+
+### Assign permissions in Azure DevOps for the dev center managed identity
+You must give the dev center managed identity permissions to the repository in Azure DevOps.
+
+1. Sign in to your [Azure DevOps organization](https://dev.azure.com).
+
+1. Select **Organization settings**.
+
+ :::image type="content" source="media/how-to-configure-catalog/devops-organization-settings.png" alt-text="Screenshot showing the Azure DevOps organization page, with Organization Settings highlighted.":::
+
+1. On the **Overview** page, select **Users**.
+
+ :::image type="content" source="media/how-to-configure-catalog/devops-organization-overview.png" alt-text="Screenshot showing the Organization overview page, with Users highlighted.":::
+
+1. On the **Users** page, select **Add users**.
+
+ :::image type="content" source="media/how-to-configure-catalog/devops-add-user.png" alt-text="Screenshot showing the Users page, with Add user highlighted.":::
+
+1. Complete **Add new users** by entering or selecting the following information, and then select **Add**:
+
+ |Name |Value |
+ ||-|
+ |**Users or Service Principals**|Enter the name of your dev center. </br> When you use a system-assigned managed account, specify the name of the dev center, not the object ID of the managed account. When you use a user-assigned managed account, use the name of the managed account. |
+ |**Access level**|Select **Basic**.|
+ |**Add to projects**|Select the project that contains your repository.|
+ |**Azure DevOps Groups**|Select **Project Readers**.|
+ |**Send email invites (to Users only)**|Clear the checkbox.|
+
+ :::image type="content" source="media/how-to-configure-catalog/devops-add-user-blade.png" alt-text="Screenshot showing Add users, with example entries and Add highlighted.":::
+
+## Add a catalog to the dev center
+Azure Deployment Environments supports attaching Azure DevOps repositories and GitHub repositories. You can store a set of curated IaC templates in a repository. Attaching the repository to a dev center as a catalog gives your development teams access to the templates and enables them to quickly create consistent environments.
+
+In this article, you attach an Azure DevOps repository.
+
+### Add a catalog to your dev center
+1. Navigate to your dev center.
+1. In the left menu under **Environment configuration**, select **Catalogs**, and then select **Add**.
+
+ :::image type="content" source="media/how-to-configure-catalog/catalogs-page.png" alt-text="Screenshot that shows the Catalogs pane.":::
+
+1. In **Add catalog**, enter the following information, and then select **Add**:
+
+ | Field | Value |
+ | -- | -- |
+ | **Name** | Enter a name for the catalog. |
+ | **Catalog location** | Select **Azure DevOps**. |
+ | **Authentication type** | Select **Managed Identity**.|
+ | **Organization** | Select your Azure DevOps organization. |
+ | **Project** | From the list of projects, select the project that stores the repo. |
+ | **Repo** | From the list of repos, select the repo you want to add. |
+ | **Branch** | Select the branch. |
+ | **Folder path** | Deployment Environments retrieves a list of folders in your branch. Select the folder that stores your IaC templates. |
+
+ :::image type="content" source="media/how-to-configure-catalog/add-catalog-to-dev-center.png" alt-text="Screenshot showing the add catalog pane with examples entries and Add highlighted.":::
+
+1. In **Catalogs** for the dev center, verify that your catalog appears. If the connection is successful, **Status** is **Connected**. Connecting to a catalog can take a few minutes the first time.
++
+## [Azure DevOps repo with PAT](#tab/DevOpsRepoPAT/)
To add a catalog, you complete these tasks:
- Store the personal access token as a key vault secret in Azure Key Vault.
- Add your repository as a catalog.
-### Get the clone URL for your repository
+### Get the clone URL for your Azure DevOps repository
-You can choose from two types of repositories:
+1. Go to the home page of your team collection (for example, `https://contoso-web-team.visualstudio.com`), and then select your project.
+1. [Get the Azure Repos Git repo clone URL](/azure/devops/repos/git/clone#get-the-clone-url-of-an-azure-repos-git-repo).
+1. Copy and save the URL. You use it later.
+
+### Create a personal access token in Azure DevOps
-- A GitHub repository
-- An Azure DevOps repository
+1. Go to the home page of your team collection (for example, `https://contoso-web-team.visualstudio.com`) and select your project.
+1. Create a [personal access token](/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate#create-a-pat).
+1. Save the generated token. You use the token later.
-#### Get the clone URL of a GitHub repository
+### Create a Key Vault
+You need an Azure Key Vault to store the personal access token (PAT) that is used to grant Azure access to your repository. Key vaults can control access with either access policies or role-based access control (RBAC). If you have an existing key vault, you can use it, but you should check whether it uses access policies or RBAC assignments to control access. For help with configuring an access policy for a key vault, see [Assign a Key Vault access policy](/azure/key-vault/general/assign-access-policy?branch=main&tabs=azure-portal).
-1. Go to the home page of the GitHub repository that contains the template definitions.
-1. [Get the GitHub repo clone URL](/azure/devops/repos/git/clone#get-the-clone-url-of-a-github-repo).
-1. Copy and save the URL. You use it later.
+Use the following steps to create an RBAC key vault:
-#### Get the clone URL of an Azure DevOps repository
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. In the Search box, enter *Key Vault*.
+1. From the results list, select **Key Vault**.
+1. On the Key Vault page, select **Create**.
+1. On the Create key vault tab, provide the following information:
-1. Go to the home page of your team collection (for example, `https://contoso-web-team.visualstudio.com`), and then select your project.
-1. [Get the Git repo clone URL](/azure/devops/repos/git/clone#get-the-clone-url-of-an-azure-repos-git-repo).
-1. Copy and save the URL. You use it later.
+ |Name |Value |
+ |-|--|
+ |**Name**|Enter a name for the key vault.|
+ |**Subscription**|Select the subscription in which you want to create the key vault.|
+ |**Resource group**|Either use an existing resource group or select **Create new** and enter a name for the resource group.|
+ |**Location**|Select the location or region where you want to create the key vault.|
+
+ Leave the other options at their defaults.
+
+1. On the Access policy tab, select **Azure role-based access control**, and then select **Review + create**.
+
+1. On the Review + create tab, select **Create**.
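If you prefer to script this step, the following Azure PowerShell sketch creates an RBAC-enabled key vault. It assumes the Az.KeyVault module; the vault name, resource group, and region are placeholders:

```powershell
# Create a key vault that uses Azure RBAC (instead of access policies) for data-plane access.
New-AzKeyVault -Name 'contoso-kv' `
    -ResourceGroupName '<resource-group>' `
    -Location 'eastus' `
    -EnableRbacAuthorization
```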
+
+### Store the personal access token in the key vault
+
+1. In the Key Vault, on the left menu, select **Secrets**.
+1. On the Secrets page, select **Generate/Import**.
+1. On the Create a secret page:
+ - In the **Name** box, enter a descriptive name for your secret.
+ - In the **Secret value** box, paste the personal access token that you saved earlier.
+ - Select **Create**.
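Equivalently, you can store the token from Azure PowerShell. A minimal sketch, assuming the Az.KeyVault module and placeholder names:

```powershell
# Prompt for the PAT so it never lands in your shell history, then store it as a secret.
$pat = Read-Host -Prompt 'Paste the personal access token' -AsSecureString
Set-AzKeyVaultSecret -VaultName 'contoso-kv' -Name 'ado-repo-pat' -SecretValue $pat
```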
+
+### Get the secret identifier
+
+Get the path to the secret you created in the key vault.
+
+1. In the Azure portal, navigate to your key vault.
+1. On the key vault page, from the left menu, select **Secrets**.
+1. On the Secrets page, select the secret you created earlier.
+1. On the versions page, select the **CURRENT VERSION**.
+1. On the current version page, for the **Secret identifier**, select copy.
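You can also read the identifier with Azure PowerShell and trim the version segment so the catalog always resolves the latest secret. A minimal sketch with placeholder names:

```powershell
$secret = Get-AzKeyVaultSecret -VaultName 'contoso-kv' -Name 'ado-repo-pat'

# $secret.Id is the full identifier ending in the version; drop the last path
# segment to get a version-less secret identifier.
$secretIdentifier = $secret.Id.Substring(0, $secret.Id.LastIndexOf('/'))
$secretIdentifier
```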
+
+### Add your repository as a catalog
+
+1. In the [Azure portal](https://portal.azure.com/), go to your dev center.
+1. Ensure that the [identity](./how-to-configure-managed-identity.md) that's attached to the dev center has [access to the key vault secret](./how-to-configure-managed-identity.md#grant-the-managed-identity-access-to-the-key-vault-secret) where your personal access token is stored.
+1. In the left menu under **Environment configuration**, select **Catalogs**, and then select **Add**.
+1. In **Add catalog**, enter the following information, and then select **Add**:
+
+ | Field | Value |
+ | -- | -- |
+ | **Name** | Enter a name for the catalog. |
+ | **Catalog location** | Select **Azure DevOps**. |
+ | **Authentication type** | Select **Personal Access Token**.|
+ | **Organization** | Select the organization that hosts the catalog repo. |
+ | **Project** | Select the project that stores the catalog repo.|
+ | **Repo** | Select the repo that stores the catalog.|
+ | **Folder path** | Select the folder that holds your IaC templates.|
+ | **Secret identifier**| Enter the secret identifier that contains your personal access token for the repository.<br /> When you copy a secret identifier, the connection string includes a version identifier at the end, like in this example: `https://contoso-kv.vault.azure.net/secrets/GitHub-repo-pat/9376b432b72441a1b9e795695708ea5a`.<br />Removing the version identifier ensures that Deployment Environments fetches the latest version of the secret from the key vault. If your personal access token expires, only the key vault needs to be updated. <br />*Example secret identifier:* `https://contoso-kv.vault.azure.net/secrets/GitHub-repo-pat`|
+
+ :::image type="content" source="media/how-to-configure-catalog/add-devops-catalog-pane.png" alt-text="Screenshot that shows how to add a catalog to a dev center.":::
+
+1. In **Catalogs** for the dev center, verify that your catalog appears. If the connection is successful, the **Status** is **Connected**.
++
+## [GitHub repo with PAT](#tab/GitHubRepoPAT/)
+
+To add a catalog, you complete these tasks:
+
+- Get the clone URL for your repository.
+- Create a personal access token.
+- Store the personal access token as a key vault secret in Azure Key Vault.
+- Add your repository as a catalog.
-### Create a personal access token
+### Get the clone URL of a GitHub repository
-Next, create a personal access token. Depending on the type of repository you use, create a personal access token either in GitHub or in Azure DevOps.
+1. Go to the home page of the GitHub repository that contains the template definitions.
+1. [Get the GitHub repo clone URL](/azure/devops/repos/git/clone#get-the-clone-url-of-a-github-repo).
+1. Copy and save the URL. You use it later.
-#### Create a personal access token in GitHub
+### Create a personal access token in GitHub
1. Go to the home page of the GitHub repository that contains the template definitions.
1. In the upper-right corner of GitHub, select the profile image, and then select **Settings**.
Next, create a personal access token. Depending on the type of repository you us
1. Select **Generate token**.
1. Save the generated token. You use the token later.
-#### Create a personal access token in Azure DevOps
-
-1. Go to the home page of your team collection (for example, `https://contoso-web-team.visualstudio.com`) and select your project.
-1. Create a [personal access token](/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate#create-a-pat).
-1. Save the generated token. You use the token later.
-
-### Store the personal access token as a key vault secret
-Store the personal access token that you generated as a [key vault secret](../key-vault/secrets/about-secrets.md) and copy the secret identifier:
-
-#### Create a Key Vault
-You need an Azure Key Vault to store the GitHub personal access token (PAT) that is used to grant Azure access to your GitHub repository. Key Vaults can control access with either access policies or role-based access control (RBAC). If you have an existing key vault, you can use it, but you should check whether it uses access policies or RBAC assignments to control access. For help with configuring an access policy for a key vault, see [Assign a Key Vault access policy](/azure/key-vault/general/assign-access-policy?branch=main&tabs=azure-portal).
+### Create a Key Vault
+You need an Azure Key Vault to store the personal access token (PAT) that is used to grant Azure access to your repository. Key vaults can control access with either access policies or role-based access control (RBAC). If you have an existing key vault, you can use it, but you should check whether it uses access policies or RBAC assignments to control access. For help with configuring an access policy for a key vault, see [Assign a Key Vault access policy](/azure/key-vault/general/assign-access-policy?branch=main&tabs=azure-portal).
Use the following steps to create an RBAC key vault:
1. On the Review + create tab, select **Create**.
-#### Store the personal access token in the key vault
+### Store the personal access token in the key vault
1. In the Key Vault, on the left menu, select **Secrets**.
1. On the Secrets page, select **Generate/Import**.
- Select **Create**.
-#### Get the secret identifier
+### Get the secret identifier
Get the path to the secret you created in the key vault.
| Field | Value |
| -- | -- |
| **Name** | Enter a name for the catalog. |
- | **Git clone URI** | Enter or paste the [clone URL](#get-the-clone-url-for-your-repository) for either your GitHub repository or your Azure DevOps repository.<br />*Sample catalog example:* `https://github.com/Azure/deployment-environments.git` |
+ | **Catalog location** | Select **GitHub**. |
+ | **Repo** | Enter or paste the clone URL for either your GitHub repository or your Azure DevOps repository.<br />*Sample catalog example:* `https://github.com/Azure/deployment-environments.git` |
| **Branch** | Enter the repository branch to connect to.<br />*Sample catalog example:* `main`|
| **Folder path** | Enter the folder path relative to the clone URI that contains subfolders that hold your environment definitions. <br /> The folder path is for the folder with subfolders containing environment definition manifests, not for the folder with the environment definition manifest itself. The following image shows the sample catalog folder structure.<br />*Sample catalog example:* `/Environments`<br /> :::image type="content" source="media/how-to-configure-catalog/github-folders.png" alt-text="Screenshot showing Environments sample folder in GitHub."::: The folder path can begin with or without a forward slash (`/`).|
- | **Secret identifier**| Enter the [secret identifier](#create-a-personal-access-token) that contains your personal access token for the repository.<br /> When you copy a secret identifier, the connection string includes a version identifier at the end, like in this example: `https://contoso-kv.vault.azure.net/secrets/GitHub-repo-pat/9376b432b72441a1b9e795695708ea5a`.<br />Removing the version identifier ensures that Deployment Environments fetches the latest version of the secret from the key vault. If your personal access token expires, only the key vault needs to be updated. <br />*Example secret identifier:* `https://contoso-kv.vault.azure.net/secrets/GitHub-repo-pat`|
+ | **Secret identifier**| Enter the secret identifier that contains your personal access token for the repository.<br /> When you copy a secret identifier, the connection string includes a version identifier at the end, like in this example: `https://contoso-kv.vault.azure.net/secrets/GitHub-repo-pat/9376b432b72441a1b9e795695708ea5a`.<br />Removing the version identifier ensures that Deployment Environments fetches the latest version of the secret from the key vault. If your personal access token expires, only the key vault needs to be updated. <br />*Example secret identifier:* `https://contoso-kv.vault.azure.net/secrets/GitHub-repo-pat`|
- :::image type="content" source="media/how-to-configure-catalog/add-catalog-form-inline.png" alt-text="Screenshot that shows how to add a catalog to a dev center." lightbox="media/how-to-configure-catalog/add-catalog-form-expanded.png":::
+ :::image type="content" source="media/how-to-configure-catalog/add-github-catalog-pane.png" alt-text="Screenshot that shows how to add a catalog to a dev center." lightbox="media/how-to-configure-catalog/add-github-catalog-pane.png":::
1. In **Catalogs** for the dev center, verify that your catalog appears. If the connection is successful, **Status** is **Connected**.

## Update a catalog

If you update the Azure Resource Manager template (ARM template) contents or definition in the attached repository, you can provide the latest set of environment definitions to your development teams by syncing the catalog.
An ignored environment definition error occurs if you add two or more environmen
An invalid environment definition error might occur for various reasons:

-- **Manifest schema errors**. Ensure that your environment definition manifest matches the [required schema](./configure-environment-definition.md#add-an-environment-definition).
+- **Manifest schema errors**. Ensure that your environment definition manifest matches the [required schema](configure-environment-definition.md#add-an-environment-definition).
- **Validation errors**. Check the following items to resolve validation errors:
- **Reference errors**. Ensure that the template path that the manifest references is a valid relative path to a file in the repository.
-## Next steps
+## Related content
-- Learn how to [create and configure a project](./quickstart-create-and-configure-projects.md).
-- Learn how to [create and configure a project environment type](how-to-configure-project-environment-types.md).
+- [Configure environment types for a dev center](how-to-configure-devcenter-environment-types.md)
+- [Create and configure a project by using the Azure CLI](how-to-create-configure-projects.md)
+- [Configure project environment types](how-to-configure-project-environment-types.md)
deployment-environments How To Configure Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-managed-identity.md
As a security best practice, if you choose to use user-assigned identities, use
## Assign a subscription role assignment to the managed identity
-The identity that's attached to the dev center in Azure Deployment Environments should be assigned the Owner role for all the deployment subscriptions and the Reader role for all subscriptions that contain the relevant project. When a user creates or deploys an environment, the service grants appropriate access to the deployment identity that's attached to the project environment type. The deployment identity uses the access to perform deployments on behalf of the user. You can use the managed identity to empower developers to create environments without granting them access to the subscription.
+The identity that's attached to the dev center should be assigned the Contributor and User Access Administrator roles for all the deployment subscriptions and the Reader role for all subscriptions that contain the relevant project. When a user creates or deploys an environment, the service grants appropriate access to the deployment identity that's attached to the project environment type. The deployment identity uses the access to perform deployments on behalf of the user. You can use the managed identity to empower developers to create environments without granting them access to the subscription.
### Add a role assignment to a system-assigned managed identity
-1. In the Azure portal, go to your dev center.
+1. In the Azure portal, navigate to your dev center.
1. On the left menu under **Settings**, select **Identity**.
1. Under **System assigned** > **Permissions**, select **Azure role assignments**.

    :::image type="content" source="./media/configure-managed-identity/system-assigned-azure-role-assignment.png" alt-text="Screenshot that shows the Azure role assignment for system-assigned identity.":::
-1. On **Azure role assignments**, select **Add role assignment (Preview)**, and then enter or select the following information:
+1. To give Contributor access to the subscription, select **Add role assignment (Preview)**, enter or select the following information, and then select **Save**:
+
+ |Name |Value |
+ ||-|
+ |**Scope**|Subscription|
+ |**Subscription**|Select the subscription in which to use the managed identity.|
+ |**Role**|Contributor|
- 1. For **Scope**, select **Subscription**.
- 1. For **Subscription**, select the subscription in which to use the managed identity.
- 1. For **Role**, select **Owner**.
- 1. Select **Save**.
+1. To give User Access Administrator access to the subscription, select **Add role assignment (Preview)**, enter or select the following information, and then select **Save**:
+
+ |Name |Value |
+ ||-|
+ |**Scope**|Subscription|
+ |**Subscription**|Select the subscription in which to use the managed identity.|
+ |**Role**|User Access Administrator|
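The same two assignments can be scripted. A minimal Azure PowerShell sketch, assuming the Az module and placeholder resource names:

```powershell
$subscriptionId = (Get-AzContext).Subscription.Id
$devCenter = Get-AzResource -ResourceGroupName '<resource-group>' `
    -Name '<dev-center-name>' `
    -ResourceType 'Microsoft.DevCenter/devcenters'

# Assign both roles the dev center identity needs on the deployment subscription.
foreach ($role in 'Contributor', 'User Access Administrator') {
    New-AzRoleAssignment -ObjectId $devCenter.Identity.PrincipalId `
        -RoleDefinitionName $role `
        -Scope "/subscriptions/$subscriptionId"
}
```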
### Add a role assignment to a user-assigned managed identity
The identity that's attached to the dev center in Azure Deployment Environments
1. On the left menu under **Settings**, select **Identity**.
1. Under **User assigned**, select the identity.
1. On the left menu, select **Azure role assignments**.
-1. On **Azure role assignments**, select **Add role assignment (Preview)**, and then enter or select the following information:
-
- 1. For **Scope**, select **Subscription**.
- 1. For **Subscription**, select the subscription in which to use the managed identity.
- 1. For **Role**, select **Owner**.
- 1. Select **Save**.
+1. To give Contributor access to the subscription, select **Add role assignment (Preview)**, enter or select the following information, and then select **Save**:
+
+ |Name |Value |
+ ||-|
+ |**Scope**|Subscription|
+ |**Subscription**|Select the subscription in which to use the managed identity.|
+ |**Role**|Contributor|
+
+1. To give User Access Administrator access to the subscription, select **Add role assignment (Preview)**, enter or select the following information, and then select **Save**:
+
+ |Name |Value |
+ ||-|
+ |**Scope**|Subscription|
+ |**Subscription**|Select the subscription in which to use the managed identity.|
+ |**Role**|User Access Administrator|
## Grant the managed identity access to the key vault secret
deployment-environments How To Create Configure Dev Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-create-configure-dev-center.md
To add a catalog to your dev center, you first need to gather some information.
To add a catalog, you must specify the GitHub repo URL, the branch, and the folder that contains your environment definitions. You can gather this information before you begin the process of adding the catalog to the dev center.

> [!TIP]
-> If you are attaching an Azure DevOps repository, use these steps: [Get the clone URL of an Azure DevOps repository](how-to-configure-catalog.md#get-the-clone-url-of-an-azure-devops-repository).
+> If you are attaching an Azure DevOps repository, use these steps: [Get the clone URL of an Azure DevOps repository](how-to-configure-catalog.md#get-the-clone-url-for-your-azure-devops-repository).
1. On your [GitHub](https://github.com) account page, select **<> Code**, and then select copy.
1. Take a note of the branch that you're working in.
deployment-environments Quickstart Create And Configure Devcenter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-and-configure-devcenter.md
Previously updated : 09/06/2023 Last updated : 10/23/2023 # Quickstart: Create and configure a dev center for Azure Deployment Environments
You need to perform the steps in both quickstarts before you can create a deploy
## Prerequisites

- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- Azure role-based access control role with permissions to create and manage resources in the subscription, such as [Contributor](../role-based-access-control/built-in-roles.md#contributor).
+- Azure role-based access control role with permissions to create and manage resources in the subscription, such as [Contributor](../role-based-access-control/built-in-roles.md#contributor) or [Owner](../role-based-access-control/built-in-roles.md#owner).
+- An [Azure DevOps](https://azure.microsoft.com/products/devops/repos/) repository that contains IaC templates. You can use the [Deployment Environments sample catalog](https://github.com/azure/deployment-environments) that contains samples created and maintained by the Azure Deployment Environments team.
+ - In your Azure DevOps organization, [create a project](/azure/devops/repos/get-started/sign-up-invite-teammates?view=azure-devops&branch=main&preserve-view=true) to store your repository.
+ - Import the [Deployment Environments sample catalog](https://github.com/azure/deployment-environments) into your repository.
## Create a dev center
-To create and configure a Dev center in Azure Deployment Environments by using the Azure portal:
+To create and configure a dev center in Azure Deployment Environments by using the Azure portal:
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Search for **Azure Deployment Environments**, and then select the service in the results.
To create and configure a Dev center in Azure Deployment Environments by using t
:::image type="content" source="media/quickstart-create-and-configure-devcenter/deployment-environments-devcenter-created.png" alt-text="Screenshot that shows the Dev centers overview, to confirm that the dev center is created.":::
-### Create a Key Vault
-When you're using a GitHub repository or an Azure DevOps repository to store your [catalog](./concept-environments-key-concepts.md#catalogs), you need an Azure Key Vault to store a personal access token (PAT) that is used to grant Azure access to your repository. Key Vaults can control access with either access policies or role-based access control (RBAC). If you have an existing key vault, you can use it, but you should check whether it uses access policies or RBAC assignments to control access. This quickstart assumes you're using an RBAC Key Vault and a GitHub repository.
-
-If you don't have an existing key vault, use the following steps to create one: [Quickstart: Create a key vault using the Azure portal](../key-vault/general/quick-create-portal.md).
-
-### Configure a personal access token
-Using an authentication token like a GitHub PAT enables you to share your repository securely. GitHub offers classic PATs, and fine-grained PATs. Fine-grained and classic PATs work with Azure Deployment Environments, but fine-grained tokens give you more granular control over the repositories to which you're allowing access.
-
-> [!TIP]
-> If you are attaching an Azure DevOps repository, use these steps: [Create a personal access token in Azure DevOps](how-to-configure-catalog.md#create-a-personal-access-token-in-azure-devops).
-
-1. In a new browser tab, sign into your [GitHub](https://github.com) account.
-1. On your profile menu, select **Settings**.
-1. On your account page, on the left menu, select **< >Developer Settings**.
-1. On the Developer settings page, select **Fine-grained tokens**.
-
- :::image type="content" source="media/quickstart-create-and-configure-devcenter/github-fine-grained-pat.png" alt-text="Screenshot that shows the GitHub Fine-grained tokens option.":::
-
-1. On the Fine-grained personal access tokens page, select **Generate new token**
- :::image type="content" source="media/quickstart-create-and-configure-devcenter/generate-github-fine-grained-token.png" alt-text="Screenshot showing the GitHub Fine-grained personal access tokens page with Generate new token highlighted.":::
-
-1. On the New fine-grained personal access token page, provide the following information:
-
- |Name |Value |
- |-|--|
- |**Token name**|Enter a descriptive name for the token.|
- |**Expiration**|Select the token expiration period in days.|
- |**Description**|Enter a description for the token.|
- |**Repository access**|Select **Public Repositories (read-only)**.|
-
- Leave the other options at their defaults.
-
- :::image type="content" source="media/quickstart-create-and-configure-devcenter/github-public-repo-permissions.png" alt-text="Screenshot showing the GitHub New fine-grained personal access token page.":::
-
-1. Select **Generate token**.
-1. On the Fine-grained personal access tokens page, copy the new token.
-
- :::image type="content" source="media/quickstart-create-and-configure-devcenter/copy-new-token.png" alt-text="Screenshot that shows the new GitHub token with the copy button highlighted.":::
-
- > [!WARNING]
- > You must copy the token now. You will not be able to access it again.
-
-1. Switch back to the **Key Vault – Microsoft Azure** browser tab.
-1. In the Key Vault, on the left menu, select **Secrets**.
-1. On the Secrets page, select **Generate/Import**.
-
- :::image type="content" source="media/quickstart-create-and-configure-devcenter/import-secret.png" alt-text="Screenshot that shows the key vault Secrets page with the generate/import button highlighted.":::
-
-1. On the Create a secret page:
- - In the **Name** box, enter a descriptive name for your secret.
- - In the **Secret value** box, paste the GitHub secret you copied in step 7.
-
- :::image type="content" source="media/quickstart-create-and-configure-devcenter/create-secret-in-key-vault.png" alt-text="Screenshot that shows the Create a secret page with the Name and Secret value text boxes highlighted.":::
-
- - Select **Create**.
-1. Leave this tab open, you need to come back to the Key Vault later.
## Configure a managed identity for the dev center

After you create a dev center, attach an [identity](concept-environments-key-concepts.md#identities) to the dev center. You can attach either a system-assigned managed identity or a user-assigned managed identity. Learn about the two [types of identities](how-to-configure-managed-identity.md#add-a-managed-identity).
-In this quickstart, you configure a system-assigned managed identity for your dev center. You then assign roles to the managed identity to allow the dev center to create environment types in your subscription and read the key vault secret that contains the GitHub PAT.
+In this quickstart, you configure a system-assigned managed identity for your dev center. You then assign roles to the managed identity to allow the dev center to create environment types in your subscription and read the Azure DevOps repository project that contains the catalog.
### Attach a system-assigned managed identity
To attach a system-assigned managed identity to your dev center:
### Assign roles for the dev center managed identity
-The managed identity that represents your dev center requires access to the subscriptions where you configure the [project environment types](concept-environments-key-concepts.md#project-environment-types), and to the key vault secret that stores your GitHub PAT.
+The managed identity that represents your dev center requires access to the subscriptions where you configure the [project environment types](concept-environments-key-concepts.md#project-environment-types), and to the Azure DevOps repo that stores your catalog.
1. Navigate to your dev center.
1. On the left menu under Settings, select **Identity**.
The managed identity that represents your dev center requires access to the subs
:::image type="content" source="media/quickstart-create-configure-projects/system-assigned-managed-identity.png" alt-text="Screenshot that shows a system-assigned managed identity with Role assignments highlighted.":::
-1. To give access to the subscription, select **Add role assignment (Preview)**, enter or select the following information, and then select **Save**:
+1. To give Contributor access to the subscription, select **Add role assignment (Preview)**, enter or select the following information, and then select **Save**:
|Name |Value |
||-|
The managed identity that represents your dev center requires access to the subs
|**Subscription**|Select the subscription in which to use the managed identity.|
|**Role**|Contributor|
-1. To give access to the subscription, select **Add role assignment (Preview)**, enter or select the following information, and then select **Save**:
-
+1. To give User Access Administrator access to the subscription, select **Add role assignment (Preview)**, enter or select the following information, and then select **Save**:
+ |Name |Value |
+ ||-|
+ |**Scope**|Subscription|
+ |**Subscription**|Select the subscription in which to use the managed identity.|
+ |**Role**|User Access Administrator|
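To confirm that both assignments took effect, you can list the identity's roles. A brief Azure PowerShell sketch with placeholder names:

```powershell
$devCenter = Get-AzResource -ResourceGroupName '<resource-group>' `
    -Name '<dev-center-name>' `
    -ResourceType 'Microsoft.DevCenter/devcenters'

# Expect to see Contributor and User Access Administrator at the subscription scope.
Get-AzRoleAssignment -ObjectId $devCenter.Identity.PrincipalId |
    Select-Object RoleDefinitionName, Scope
```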
-1. To give access to the key vault, select **Add role assignment (Preview)**, enter or select the following information, and then select **Save**:
-
- |Name |Value |
- ||-|
- |**Scope**|Key Vault|
- |**Subscription**|Select the subscription in which to use the managed identity.|
- |**Resource**|Select the key vault that you created earlier.|
- |**Role**|Key Vault Secrets User|
-
-## Add a catalog to the dev center
-Azure Deployment Environments supports attaching Azure DevOps repositories and GitHub repositories. You can store a set of curated IaC templates in a repository. Attaching the repository to a dev center as a catalog gives your development teams access to the templates and enables them to quickly create consistent environments.
-
-In this quickstart, you attach a GitHub repository that contains samples created and maintained by the Azure Deployment Environments team.
+### Assign permissions in Azure DevOps for the dev center managed identity
+You must give the dev center managed identity permissions to the repository in Azure DevOps.
-To add a catalog to your dev center, you first need to gather some information.
+1. Sign in to your [Azure DevOps organization](https://dev.azure.com).
-### Gather GitHub repo information
-To add a catalog, you must specify the GitHub repo URL, the branch, and the folder that contains your environment definitions. You can gather this information before you begin the process of adding the catalog to the dev center, and paste it somewhere accessible, like notepad.
+1. Select **Organization settings**.
+
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/devops-organization-settings.png" alt-text="Screenshot showing the Azure DevOps organization page, with Organization Settings highlighted.":::
+
+1. On the **Overview** page, select **Users**.
-> [!TIP]
-> If you are attaching an Azure DevOps repository, use these steps: [Get the clone URL of an Azure DevOps repository](how-to-configure-catalog.md#get-the-clone-url-of-an-azure-devops-repository).
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/devops-organization-overview.png" alt-text="Screenshot showing the Organization overview page, with Users highlighted.":::
-1. On your [GitHub](https://github.com) account page, select **<> Code**, and then select copy.
-1. Take a note of the branch that you're working in.
-1. Take a note of the folder that contains your environment definitions.
-
- :::image type="content" source="media/quickstart-create-and-configure-devcenter/github-info.png" alt-text="Screenshot that shows the GitHub repo with Code, branch, and folder highlighted.":::
+1. On the **Users** page, select **Add users**.
-### Gather the secret identifier
-You also need the path to the secret you created in the key vault.
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/devops-add-user.png" alt-text="Screenshot showing the Users page, with Add user highlighted.":::
-1. In the Azure portal, navigate to your key vault.
-1. On the key vault page, from the left menu, select **Secrets**.
-1. On the Secrets page, select the secret you created earlier.
+1. Complete **Add new users** by entering or selecting the following information, and then select **Add**:
- :::image type="content" source="media/quickstart-create-and-configure-devcenter/key-vault-secrets-page.png" alt-text="Screenshot that shows the list of secrets in the key vault with one highlighted.":::
+ |Name |Value |
+ ||-|
+ |**Users or Service Principals**|Enter the name of your dev center. </br> When you use a system-assigned managed identity, specify the name of the dev center, not the object ID of the managed identity. When you use a user-assigned managed identity, use the name of the managed identity. |
+ |**Access level**|Select **Basic**.|
+ |**Add to projects**|Select the project that contains your repository.|
+ |**Azure DevOps Groups**|Select **Project Readers**.|
+ |**Send email invites (to Users only)**|Clear the checkbox.|
-1. On the versions page, select the **CURRENT VERSION**.
-
- :::image type="content" source="media/quickstart-create-and-configure-devcenter/key-vault-versions-page.png" alt-text="Screenshot that shows the current version of the select secret.":::
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/devops-add-user-blade.png" alt-text="Screenshot showing Add users, with example entries and Add highlighted.":::
-1. On the current version page, for the **Secret identifier**, select copy.
+## Add a catalog to the dev center
+Azure Deployment Environments supports attaching Azure DevOps repositories and GitHub repositories. You can store a set of curated IaC templates in a repository. Attaching the repository to a dev center as a catalog gives your development teams access to the templates and enables them to quickly create consistent environments.
- :::image type="content" source="media/quickstart-create-and-configure-devcenter/key-vault-current-version-page.png" alt-text="Screenshot that shows the details current version of the select secret with the secret identifier copy button highlighted.":::
+In this quickstart, you attach an Azure DevOps repository.
### Add a catalog to your dev center

1. Navigate to your dev center.
You also need the path to the secret you created in the key vault.
| Field | Value |
| -- | -- |
| **Name** | Enter a name for the catalog. |
- | **Git clone URI** | Enter or paste the clone URL for either your GitHub repository or your Azure DevOps repository.<br />*Sample catalog example:* `https://github.com/Azure/deployment-environments.git` |
- | **Branch** | Enter the repository branch to connect to.<br />*Sample catalog example:* `main`|
- | **Folder path** | Enter the folder path relative to the clone URI that contains subfolders that hold your environment definitions. <br /> The folder path is for the folder with subfolders containing environment definition manifests, not for the folder with the environment definition manifest itself. The following image shows the sample catalog folder structure.<br />*Sample catalog example:* `/Environments`<br /> :::image type="content" source="media/how-to-configure-catalog/github-folders.png" alt-text="Screenshot showing Environments sample folder in GitHub."::: The folder path can begin with or without a forward slash (`/`).|
- | **Secret identifier**| Enter the [secret identifier](#configure-a-personal-access-token) that contains your personal access token for the repository.<br /> When you copy a secret identifier, the connection string includes a version identifier at the end, like in this example: `https://contoso-kv.vault.azure.net/secrets/GitHub-repo-pat/9376b432b72441a1b9e795695708ea5a`.<br />Removing the version identifier ensures that Deployment Environments fetches the latest version of the secret from the key vault. If your personal access token expires, only the key vault needs to be updated. <br />*Example secret identifier:* `https://contoso-kv.vault.azure.net/secrets/GitHub-repo-pat`|
+ | **Catalog location** | Select **Azure DevOps**. |
+ | **Authentication type** | Select **Managed Identity**.|
+ | **Organization** | Select your Azure DevOps organization. |
+ | **Project** | From the list of projects, select the project that stores the repo. |
+ | **Repo** | From the list of repos, select the repo you want to add. |
+ | **Branch** | Select the branch. |
+ | **Folder path** | Deployment Environments retrieves a list of folders in your branch. Select the folder that stores your IaC templates. |
- :::image type="content" source="media/how-to-configure-catalog/add-catalog-form-inline.png" alt-text="Screenshot that shows how to add a catalog to a dev center." lightbox="media/how-to-configure-catalog/add-catalog-form-expanded.png":::
-
-1. In **Catalogs** for the dev center, verify that your catalog appears. If the connection is successful, **Status** is **Connected**.
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/add-catalog-to-devcenter.png" alt-text="Screenshot showing the add catalog pane with examples entries and Add highlighted.":::
+1. In **Catalogs** for the dev center, verify that your catalog appears. If the connection is successful, **Status** is **Connected**. Connecting to a catalog can take a few minutes the first time.
+
## Create an environment type

Use an environment type to help you define the different types of environments your development teams can deploy. You can apply different settings for each environment type.
Use an environment type to help you define the different types of environments y
An environment type that you add to your dev center is available in each project in the dev center, but environment types aren't enabled by default. When you enable an environment type at the project level, the environment type determines the managed identity and subscription that are used to deploy environments.
-## Next steps
+## Next step
In this quickstart, you created a dev center and configured it with an identity, a catalog, and an environment type. To learn how to create and configure a project, advance to the next quickstart.
dms Ads Sku Recommend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/ads-sku-recommend.md
- sql-migration-content
-# Get Azure recommendations to migrate your SQL Server database (Preview)
+# Get Azure recommendations to migrate your SQL Server database
The [Azure SQL Migration extension for Azure Data Studio](/azure-data-studio/extensions/azure-sql-migration-extension) helps you to assess your database requirements, get the right-sized SKU recommendations for Azure resources, and migrate your SQL Server database to Azure.
Learn how to use this unified experience, collecting performance data from your
## Overview
-Before migrating to Azure SQL, you can use the SQL Migration extension in Azure Data Studio to help you generate right-sized recommendations (Preview) for Azure SQL Database, Azure SQL Managed Instance, and SQL Server on Azure Virtual Machines targets. The tool helps you collect performance data from your source SQL instance (running on-premises or other cloud), and recommend a compute and storage configuration to meet your workload's needs.
+Before migrating to Azure SQL, you can use the SQL Migration extension in Azure Data Studio to help you generate right-sized recommendations for Azure SQL Database, Azure SQL Managed Instance, and SQL Server on Azure Virtual Machines targets. The tool helps you collect performance data from your source SQL instance (running on-premises or other cloud), and recommend a compute and storage configuration to meet your workload's needs.
The diagram presents the workflow for Azure recommendations in the Azure SQL Migration extension for Azure Data Studio:
The diagram presents the workflow for Azure recommendations in the Azure SQL Mig
## Prerequisites
-To get started with Azure recommendations (Preview) for your SQL Server database migration, you must meet the following prerequisites:
+To get started with Azure recommendations for your SQL Server database migration, you must meet the following prerequisites:
- [Download and install Azure Data Studio](/azure-data-studio/download-azure-data-studio).
- [Install the Azure SQL Migration extension](/azure-data-studio/extensions/azure-sql-migration-extension) from Azure Data Studio Marketplace.
GRANT SELECT ON [msdb].[dbo].[sysmail_account] TO [assessment];
- Azure Recommendations don't include price estimates, as this situation may vary depending on region, currency, and discounts such as the [Azure Hybrid Benefit](/azure/azure-sql/azure-hybrid-benefit). To get price estimates, use the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator), or create a [SQL assessment](/azure/migrate/concepts-azure-sql-assessment-calculation) in Azure Migrate.
- Recommendations for Azure SQL Database with the [DTU-based purchasing model](/azure/azure-sql/database/migrate-dtu-to-vcore) aren't supported.
- Currently, Azure recommendations for Azure SQL Database serverless compute tier and Elastic Pools aren't supported.
+<!--
- Currently, Azure recommendations for SQL Server on Azure Virtual Machine using Premium SSD v2 aren't supported.
+-->
## Troubleshooting

- No recommendations generated - If no recommendations were generated, this situation could mean that no configurations were identified which can fully satisfy the performance requirements of your source instance. In order to see reasons why a particular size, service tier, or hardware family was disqualified:
event-grid Monitor Mqtt Delivery Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/monitor-mqtt-delivery-reference.md
This article provides a reference of log and metric data collected to analyze th
| OperationType | The type of the operation. The available values include: <br><br>- Publish: PUBLISH requests sent from MQTT clients to MQTT broker. <br>- Deliver: PUBLISH requests sent from MQTT broker to MQTT clients. <br>- Subscribe: SUBSCRIBE requests by MQTT clients. <br>- Unsubscribe: UNSUBSCRIBE requests by MQTT clients. <br>- Connect: CONNECT requests by MQTT clients. |
| Protocol | The protocol used in the operation. The available values include: <br><br>- MQTT3: MQTT v3.1.1 <br>- MQTT5: MQTT v5 <br>- MQTT3-WS: MQTT v3.1.1 over WebSocket <br>- MQTT5-WS: MQTT v5 over WebSocket |
| Result | Result of the operation. The available values include: <br><br>- Success <br>- ClientError <br>- ServiceError |
-| Error | Error occurred during the operation. The available values include: <br><br>-QuotaExceeded: the client exceeded one or more of the throttling limits that resulted in a failure <br>- AuthenticationError: a failure because of any authentication reasons. In case of failed MQTT routing messages, the EventGrid Data Sender role for the custom topic configured as the destination for MQTT routed messages was deleted. This error doesn't apply for namespace topics since they don't need a permission to route MQTT messages. In that case for MQTT message routing, MQTT broker drops the MQTT message that was meant to be routed. <br>- AuthorizationError: a failure because of any authorization reasons.<br>- ClientError: the client sent a bad request or used one of the unsupported features that resulted in a failure. <br>-TopicNotFoundError: The custom topic that is configured to receive all the MQTT routed messages was deleted. This error doesn't apply for namespace topics since they can't be deleted if they're used as the destination for MQTT routed messages. In that case, MQTT broker drops the MQTT message that was meant to be routed.<br>-TooManyRequests: the number of MQTT routed messages per second exceeds the limit of the destination (namespace topic or custom topic) for MQTT routed messages. In that case, Event Grid retries to route the MQTT message. <br>- ServiceError: a failure because of an unexpected server error or for a server's operational reason. In that case for MQTT message routing, Event Grid retries to route the MQTT message. |
+| Error | Error occurred during the operation.<br> The available values for MQTT: RequestCount, MQTT: Failed Published Messages, MQTT: Failed Subscription Operations metrics include: <br><br>-QuotaExceeded: the client exceeded one or more of the throttling limits that resulted in a failure <br>- AuthenticationError: a failure because of any authentication reasons. <br>- AuthorizationError: a failure because of any authorization reasons.<br>- ClientError: the client sent a bad request or used one of the unsupported features that resulted in a failure. <br>- ServiceError: a failure because of an unexpected server error or for a server's operational reason. <br><br> [Learn more about the supported MQTT features.](mqtt-support.md) <br><br>The available values for MQTT: Failed Routed Messages metric include: <br><br>-AuthenticationError: the EventGrid Data Sender role for the custom topic configured as the destination for MQTT routed messages was deleted. <br>-TopicNotFoundError: The custom topic that is configured to receive all the MQTT routed messages was deleted. <br>-TooManyRequests: the number of MQTT routed messages per second exceeds the publish limit of the custom topic. <br>- ServiceError: a failure because of an unexpected server error or for a server's operational reason. <br><br> [Learn more about how the MQTT broker handles each of these routing errors.](mqtt-routing.md#mqtt-message-routing-behavior)|
| QoS | Quality of service level. The available values are: 0, 1. |
| Direction | The direction of the operation. The available values are: <br><br>- Inbound: inbound throughput to MQTT broker. <br>- Outbound: outbound throughput from MQTT broker. |
| DropReason | The reason a session was dropped. The available values include: <br><br>- SessionExpiry: a persistent session has expired. <br>- TransientSession: a non-persistent session has expired. <br>- SessionOverflow: a client didn't connect during the lifespan of the session to receive queued QOS1 messages until the queue reached its maximum limit. <br>- AuthorizationError: a session drop because of any authorization reasons. |
event-grid Mqtt Certificate Chain Client Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-certificate-chain-client-authentication.md
Using the CA files generated to create certificate for the client.
## Upload the CA certificate to the namespace

1. In Azure portal, navigate to your Event Grid namespace.
1. Under the MQTT section in the left rail, navigate to the CA certificates menu.
1. Select **+ Certificate** to launch the Upload certificate page.
-1. Add certificate name and browse to find the intermediate certificate (.step/certs/intermediate_ca.crt) and select **Upload**.
-
-> [!NOTE]
-> - CA certificate name can be 3-50 characters long.
-> - CA certificate name can include alphanumeric, hyphen(-) and, no spaces.
-> - The name needs to be unique per namespace.
+1. Add a certificate name, browse to find the intermediate certificate (.step/certs/intermediate_ca.crt), and select **Upload**. You can upload a file of .pem, .cer, or .crt type.
+1. On the Upload certificate page, give a Certificate name and browse for the certificate file.
+1. Select the **Upload** button to add the parent certificate.
-4. On the Upload certificate page, give a Certificate name and browse for the certificate file.
-5. Select **Upload** button to add the parent certificate.
+ :::image type="content" source="./media/mqtt-certificate-chain-client-authentication/event-grid-namespace-parent-certificate-added.png" alt-text="Screenshot showing the added CA certificate listed in the CA certificates page.":::
+ > [!NOTE]
+ > - CA certificate name can be 3-50 characters long.
+ > - CA certificate name can include alphanumeric, hyphen(-) and, no spaces.
+ > - The name needs to be unique per namespace.
## Configure client authentication settings 1. Navigate to the Clients page.
event-grid Mqtt Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-overview.md
IoT applications are software designed to interact with and process data from Io
### Client authentication
-Event Grid has a client registry that stores information about the clients permitted to connect to it. Before a client can connect, there must be an entry for that client in the client registry. As a client connects to MQTT broker, it needs to authenticate with MQTT broker based on credentials stored in the identity registry. MQTT broker supports X.509 certificate authentication that is the industry authentication standard in IoT devices and [Microsoft Entra](mqtt-client-azure-ad-token-and-rbac.md) that is Azure's authentication standard for applications.[Learn more about MQTT client authentication.](mqtt-client-authentication.md)
+Event Grid has a client registry that stores information about the clients permitted to connect to it. Before a client can connect, there must be an entry for that client in the client registry. As a client connects to MQTT broker, it needs to authenticate with MQTT broker based on credentials stored in the identity registry. MQTT broker supports X.509 certificate authentication, which is the industry authentication standard in IoT devices, and [Microsoft Entra ID (formerly Azure Active Directory)](mqtt-client-azure-ad-token-and-rbac.md), which is Azure's authentication standard for applications. [Learn more about MQTT client authentication.](mqtt-client-authentication.md)
### Access control
event-grid Mqtt Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-routing.md
For enrichments configuration instructions, go to [Enrichment CLI configuration]
## MQTT message routing behavior
-While routing MQTT messages to namespace topics or custom topics, Event Grid provides durable delivery as it tries to deliver each message **at least once** immediately. If there's a failure, Event Grid either retries delivery or drops the message that was meant to be routed. Event Grid doesn't guarantee order for event delivery, so subscribers might receive them out of order.
+While routing MQTT messages to custom topics, Event Grid provides durable delivery as it tries to deliver each message **at least once** immediately. If there's a failure, Event Grid either retries delivery or drops the message that was meant to be routed. Event Grid doesn't guarantee order for event delivery, so subscribers might receive them out of order.
The following table describes the behavior of MQTT message routing based on different errors.

| Error| Error description | Behavior |
| --| --|--|
-| TopicNotFoundError | The custom topic that is configured to receive all the MQTT routed messages was deleted. This error doesn't apply for namespace topics since they can't be deleted if they're used as the destination for MQTT routed messages. | Event Grid drops the MQTT message that was meant to be routed.|
-| AuthenticationError | The EventGrid Data Sender role for the custom topic configured as the destination for MQTT routed messages was deleted. This error doesn't apply for namespace topics since they don't need a permission to route MQTT messages. | Event Grid drops the MQTT message that was meant to be routed.|
-| TooManyRequests | The number of MQTT routed messages per second exceeds the limit of the destination (namespace topic or custom topic) for MQTT routed messages. | Event Grid retries to route the MQTT message.|
+| TopicNotFoundError | The custom topic that is configured to receive all the MQTT routed messages was deleted. | Event Grid drops the MQTT message that was meant to be routed.|
+| AuthenticationError | The EventGrid Data Sender role for the custom topic configured as the destination for MQTT routed messages was deleted. | Event Grid drops the MQTT message that was meant to be routed.|
+| TooManyRequests | The number of MQTT routed messages per second exceeds the publish limit for the custom topic. | Event Grid retries to route the MQTT message.|
| ServiceError | An unexpected server error for a server's operational reason. | Event Grid retries to route the MQTT message.|

During retries, Event Grid uses an exponential backoff retry policy for MQTT message routing. Event Grid retries delivery on the following schedule on a best effort basis:
event-grid Mqtt Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-support.md
MQTT v5 currently differs from the [MQTT v5 Specification](https://docs.oasis-op
- Message ordering isn't guaranteed.
- Subscription Identifiers aren't supported.
- Assigned Client Identifiers aren't supported yet.
-- The server responds to a CONNECT request with either Authentication Method or Authentication Data with a CONNACK with code 0x8C (Bad authentication method) or 0x87 (Not Authorized) respectively.
- Topic Alias Maximum is 10. The server doesn't assign any topic aliases for outgoing messages at this time. Clients can assign and use topic aliases within set limit.
- CONNACK doesn't return Response Information property even if the CONNECT request contains Request Response Information property.
+- User Properties on CONNECT, SUBSCRIBE, DISCONNECT, PUBACK, and AUTH packets are not used by the service, so they're not supported. If any of these requests include user properties, the request will fail.
- If the server receives a PUBACK from a client with a non-success response code, the connection is terminated.
- Keep Alive Maximum is 1160 seconds.
event-grid Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/overview.md
Event Grid offers a rich mixture of features. These features include:
- **[Built-in cloud integration](mqtt-routing.md)** - route your MQTT messages to Azure services or custom webhooks for further processing.
- **Flexible and fine-grained [access control model](mqtt-access-control.md)** - group clients and topics to simplify access control management, and use the variable support in topic templates for a fine-grained access control.
- **X.509 certificate [authentication](mqtt-client-authentication.md)** - authenticate your devices using the IoT industry's standard mechanism for authentication.
-- **[AAD authentication](mqtt-client-azure-ad-token-and-rbac.md)** - authenticate your applications using the Azure's standard mechanism for authentication.
+- **[Microsoft Entra ID (formerly Azure Active Directory) authentication](mqtt-client-azure-ad-token-and-rbac.md)** - authenticate your applications using the Azure's standard mechanism for authentication.
- **TLS 1.2 and TLS 1.3 support** - secure your client communication using robust encryption protocols.
- **Multi-session support** - connect your applications with multiple active sessions to ensure reliability and scalability.
- **MQTT over WebSockets** - enable connectivity for clients in firewall-restricted environments.
event-hubs Monitor Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/monitor-event-hubs.md
Start with the article [Monitoring Azure resources with Azure Monitor](../azure-
The following sections build on this article by describing the specific data gathered for Azure Event Hubs. These sections also provide examples for configuring data collection and analyzing this data with Azure tools.

> [!TIP]
-> To understand costs associated with Azure Monitor, see [Usage and estimated costs](../azure-monitor//usage-estimated-costs.md). To understand the time it takes for your data to appear in Azure Monitor, see [Log data ingestion time](../azure-monitor/logs/data-ingestion-time.md).
+> To understand costs associated with Azure Monitor, see [Azure Monitor cost and usage](../azure-monitor/cost-usage.md). To understand the time it takes for your data to appear in Azure Monitor, see [Log data ingestion time](../azure-monitor/logs/data-ingestion-time.md).
## Monitoring data from Azure Event Hubs

Azure Event Hubs collects the same kinds of monitoring data as other Azure resources, as described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data).
expressroute Expressroute Monitoring Metrics Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-monitoring-metrics-alerts.md
This metric shows the number of FastPath routes configured on a circuit. Set an
Aggregation type: *Avg*
-You can view near to real-time availability of [ARP](./expressroute-troubleshooting-arp-resource-manager.md) (Layer-3 connectivity) across peerings and peers (Primary and Secondary ExpressRoute routers). This dashboard shows the Private Peering ARP session status is up across both peers, but down for Microsoft peering for both peers. The default aggregation (Average) was utilized across both peers.
+You can view near real-time availability of [ARP](./expressroute-troubleshooting-arp-resource-manager.md) (Layer-2 connectivity) across peerings and peers (Primary and Secondary ExpressRoute routers). This dashboard shows that the Private Peering ARP session status is up across both peers, but down for Microsoft peering for both peers. The default aggregation (Average) was used across both peers.
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/erArpAvailabilityMetrics.jpg" alt-text="ARP availability per peer":::
frontdoor Front Door Url Rewrite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-url-rewrite.md
With this feature, you can redirect your end users to a different origin based o
The **source pattern** is the URL path in the initial request you want to replace. Currently, source pattern uses a prefix-based match. To match all URL paths, you can define a forward slash (`/`) as the source pattern value.
-For the source pattern in a URL rewrite action, only the path after the *patterns to match* in the route configuration is considered. For example, you have the following incoming URL format `contoso.com/patten-to-match/source-pattern`, only `/source-pattern` gets considered by the rule set as the source pattern to be rewritten. The format of the out going URL after URL rewrite gets applied is `contoso.com/pattern-to-match/destination`.
+For the source pattern in a URL rewrite action, only the path after the *patterns to match* in the route configuration is considered. For example, if you have the incoming URL format `contoso.com/pattern-to-match/source-pattern`, only `/source-pattern` gets considered by the rule set as the source pattern to be rewritten. The format of the outgoing URL after the URL rewrite gets applied is `contoso.com/pattern-to-match/destination`.
-For situation, when you need to remove the `/patterns-to-match` segment of the URL, set the **origin path** for the origin group in route configuration to `/`.
+For situations where you need to remove the `/pattern-to-match` segment of the URL, set the **origin path** for the origin group in the route configuration to `/`.
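For illustration, here's a hedged Azure Front Door Standard/Premium CLI sketch of such a rewrite rule; the resource group, profile, and rule set names are placeholders.

```azurecli
# Placeholders throughout; assumes an existing profile and rule set
az afd rule create \
  --resource-group my-rg \
  --profile-name my-afd-profile \
  --rule-set-name myruleset \
  --rule-name rewriteexample \
  --order 1 \
  --match-variable UrlPath \
  --operator BeginsWith \
  --match-values "/pattern-to-match" \
  --action-name UrlRewrite \
  --source-pattern "/source-pattern" \
  --destination "/destination"
```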
## Destination
governance Alerts Query Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/alerts-query-quickstart.md
+
+ Title: How Azure Resource Graph uses alerts to monitor resources
+description: In this quickstart, you learn how to create monitoring alerts for Azure resources using an Azure Resource Graph query and a Log Analytics workspace.
Last updated : 10/31/2023+++
+# Quickstart: Create alerts with Azure Resource Graph and Log Analytics
+
+In this quickstart, you learn how to use Azure Log Analytics to create alerts on Azure Resource Graph queries. You create alerts by using an Azure Resource Graph query, a Log Analytics workspace, and managed identities. The alert's conditions send notifications at a specified interval.
+
+You can use queries to set up alerts for your deployed Azure resources. You can create queries using Azure Resource Graph tables, or you can combine Azure Resource Graph tables and Log Analytics data from Azure Monitor Logs.
+
+This article includes two examples of alerts:
+
+- **Azure Resource Graph**: Uses the Azure Resource Graph `Resources` table to create a query that gets data for your deployed Azure resources and create an alert.
+- **Azure Resource Graph and Log Analytics**: Uses the Azure Resource Graph `Resources` table and Log Analytics data from the Azure Monitor Logs `Heartbeat` table. This example uses a virtual machine to show how to set up the query and alert.
+
+> [!NOTE]
+> Azure Resource Graph alerts integration with Log Analytics is in public preview.
+
+## Prerequisites
+
+- If you don't have an Azure account, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- Resources deployed in Azure like virtual machines or storage accounts.
+- To use the example for the Azure Resource Graph and Log Analytics query, you need at least one Azure virtual machine with the Azure Monitor Agent.
+
+## What problem will we solve?
+
+You want to use an Azure Resource Graph query to get information about your Azure resources. You can use Azure Log Analytics to set up alerts that notify you when certain conditions are met.
+
+## Create workspace
+
+Create a Log Analytics Workspace in the subscription that's being monitored.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. In the search field, type _log analytics workspaces_ and select **Log Analytics workspaces**.
+
+ If you've used Log Analytics workspaces, you can select it from **Azure services**.
+
+ :::image type="content" source="./media/alerts-query-quickstart/search-log-analytics.png" alt-text="Screenshot of the Azure home page that highlights search field and Log Analytics workspaces.":::
+
+1. Select **Create**.
+
+ - **Subscription**: Select your Azure subscription
+ - **Resource group**: _demo-arg-alert-rg_
+ - **Name**: _demo-arg-alert-workspace_
+ - **Region**: _West US3_
+
+1. Select **Review + Create** and wait for **Validation passed** to be displayed.
+1. Select **Create** to begin the deployment.
+1. Select **Go to resource** when the deployment is completed.
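+
+If you prefer scripting, the following Azure CLI sketch creates the same resource group and workspace. It's a minimal sketch that assumes the names and region used in this quickstart.
+
+```azurecli
+# Create the resource group used throughout this quickstart
+az group create --name demo-arg-alert-rg --location westus3
+
+# Create the Log Analytics workspace in that resource group
+az monitor log-analytics workspace create \
+  --resource-group demo-arg-alert-rg \
+  --workspace-name demo-arg-alert-workspace \
+  --location westus3
+```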
+
+## Create virtual machine
+
+# [Azure Resource Graph](#tab/azure-resource-graph)
+
+You don't need to create a virtual machine for the example that uses the Azure Resource Graph table.
+
+# [Azure Resource Graph and Log Analytics](#tab/arg-log-analytics)
+
+> [!NOTE]
+> This section is optional if you have existing virtual machines or know how to create a virtual machine. This example uses a virtual machine to show how to create a query using an Azure Resource Graph table and Log Analytics data.
+
+When you connect your virtual machine to the Log Analytics workspace, the Azure Monitor Agent is installed on the virtual machine so that it can send log information. If you don't have a virtual machine, you can create one for this example. To avoid unnecessary costs, delete the virtual machine when you're finished with the example.
+
+The following instructions are basic settings for a Linux virtual machine. Detailed steps about how to create a virtual machine are outside the scope of this article.
+
+1. In Azure, create an [Ubuntu Linux virtual machine](https://portal.azure.com/#create/canonical.0001-com-ubuntu-server-jammy22_04-lts-gen2).
+1. Select **Create**.
+1. Use the **Create a virtual machine** form. You can accept most default settings with the following exceptions:
+
+ **Create a virtual machine**
+ - **Resource group**: _demo-arg-alert-rg_
+ - **Virtual machine name**: Type a virtual machine name
+ - **Availability options**: _No infrastructure redundancy required_
+ - **Size**: _B1ms_
+ - **Administrator account**: Create key pair or username and password
+ - **Public inbound ports**: _None_
+
+ **Disks: accept defaults**
+ - Verify **Delete with VM** is selected.
+
+ **Networking**
+ - Accept defaults.
+ - Select **Delete public IP and NIC when VM is deleted**.
+
+ **Management**
+ - Accept defaults
+ - Change the **Auto-shutdown** **Time zone** to your time zone.
+
+ **Monitoring**, **Advanced**, and **Tags**
+ - No changes needed for this example.
+
+1. Select **Review and Create** and then **Create**.
+
+ If you selected SSH for authentication, you're prompted to **Generate new key pair**. Download the private key and create the virtual machine. When you're finished with the VM, delete the key file.
+
+Select **Go to resource** after the virtual machine is deployed.
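+
+As an alternative to the portal steps, here's a hedged Azure CLI sketch of a comparable VM. The VM name _demo-arg-vm_ is a placeholder, and the `Ubuntu2204` image alias is an assumption that depends on your CLI version.
+
+```azurecli
+# Placeholder VM name; omitting the public IP matches the quickstart's "no public inbound ports" intent
+az vm create \
+  --resource-group demo-arg-alert-rg \
+  --name demo-arg-vm \
+  --image Ubuntu2204 \
+  --size Standard_B1ms \
+  --admin-username azureuser \
+  --generate-ssh-keys \
+  --public-ip-address ""
+```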
+
+> [!NOTE]
+> This section is optional if you know how to connect a virtual machine to a Log Analytics workspace.
+
+Set up monitoring for a virtual machine.
+1. Go to your virtual machine.
+1. Select **Monitoring** > **Insights** > **Azure Monitor** > **Overview**.
+1. Select **Not Monitored**.
+1. Select **Enable** for virtual machine's **Monitor Coverage**.
+1. Select **Enable** for the **Azure Monitor Insights Onboarding**.
+1. Set up the **Monitoring Configuration**
+ - **Enable Insights using**: Azure Monitoring Agent
+ - **Subscription**: Select your subscription.
+ - Create a new Data Collection Rule
+ - Create a name.
+ - Select your subscription.
+ - Select your Log Analytics workspace _demo-arg-alert-workspace_.
+ - Select **Create**, verify the settings are correct, and select **Configure** to begin the deployment.
+1. Close **Azure Monitor Insights Onboarding**.
+
+After a successful deployment, **Insights** > **Overview** > **Monitored** shows the virtual machine's **Monitor Coverage** is enabled and a link to the data collection rule.
+
+Select the link to the data collection rule and verify the **Configuration** settings:
+
+- **Resources**: Shows the virtual machine, resource group, and subscription.
+- **Data Sources**:
+ - **Data source**: Performance Counters
+ - **Destination**: Azure Monitor Logs
+
+You can select the Performance Counters link to verify details.
+
+Go to your Log Analytics workspace _demo-arg-alert-workspace_. Select **Settings** > **Agents** > **Linux servers** and one Linux computer is connected to the **Azure Monitor Linux agent**.
+
+Go to your virtual machine and select **Settings** > **Extensions + applications** and verify that the `AzureMonitorLinuxAgent` shows provisioning succeeded.
+++
+## Create query
+
+# [Azure Resource Graph](#tab/azure-resource-graph)
+
+From the Log Analytics workspace, create an Azure Resource Graph query to get a count of your Azure resources. This example uses the Azure Resource Graph `Resources` table.
+
+1. Select **Logs** from the left side of the **Log Analytics workspace** page.
+
+ Close the **Queries** window if it's displayed.
+
+1. Use the following code in the **New Query**.
+
+ ```kusto
+ arg("").Resources
+ | count
+ ```
+
+ Table names in Log Analytics need to be Pascal case, where the first letter of each word is capitalized, like `Resources` or `ResourceContainers`. You can also use all lowercase, like `resources` or `resourcecontainers`.
+
+ :::image type="content" source="./media/alerts-query-quickstart/log-analytics-workspace-query.png" alt-text="Screenshot of the Log Analytics workspace with a query of the Resources table that highlights logs and run button.":::
+
+1. Select **Run**.
+
+ The **Results** pane displays the **Count** of resources in your Azure subscription. Make a note of that number because you need it for the alert rule's condition. When you manually run the query, the count is based on your user identity, but a fired alert uses a managed identity. The count might vary between a manual run and a fired alert.
+
+1. Remove the count from your query.
+
+ ```kusto
+ arg("").Resources
+ ```
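+
+Outside Log Analytics, you can sanity-check the same count with the Azure CLI Resource Graph extension. Note that a direct Azure Resource Graph query drops the `arg("")` prefix.
+
+```azurecli
+# Requires the resource-graph extension: az extension add --name resource-graph
+az graph query -q "Resources | count"
+```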
+
+# [Azure Resource Graph and Log Analytics](#tab/arg-log-analytics)
+
+From the Log Analytics workspace, create an Azure Resource Graph query to get the last heartbeat information from your virtual machine. This example uses the Azure Resource Graph `Resources` table and Log Analytics data from the Azure Monitor Logs `Heartbeat` table.
+
+1. Go to your _demo-arg-alert-workspace_ Log Analytics workspace.
+1. Select **Logs** from the left side of the **Log Analytics workspace** page.
+
+ Close the **Queries** window if it's displayed.
+
+1. Use the following code in the **New Query**.
+
+ ```kusto
+ arg("").Resources
+ | where type == 'microsoft.compute/virtualmachines'
+ | project ResourceId = id, name, PowerState = tostring(properties.extended.instanceView.powerState.code)
+ | join (Heartbeat
+ | where TimeGenerated > ago(15m)
+ | summarize lastHeartBeat = max(TimeGenerated) by ResourceId)
+ on ResourceId
+ | project lastHeartBeat, PowerState, name, ResourceId
+ ```
+
+ Table names in Log Analytics need to be Pascal case, where the first letter of each word is capitalized, like `Resources` or `ResourceContainers`. You can also use all lowercase, like `resources` or `resourcecontainers`.
+
+ You can use other timeframes for the `TimeGenerated`. For example, rather than minutes like `15m` use hours like `12h`, `24h`, `48h`.
+
+ :::image type="content" source="./media/alerts-query-quickstart/log-analytics-cross-query.png" alt-text="Screenshot of the Log Analytics workspace with a cross query of the Resources and Heartbeat tables that highlights logs and run button.":::
+
+1. Select **Run**.
+
+ The query should return the virtual machine's last heartbeat, power state, name, and resource ID. If no **Results** are displayed, continue to the next steps.
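+
+If you want to try the same cross-service query from the command line, the following sketch is one possibility. It assumes the `arg("")` pattern is accepted through the Log Analytics query API during the preview, and `<workspace-guid>` is a placeholder for the workspace's customer ID.
+
+```azurecli
+# Hedged sketch: cross-service arg("") support through the query API is an assumption
+az monitor log-analytics query \
+  --workspace "<workspace-guid>" \
+  --analytics-query 'arg("").Resources | where type == "microsoft.compute/virtualmachines" | count'
+```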
+++
+## Create alert rule
+
+# [Azure Resource Graph](#tab/azure-resource-graph)
+
+From the Log Analytics workspace, select **New alert rule**. The query from your Log Analytics workspace is copied to the alert rule. **Create an alert rule** has several tabs that need to be updated to create the alert.
++
+### Scope
+
+Verify that the scope is set to your Log Analytics workspace named _demo-arg-alert-workspace_.
+
+If you need to change the scope, do the following steps.
+
+1. Go to the **Scope** tab and select **Select scope**.
+1. At the bottom of the **Selected resources** screen, delete the current scope.
+1. Expand the **demo-arg-alert-rg** from the list of resources and select **demo-arg-alert-workspace**.
+1. Select **Apply**.
+1. Select **Next: Condition**.
+
+### Condition
+
+The form has several fields to complete.
+
+- **Signal name**: Custom log search
+- **Search query**: Displays the query code.
+
+**Measurement**
+
+- **Measure**: Table rows
+- **Aggregation type**: Count
+- **Aggregation granularity**: 5 minutes
+
+**Alert logic**
+
+- **Operator**: Greater than
+- **Threshold value**: Use a number that's less than the number returned from the resources count.
+
+ For example, if your resource count was 50, then use 45. This value triggers the alert to fire when it evaluates your resources because your number of resources is greater than the threshold value.
+
+- **Frequency of evaluation**: 5 minutes
+
+Select **Next: Actions**.
+
+### Actions
+
+Select **Create action group**.
+
+- **Subscription**: Select your Azure subscription.
+- **Resource group**: _demo-arg-alert-rg_
+- **Region**: Global
+- **Action group name**: _demo-arg-alert-action-group_
+- **Display name**: _demo-action_ (limit is 12 characters)
+
+Select **Next: Notifications**.
+
+- **Notification type**: Select **Email/SMS message/Push/Voice**.
+- **Name**: _email-alert_
+- Select the **Email** checkbox and type your email address.
+- Select **Ok**.
+
+Select **Review + Create**, verify the summary is correct, and select **Create**. You're returned to the **Actions** tab of the **Create an alert rule** page. The **Action group name** shows the action group you created.
+
+Select **Next: Details**.
+
+### Details
+
+Use the following information on the **Details** tab.
+
+ - **Subscription**: Select your Azure subscription
+ - **Resource group**: _demo-arg-alert-rg_
+ - **Severity**: Accept the default value **3 - Informational**
+ - **Alert rule name**: _demo-arg-alert-rule_
+ - **Alert rule description**: _Email alert for count of Azure resources_
+ - **Identity**: Select _System assigned managed identity_
+
+Select **Review + Create**, verify the summary is correct, and select **Create**. You're returned to the **Logs** page of your **Log Analytics workspace**.
+
+You receive an email notification to confirm you were added to the action group.
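+
+The portal flow can also be approximated from the command line. The following is an illustrative sketch only: the subscription ID, threshold, and email address are placeholders, exact flag names vary by CLI version, and the system-assigned identity still needs to be enabled and granted a role as described in the next section.
+
+```azurecli
+# Create the action group that sends email notifications
+az monitor action-group create \
+  --resource-group demo-arg-alert-rg \
+  --name demo-arg-alert-action-group \
+  --short-name demo-action \
+  --action email email-alert you@example.com
+
+# Create the log search alert rule on the workspace (placeholder subscription ID and threshold)
+az monitor scheduled-query create \
+  --resource-group demo-arg-alert-rg \
+  --name demo-arg-alert-rule \
+  --scopes "/subscriptions/<subscription-id>/resourceGroups/demo-arg-alert-rg/providers/Microsoft.OperationalInsights/workspaces/demo-arg-alert-workspace" \
+  --condition "count 'ResourceCount' > 45" \
+  --condition-query ResourceCount='arg("").Resources' \
+  --evaluation-frequency 5m \
+  --window-size 5m \
+  --severity 3 \
+  --action-groups "$(az monitor action-group show --resource-group demo-arg-alert-rg --name demo-arg-alert-action-group --query id --output tsv)"
+```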
+
+### Assign role
+
+Assign the _Log Analytics Reader_ role to the system-assigned managed identity so that it has permissions to fire alerts that send email notifications.
+
+1. Select **Monitoring** > **Alerts** in the Log Analytics workspace.
+
+ Select **OK** if you're prompted that **Your unsaved edits will be discarded**.
+
+1. Select **Alert rules**.
+1. Select _demo-arg-alert-rule_.
+1. Select **Settings** > **Identity** > **System assigned**.
+
+ - **Status**: On
+ - **Object ID**: Shows the GUID for your Enterprise Application (service principal) in Microsoft Entra ID.
+ - **Permission**: Select **Azure role assignments**
+ - Verify the correct subscription is selected.
+ - Select **Add role assignment**
+ - **Scope**: _Subscription_
+ - **Subscription**: Your Azure subscription name
+ - **Role**: _Log Analytics Reader_
+1. Select **Save**.
+
+It takes a few minutes for the _Log Analytics Reader_ to display on the **Azure role assignments** page. Select **Refresh** to update the page.
+
+Use your browser's back button to return to the **Identity** and then select **Overview** to return to the alert rule. Select the link to your resource group named _demo-arg-alert-rg_.
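+
+If you script the role assignment instead, the following sketch grants the same role at subscription scope. The placeholders are the **Object ID** from the **Identity** page and your subscription ID.
+
+```azurecli
+# Grant the alert rule's system-assigned identity read access at subscription scope
+az role assignment create \
+  --assignee-object-id "<principal-object-id>" \
+  --assignee-principal-type ServicePrincipal \
+  --role "Log Analytics Reader" \
+  --scope "/subscriptions/<subscription-id>"
+```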
+
+# [Azure Resource Graph and Log Analytics](#tab/arg-log-analytics)
+
+From the Log Analytics workspace, select **New alert rule**. The query from your Log Analytics workspace is copied to the alert rule. The **Create an alert rule** has several tabs that need to be updated.
++
+### Scope
+
+Verify that the scope is set to your Log Analytics workspace.
+
+If you need to change the scope, do the following steps.
+
+1. Go to the **Scope** tab and select **Select scope**.
+1. At the bottom of the **Selected resources** screen, delete the current scope.
+1. Expand the **demo-arg-alert-rg** from the list of resources and select **demo-arg-alert-workspace**.
+1. Select **Apply**.
+1. Select **Next: Condition**.
+
+### Condition
+
+The form has several fields to complete.
+
+- **Signal name**: Custom log search
+- **Search query**: Displays the query code.
+
+**Measurement**
+
+- **Measure**: Table rows
+- **Aggregation type**: Count
+- **Aggregation granularity**: 5 minutes
+
+**Alert logic**
+
+- **Operator**: Less than
+- **Threshold value**: 2
+- **Frequency of evaluation**: 5 minutes
+
+Select **Next: Actions**.
+
+### Actions
+
+Select **Create action group**.
+
+- **Subscription**: Select your Azure subscription.
+- **Resource group**: _demo-arg-alert-rg_
+- **Region**: Global
+- **Action group name**: _demo-arg-la-alert-action-group_
+- **Display name**: _demo-argla_ (limit is 12 characters)
+
+Select **Next: Notifications**.
+
+- **Notification type**: Select **Email/SMS message/Push/Voice**
+- **Name**: _email-alert-arg-la_
+- Select the **Email** checkbox and type your email address
+- Select **Ok**
+
+Select **Review + Create**, verify the summary is correct, and select **Create**. You're returned to the **Actions** tab of the **Create an alert rule** page. The **Action group name** shows the action group you created.
+
+Select **Next: Details**.
+
+### Details
+
+Use the following information on the **Details** tab.
+
+ - **Subscription**: Select your Azure subscription
+ - **Resource group**: _demo-arg-alert-rg_
+ - **Severity**: Accept the default value **2 - Warning**
+ - **Alert rule name**: _demo-arg-la-alert-rule_
+ - **Alert rule description**: _Email alert for ARG-LA query of Azure virtual machine_
+ - **Identity**: Select _System assigned managed identity_
+
+Select **Review + Create**, verify the summary is correct, and select **Create**. You're returned to the **Logs** page of your **Log Analytics workspace**.
+
+You receive an email notification to confirm you were added to the action group.
+
+### Assign role
+
+Assign the _Log Analytics Reader_ role to the system-assigned managed identity so that it has permissions to fire alerts that send email notifications.
+
+1. Select **Monitoring** > **Alerts** in the Log Analytics workspace.
+
+ Select **OK** if you're prompted that **Your unsaved edits will be discarded**.
+
+1. Select **Alert rules**
+1. Select _demo-arg-la-alert-rule_
+1. Select **Settings** > **Identity** > **System assigned**
+
+ - **Status**: On
+ - **Object ID**: Shows the GUID for your Enterprise Application (service principal) in Microsoft Entra ID.
+ - **Permission**: Select **Azure role assignments**
+ - Verify the correct subscription is selected.
+ - Select **Add role assignment**
+ - **Scope**: _Subscription_
+ - **Subscription**: Your Azure subscription name
+ - **Role**: _Log Analytics Reader_
+1. Select **Save**.
+
+It takes a few minutes for the _Log Analytics Reader_ to display on the **Azure role assignments** page. Select **Refresh** to update the page.
+
+Use your browser's back button to return to the **Identity** and select **Overview** to return to the alert rule. Select the link to your resource group named _demo-arg-alert-rg_.
+++
+## Verify alerts
+
+# [Azure Resource Graph](#tab/azure-resource-graph)
+
+After the role is assigned to your alert rule, you begin to receive email alerts. The rule was created to send alerts every five minutes, and it takes a few minutes to get the first alert.
+
+You can also view the alerts in the Azure portal.
+
+1. Go to the resource group _demo-arg-alert-rg_.
+1. Select _demo-arg-alert-workspace_ in your list of resources.
+1. Select **Monitoring** > **Alerts**.
+1. A list of alerts is displayed.
+
+ :::image type="content" source="./media/alerts-query-quickstart/alert-fired.png" alt-text="Screenshot of the Log Analytics workspace that shows list of alerts that fired.":::
++
+# [Azure Resource Graph and Log Analytics](#tab/arg-log-analytics)
+
+After the role is assigned to your alert rule, you begin to receive email alerts. The rule was created to send alerts every five minutes, and it takes a few minutes to get the first alert.
+
+You can also view the alerts in the Azure portal.
+
+1. Go to the resource group _demo-arg-alert-rg_.
+1. Select your virtual machine.
+1. Select **Monitoring** > **Alerts**.
+1. A list of alerts is displayed.
+
+ :::image type="content" source="./media/alerts-query-quickstart/vm-alert-fired.png" alt-text="Screenshot of the virtual machine monitoring alerts that shows list of alerts that fired.":::
+
+> [!NOTE]
+> It might take 30 minutes for log information to become available to create alerts.
+++
+## How did we solve the problem?
+
+You created an Azure Resource Graph query and a Log Analytics workspace to monitor Azure resources. You also set up alerts to notify you for events and assigned a role to the system-assigned managed identity. After the alert was created, you received email alerts based on conditions in the alert rule.
+
+## Clean up resources
+
+If you want to keep the alert configuration but stop the alert from firing and sending email notifications, you can disable it. Go to your alert rule _demo-arg-alert-rule_ or _demo-arg-la-alert-rule_ and select **Disable**.
+
+If you don't need this alert or the resources you created in this example, delete the resource group with the following steps:
+
+1. Go to your resource group _demo-arg-alert-rg_.
+1. Select **Delete resource group**.
+1. Type the resource group name to confirm.
+1. Select **Delete**.
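+
+Or delete the resource group with a single Azure CLI command:
+
+```azurecli
+# Deletes the resource group and everything in it
+az group delete --name demo-arg-alert-rg --yes --no-wait
+```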
+
+## Related content
+
+For more information about the query language or how to explore resources, go to the following articles.
+
+- [Understanding the Azure Resource Graph query language](./concepts/query-language.md)
+- [Explore your Azure resources with Resource Graph](./concepts/explore-resources.md)
+- [Overview of Log Analytics in Azure Monitor](../../azure-monitor/logs/log-analytics-overview.md)
+- [Troubleshoot Azure Resource Graph alerts](./troubleshoot/alerts.md)
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/overview.md
Title: Overview of Azure Resource Graph description: Understand how the Azure Resource Graph service enables complex querying of resources at scale across subscriptions and tenants. Previously updated : 08/15/2023 Last updated : 10/31/2023 + # What is Azure Resource Graph? Azure Resource Graph is an Azure service designed to extend Azure Resource Management by
provide the following abilities:
- Query resources with complex filtering, grouping, and sorting by resource properties.
- Explore resources iteratively based on governance requirements.
-- Assess the impact of applying policies in a vast cloud environment.
+- Assess the effect of applying policies in a vast cloud environment.
- [Query changes made to resource properties](./how-to/get-resource-changes.md).

In this documentation, you review each feature in detail.
With Azure Resource Graph, you can:
## How Resource Graph is kept current
-When an Azure resource is updated, Resource Graph is notified by Resource Manager of the change.
-Resource Graph then updates its database. Resource Graph also does a regular _full scan_. This scan
-ensures that Resource Graph data is current if there are missed notifications or when a resource is
-updated outside of Resource Manager.
+When an Azure resource is updated, Azure Resource Manager notifies Azure Resource Graph about the change. Azure Resource Graph then updates its database. Azure Resource Graph also does a regular _full scan_. This scan ensures that Azure Resource Graph data is current if there are missed notifications or when a resource is updated outside of Azure Resource Manager.
> [!NOTE]
> Resource Graph uses a `GET` to the latest non-preview application programming interface (API) of each resource provider to gather
structured the same for each language. Learn how to enable Resource Graph with:
- [Azure PowerShell](./first-query-powershell.md#add-the-resource-graph-module)
- [Python](./first-query-python.md#add-the-resource-graph-library)
+## Alerts integration with Log Analytics
+
+> [!NOTE]
+> Azure Resource Graph alerts integration with Log Analytics is in public preview.
+
+You can create alert rules by using either Azure Resource Graph queries or by integrating Log Analytics with Azure Resource Graph queries through Azure Monitor. Both methods can be used to create alerts for Azure resources. For examples, go to [Quickstart: Create alerts with Azure Resource Graph and Log Analytics](./alerts-query-quickstart.md).
+
## Next steps

- Learn more about the [query language](./concepts/query-language.md).
governance Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/troubleshoot/alerts.md
+
+ Title: Troubleshoot Azure Resource Graph alerts
+description: Learn how to troubleshoot issues with Azure Resource Graph alerts integration with Log Analytics.
Last updated : 10/31/2023+++
+# Troubleshoot Azure Resource Graph alerts
+
+> [!NOTE]
+> Azure Resource Graph alerts integration with Log Analytics is in public preview.
+
+The following descriptions help you troubleshoot queries for Azure Resource Graph alerts that integrate with Log Analytics.
+
+## Azure Resource Graph operators
+
+Only the operators supported in Azure Resource Graph Explorer are supported as part of this integration with Log Analytics for alerts. For more information, go to [supported operators](../concepts/query-language.md#supported-kql-language-elements).
+
+## Pagination
+
+Azure Resource Graph has pagination in its dedicated APIs. But because of the way Log Analytics interacts with Azure Resource Graph, pagination isn't supported in this integration, which is why only 1,000 results are returned.
+
+- Cross queries between Azure Resource Graph and Log Analytics don't support pagination and only show the first 1,000 results.
+- You must set a limit of 400 when you write a query with the [mv-expand](../concepts/query-language.md#supported-tabulartop-level-operators) operator, as shown in the sketch after this list.
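+
+As an illustration, the following hedged sketch caps `mv-expand` at 400 rows by running a query through the Azure CLI. The expanded property path is only an example.
+
+```azurecli
+# Requires the resource-graph extension: az extension add --name resource-graph
+az graph query -q "Resources | where type == 'microsoft.compute/virtualmachines' | mv-expand nic = properties.networkProfile.networkInterfaces limit 400 | project name, nicId = tostring(nic.id)"
+```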
++
+## Managed identities
+
+The managed identity for your alert must have the role [Log Analytics Contributor](../../../role-based-access-control/built-in-roles.md#log-analytics-contributor) or [Log Analytics Reader](../../../role-based-access-control/built-in-roles.md#log-analytics-reader). The role provides the permissions to get monitoring information.
+
+When you set up an alert, the results can differ from the results after the alert is fired. The reason is that a fired alert runs based on the managed identity, but when you manually test an alert, it runs based on your user identity.
+
+## Table names
+
+Azure Resource Graph table names need to be Pascal case, where the first letter of each word is capitalized, like `Resources` or `ResourceContainers`. You can also use all lowercase, like `resources` or `resourcecontainers`.
hdinsight-aks Sink Sql Server Table Using Flink Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/sink-sql-server-table-using-flink-sql.md
The SQLServer CDC connector is a Flink Source connector, which reads database sn
We've already covered in detail how to use [secure shell](./flink-web-ssh-on-portal-to-flink-sql.md) with Flink.
-## Prepare table and enable cdc feature on SQL Server sqldb
+### Prepare table and enable CDC feature on SQL Server SQLDB
Let's prepare a table and enable CDC. You can refer to the detailed steps listed in the [SQL documentation](/sql/relational-databases/track-changes/enable-and-disable-change-data-capture-sql-server?)
GO
```

**Verify that the user has access to the CDC table**

``` SQL
USE inventory
GO
VALUES ('21-FEB-2016', 1003, 1, 107);
EXEC sys.sp_cdc_enable_table @source_schema = 'dbo', @source_name = 'orders', @role_name = NULL, @supports_net_changes = 0;
GO
```
-## Download SQLServer CDC connector and its dependencies on SSH
-
-**WSL to ubuntu on local to check all dependencies related *flink-sql-connector-sqlserver-cdc* jar**
+### Download SQLServer CDC connector on SSH
```
-myvm@MININT-481C9TJ:/mnt/c/Work/99_tools/apache-maven-3.9.0/bin$ vim pom.xml
-
-<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
- <modelVersion>4.0.0</modelVersion>
- <groupId>com.dep.download</groupId>
- <artifactId>dep-download</artifactId>
- <version>1.0-SNAPSHOT</version>
-<!-- https://mvnrepository.com/artifact/com.ververica/flink-sql-connector-sqlserver-cdc -->
- <dependency>
- <groupId>com.ververica</groupId>
- <artifactId>flink-sql-connector-sqlserver-cdc</artifactId>
- <version>2.3.0</version>
- </dependency>
-</project>
-
-myvm@MININT-481C9TJ:/mnt/c/Work/99_tools/apache-maven-3.9.0/bin$ mkdir target
-
-myvm@MININT-481C9TJ:/mnt/c/Work/99_tools/apache-maven-3.9.0/bin$ /mnt/c/Work/99_tools/apache-maven-3.9.0/bin/mvn -DoutputDirectory=target -f pom.xml dependency:copy-dependencies
-[INFO] Scanning for projects...
-
-myvm@MININT-481C9TJ:/mnt/c/Work/99_tools/apache-maven-3.9.0/bin$ cd target
-myvm@MININT-481C9TJ:/mnt/c/Work/99_tools/apache-maven-3.9.0/bin/target$ ll
-total 19436
-drwxrwxrwx 1 msdata msdata 4096 Feb 9 08:39 ./
-drwxrwxrwx 1 msdata msdata 4096 Feb 9 08:37 ../
--rwxrwxrwx 1 msdata msdata 85388 Feb 9 08:39 awaitility-4.0.1.jar*--rwxrwxrwx 1 msdata msdata 3085931 Feb 9 08:39 flink-shaded-guava-30.1.1-jre-16.0.jar*--rwxrwxrwx 1 msdata msdata 16556459 Feb 9 08:39 flink-sql-connector-sqlserver-cdc-2.3.0.jar*--rwxrwxrwx 1 msdata msdata 123103 Feb 9 08:39 hamcrest-2.1.jar*--rwxrwxrwx 1 msdata msdata 40502 Feb 9 08:39 slf4j-api-1.7.15.jar*
-```
-**Let us download jars to SSH**
-```sql
-wget https://repo1.maven.org/maven2/com/ververica/flink-connector-sqlserver-cdc/2.4.0/flink-connector-sqlserver-cdc-2.4.0.jar
-wget https://repo1.maven.org/maven2/org/apache/flink/flink-shaded-guava/30.1.1-jre-16.0/flink-shaded-guava-30.1.1-jre-16.0.jar
-wget https://repo1.maven.org/maven2/org/awaitility/awaitility/4.0.1/awaitility-4.0.1.jar
-wget https://repo1.maven.org/maven2/org/hamcrest/hamcrest/2.1/hamcrest-2.1.jar
-wget https://repo1.maven.org/maven2/net/java/loci/jsr308-all/1.1.2/jsr308-all-1.1.2.jar
-
-msdata@pod-0 [ ~/jar ]$ ls -l
-total 6988
--rw-r-- 1 msdata msdata 85388 Sep 6 2019 awaitility-4.0.1.jar--rw-r-- 1 msdata msdata 107097 Jun 25 03:47 flink-connector-sqlserver-cdc-2.4.0.jar--rw-r-- 1 msdata msdata 3085931 Sep 27 2022 flink-shaded-guava-30.1.1-jre-16.0.jar--rw-r-- 1 msdata msdata 123103 Dec 20 2018 hamcrest-2.1.jar--rw-r-- 1 msdata msdata 3742993 Mar 30 2011 jsr308-all-1.1.2.jar
+wget https://repo1.maven.org/maven2/com/ververica/flink-sql-connector-sqlserver-cdc/2.4.1/flink-sql-connector-sqlserver-cdc-2.4.1.jar
```

### Add jar into sql-client.sh and connect to Flink SQL Client

```sql
-msdata@pod-0 [ ~ ]$ bin/sql-client.sh -j jar/flink-sql-connector-sqlserver-cdc-2.4.0.jar -j jar/flink-shaded-guava-30.1.1-jre-16.0.jar -j jar/hamcrest-2.1.jar -j jar/awaitility-4.0.1.jar -j jar/jsr308-all-1.1.2.jar
+bin/sql-client.sh -j flink-sql-connector-sqlserver-cdc-2.4.1.jar
```
-## Create SQLServer CDC table
+### Create SQLServer CDC table
``` sql
SET 'sql-client.execution.result-mode' = 'tableau';
select * from orders;
:::image type="content" source="./media/sink-sql-server-table-using-flink-sql/insert-sql-table.png" alt-text="Screenshot showing making changes on SQL Table.":::
-## Validation
+### Validation
Monitor the table on Flink SQL
machine-learning Concept V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-v2.md
Title: 'CLI & SDK v2'
+ Title: 'Azure Machine Learning CLI & SDK v2'
-description: This article explains the difference between the v1 and v2 versions of Azure Machine Learning v1 and v2.
+description: This article explains the difference between the v1 and v2 versions of Azure Machine Learning.
Last updated 11/04/2022
-#Customer intent: As a data scientist, I want to know whether to use v1 or v2 of CLI, SDK.
+#Customer intent: As a data scientist, I want to know whether to use v1 or v2 of CLI and SDK.
-# What is Azure Machine Learning CLI & Python SDK v2?
+# What is Azure Machine Learning CLI and Python SDK v2?
[!INCLUDE [dev v2](includes/machine-learning-dev-v2.md)]
-Azure Machine Learning CLI v2 and Azure Machine Learning Python SDK v2 introduce a consistency of features and terminology across the interfaces. In order to create this consistency, the syntax of commands differs, in some cases significantly, from the first versions (v1).
+Azure Machine Learning CLI v2 (CLI v2) and Azure Machine Learning Python SDK v2 (SDK v2) introduce a consistency of features and terminology across the interfaces. To create this consistency, the syntax of commands differs, in some cases significantly, from the first versions (v1).
-There are no differences in functionality between SDK v2 and CLI v2. The command line based CLI may be more convenient in CI/CD MLOps type of scenarios, while the SDK may be more convenient for development.
+There are no differences in functionality between CLI v2 and SDK v2. The command line-based CLI might be more convenient in CI/CD MLOps types of scenarios, while the SDK might be more convenient for development.
## Azure Machine Learning CLI v2
-The Azure Machine Learning CLI v2 (CLI v2) is the latest extension for the [Azure CLI](/cli/azure/what-is-azure-cli). The CLI v2 provides commands in the format *az ml __\<noun\> \<verb\> \<options\>__* to create and maintain Azure Machine Learning assets and workflows. The assets or workflows themselves are defined using a YAML file. The YAML file defines the configuration of the asset or workflow – what is it, where should it run, and so on.
+Azure Machine Learning CLI v2 is the latest extension for the [Azure CLI](/cli/azure/what-is-azure-cli). CLI v2 provides commands in the format *az ml __\<noun\> \<verb\> \<options\>__* to create and maintain Machine Learning assets and workflows. The assets or workflows themselves are defined by using a YAML file. The YAML file defines the configuration of the asset or workflow. For example, what is it, and where should it run?
A few examples of CLI v2 commands:
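The following hedged illustration shows the command shape; these particular commands and YAML file names are examples, not necessarily the article's own list.

```azurecli
az ml job create --file job.yml                    # submit a training job defined in YAML
az ml model list                                   # list registered models in the workspace
az ml online-endpoint create --file endpoint.yml   # create a managed online endpoint
```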
### Use cases for CLI v2
### Use cases for CLI v2
-The CLI v2 is useful in the following scenarios:
+CLI v2 is useful in the following scenarios:
-* On board to Azure Machine Learning without the need to learn a specific programming language
+* Onboard to Machine Learning without the need to learn a specific programming language.
- The YAML file defines the configuration of the asset or workflow – what is it, where should it run, and so on. Any custom logic/IP used, say data preparation, model training, model scoring can remain in script files, which are referred to in the YAML, but not part of the YAML itself. Azure Machine Learning supports script files in python, R, Java, Julia or C#. All you need to learn is YAML format and command lines to use Azure Machine Learning. You can stick with script files of your choice.
+ The YAML file defines the configuration of the asset or workflow, such as what it is and where it should run. Any custom logic or IP used, say data preparation, model training, and model scoring, can remain in script files. These files are referred to in the YAML but aren't part of the YAML itself. Machine Learning supports script files in Python, R, Java, Julia, or C#. All you need to learn is YAML format and command lines to use Machine Learning. You can stick with script files of your choice.
-* Ease of deployment and automation
+* Take advantage of ease of deployment and automation.
- The use of command-line for execution makes deployment and automation simpler, since workflows can be invoked from any offering/platform, which allows users to call the command line.
+ The use of the command line for execution makes deployment and automation simpler because you can invoke workflows from any offering or platform that allows users to call the command line.
-* Managed inference deployments
+* Use managed inference deployments.
- Azure Machine Learning offers [endpoints](concept-endpoints.md) to streamline model deployments for both real-time and batch inference deployments. This functionality is available only via CLI v2 and SDK v2.
+ Machine Learning offers [endpoints](concept-endpoints.md) to streamline model deployments for both real-time and batch inference deployments. This functionality is available only via CLI v2 and SDK v2.
-* Reusable components in pipelines
-
- Azure Machine Learning introduces [components](concept-component.md) for managing and reusing common logic across pipelines. This functionality is available only via CLI v2 and SDK v2.
+* Reuse components in pipelines.
+ Machine Learning introduces [components](concept-component.md) for managing and reusing common logic across pipelines. This functionality is available only via CLI v2 and SDK v2.
## Azure Machine Learning Python SDK v2 Azure Machine Learning Python SDK v2 is an updated Python SDK package, which allows users to:
-* Submit training jobs
-* Manage data, models, environments
-* Perform managed inferencing (real time and batch)
-* Stitch together multiple tasks and production workflows using Azure Machine Learning pipelines
+* Submit training jobs.
+* Manage data, models, and environments.
+* Perform managed inferencing (real time and batch).
+* Stitch together multiple tasks and production workflows by using Machine Learning pipelines.
-The SDK v2 is on par with CLI v2 functionality and is consistent in how assets (nouns) and actions (verbs) are used between SDK and CLI. For example, to list an asset, the `list` action can be used in both CLI and SDK. The same `list` action can be used to list a compute, model, environment, and so on.
+SDK v2 is on par with CLI v2 functionality and is consistent in how assets (nouns) and actions (verbs) are used between SDK and CLI. For example, to list an asset, you can use the `list` action in both SDK and CLI. You can use the same `list` action to list a compute, model, environment, and so on.
### Use cases for SDK v2
-The SDK v2 is useful in the following scenarios:
+SDK v2 is useful in the following scenarios:
+
+* Use Python functions to build a single step or a complex workflow.
-* Use Python functions to build a single step or a complex workflow
+ SDK v2 allows you to build a single command or a chain of commands like Python functions. The command has a name and parameters, expects input, and returns output.
- SDK v2 allows you to build a single command or a chain of commands like Python functions - the command has a name, parameters, expects input, and returns output.
+* Move from simple to complex concepts incrementally.
-* Move from simple to complex concepts incrementally
+ SDK v2 allows you to:
- SDK v2 allows you to:
* Construct a single command.
- * Add a hyperparameter sweep on top of that command,
- * Add the command with various others into a pipeline one after the other.
+ * Add a hyperparameter sweep on top of that command.
+ * Add the command with various others into a pipeline one after the other.
- This construction is useful, given the iterative nature of machine learning.
+ This construction is useful because of the iterative nature of machine learning.
-* Reusable components in pipelines
+* Reuse components in pipelines.
- Azure Machine Learning introduces [components](concept-component.md) for managing and reusing common logic across pipelines. This functionality is available only via CLI v2 and SDK v2.
+ Machine Learning introduces [components](concept-component.md) for managing and reusing common logic across pipelines. This functionality is available only via CLI v2 and SDK v2.
-* Managed inferencing
+* Use managed inferencing.
- Azure Machine Learning offers [endpoints](concept-endpoints.md) to streamline model deployments for both real-time and batch inference deployments. This functionality is available only via CLI v2 and SDK v2.
+ Machine Learning offers [endpoints](concept-endpoints.md) to streamline model deployments for both real-time and batch inference deployments. This functionality is available only via CLI v2 and SDK v2.
## Should I use v1 or v2?
+Here are some considerations to help you decide which version to use.
+ ### CLI v2
-The Azure Machine Learning CLI v1 has been deprecated. We recommend you to use CLI v2 if:
+Azure Machine Learning CLI v1 has been deprecated. We recommend that you use CLI v2 if:
-* You were a CLI v1 user
-* You want to use new features like - reusable components, managed inferencing
-* You don't want to use a Python SDK - CLI v2 allows you to use YAML with scripts in python, R, Java, Julia or C#
-* You were a user of R SDK previously - Azure Machine Learning won't support an SDK in `R`. However, the CLI v2 has support for `R` scripts.
-* You want to use command line based automation/deployments
+* You were a CLI v1 user.
+* You want to use new features like reusable components and managed inferencing.
+* You don't want to use a Python SDK. CLI v2 allows you to use YAML with scripts in Python, R, Java, Julia, or C#.
+* You were a user of R SDK previously. Machine Learning won't support an SDK in `R`. However, CLI v2 has support for `R` scripts.
+* You want to use command line-based automation or deployments.
* You don't need Spark Jobs. This feature is currently available in preview in CLI v2. ### SDK v2
-The Azure Machine Learning Python SDK v1 doesn't have a planned deprecation date. If you have significant investments in Python SDK v1 and don't need any new features offered by SDK v2, you can continue to use SDK v1. However, you should consider using SDK v2 if:
+Azure Machine Learning Python SDK v1 doesn't have a planned deprecation date. If you have significant investments in Python SDK v1 and don't need any new features offered by SDK v2, you can continue to use SDK v1. However, you should consider using SDK v2 if:
-* You want to use new features like - reusable components, managed inferencing
-* You're starting a new workflow or pipeline - all new features and future investments will be introduced in v2
-* You want to take advantage of the improved usability of the Python SDK v2 - ability to compose jobs and pipelines using Python functions, easy evolution from simple to complex tasks etc.
+* You want to use new features like reusable components and managed inferencing.
+* You're starting a new workflow or pipeline. All new features and future investments will be introduced in v2.
+* You want to take advantage of the improved usability of Python SDK v2, such as the ability to compose jobs and pipelines by using Python functions, with easy evolution from simple to complex tasks.
## Next steps
-* [How to upgrade from v1 to v2](how-to-migrate-from-v1.md)
-* Get started with CLI v2
+* [Upgrade from v1 to v2](how-to-migrate-from-v1.md)
+* Get started with CLI v2:
* [Install and set up CLI (v2)](how-to-configure-cli.md)
- * [Train models with the CLI (v2)](how-to-train-model.md)
+ * [Train models with CLI (v2)](how-to-train-model.md)
* [Deploy and score models with online endpoints](how-to-deploy-online-endpoints.md)
-* Get started with SDK v2
+* Get started with SDK v2:
* [Install and set up SDK (v2)](https://aka.ms/sdk-v2-install)
- * [Train models with the Azure Machine Learning Python SDK v2](how-to-train-model.md)
- * [Tutorial: Create production ML pipelines with Python SDK v2 in a Jupyter notebook](tutorial-pipeline-python-sdk.md)
+ * [Train models with Azure Machine Learning Python SDK v2](how-to-train-model.md)
+ * [Tutorial: Create production Machine Learning pipelines with Python SDK v2 in a Jupyter notebook](tutorial-pipeline-python-sdk.md)
machine-learning How To Network Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-network-security-overview.md
This article is part of a series on securing an Azure Machine Learning workflow.
This article is part of a series on securing an Azure Machine Learning workflow. See the other articles in this series:

:::moniker range="azureml-api-2"
-* [Use managed networks](how-to-managed-network.md) (preview)
+* [Use managed networks](how-to-managed-network.md)
* [Secure the workspace resources](how-to-secure-workspace-vnet.md)
* [Secure machine learning registries](how-to-registry-network-isolation.md)
* [Secure the training environment](how-to-secure-training-vnet.md)
machine-learning Monitor Azure Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/monitor-azure-machine-learning.md
Start with the article [Monitoring Azure resources with Azure Monitor](../azure-
The following sections build on this article by describing the specific data gathered for Azure Machine Learning. These sections also provide examples for configuring data collection and analyzing this data with Azure tools. > [!TIP]
-> To understand costs associated with Azure Monitor, see [Usage and estimated costs](../azure-monitor/usage-estimated-costs.md). To understand the time it takes for your data to appear in Azure Monitor, see [Log data ingestion time](../azure-monitor/logs/data-ingestion-time.md).
+> To understand costs associated with Azure Monitor, see [Azure Monitor cost and usage](../azure-monitor/cost-usage.md). To understand the time it takes for your data to appear in Azure Monitor, see [Log data ingestion time](../azure-monitor/logs/data-ingestion-time.md).
## Monitoring data from Azure Machine Learning
machine-learning Quickstart Create Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/quickstart-create-resources.md
Last updated 08/17/2023 adobe-target: true
-content_well_notification:
- - AI-contribution
#Customer intent: As a data scientist, I want to create a workspace so that I can start to use Azure Machine Learning.
managed-grafana How To Connect To Data Source Privately https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-connect-to-data-source-privately.md
Once you've set up the private link service, you can create a managed private en
> The *Private link service url* field is optional unless you need TLS. If you specify a URL, Managed Grafana ensures that the host IP address for that URL matches the private endpoint's IP address. For security reasons, Managed Grafana maintains an allowlist for this URL.

1. Click **Create** to add the managed private endpoint resource.
-1. Contact the owner of target Azure Monitor workspace to approve the connection request.
+1. Contact the owner of target private link service to approve the connection request.
1. After the connection request is approved, click **Refresh** to see the connection status and private IP address.

> [!NOTE]
managed-grafana Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/overview.md
Azure Managed Grafana is available in the two service tiers presented below.
| Essential (preview) | Provides the core Grafana functionalities in use with Azure data sources. Since it doesn't provide an SLA guarantee, this tier should be used only for non-production environments. |
| Standard | The default tier, offering better performance, more features and an SLA. It's recommended for most situations. |
-> [!NOTE]
-> The Essential plan (preview) is currently being rolled out and will be available in all cloud regions on October 30, 2023.
The following table lists the main features supported in each tier:

| Feature | Essential (preview) | Standard |
nat-gateway Nat Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/nat-metrics.md
Azure NAT Gateway provides the following diagnostic capabilities:
## Metrics overview
-NAT gateway resources provide the following multi-dimensional metrics in Azure Monitor:
+NAT gateway provides the following multi-dimensional metrics in Azure Monitor:
| Metric | Description | Recommended aggregation | Dimensions |
|||||
| Bytes | Bytes processed inbound and outbound | Sum | Direction (In; Out), Protocol (6 TCP; 17 UDP) |
| Packets | Packets processed inbound and outbound | Sum | Direction (In; Out), Protocol (6 TCP; 17 UDP) |
-| Dropped packets | Packets dropped by the NAT gateway | Sum | / |
-| SNAT Connection Count | Number of new SNAT connections over a given interval of time | Sum | Connection State (Attempted, Established, Failed, Closed, Timed Out), Protocol (6 TCP; 17 UDP) |
-| Total SNAT connection count | Total number of active SNAT connections | Sum | Protocol (6 TCP; 17 UDP) |
-| Datapath availability | Availability of the data path of the NAT gateway. Used to determine whether the NAT gateway endpoints are available for outbound traffic flow. | Avg | Availability (0, 100) |
+| Dropped Packets | Packets dropped by the NAT gateway | Sum | / |
+| SNAT Connection Count | Number of new SNAT connections over a given interval of time | Sum | Connection State (Attempted, Failed), Protocol (6 TCP; 17 UDP) |
+| Total SNAT Connection Count | Total number of active SNAT connections | Sum | Protocol (6 TCP; 17 UDP) |
+| Datapath Availability | Availability of the data path of the NAT gateway. Used to determine whether the NAT gateway endpoints are available for outbound traffic flow. | Avg | Availability (0, 100) |
+
+>[!NOTE]
+> Count aggregation is not recommended for any of the NAT gateway metrics. Count aggregation adds up the number of metric values and not the metric values themselves. Use Sum aggregation instead to get the best representation of data values for connection count, bytes, and packets metrics.
+>
+> Use the Average aggregation for the best representation of health data for the datapath availability metric.
+>
+> See [aggregation types](/azure/azure-monitor/essentials/metrics-aggregation-explained#aggregation-types) for more information.
## Where to find my NAT gateway metrics
To view any one of your metrics for a given NAT gateway resource:
3. In the **Aggregation** drop-down menu, select the recommended aggregation listed in the [metrics overview](#metrics-overview) table.
- :::image type="content" source="./media/nat-metrics/nat-metrics-1.png" alt-text="Screenshot of the metrics setup configuration in NAT gateway resource.":::
+ :::image type="content" source="./media/nat-metrics/nat-metrics-1.png" alt-text="Screenshot of the metrics set up in NAT gateway resource.":::
4. To adjust the time frame over which the chosen metric is presented on the metrics graph or to adjust how frequently the chosen metric is measured, select the **Time** window in the top right corner of the metrics page and make your adjustments.
To view any one of your metrics for a given NAT gateway resource:
The **Bytes** metric shows you the amount of data going outbound through NAT gateway and returning inbound in response to an outbound connection.
-Use this metric for the following measurements:
+Use this metric to:
-- Assess the amount of data being processed through NAT gateway to connect outbound or return inbound.
+- View the amount of data being processed through NAT gateway to connect outbound or return inbound.
-To view the amount of data sent in one or both directions when connecting outbound through NAT gateway:
+To view the amount of data passing through NAT gateway:
1. Select the NAT gateway resource you would like to monitor.
To view the amount of data sent in one or both directions when connecting outbou
### Packets
-The packets metric shows you the number of data packets transmitted through the NAT gateway.
+The packets metric shows you the number of data packets passing through NAT gateway.
Use this metric to:

-- To confirm that traffic is being sent through your NAT gateway to go outbound to the internet or return inbound.
+- Verify that traffic is passing outbound or returning inbound through NAT gateway.
-- To assess the amount of traffic being directed through your NAT gateway resource outbound or inbound (when in response to an outbound directed flow).
+- View the amount of traffic going outbound through NAT gateway or returning inbound.
-To view the number of packets sent in one or both directions when connecting outbound through NAT gateway, follow the same steps in the [Bytes](#bytes) section.
+To view the number of packets sent in one or both directions through NAT gateway, follow the same steps in the [Bytes](#bytes) section.
### Dropped packets
-The dropped packets metric shows you the number of data packets dropped by NAT gateway when directing traffic outbound or inbound in response to an outbound connection.
+The dropped packets metric shows you the number of data packets dropped by NAT gateway when traffic goes outbound or returns inbound in response to an outbound connection.
Use this metric to:

-- Assess whether or not you're nearing or possibly experiencing SNAT exhaustion with a given NAT gateway resource. Check to see if periods of dropped packets coincide with periods of failed SNAT connections with the [SNAT Connection Count](#snat-connection-count) metric.
+- Check if periods of dropped packets coincide with periods of failed SNAT connections with the [SNAT Connection Count](#snat-connection-count) metric.
-- Help assess if you're experiencing a pattern of failed outbound connections.
+- Help determine if you're experiencing a pattern of failed outbound connections or SNAT port exhaustion.
-Reasons for why you may see dropped packets:
+Possible reasons for dropped packets:
-- If you're seeing a high rate of dropped packets, it may be due to outbound connectivity failure. Connectivity failure may happen for various reasons. See the NAT gateway [troubleshooting guide](./troubleshoot-nat.md) to help you further diagnose.
+- Outbound connectivity failure can cause packets to drop. Connectivity failure can happen for various reasons. See the [NAT gateway connectivity troubleshooting guide](/azure/nat-gateway/troubleshoot-nat-connectivity) to help you further diagnose.
### SNAT connection count
-The SNAT connection count metric shows you the number of new SNAT connections within a specified time frame. This metric can be broken out to view different connection states including: attempted, established, failed, closed, and timed out connections. A failed connection volume greater than zero may indicate SNAT port exhaustion.
+The SNAT connection count metric shows you the number of new SNAT connections within a specified time frame. This metric can be filtered by **Attempted** and **Failed** connection states. A failed connection volume greater than zero can indicate SNAT port exhaustion.
Use this metric to:

- Evaluate the health of your outbound connections.

-- Assess whether or not you're nearing or possibly experiencing SNAT port exhaustion.
+- Help diagnose if your NAT gateway is experiencing SNAT port exhaustion.
-- Evaluate whether your NAT gateway resource should be scaled out further by adding more public IPs. --- Assess if you're experiencing a pattern of failed outbound connections.
+- Determine if you're experiencing a pattern of failed outbound connections.
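As a hedged PowerShell alternative to the portal steps that follow, you can pull only the failed connections by filtering on the connection state dimension. The metric name `SNATConnectionCount` and the dimension name `ConnectionState` are assumptions; verify them with `Get-AzMetricDefinition`.

```powershell
# A sketch: 'SNATConnectionCount' and the 'ConnectionState' dimension are
# assumptions; confirm with Get-AzMetricDefinition before relying on them.
$natGatewayId = "/subscriptions/{subscriptionID}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/natGateways/{natGatewayName}"

# Count only failed SNAT connections at 1-minute granularity.
$failedOnly = New-AzMetricFilter -Dimension ConnectionState -Operator eq -Value "Failed"

Get-AzMetric -ResourceId $natGatewayId `
    -MetricName "SNATConnectionCount" `
    -MetricFilter $failedOnly `
    -TimeGrain 00:01:00 `
    -AggregationType Total
```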
To view the connection state of your connections:
To view the connection state of your connections:
### Total SNAT connection count
-The **Total SNAT connection count** metric shows you the total number of active SNAT connections over a period of time.
+The **Total SNAT connection count** metric shows you the total number of active SNAT connections passing through NAT gateway.
You can use this metric to: -- Assess if you're nearing the connection limit of your NAT gateway resource.
+- Evaluate the volume of connections passing through NAT gateway.
+
+- Determine if you're nearing the connection limit of NAT gateway.
- Help assess if you're experiencing a pattern of failed outbound connections.
-Reasons for why you may see failed connections:
+Possible reasons for failed connections:
-- If you're seeing a pattern of failed connections for your NAT gateway resource, there could be multiple possible reasons. See the NAT gateway [troubleshooting guide](./troubleshoot-nat.md) to help you further diagnose.
+- A pattern of failed connections can happen for various reasons. See the [NAT gateway connectivity troubleshooting guide](/azure/nat-gateway/troubleshoot-nat-connectivity) to help you further diagnose.
+
+>[!NOTE]
+> When NAT gateway is attached to a subnet and public IP address, the Azure platform verifies NAT gateway is healthy by conducting health checks. These health checks may appear in NAT gateway's SNAT connection metrics, but are negligible and don't impact NAT gateway's ability to connect outbound.
### Datapath availability
-The datapath availability metric measures the status of the NAT gateway resource over time. This metric informs on whether or not NAT gateway is available for directing outbound traffic to the internet. This metric is a reflection of the health of the Azure infrastructure.
+The datapath availability metric measures the health of the NAT gateway resource over time. This metric indicates if NAT gateway is available for directing outbound traffic to the internet. This metric is a reflection of the health of the Azure infrastructure.
You can use this metric to: -- Monitor the availability of your NAT gateway resource.
+- Monitor the availability of NAT gateway.
- Investigate the platform where your NAT gateway is deployed and determine if it's healthy. - Isolate whether an event is related to your NAT gateway or to the underlying data plane.
-Reasons for why you may see a drop in data path availability include:
+Possible reasons for a drop in data path availability include:
- An infrastructure outage has occurred. -- There aren't healthy VMs available in your NAT gateway configured subnet. For more information, see the NAT gateway [troubleshooting guide](./troubleshoot-nat.md).
+- There aren't healthy VMs available in your NAT gateway configured subnet. For more information, see the [NAT gateway connectivity troubleshooting guide](/azure/nat-gateway/troubleshoot-nat-connectivity).
## Alerts
-Alerts can be configured in Azure Monitor for each of the preceding metrics. These alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address potential issues with your NAT gateway resource.
+Alerts can be configured in Azure Monitor for all NAT gateway metrics. These alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address potential issues with NAT gateway.
For more information about how metric alerts work, see [Azure Monitor Metric Alerts](../azure-monitor/alerts/alerts-metric-overview.md). The following guidance describes how to configure some common and recommended types of alerts for your NAT gateway.
-### Alerts for datapath availability droppage
+### Alerts for datapath availability degradation
-If the datapath of your NAT gateway resource begins to experience drops in availability, you can set up an alert to be fired when it hits a specific threshold in availability.
+Set up an alert on datapath availability to help you detect issues with the health of NAT gateway.
-The recommended guidance is to alert on NAT gateway's datapath availability when it drops below 90% over a 15 minute period. This configuration is indicative of a NAT gateway resource being in a degraded state.
+The recommended guidance is to alert on NAT gateway's datapath availability when it drops below 90% over a 15-minute period. This configuration is indicative of a NAT gateway resource being in a degraded state.
To set up a datapath availability alert, follow these steps:
To set up a datapath availability alert, follow these steps:
5. From the **Aggregation type** drop-down menu, select **Average**.
-6. In the **Threshold value** box, enter **90%** as the value that the datapath availability must drop below before an alert is fired.
+6. In the **Threshold value** box, enter **90%**.
7. From the **Unit** drop-down menu, select **Count**.
Setting the aggregation granularity to less than 5 minutes may trigger false pos
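The same alert rule can be scripted. Below is a minimal sketch with the Az.Monitor module, mirroring the recommended 90% threshold over a 15-minute window with a 5-minute evaluation frequency; the metric name `DatapathAvailability` is an assumption, so confirm it with `Get-AzMetricDefinition` first.

```powershell
# A sketch of the recommended datapath availability alert. The metric name
# 'DatapathAvailability' is an assumption; resource names are placeholders.
$natGatewayId = "/subscriptions/{subscriptionID}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/natGateways/{natGatewayName}"

# Fire when average availability drops below 90%.
$criteria = New-AzMetricAlertRuleV2Criteria -MetricName "DatapathAvailability" `
    -TimeAggregation Average -Operator LessThan -Threshold 90

Add-AzMetricAlertRuleV2 -Name "nat-datapath-availability-alert" `
    -ResourceGroupName "myResourceGroup" `
    -TargetResourceId $natGatewayId `
    -Condition $criteria `
    -WindowSize 00:15:00 `
    -Frequency 00:05:00 `
    -Severity 1
```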
### Alerts for SNAT port exhaustion
-Use the **SNAT connection count** metric and alerts to help determine if you're experiencing SNAT port exhaustion. A failed connection volume greater than zero may indicate SNAT port exhaustion. You may need to investigate further to determine the root cause of these failures.
+Set up an alert on the **SNAT connection count** metric to notify you of connection failures on your NAT gateway. A failed connection volume greater than zero can indicate that either you have reached the connection limit on your NAT gateway or that you have hit SNAT port exhaustion. Investigate further to determine the root cause of these failures.
To create the alert, use the following steps:
To create the alert, use the following steps:
11. Select **Create** to create the alert rule. >[!NOTE]
->SNAT port exhaustion on your NAT gateway resource is uncommon. If you see SNAT port exhaustion, your NAT gateway's idle timeout timer may be holding on to SNAT ports too long or your may need to scale with additional public IPs. To troubleshoot these kinds of issues, refer to the [NAT gateway connectivity troubleshooting guide](./troubleshoot-nat-connectivity.md#snat-exhaustion-due-to-nat-gateway-configuration).
+>SNAT port exhaustion on your NAT gateway resource is uncommon. If you see SNAT port exhaustion, check if NAT gateway's idle timeout timer is set higher than the default of 4 minutes. A long idle timeout setting can cause SNAT ports to be in hold down for longer, which exhausts SNAT port inventory sooner. You can also scale your NAT gateway with additional public IPs to increase NAT gateway's overall SNAT port inventory. To troubleshoot these kinds of issues, refer to the [NAT gateway connectivity troubleshooting guide](/azure/nat-gateway/troubleshoot-nat-connectivity#snat-exhaustion-due-to-nat-gateway-configuration).
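To check the idle timeout value the note refers to, here's a short sketch with the Az.Network module; resource names are placeholders.

```powershell
# Inspect the configured idle timeout; the default is 4 minutes.
$nat = Get-AzNatGateway -ResourceGroupName "myResourceGroup" -Name "myNATgateway"
$nat.IdleTimeoutInMinutes

# Reset it to the default if it was raised unnecessarily.
Set-AzNatGateway -ResourceGroupName "myResourceGroup" -Name "myNATgateway" `
    -IdleTimeoutInMinutes 4
```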
## Network Insights
-[Azure Monitor Network Insights](../network-watcher/network-insights-overview.md) allows you to visualize your Azure infrastructure setup and to review all metrics for your NAT gateway resource from a pre-configured metrics dashboard. These visual tools help you diagnose and troubleshoot any issues with your NAT gateway resource.
+[Azure Monitor Network Insights](../network-watcher/network-insights-overview.md) allows you to visualize your Azure infrastructure setup and to review all metrics for your NAT gateway resource from a preconfigured metrics dashboard. These visual tools help you diagnose and troubleshoot any issues with your NAT gateway resource.
### View the topology of your Azure architectural setup
To view a topological map of your setup in Azure:
1. From your NAT gateway's resource page, select **Insights** from the **Monitoring** section.
-2. On the landing page for **Insights**, there is a topology map of your NAT gateway setup. This map shows the relationship between the different components of your network (subnets, virtual machines, public IP addresses).
+2. On the landing page for **Insights**, there's a topology map of your NAT gateway setup. This map shows the relationship between the different components of your network (subnets, virtual machines, public IP addresses).
3. Hover over any component in the topology map to view configuration information.
For more information on what each metric is showing you and how to analyze these
* Learn about [NAT gateway resource](nat-gateway-resource.md) * Learn about [Azure Monitor](../azure-monitor/overview.md) * Learn about [troubleshooting NAT gateway resources](troubleshoot-nat.md).
+* Learn about [troubleshooting NAT gateway connectivity](/azure/nat-gateway/troubleshoot-nat-connectivity)
network-watcher Connection Monitor Install Azure Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-install-azure-monitor-agent.md
Title: Install Azure Monitor Agent for connection monitor
-description: This article describes how to install Azure Monitor Agent.
-
+ Title: Install and upgrade Azure Monitor Agent - Azure Arc-enabled servers
+
+description: Learn how to install, upgrade, and uninstall Azure Monitor Agent on Azure Arc-enabled servers.
+ - Previously updated : 10/25/2022-
-#Customer intent: I need to monitor a connection by using Azure Monitor Agent.
Last updated : 10/31/2023++
+#Customer intent: As an Azure administrator, I need to install the Azure Monitor Agent on Azure Arc-enabled servers so I can monitor a connection using the Connection Monitor.
-# Install Azure Monitor Agent
+# Install and upgrade Azure Monitor Agent on Azure Arc-enabled servers
-Azure Monitor Agent is implemented as an Azure virtual machine (VM) extension. You can install Azure Monitor Agent by using any of the methods for installing virtual machine extensions, including those described in the [Azure Monitor Agent overview](../azure-monitor/agents/agents-overview.md) article.
+Azure Monitor Agent is implemented as an Azure virtual machine (VM) extension. You can install Azure Monitor Agent using any of the methods described in [Azure Monitor Agent overview](../azure-monitor/agents/agents-overview.md?toc=/azure/network-watcher/toc.json).
-The following section covers installing Azure Monitor Agent on Azure Arc-enabled servers by using PowerShell and the Azure CLI. For more information, see [Manage Azure Monitor Agent](../azure-monitor/agents/azure-monitor-agent-manage.md?tabs=ARMAgentPowerShell%2CPowerShellWindows%2CPowerShellWindowsArc%2CCLIWindows%2CCLIWindowsArc).
+This article covers installing Azure Monitor Agent on Azure Arc-enabled servers using PowerShell or the Azure CLI. For more information, see [Manage Azure Monitor Agent](../azure-monitor/agents/azure-monitor-agent-manage.md?tabs=ARMAgentPowerShell%2CPowerShellWindows%2CPowerShellWindowsArc%2CCLIWindows%2CCLIWindowsArc).
## Use PowerShell
New-AzConnectedMachineExtension -Name AzureNetworkWatcherExtension -ExtensionTyp
```
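For comparison with the Network Watcher extension command above, a minimal sketch of installing Azure Monitor Agent itself on an Arc-enabled Windows server with the Az.ConnectedMachine module; the resource names and region are placeholders.

```powershell
# Install the Azure Monitor Agent extension on an Arc-enabled Windows server.
New-AzConnectedMachineExtension -Name AzureMonitorWindowsAgent `
    -ExtensionType AzureMonitorWindowsAgent `
    -Publisher Microsoft.Azure.Monitor `
    -ResourceGroupName "myResourceGroup" `
    -MachineName "myArcServer" `
    -Location "eastus"
```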
-## Next steps
--- After you've installed the monitoring agents, [create a connection monitor](connection-monitor-create-using-portal.md#create-a-connection-monitor). Then, after you've created a connection monitor, analyze your monitoring data, set alerts, and diagnose issues in your connection monitor and your network.
+## Next step
-- Monitor the network connectivity of your Azure and non-Azure setups by using [Connection Monitor](connection-monitor-overview.md).
+> [!div class="nextstepaction"]
+> [Create a connection monitor](connection-monitor-create-using-portal.md)
network-watcher Connection Monitor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-overview.md
Title: Connection monitor
+ Title: Connection monitor overview
-description: Learn how to use Azure Network Watcher connection monitor to monitor network communication in a distributed environment.
+description: Learn about Azure Network Watcher connection monitor and how to use it to monitor network communication in a distributed environment.
Previously updated : 10/04/2022 Last updated : 10/31/2023
-#CustomerIntent: I need to monitor communication between one VM and another. If the communication fails, I need to know why so that I can resolve the problem.
+#CustomerIntent: As an Azure administrator, I need to monitor communication between one VM and another. If the communication fails, I need to know why so that I can resolve the problem.
# Connection monitor overview
Last updated 10/04/2022
> > To minimize service disruption to your current workloads, [migrate your tests from Network Performance Monitor](migrate-to-connection-monitor-from-network-performance-monitor.md), or [migrate from Connection Monitor (Classic)](migrate-to-connection-monitor-from-connection-monitor-classic.md) to the new Connection Monitor in Azure Network Watcher before February 29, 2024.
-> [!IMPORTANT]
-> Connection Monitor will now support end-to-end connectivity checks from and to *Azure Virtual Machine Scale Sets*, enabling faster performance monitoring and network troubleshooting across scale sets
- Connection Monitor provides unified, end-to-end connection monitoring in Azure Network Watcher. The Connection Monitor feature supports hybrid and Azure cloud deployments. Network Watcher provides tools to monitor, diagnose, and view connectivity-related metrics for your Azure deployments. Here are some use cases for Connection Monitor:
Here are some benefits of Connection Monitor:
* Support for connectivity checks that are based on HTTP, Transmission Control Protocol (TCP), and Internet Control Message Protocol (ICMP) * Metrics and Log Analytics support for both Azure and non-Azure test setups
-![Diagram showing how Connection Monitor interacts with Azure VMs, non-Azure hosts, endpoints, and data storage locations.](./media/connection-monitor-2-preview/hero-graphic-new.png)
-To start using Connection Monitor for monitoring, do the following:
+To start using Connection Monitor for monitoring, follow these steps:
1. [Install monitoring agents](#install-monitoring-agents). 1. [Enable Network Watcher on your subscription](#enable-network-watcher-on-your-subscription).
Rules for a network security group (NSG) or firewall can block communication bet
If you wish to skip the installation process for enabling the Network Watcher extension, you can proceed with the creation of Connection Monitor and allow auto enablement of Network Watcher extensions on your Azure VMs and scale sets.
- > [!Note]
- > In case the virtual machine scale sets is set for manual upgradation, the user will have to upgrade the scale set post Network Watcher extension installation in order to continue setting up the Connection Monitor with virtual machine scale sets as endpoints. Incase the virtual machine scale set is set to auto upgradation, the user need not worry about any upgradation after Network Watcher extension installation.
- > As Connection Monitor now supports unified auto enablement of monitoring extensions, user can consent to auto upgradation of VM scale set with auto enablement of Network Watcher extension during the creation on Connection Monitor for VM scale sets with manual upgradation.
+> [!NOTE]
+> If the Automatic Extension Upgrade isn't enabled on the virtual machine scale sets, then you have to manually upgrade the Network Watcher extension whenever a new version is released.
+>
+> As Connection Monitor now supports unified auto enablement of monitoring extensions, you can consent to auto upgrade of the virtual machine scale set with auto enablement of the Network Watcher extension during the creation of a connection monitor for virtual machine scale sets with manual upgrade.
### Agents for on-premises machines
Connection Monitor includes the following entities:
* **Test group**: The group that contains source endpoints, destination endpoints, and test configurations. A connection monitor can contain more than one test group. * **Test**: The combination of a source endpoint, destination endpoint, and test configuration. A test is the most granular level at which monitoring data is available. The monitoring data includes the percentage of checks that failed and the round-trip time (RTT).
- ![Diagram showing a connection monitor, defining the relationship between test groups and tests.](./media/connection-monitor-2-preview/cm-tg-2.png)
You can create a connection monitor by using the [Azure portal](./connection-monitor-create-using-portal.md), [ARMClient](./connection-monitor-create-using-template.md), or [Azure PowerShell](connection-monitor-create-using-powershell.md).
All sources, destinations, and test configurations that you add to a test group
| 10 | C | D | Config 2 | | 11 | C | E | Config 1 | | 12 | C | E | Config 2 |
-| | |
-- ### Scale limits
When you use metrics, set the resource type as **Microsoft.Network/networkWatche
| ChecksFailedPercent | % Checks Failed | Percentage | Average | Percentage of failed checks for a test. | ConnectionMonitorResourceId <br>SourceAddress <br>SourceName <br>SourceResourceId <br>SourceType <br>Protocol <br>DestinationAddress <br>DestinationName <br>DestinationResourceId <br>DestinationType <br>DestinationPort <br>TestGroupName <br>TestConfigurationName <br>Region <br>SourceIP <br>DestinationIP <br>SourceSubnet <br>DestinationSubnet | | RoundTripTimeMs | Round-trip time (ms) | Milliseconds | Average | RTT for checks sent between source and destination. This value isn't averaged. | ConnectionMonitorResourceId <br>SourceAddress <br>SourceName <br>SourceResourceId <br>SourceType <br>Protocol <br>DestinationAddress <br>DestinationName <br>DestinationResourceId <br>DestinationType <br>DestinationPort <br>TestGroupName <br>TestConfigurationName <br>Region <br>SourceIP <br>DestinationIP <br>SourceSubnet <br>DestinationSubnet | | TestResult | Test Result | Count | Average | Connection monitor test results. <br>Interpretation of result values: <br>0-&nbsp;Indeterminate <br>1- Pass <br>2- Warning <br>3- Fail| SourceAddress <br>SourceName <br>SourceResourceId <br>SourceType <br>Protocol <br>DestinationAddress <br>DestinationName <br>DestinationResourceId <br>DestinationType <br>DestinationPort <br>TestGroupName <br>TestConfigurationName <br>SourceIP <br>DestinationIP <br>SourceSubnet <br>DestinationSubnet |
-| | |
#### Metric-based alerts for Connection Monitor
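As one hedged example of such an alert, the sketch below uses the `ChecksFailedPercent` metric and the `TestGroupName` dimension from the table above; the threshold and all resource names are placeholder assumptions, not values from this article.

```powershell
# A sketch of a metric alert on failed checks for one test group.
$connectionMonitorId = "/subscriptions/{subscriptionID}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}/connectionMonitors/{connectionMonitorName}"

# Scope the alert to a single test group via a dimension selection.
$testGroup = New-AzMetricAlertRuleV2DimensionSelection -DimensionName "TestGroupName" `
    -ValuesToInclude "my-test-group"

$criteria = New-AzMetricAlertRuleV2Criteria -MetricName "ChecksFailedPercent" `
    -DimensionSelection $testGroup `
    -TimeAggregation Average -Operator GreaterThan -Threshold 5

Add-AzMetricAlertRuleV2 -Name "cm-checks-failed-alert" `
    -ResourceGroupName "myResourceGroup" `
    -TargetResourceId $connectionMonitorId `
    -Condition $criteria -WindowSize 00:05:00 -Frequency 00:01:00 -Severity 2
```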
openshift Howto Enable Nsg Flowlogs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-enable-nsg-flowlogs.md
metadata:
name: cluster spec: azEnvironment: "AzurePublicCloud"
- resourceId: "subscriptions/{subscriptionID}/resourceGroups/{resourceGroupName}/providers/Microsoft.RedHatOpenShift/openShiftClusters/{clusterID}"
+ resourceId: "/subscriptions/{subscriptionID}/resourceGroups/{resourceGroupName}/providers/Microsoft.RedHatOpenShift/openShiftClusters/{clusterID}"
nsgFlowLogs: enabled: true
- networkWatcherID: "subscriptions/{subscriptionID}/resourceGroups/{networkWatcherRG}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}"
+ networkWatcherID: "/subscriptions/{subscriptionID}/resourceGroups/{networkWatcherRG}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}"
flowLogName: "{flowlogName}" retentionDays: {retentionDays}
- storageAccountResourceId: "subscriptions/{subscriptionID}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/{storageAccountName}"
+ storageAccountResourceId: "/subscriptions/{subscriptionID}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/{storageAccountName}"
version: {version} ``` See [Tutorial: Log network traffic to and from a virtual machine using the Azure portal](../network-watcher/network-watcher-nsg-flow-logging-portal.md) for possible values for `version` and `retentionDays`.
partner-solutions Dynatrace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-overview.md
Azure Native Dynatrace Service provides the following capabilities:
## Dynatrace links
-For more help using Azure Native Dynatrace Service, visit the [Dynatrace](https://aka.ms/partners/Dynatrace/PartnerDocs) documentation.
+For more help using Azure Native Dynatrace Service, visit the [Dynatrace](https://dt-url.net/azurenativedynatraceservice) documentation.
## Next steps
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview.md
One advantage of running your workload in Azure is global reach. The flexible se
| Qatar Central | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :x: | | South Africa North | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | South Central US | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| South India | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :heavy_check_mark: |
+| South India | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: |
| Southeast Asia | :heavy_check_mark:(v3/v4 only) | :x: $ | :heavy_check_mark: | :heavy_check_mark: | | Sweden Central | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Switzerland North | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Switzerland West | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :heavy_check_mark: |
-| UAE North | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :x: |
+| UAE North | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :x: |
| US Gov Arizona | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :x: | | US Gov Texas | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :x: | | US Gov Virginia | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :x: |
One advantage of running your workload in Azure is global reach. The flexible se
| West Central US | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: | | West Europe | :heavy_check_mark: | :x: $ | :heavy_check_mark: | :heavy_check_mark: | | West US | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: |
-| West US 2 | :heavy_check_mark: | :x: $ | :x: $ | :heavy_check_mark: |
+| West US 2 | :heavy_check_mark: (v3/v4 only) | :x: $ | :x: $ | :heavy_check_mark: |
| West US 3 | :heavy_check_mark: | :heavy_check_mark: ** | :heavy_check_mark: | :x: | $ New Zone-redundant high availability deployments are temporarily blocked in these regions. Already provisioned HA servers are fully supported.
postgresql Quickstart Create Server Database Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/quickstart-create-server-database-portal.md
Go to the [Azure portal](https://portal.azure.com/) to create an Azure Database
1. Select **+ Create**.
-2. On the Create a Azure Database for PostgreSQL page , select **Single server**.
+2. On the Create an Azure Database for PostgreSQL page, select **Single server**.
>[!div class="mx-imgBorder"] > :::image type="content" source="./media/quickstart-create-database-portal/select-single-server.png" alt-text="Select single server":::
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-dns.md
For Azure services, use the recommended zone names as described in the following
| Azure Data Factory (Microsoft.DataFactory/factories) | portal | privatelink.adf.azure.com | adf.azure.com | | Azure Cache for Redis (Microsoft.Cache/Redis) | redisCache | privatelink.redis.cache.windows.net | redis.cache.windows.net | | Azure Cache for Redis Enterprise (Microsoft.Cache/RedisEnterprise) | redisEnterprise | privatelink.redisenterprise.cache.azure.net | redisenterprise.cache.azure.net |
-| Microsoft Purview (Microsoft.Purview) | account | privatelink.purview.azure.com | purview.azure.com |
-| Microsoft Purview (Microsoft.Purview) | portal | privatelink.purviewstudio.azure.com | purview.azure.com </br> purviewstudio.azure.com |
-| Azure Digital Twins (Microsoft.DigitalTwins) | digitalTwinsInstances | privatelink.digitaltwins.azure.net | digitaltwins.azure.net |
+| Microsoft Purview (Microsoft.Purview/accounts) | account | privatelink.purview.azure.com | purview.azure.com |
+| Microsoft Purview (Microsoft.Purview/accounts) | portal | privatelink.purviewstudio.azure.com | purview.azure.com </br> purviewstudio.azure.com |
+| Azure Digital Twins (Microsoft.DigitalTwins/digitalTwinsInstances) | digitalTwinsInstances | privatelink.digitaltwins.azure.net | digitaltwins.azure.net |
| Azure HDInsight (Microsoft.HDInsight/clusters) | N/A | privatelink.azurehdinsight.net | azurehdinsight.net |
-| Azure Arc (Microsoft.HybridCompute) | hybridcompute | privatelink.his.arc.azure.com <br/> privatelink.guestconfiguration.azure.com </br> privatelink.dp.kubernetesconfiguration.azure.com | his.arc.azure.com <br/> guestconfiguration.azure.com </br> dp.kubernetesconfiguration.azure.com |
-| Azure Media Services (Microsoft.Media) | keydelivery </br> liveevent </br> streamingendpoint | privatelink.media.azure.net | media.azure.net |
+| Azure Arc (Microsoft.HybridCompute/privateLinkScopes) | hybridcompute | privatelink.his.arc.azure.com <br/> privatelink.guestconfiguration.azure.com </br> privatelink.dp.kubernetesconfiguration.azure.com | his.arc.azure.com <br/> guestconfiguration.azure.com </br> dp.kubernetesconfiguration.azure.com |
+| Azure Media Services (Microsoft.Media/mediaservices) | keydelivery </br> liveevent </br> streamingendpoint | privatelink.media.azure.net | media.azure.net |
| Azure Data Explorer (Microsoft.Kusto/Clusters) | cluster | privatelink.{regionName}.kusto.windows.net | {regionName}.kusto.windows.net | | Azure Static Web Apps (Microsoft.Web/staticSites) | staticSites | privatelink.azurestaticapps.net </br> privatelink.{partitionId}.azurestaticapps.net | azurestaticapps.net </br> {partitionId}.azurestaticapps.net | | Azure Migrate (Microsoft.Migrate/migrateProjects) | Default | privatelink.prod.migration.windowsazure.com | prod.migration.windowsazure.com |
For Azure services, use the recommended zone names as described in the following
| Azure Automation / (Microsoft.Automation/automationAccounts) | Webhook </br> DSCAndHybridWorker | privatelink.azure-automation.us | azure-automation.us | | Azure SQL Database (Microsoft.Sql/servers) | sqlServer | privatelink.database.usgovcloudapi.net | database.usgovcloudapi.net | | Azure SQL Managed Instance (Microsoft.Sql/managedInstances) | managedInstance | privatelink.{dnsPrefix}.database.usgovcloudapi.net | {instanceName}.{dnsPrefix}.database.usgovcloudapi.net |
+| Azure Synapse Analytics (Microsoft.Synapse/workspaces) | Sql | privatelink.sql.azuresynapse.usgovcloudapi.net | sql.azuresynapse.usgovcloudapi.net |
+| Azure Synapse Analytics (Microsoft.Synapse/workspaces) | SqlOnDemand | privatelink.sql.azuresynapse.usgovcloudapi.net | {workspaceName}-ondemand.sql.azuresynapse.usgovcloudapi.net |
+| Azure Synapse Analytics (Microsoft.Synapse/workspaces) | Dev | privatelink.dev.azuresynapse.usgovcloudapi.net | dev.azuresynapse.usgovcloudapi.net |
+| Azure Synapse Studio (Microsoft.Synapse/privateLinkHubs) | Web | privatelink.azuresynapse.usgovcloudapi.net | azuresynapse.usgovcloudapi.net |
| Storage account (Microsoft.Storage/storageAccounts) | blob </br> blob_secondary | privatelink.blob.core.usgovcloudapi.net | blob.core.usgovcloudapi.net | | Storage account (Microsoft.Storage/storageAccounts) | table </br> table_secondary | privatelink.table.core.usgovcloudapi.net | table.core.usgovcloudapi.net | | Storage account (Microsoft.Storage/storageAccounts) | queue </br> queue_secondary | privatelink.queue.core.usgovcloudapi.net | queue.core.usgovcloudapi.net | | Storage account (Microsoft.Storage/storageAccounts) | file </br> file_secondary | privatelink.file.core.usgovcloudapi.net | file.core.usgovcloudapi.net | | Storage account (Microsoft.Storage/storageAccounts) | web </br> web_secondary | privatelink.web.core.usgovcloudapi.net | web.core.usgovcloudapi.net |
+| Azure Data Lake File System Gen2 (Microsoft.Storage/storageAccounts) | dfs </br> dfs_secondary | privatelink.dfs.core.usgovcloudapi.net | dfs.core.usgovcloudapi.net |
| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) | Sql | privatelink.documents.azure.us | documents.azure.us |
+| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) | MongoDB | privatelink.mongo.cosmos.azure.us | mongo.cosmos.azure.us |
| Azure Batch (Microsoft.Batch/batchAccounts) | batchAccount | privatelink.batch.usgovcloudapi.net | {regionName}.batch.usgovcloudapi.net | | Azure Batch (Microsoft.Batch/batchAccounts) | nodeManagement | privatelink.batch.usgovcloudapi.net | {regionName}.service.batch.usgovcloudapi.net | | Azure Database for PostgreSQL - Single server (Microsoft.DBforPostgreSQL/servers) | postgresqlServer | privatelink.postgres.database.usgovcloudapi.net | postgres.database.usgovcloudapi.net |
For Azure services, use the recommended zone names as described in the following
| Azure Database for MariaDB (Microsoft.DBforMariaDB/servers) | mariadbServer | privatelink.mariadb.database.usgovcloudapi.net| mariadb.database.usgovcloudapi.net | | Azure Key Vault (Microsoft.KeyVault/vaults) | vault | privatelink.vaultcore.usgovcloudapi.net | vault.usgovcloudapi.net <br> vaultcore.usgovcloudapi.net | | Azure Search (Microsoft.Search/searchServices) | searchService | privatelink.search.windows.us | search.windows.us |
+| Azure Container Registry (Microsoft.ContainerRegistry/registries) | registry | privatelink.azurecr.us </br> {regionName}.privatelink.azurecr.us | azurecr.us </br> {regionName}.azurecr.us |
| Azure App Configuration (Microsoft.AppConfiguration/configurationStores) | configurationStores | privatelink.azconfig.azure.us | azconfig.azure.us | | Azure Backup (Microsoft.RecoveryServices/vaults) | AzureBackup | privatelink.{regionCode}.backup.windowsazure.us | {regionCode}.backup.windowsazure.us | | Azure Site Recovery (Microsoft.RecoveryServices/vaults) | AzureSiteRecovery | privatelink.siterecovery.windowsazure.us | {regionCode}.siterecovery.windowsazure.us |
For Azure services, use the recommended zone names as described in the following
| Azure IoT Hub (Microsoft.Devices/IotHubs) | iotHub | privatelink.azure-devices.us<br/>privatelink.servicebus.windows.us<sup>1</sup> | azure-devices.us<br/>servicebus.usgovcloudapi.net | | Azure IoT Hub Device Provisioning Service (Microsoft.Devices/ProvisioningServices) | iotDps | privatelink.azure-devices-provisioning.us | azure-devices-provisioning.us | | Azure Relay (Microsoft.Relay/namespaces) | namespace | privatelink.servicebus.usgovcloudapi.net | servicebus.usgovcloudapi.net |
+| Azure Event Grid (Microsoft.EventGrid/topics) | topic | privatelink.eventgrid.azure.us | eventgrid.azure.us |
+| Azure Event Grid (Microsoft.EventGrid/domains) | domain | privatelink.eventgrid.azure.us | eventgrid.azure.us |
| Azure Web Apps (Microsoft.Web/sites) | sites | privatelink.azurewebsites.us </br> scm.privatelink.azurewebsites.us | azurewebsites.us </br> scm.azurewebsites.us | Azure Monitor (Microsoft.Insights/privateLinkScopes) | azuremonitor | privatelink.monitor.azure.us <br/> privatelink.adx.monitor.azure.us <br/> privatelink.oms.opinsights.azure.us <br/> privatelink.ods.opinsights.azure.us <br/> privatelink.agentsvc.azure-automation.us <br/> privatelink.blob.core.usgovcloudapi.net | monitor.azure.us <br/> adx.monitor.azure.us <br/> oms.opinsights.azure.us<br/> ods.opinsights.azure.us<br/> agentsvc.azure-automation.us <br/> blob.core.usgovcloudapi.net | | Azure AI services (Microsoft.CognitiveServices/accounts) | account | privatelink.cognitiveservices.azure.us | cognitiveservices.azure.us | | Azure Cache for Redis (Microsoft.Cache/Redis) | redisCache | privatelink.redis.cache.usgovcloudapi.net | redis.cache.usgovcloudapi.net |
+| Microsoft Purview (Microsoft.Purview) | account | privatelink.purview.azure.com | purview.azure.com |
+| Microsoft Purview (Microsoft.Purview) | portal | privatelink.purviewstudio.azure.com | purview.azure.com </br> purviewstudio.azure.com |
| Azure HDInsight (Microsoft.HDInsight) | N/A | privatelink.azurehdinsight.us | azurehdinsight.us | | Azure Machine Learning (Microsoft.MachineLearningServices/workspaces) | amlworkspace | privatelink.api.ml.azure.us<br/>privatelink.notebooks.usgovcloudapi.net | api.ml.azure.us<br/>notebooks.usgovcloudapi.net <br/> instances.azureml.us<br/>aznbcontent.net <br/> inference.ml.azure.us |
+| Azure Health Data Services (Microsoft.HealthcareApis/workspaces) | healthcareworkspace | privatelink.workspace.azurehealthcareapis.us </br> privatelink.fhir.azurehealthcareapis.us </br> privatelink.dicom.azurehealthcareapis.us | workspace.azurehealthcareapis.us </br> fhir.azurehealthcareapis.us </br> dicom.azurehealthcareapis.us |
+| Azure Databricks (Microsoft.Databricks/workspaces) | databricks_ui_api </br> browser_authentication | privatelink.databricks.azure.us | databricks.azure.us |
| Azure Virtual Desktop (Microsoft.DesktopVirtualization/workspaces) | global | privatelink-global.wvd.azure.us | wvd.azure.us | | Azure Virtual Desktop (Microsoft.DesktopVirtualization/workspaces </br> Microsoft.DesktopVirtualization/hostpools) | feed <br> connection | privatelink.wvd.azure.us | wvd.azure.us |
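Each recommended zone name in these tables corresponds to a private DNS zone that you create and link to your virtual network. A minimal sketch with the Az.PrivateDns module, using one zone name from the table above; all resource names are placeholders.

```powershell
# Create the private DNS zone for the recommended zone name, then link it
# to the virtual network that hosts the private endpoint.
New-AzPrivateDnsZone -ResourceGroupName "myResourceGroup" `
    -Name "privatelink.purview.azure.com"

$vnetId = "/subscriptions/{subscriptionID}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworks/{vnetName}"

New-AzPrivateDnsVirtualNetworkLink -ResourceGroupName "myResourceGroup" `
    -ZoneName "privatelink.purview.azure.com" `
    -Name "myVNetLink" `
    -VirtualNetworkId $vnetId
```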
reliability Reliability Azure Container Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-azure-container-apps.md
Last updated 08/29/2023
# Reliability in Azure Container Apps
-This article describes reliability support in Azure Container Apps, and covers both regional resiliency with availability zones and cross-region resiliency with disaster recovery. For a more detailed overview of reliability in Azure, see [Azure reliability](/azure/well-architected/resiliency/).
+This article describes reliability support in [Azure Container Apps](/azure/container-apps/overview), and covers both regional resiliency with availability zones and cross-region resiliency with disaster recovery. For a more detailed overview of reliability in Azure, see [Azure reliability](/azure/well-architected/resiliency/).
## Availability zone support Azure Container Apps uses [availability zones](availability-zones-overview.md#zonal-and-zone-redundant-services) in regions where they're available to provide high-availability protection for your applications and data from data center failures.
If you have enabled [session affinity](../container-apps/sticky-sessions.md), an
To take advantage of availability zones, enable zone redundancy as you create the Container Apps environment. The environment must include a virtual network with an available subnet. You can't migrate an existing Container Apps environment from nonavailability zone support to availability zone support.
-## Disaster recovery: cross-region failover
+## Cross-region disaster recovery and business continuity
+ In the unlikely event of a full region outage, you have the option of using one of two strategies:
reliability Reliability Azure Storage Mover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-azure-storage-mover.md
# Reliability in Azure Storage Mover
-This article describes reliability support in Azure Storage Mover and covers cross-region resiliency with disaster recovery. For a more detailed overview of reliability in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview).
+This article describes reliability support in [Azure Storage Mover](/azure/storage-mover/service-overview) and covers both intra-regional resiliency with [availability zones](#availability-zone-support) and [cross-region disaster recovery and business continuity](#cross-region-disaster-recovery-and-business-continuity). For a more detailed overview of reliability principles in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview).
-## Regional reliability
-When deploying an Azure Storage Mover resource, you must select a location in which the resource's instance metadata is stored. Instance metadata includes projects, endpoints, agents, job definitions, and job run history, but doesn't include the actual data to be migrated. Azure storage accounts to be used as migration targets have their own reliability support. Disaster recovery for on-premises data sources is the responsibility of the customer.
+## Availability zone support
-Instance metadata is replicated across multiple availability zones in regions where availability zones are available. Availability zones are unique physical locations within an Azure region. Each zone is made up of one or more data centers equipped with independent power, cooling, and networking.
-Some regions are paired in order to allow cross-region replication. When cross-region replication is utilized, instance metadata is replicated to each region, but is never permitted to leave the geography.
+Azure Storage Mover supports a zone-redundant deployment model.
-When a Storage Mover agent is registered, it connects to the region in which the Storage Mover resource is registered. If an agent's Azure region experiences an outage, the agent itself isn't affected, but management operations that rely on Azure may be unable to complete. In addition, any active data migrations to storage accounts located within the affected region may fail.
+When you deploy an Azure Storage Mover resource, you must [select a particular region](/azure/storage-mover/deployment-planning#select-an-azure-region-for-your-deployment) in which the resource's instance metadata is stored.
-In the unlikely event of a full region outage, you have the option of using one of the following strategies:
+If the region supports availability zones, the instance metadata is automatically replicated across multiple availability zones within that region.
-- Wait for Azure to recover the region-- Redeploy your resources to a different region-- Deploy a redundant Storage Mover in advance
+>[!IMPORTANT]
+>Azure Storage Mover instance metadata includes projects, endpoints, agents, job definitions, and job run history, but doesn't include the actual data to be migrated. Azure storage accounts that are used as migration targets have their own reliability support.
-The last two options are a matter of timing, since deployment will occur either before or after any future outage.
-## Determining reliability for target storage accounts
+### Prerequisites
-Any migration target storage account may require its own recovery steps. This requirement depends on the redundancy options chosen for each storage account. See the [storage account disaster recovery](/azure/storage/common/storage-disaster-recovery-guidance) article to determine whether more steps are necessary.
+- To deploy with availability zone support, you must choose a region that supports availability zones. To see which regions support availability zones, see the [list of supported regions](availability-zones-service-support.md#azure-regions-with-availability-zone-support).
-If a local storage was chosen in lieu of redundancy options, you may need to create a new storage account for use in migrations during the outage.
+- (Optional) If your target storage account doesn't support availability zones, and you would like to migrate the account to AZ support, see [Migrate Azure Storage accounts to availability zone support](migrate-storage.md).
### Zone down experience
-During a zone-wide outage, no action is required during zone recovery. Azure Storage Mover is designed to self-heal and rebalance itself to take advantage of the healthy zone automatically.
+During a zone-wide outage, no action is required during zone recovery. Azure Storage Mover is designed to self-heal and re-balance itself to take advantage of the healthy zone automatically.
+
+Any migration target storage account may require its own recovery steps. This requirement depends on the redundancy options chosen for each storage account. See the [storage account disaster recovery guide](/azure/storage/common/storage-disaster-recovery-guidance) to determine whether more steps are necessary.
+
+If a local storage was chosen in lieu of redundancy options, you may need to create a new storage account for use in migrations during the outage.
++
+## Cross-region disaster recovery and business continuity
++
+When a Storage Mover agent is registered, it connects to the region in which the Storage Mover resource is registered. If an agent's Azure region experiences an outage, the agent itself isn't affected, but management operations that rely on Azure may be unable to complete. In addition, any active data migrations to storage accounts located within the affected region may fail.
+
+Storage Mover supports two forms of disaster recovery:
+
+- [Azure initiated disaster recovery](#azure-initiated-disaster-recovery)
+- [Customer initiated disaster recovery](#customer-initiated-disaster-recovery)
-## Disaster recovery: cross-region failover
+>[!IMPORTANT]
+>Disaster recovery for on-premises data sources is the responsibility of the customer.
-Azure can provide disaster recovery protection against a region-wide or large geography disaster by making use of another region. For more information on Azure disaster recovery architecture, see the article on [Azure to Azure disaster recovery architecture](/azure/site-recovery/azure-to-azure-architecture).
-Azure initiated disaster recovery is only applicable for those regions that have are paired with a cross-region replication region. Azure Storage Mover uses Cosmos DB for storing instance metadata. Data loss may occur only with an unrecoverable disaster in the Azure Cosmos DB region. For more information, see [Region outages](/azure/cosmos-db/high-availability). Azure initiated recovery is active-passive, and full recovery of a region may be up to 24 hours.
+### Azure initiated disaster recovery
-Customers can minimize downtime by following the customer enabled disaster recovery steps described in this section. These strategies may require that further steps be taken prior to a disaster, so be sure to review and plan accordingly.
+Azure initiated disaster recovery is only applicable to those [regions that have region pairs](./cross-region-replication-azure.md#azure-paired-regions). When cross-region replication is utilized, instance metadata is replicated to each region, but is never permitted to leave the geography.
-## Customer enabled disaster recovery
+Azure Storage Mover uses Cosmos DB for storing instance metadata. Data loss may occur only with an unrecoverable disaster in the Azure Cosmos DB region. For more information, see [Region outages](/azure/cosmos-db/high-availability). Azure initiated recovery is active-passive, and full recovery of a region may take up to 24 hours.
-### Deploy resources to a different region
-Since access to your resources may be impacted during an outage. To redeploy resources to a different region, you must first have a snapshot of the resources you wish to redeploy. To ensure that you're restoring the most recent data, taking a snapshot should be done periodically, either on a schedule or after you make substantial changes. Storing the snapshots using a version control system is a good way to store and track history of the snapshots.
+### Customer initiated disaster recovery
+
+Customer initiated disaster recovery isn't restricted to paired regions.
+
+**Before a regional outage occurs:**
+
+- Deploy a zone-redundant Storage Mover by creating Storage Mover resources in a region that supports availability zones.
+
+- Periodically - either on a schedule or after you make substantial changes - take a snapshot of your Storage Mover resources. Storing the snapshots using a version control system is a good way to store and track history of the snapshots. You'll use the last good snapshot in the event of a disaster where you need to recover your resources in a new region.
+
+**During a regional outage:**
+
+You can do one of two things:
+
+- Choose to wait for Azure to recover the region.
+- Minimize downtime by [redeploying your resources to a different region](#deploy-resources-to-a-different-region). Since access to your resources may be impacted during an outage, you'll want to use the last good snapshot of your resources.
+
+>[!TIP]
+>Either of these strategies may still require that you take further steps prior to a disaster, so be sure to review and plan accordingly.
++
+#### Deploy resources to a different region
See the documentation on [exporting templates](/azure/azure-resource-manager/templates/export-template-portal) for further instructions on exporting resources as an Azure Resource Manager (ARM) template.
To use the exported template for disaster recovery, a few changes to the templat
After completing the previous steps and verifying that the template parameters are correct, the template is ready for deployment to a new region. You should deploy the template to a new resource group that has the same default region as the location property in the template.
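A minimal sketch of the export-and-redeploy flow with the Az.Resources module; it assumes you've already made the template edits described above, and all names and regions are placeholders.

```powershell
# Export the source resource group as an ARM template, then deploy it to a
# new resource group in the recovery region after editing the template.
Export-AzResourceGroup -ResourceGroupName "storage-mover-rg" -Path ".\template.json"

New-AzResourceGroup -Name "storage-mover-rg-recovery" -Location "westus2"

New-AzResourceGroupDeployment -ResourceGroupName "storage-mover-rg-recovery" `
    -TemplateFile ".\template.json"
```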
-### Registering the new agent
+#### Registering the new agent
Follow the steps within the [deploy an Azure Storage Mover agent](/azure/storage-mover/agent-deploy) article to register a new agent in the new Storage Mover resource.
-### Assigning the agent to job definitions
+#### Assigning the agent to job definitions
After the new agent has been registered and reports as online, use the Azure portal or PowerShell to associate the existing job definitions to the new agent. The following PowerShell example is provided for convenience.
Update-AzStorageMoverJobDefinition `
-AgentName $agentName ```
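The hunk above shows only the tail of that example. A fuller sketch follows; the parameter names are assumptions based on the Az.StorageMover module, and all resource names are placeholders.

```powershell
# Reassign an existing job definition to the newly registered agent.
# Parameter names are assumptions; verify with
# Get-Help Update-AzStorageMoverJobDefinition.
Update-AzStorageMoverJobDefinition `
    -ResourceGroupName "storage-mover-rg-recovery" `
    -StorageMoverName "myStorageMover" `
    -ProjectName "myProject" `
    -Name "myJobDefinition" `
    -AgentName $agentName
```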
-### Granting agent access to the target storage container
+#### Granting agent access to the target storage container
You need to assign the data contributor role to the managed identity to successfully perform a migration job. Assign the Hybrid Compute resource's system managed identity access to the target storage account resource. The [assign a managed identity access to a resource](/azure/active-directory/managed-identities-azure-resources/howto-assign-access-portal) article provides guidance on how to grant access to the target resource.
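A hedged sketch of that role assignment with the Az.Resources module; the principal ID and scope are placeholders, and `Storage Blob Data Contributor` is assumed to be the data contributor role the paragraph refers to.

```powershell
# Grant the agent's system-assigned managed identity data access to the
# target storage account. Principal ID and scope are placeholders.
New-AzRoleAssignment -ObjectId $agentManagedIdentityPrincipalId `
    -RoleDefinitionName "Storage Blob Data Contributor" `
    -Scope "/subscriptions/{subscriptionID}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/{storageAccountName}"
```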
You're now ready to start migration jobs using the newly deployed Storage Mover
## Next steps
-Read more about any of the following features or options.
-
-| Guide | Description |
-|||
-| [Azure resiliency and reliability](/azure/architecture/framework/resiliency/overview) | A detailed overview of resiliency and reliability in Azure.
-| [storage account disaster recovery](/azure/storage/common/storage-disaster-recovery-guidance) | Concepts and processes involved with a storage account failover and recovery. |
+- [Reliability in Azure](./overview.md)
+- [Storage account disaster recovery](/azure/storage/common/storage-disaster-recovery-guidance)
reliability Reliability Bot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-bot.md
During a zone-wide outage, the customer should expect a brief degradation of per
### Cross-region disaster recovery in multi-region geography + Azure Bot Service runs in active-active mode for both global and regional services. When an outage occurs, you don't need to detect errors or manage the service. Azure Bot Service automatically performs autofailover and auto recovery in a multi-region geographical architecture. For the EU bot regional service, Azure Bot Service provides two full regions inside Europe with active/active replication to ensure redundancy. For the global bot service, all available regions/geographies can be served as the global footprint. ## Next steps
reliability Reliability Deployment Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-deployment-environments.md
+
+ Title: Reliability and availability in Azure Deployment Environments
+description: Learn how Azure Deployment Environments supports disaster recovery. Understand reliability and availability within a single region and across regions.
++++ Last updated : 08/25/2023+++
+# Reliability in Azure Deployment Environments
+
+This article describes reliability support in Azure Deployment Environments, and covers intra-regional resiliency with availability zones and inter-region resiliency with disaster recovery. For a more detailed overview of reliability in Azure, see [Azure reliability](/azure/well-architected/resiliency/overview).
+
+## Availability zone support
+++
+Availability zone support for all resources in Azure Deployment Environments is enabled automatically. There's no action for you to take.
+
+Regions supported:
+- West US 2
+- South Central US
+- UK South
+- West Europe
+- East US
+- Australia East
+- East US 2
+- North Europe
+- West US 3
+- Japan East
+- East Asia
+- Central India
+- Korea Central
+- Canada Central
+
+For more detailed information on availability zones in Azure, see [Regions and availability zones](../reliability/availability-zones-overview.md).
+
+## Cross-region disaster recovery and business continuity
++
+You can replicate the following Deployment Environments resources in an alternate region to prevent data loss if a cross-region failover occurs:
+
+- Dev center
+- Project
+- Catalog
+- Catalog items
+- Dev center environment type
+- Project environment type
+- Environments
+++
+For more information on Azure disaster recovery architecture, see [Azure to Azure disaster recovery architecture](../site-recovery/azure-to-azure-architecture.md).
+
+## Next steps
+
+- To learn more about how Azure supports reliability, see [Azure reliability](/azure/reliability).
+- To learn more about Deployment Environments resources, see [Azure Deployment Environments key concepts](../deployment-environments/concept-environments-key-concepts.md).
+- To get started with Deployment Environments, see [Quickstart: Create and configure the Azure Deployment Environments dev center](../deployment-environments/quickstart-create-and-configure-devcenter.md).
reliability Reliability Energy Data Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-energy-data-services.md
+
+ Title: Reliability in Azure Data Manager for Energy
+description: Find out about reliability in Azure Data Manager for Energy
+++++ Last updated : 06/07/2023+++
+# Reliability in Azure Data Manager for Energy
+
+This article describes reliability support in [Azure Data Manager for Energy](/azure/energy-data-services/), and covers both regional resiliency with availability zones and cross-region resiliency with disaster recovery. For a more detailed overview of reliability in Azure, see [Azure reliability](/azure/well-architected/resiliency/overview).
+
+## Availability zone support
++
+Azure Data Manager for Energy supports zone-redundant instances by default, and no additional configuration is required.
+
+## Prerequisites
+
+Azure Data Manager for Energy supports availability zones in the following regions:
++
+| Americas | Europe |
+||-|
+| South Central US | North Europe |
+| East US | West Europe |
+| Brazil South | |
+
+### Zone down experience
+During a zone-wide outage, no action is required during zone recovery. There may be a brief degradation of performance until the service self-heals and rebalances underlying capacity to adjust to healthy zones. During this period, you may experience 5xx errors and you may have to retry API calls until the service is restored.
+
+## Cross-region disaster recovery and business continuity
+++
+### Disaster recovery in multi-region geography
+
+Azure Data Manager for Energy is a regional service and, therefore, is susceptible to region-down service failures. Azure Data Manager for Energy follows an active-passive failover configuration to recover from regional disaster. An active-passive configuration keeps a warm Azure Data Manager for Energy resource running in the secondary region, but doesn't send traffic there unless the primary region fails.
++
+The following table lists the primary and secondary regions where disaster recovery is supported:
+
+| Geography | Primary | Secondary |
+||-||
+|Americas | South Central US | North Central US |
+|Americas | East US | West US |
+|Europe | North Europe | West Europe |
+|Europe | West Europe | North Europe |
+
+Azure Data Manager for Energy uses Azure Storage, Azure Cosmos DB and Elasticsearch index as underlying data stores for persisting your data partition data. These data stores offer high durability, availability, and scalability. Azure Data Manager for Energy uses [geo-zone-redundant storage](../storage/common/storage-redundancy.md#geo-zone-redundant-storage) or GZRS to automatically replicate data to a secondary region that's hundreds of miles away from the primary region. The same security features enabled in the primary region (for example, encryption at rest using your encryption key) to protect your data are applicable to the secondary region. Similarly, Azure Cosmos DB is a globally distributed data service, which replicates the metadata (catalog) across regions. Elasticsearch index snapshots are taken at regular intervals and geo-replicated to the secondary region. All inflight data are ephemeral and therefore subject to loss. For example, in-transit data that is part of an on-going ingestion job that isn't persisted yet is lost, and you must restart the ingestion process upon recovery.
+
+> [!IMPORTANT]
+> In the following region, disaster recovery is not available. For more information, contact your Microsoft sales or customer representative.
+> 1. Brazil South
+
+#### Set up disaster recovery and outage detection
+
+The Azure Data Manager for Energy service continuously monitors service health in the primary region. If a hard service-down failure is detected in the primary region, we attempt recovery before initiating failover to the secondary region on your behalf. We notify you about the failover progress. Once the failover completes, you can connect to the Azure Data Manager for Energy resource in the secondary region and continue operations. However, there could be slight degradation in performance due to capacity constraints in the secondary region.
+
+##### Managing the resources in your subscription
+You must handle the failover of your business apps connecting to Azure Data Manager for Energy resource and hosted in the same primary region. Additionally, you're responsible for recovering any diagnostic logs stored in your Log Analytics Workspace.
+
+If you [set up private links](../energy-data-services/how-to-set-up-private-links.md) to your Azure Data Manager for Energy resource in the primary region, then you must create a secondary private endpoint to the same resource in the [paired region](cross-region-replication-azure.md#azure-paired-regions).
+
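A minimal sketch of creating that secondary private endpoint with the Az.Network module. The resource ID, group ID, and all names are placeholder assumptions; retrieve the real group ID with `Get-AzPrivateLinkResource` against your resource.

```powershell
# Create a secondary private endpoint in the paired region. The group ID
# below is hypothetical; confirm with Get-AzPrivateLinkResource.
$admeResourceId = "<resource ID of your Azure Data Manager for Energy instance>"

$vnet = Get-AzVirtualNetwork -ResourceGroupName "myResourceGroup" -Name "myPairedRegionVNet"
$pairedRegionSubnet = $vnet.Subnets[0]

$connection = New-AzPrivateLinkServiceConnection -Name "adme-secondary-connection" `
    -PrivateLinkServiceId $admeResourceId `
    -GroupId "{groupId}"

New-AzPrivateEndpoint -ResourceGroupName "myResourceGroup" `
    -Name "adme-secondary-pe" `
    -Location "westeurope" `
    -Subnet $pairedRegionSubnet `
    -PrivateLinkServiceConnection $connection
```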
+> [!CAUTION]
+> If you don't enable public access networks or create a secondary private endpoint before an outage, you'll lose access to the failed over Azure Data Manager for Energy resource in the secondary region. You will be able to access the Azure Data Manager for Energy resource only after the primary region failback is complete.
+
+> [!IMPORTANT]
+> After failover and until the primary region failback completes, you will be unable to perform state modifications to Azure Data Manager for Energy resource created in your subscription. For example,
+> - you cannot **Enable** or **Disable** public access networks.
+> - you cannot **Approve** or **Reject** private endpoint connection to Azure Data Manager for Energy resource
+> - you cannot create a new data partition.
+
+## Next steps
+
+- [Reliability in Azure](availability-zones-overview.md)
reliability Reliability Guidance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-guidance-overview.md
Azure reliability guidance contains the following:
[Azure Cache for Redis](../azure-cache-for-redis/cache-how-to-zone-redundancy.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure Cognitive Search](../search/search-reliability.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure Communications Gateway](../communications-gateway/reliability-communications-gateway.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
+[Azure Container Apps](reliability-azure-container-apps.md)|
[Azure Container Instances](reliability-containers.md)| [Azure Container Registry](../container-registry/zone-redundancy.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure Data Explorer](/azure/data-explorer/create-cluster-database-portal?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
Azure reliability guidance contains the following:
Azure Service Manager (ASM) is the old control plane of Azure, responsible for creating, managing, and deleting VMs and performing other control plane operations; it has been in use since 2011. ASM is retiring in August 2024, and customers can now migrate to [Azure Resource Manager (ARM)](/azure/azure-resource-manager/management/overview). For more information on specific retirement dates and migration documentation, see [Azure Service Manager Retirement](./asm-retirement.md).
## Next steps
reliability Reliability Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-virtual-machines.md
To run production workloads, you can use:
When you create your VMs, use availability zones to protect your applications and data against unlikely datacenter failure. For more information about availability zones for VMs, see [Availability zone support](#availability-zone-support) in this document.
-For information on how to enable availability zones support when you create your VM, see [create availability zone support](#create-a-resource-with-availability-zone-enabled).
+For information on how to enable availability zones support when you create your VM, see [create availability zone support](#create-a-resource-with-availability-zones-enabled).
For information on how to migrate your existing VMs to availability zone support, see [migrate to availability zone support](#migrate-to-availability-zone-support).
To learn more about availability zone readiness options, see:
Because availability zones are physically separate and provide distinct power source, network, and cooling, SLAs (Service-level agreements) increase. For more information, see the [SLA for Virtual Machines](https://azure.microsoft.com/support/legal/sla/virtual-machines/v1_9/).
-#### Create a resource with availability zone enabled
+### Create a resource with availability zones enabled
Get started by creating a virtual machine (VM) with availability zones enabled by using one of the following deployment options: - [Azure CLI](../virtual-machines/linux/create-cli-availability-zone.md)
role-based-access-control Built In Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles.md
Previously updated : 08/07/2023 Last updated : 10/30/2023
The following table provides a brief description of each built-in role. Click th
> | [Cognitive Services OpenAI User](#cognitive-services-openai-user) | Read access to view files, models, deployments. The ability to create completion and embedding calls. | 5e0bd9bd-7b93-4f28-af87-19fc36ad61bd | > | [Cognitive Services QnA Maker Editor](#cognitive-services-qna-maker-editor) | Lets you create, edit, import, and export a KB. You cannot publish or delete a KB. | f4cc2bf9-21be-47a1-bdf1-5c5804381025 | > | [Cognitive Services QnA Maker Reader](#cognitive-services-qna-maker-reader) | Lets you read and test a KB only. | 466ccd10-b268-4a11-b098-b4849f024126 |
+> | [Cognitive Services Usages Reader](#cognitive-services-usages-reader) | Minimal permission to view Cognitive Services usages. | bba48692-92b0-4667-a9ad-c31c7b334ac2 |
> | [Cognitive Services User](#cognitive-services-user) | Lets you read and list keys of Cognitive Services. | a97b65f3-24c7-4388-baec-2e87135dc908 | > | **Internet of things** | | | > | [Device Update Administrator](#device-update-administrator) | Gives you full access to management and content operations | 02ca0879-e8e4-47a5-a61e-5c618b76e64a |
List cluster monitoring user credential action.
"/" ], "description": "List cluster monitoring user credential action.",
- "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/1afdec4b-e479-420e-99e7-f82237c7c5e6",
+ "id": "/providers/Microsoft.Authorization/roleDefinitions/1afdec4b-e479-420e-99e7-f82237c7c5e6",
"name": "1afdec4b-e479-420e-99e7-f82237c7c5e6", "permissions": [ {
Can perform all actions within an Azure Machine Learning workspace, except for c
### Cognitive Services Contributor
-Lets you create, read, update, delete and manage keys of Cognitive Services. [Learn more](../ai-services/cognitive-services-virtual-networks.md)
+Lets you create, read, update, delete and manage keys of Cognitive Services. [Learn more](../ai-services/openai/how-to/role-based-access-control.md)
> [!div class="mx-tableFixed"] > | Actions | Description |
Full access to the project, including the system level configuration. [Learn mor
### Cognitive Services OpenAI Contributor
-Full access including the ability to fine-tune, deploy and generate text
+Full access including the ability to fine-tune, deploy, and generate text. [Learn more](../ai-services/openai/how-to/role-based-access-control.md)
> [!div class="mx-tableFixed"] > | Actions | Description | > | | | > | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/*/read | |
+> | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/accounts/deployments/write | Writes deployments. |
+> | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/accounts/deployments/delete | Deletes deployments. |
+> | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/accounts/raiPolicies/read | Gets all applicable policies under the account including default policies. |
+> | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/accounts/raiPolicies/write | Create or update a custom Responsible AI policy. |
+> | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/accounts/raiPolicies/delete | Deletes a custom Responsible AI policy that's not referenced by an existing deployment. |
+> | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/accounts/commitmentplans/read | Reads commitment plans. |
+> | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/accounts/commitmentplans/write | Writes commitment plans. |
+> | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/accounts/commitmentplans/delete | Deletes commitment plans. |
> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/roleAssignments/read | Get information about a role assignment. | > | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/roleDefinitions/read | Get information about a role definition. | > | **NotActions** | |
Full access including the ability to fine-tune, deploy and generate text
{ "actions": [ "Microsoft.CognitiveServices/*/read",
+ "Microsoft.CognitiveServices/accounts/deployments/write",
+ "Microsoft.CognitiveServices/accounts/deployments/delete",
+ "Microsoft.CognitiveServices/accounts/raiPolicies/read",
+ "Microsoft.CognitiveServices/accounts/raiPolicies/write",
+ "Microsoft.CognitiveServices/accounts/raiPolicies/delete",
+ "Microsoft.CognitiveServices/accounts/commitmentplans/read",
+ "Microsoft.CognitiveServices/accounts/commitmentplans/write",
+ "Microsoft.CognitiveServices/accounts/commitmentplans/delete",
"Microsoft.Authorization/roleAssignments/read", "Microsoft.Authorization/roleDefinitions/read" ],
Full access including the ability to fine-tune, deploy and generate text
### Cognitive Services OpenAI User
-Read access to view files, models, deployments. The ability to create completion and embedding calls.
+Read access to view files, models, deployments. The ability to create completion and embedding calls. [Learn more](../ai-services/openai/how-to/role-based-access-control.md)
> [!div class="mx-tableFixed"] > | Actions | Description |
Read access to view files, models, deployments. The ability to create completion
> | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/accounts/OpenAI/deployments/extensions/chat/completions/action | Creates a completion for the chat message with extensions | > | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/accounts/OpenAI/deployments/embeddings/action | Return the embeddings for a given prompt. | > | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/accounts/OpenAI/deployments/completions/write | |
+> | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/accounts/OpenAI/images/generations/action | Create image generations. |
> | **NotDataActions** | | > | *none* | |
Read access to view files, models, deployments. The ability to create completion
"assignableScopes": [ "/" ],
- "description": "Ability to view files, models, deployments. Readers can't make any changes They can inference",
+ "description": "Ability to view files, models, deployments. Readers are able to call inference operations such as chat completions and image generation.",
"id": "/providers/Microsoft.Authorization/roleDefinitions/5e0bd9bd-7b93-4f28-af87-19fc36ad61bd", "name": "5e0bd9bd-7b93-4f28-af87-19fc36ad61bd", "permissions": [
Read access to view files, models, deployments. The ability to create completion
"Microsoft.CognitiveServices/accounts/OpenAI/deployments/chat/completions/action", "Microsoft.CognitiveServices/accounts/OpenAI/deployments/extensions/chat/completions/action", "Microsoft.CognitiveServices/accounts/OpenAI/deployments/embeddings/action",
- "Microsoft.CognitiveServices/accounts/OpenAI/deployments/completions/write"
+ "Microsoft.CognitiveServices/accounts/OpenAI/deployments/completions/write",
+ "Microsoft.CognitiveServices/accounts/OpenAI/images/generations/action"
], "notDataActions": [] }
Lets you read and test a KB only. [Learn more](../ai-services/qnamaker/index.ym
} ```
+### Cognitive Services Usages Reader
+
+Minimal permission to view Cognitive Services usages. [Learn more](../ai-services/openai/how-to/role-based-access-control.md)
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/locations/usages/read | Read all usages data |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Minimal permission to view Cognitive Services usages.",
+ "id": "/providers/Microsoft.Authorization/roleDefinitions/bba48692-92b0-4667-a9ad-c31c7b334ac2",
+ "name": "bba48692-92b0-4667-a9ad-c31c7b334ac2",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.CognitiveServices/locations/usages/read"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Cognitive Services Usages Reader",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
+ ### Cognitive Services User Lets you read and list keys of Cognitive Services. [Learn more](../ai-services/authentication.md)
sap Configure Control Plane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/configure-control-plane.md
This table shows the parameters related to the deployer VM.
The VM image is defined by using the following structure:
-```python
-{
- "os_type" = ""
- "source_image_id" = ""
- "publisher" = "Canonical"
- "offer" = "0001-com-ubuntu-server-focal"
- "sku" = "20_04-lts"
- "version" = "latest"
- "type" = "marketplace"
+```terraform
+xxx_vm_image = {
+ os_type = ""
+ source_image_id = ""
+ publisher = "Canonical"
+ offer = "0001-com-ubuntu-server-focal"
+ sku = "20_04-lts"
+ version = "latest"
+ type = "marketplace"
} ```
sap Configure System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/configure-system.md
description: Define the SAP system properties for SAP Deployment Automation Fram
Previously updated : 05/04/2023 Last updated : 10/31/2023
To configure this topology, define the database tier values and set `database_hi
This section contains the parameters that define the environment settings. > [!div class="mx-tdCol2BreakAll "]
-> | Variable | Description | Type | Notes |
-> | | -- | - | - |
-> | `environment` | Identifier for the workload zone (maximum five characters) | Mandatory | For example, `PROD` for a production environment and `NP` for a nonproduction environment. |
-> | `location` | The Azure region in which to deploy | Required | |
-> | `custom_prefix` | Specifies the custom prefix used in the resource naming | Optional | |
-> | `use_prefix` | Controls if the resource naming includes the prefix | Optional | DEV-WEEU-SAP01-X00_xxxx |
-> | 'name_override_file' | Name override file | Optional | See [Custom naming](naming-module.md). |
-> | 'save_naming_information | Creates a sample naming JSON file | Optional | See [Custom naming](naming-module.md). |
+> | Variable | Description | Type | Notes |
+> | - | -- | - | - |
+> | `environment` | Identifier for the workload zone (max five characters) | Mandatory | For example, `PROD` for a production environment and `NP` for a nonproduction environment. |
+> | `location` | The Azure region in which to deploy | Required | |
+> | `custom_prefix` | Specifies the custom prefix used in the resource naming | Optional | |
+> | `use_prefix` | Controls if the resource naming includes the prefix | Optional | DEV-WEEU-SAP01-X00_xxxx |
+> | `name_override_file` | Name override file | Optional | See [Custom naming](naming-module.md). |
+> | `save_naming_information` | Creates a sample naming JSON file | Optional | See [Custom naming](naming-module.md). |
+> | `tags` | A dictionary of tags to associate with all resources. | Optional | |
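+
+As an illustration, these parameters might appear in a configuration tfvars file as in the following sketch; the values shown are examples only.
+
+```terraform
+environment             = "DEV"
+location                = "westeurope"
+use_prefix              = true
+save_naming_information = false
+
+tags = {
+  "Owner"      = "sap-team@contoso.com"
+  "CostCenter" = "1234"
+}
+```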
## Resource group parameters
This section contains the parameters that define the resource group.
> [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | > | -- | -- | - |
-> | `resourcegroup_name` | Name of the resource group to be created | Optional |
+> | `resourcegroup_name` | Name of the resource group to be created | Optional |
> | `resourcegroup_arm_id` | Azure resource identifier for an existing resource group | Optional | > | `resourcegroup_tags` | Tags to be associated to the resource group | Optional |
+## Infrastructure parameters
+
+This section contains the parameters related to the Azure infrastructure.
++
+> [!div class="mx-tdCol2BreakAll "]
+> | Variable | Description | Type |
+> | - | -- | - |
+> | `disk_encryption_set_id` | The disk encryption key to use for encrypting managed disks by using customer-provided keys. | Optional |
+> | `proximityplacementgroup_arm_ids` | Specifies the Azure resource identifiers of existing proximity placement groups. | Optional |
+> | `proximityplacementgroup_names` | Specifies the names of the proximity placement groups. | Optional |
+> | `resource_offset` | Provides an offset for resource naming. | Optional |
+> | `use_loadbalancers_for_standalone_deployments` | Controls if load balancers are deployed for standalone installations. | Optional |
+> | `use_scalesets_for_deployment` | Use Flexible Virtual Machine Scale Sets for the deployment. | Optional |
+> | `use_msi_for_clusters` | If defined, configures the Pacemaker cluster by using managed identities. | Optional |
+> | `use_simple_mount` | Specifies if simple mounts are used (applicable for SLES 15 SP# or newer). | Optional |
+> | `custom_disk_sizes_filename` | Defines the disk sizing file name. See [Custom sizing](configure-extra-disks.md). | Optional |
+
+The `resource_offset` parameter controls the naming of resources. For example, if you set the `resource_offset` to 1, the first disk will be named `disk1`. The default value is 0.
+
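+As an illustration, a sketch of these infrastructure parameters in a tfvars file follows; the values are examples only, and the disk encryption set ID is a placeholder.
+
+```terraform
+resource_offset              = 1
+use_scalesets_for_deployment = false
+use_msi_for_clusters         = true
+
+# Placeholder ID for an existing disk encryption set (customer-managed keys).
+disk_encryption_set_id = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Compute/diskEncryptionSets/<des-name>"
+```
+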
+## SAP Application parameters
+
+This section contains the parameters related to the SAP Application.
+
+> [!div class="mx-tdCol2BreakAll "]
+> | Variable | Description | Type |
+> | -- | -- | - |
+> | `sid` | Defines the SAP application SID | Required |
+> | `database_sid` | Defines the database SID | Required |
+> | `scs_instance_number` | The instance number of SCS | Optional |
+> | `ers_instance_number` | The instance number of ERS | Optional |
+> | `pas_instance_number` | The instance number of the Primary Application Server | Optional |
+> | `app_instance_number` | The instance number of the Application Server | Optional |
+> | `web_instance_number` | The instance number of the Web Dispatcher | Optional |
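+
+As an illustration, a minimal sketch of these parameters in a tfvars file; the SID and instance numbers are examples only.
+
+```terraform
+sid          = "X00"
+database_sid = "XDB"
+
+scs_instance_number = "00"
+ers_instance_number = "01"
+pas_instance_number = "00"
+app_instance_number = "00"
+web_instance_number = "00"
+```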
+
## SAP virtual hostname parameters

In SAP Deployment Automation Framework, the SAP virtual hostname is defined by specifying the `use_secondary_ips` parameter.
In SAP Deployment Automation Framework, the SAP virtual hostname is defined by s
> [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | > | -- | -- | - |
-> | `use_secondary_ips` | Boolean flag that indicates if SAP should be installed by using virtual hostnames | Optional |
+> | `use_secondary_ips` | Boolean flag that indicates if SAP should be installed by using virtual hostnames | Optional |
+ ### Database tier parameters
The database tier defines the infrastructure for the database tier. Supported da
- `SQLSERVER` - `NONE` (in this case, no database tier is deployed)
+See [High-availability configuration](configure-system.md#high-availability-configuration) for information on how to configure high availability.
+ > [!div class="mx-tdCol2BreakAll "]
-> | Variable | Description | Type | Notes |
-> | - | -- | -- | |
-> | `database_sid` | Defines the database SID | Required | |
-> | `database_platform` | Defines the database back end | Supported values are `HANA`, `DB2`, `ORACLE`, `ASE`, `SQLSERVER`, and `NONE`. |
-> | `database_high_availability` | Defines if the database tier is deployed highly available | Optional | See [High-availability configuration](configure-system.md#high-availability-configuration). |
-> | `database_server_count` | Defines the number of database servers | Optional | Default value is 1. |
-> | `database_vm_zones` | Defines the availability zones for the database servers | Optional | |
+> | Variable | Description | Type | Notes |
+> | - | -- | -- | -- |
+> | `database_platform` | Defines the database back end | Required | Supported values are `HANA`, `DB2`, `ORACLE`, `ASE`, `SQLSERVER`, and `NONE`. |
+> | `database_high_availability` | Defines if the database tier is deployed highly available | Optional | |
+> | `database_server_count` | Defines the number of database servers | Optional | |
+> | `database_vm_zones` | Defines the availability zones for the database servers | Optional | |
> | `db_sizing_dictionary_key` | Defines the database sizing information | Required | See [Custom sizing](configure-extra-disks.md). |
-> | `db_disk_sizes_filename` | Defines the custom database sizing file name | Optional | See [Custom sizing](configure-extra-disks.md). |
-> | `database_vm_use_DHCP` | Controls if Azure subnet-provided IP addresses should be used | Optional | |
-> | `database_vm_db_nic_ips` | Defines the IP addresses for the database servers (database subnet) | Optional | |
-> | `database_vm_db_nic_secondary_ips` | Defines the secondary IP addresses for the database servers (database subnet) | Optional | |
-> | `database_vm_admin_nic_ips` | Defines the IP addresses for the database servers (admin subnet) | Optional | |
-> | `database_vm_image` | Defines the virtual machine image to use | Optional | |
-> | `database_vm_authentication_type` | Defines the authentication type (key/password) | Optional | |
-> | `database_use_avset` | Controls if the database servers are placed in availability sets | Optional | Default is false. |
-> | `database_use_ppg` | Controls if the database servers are placed in proximity placement groups | Optional | Default is true. |
-> | `database_vm_avset_arm_ids` | Defines the existing availability sets Azure resource IDs | Optional | Primarily used with ANF pinning. |
-> | `hana_dual_nics` | Controls if the HANA database servers will have dual network interfaces | Optional | Default is true. |
+> | `database_vm_use_DHCP` | Controls if Azure subnet-provided IP addresses should be used | Optional | |
+> | `database_vm_db_nic_ips` | Defines the IP addresses for the database servers (database subnet) | Optional | |
+> | `database_vm_db_nic_secondary_ips` | Defines the secondary IP addresses for the database servers (database subnet) | Optional | |
+> | `database_vm_admin_nic_ips` | Defines the IP addresses for the database servers (admin subnet) | Optional | |
+> | `database_vm_image` | Defines the virtual machine image to use | Optional | |
+> | `database_vm_authentication_type` | Defines the authentication type (key/password) | Optional | |
+> | `database_use_avset` | Controls if the database servers are placed in availability sets | Optional | |
+> | `database_use_ppg` | Controls if the database servers are placed in proximity placement groups | Optional | |
+> | `database_vm_avset_arm_ids` | Defines the existing availability sets Azure resource IDs | Optional | Primarily used with ANF pinning. |
+> | `database_use_premium_v2_storage` | Controls if the database tier will use premium storage v2 (HANA) | Optional | |
+> | `hana_dual_nics` | Controls if the HANA database servers will have dual network interfaces | Optional | |
+
The virtual machine and the operating system image are defined by using the following structure:
The virtual machine and the operating system image are defined by using the foll
} ```
-### Common application tier parameters
+## Common application tier parameters
The application tier defines the infrastructure for the application tier, which can consist of application servers, central services servers, and web dispatch servers.
The application tier defines the infrastructure for the application tier, which
> | Variable | Description | Type | Notes | > | - | | --| | > | `enable_app_tier_deployment` | Defines if the application tier is deployed | Optional | |
-> | `sid` | Defines the SAP application SID | Required | |
> | `app_tier_sizing_dictionary_key` | Lookup value that defines the VM SKU and the disk layout for the application tier servers | Optional | > | `app_disk_sizes_filename` | Defines the custom disk size file for the application tier servers | Optional | See [Custom sizing](configure-extra-disks.md). | > | `app_tier_authentication_type` | Defines the authentication type for the application tier virtual machines | Optional | | > | `app_tier_use_DHCP` | Controls if Azure subnet-provided IP addresses should be used (dynamic) | Optional | | > | `app_tier_dual_nics` | Defines if the application tier server will have two network interfaces | Optional | |
-### SAP central services parameters
+## SAP central services parameters
> [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | Notes | > | -- | | -| | > | `scs_server_count` | Defines the number of SCS servers | Required | | > | `scs_high_availability` | Defines if the central services is highly available | Optional | See [High availability configuration](configure-system.md#high-availability-configuration). |
-> | `scs_instance_number` | The instance number of SCS | Optional | |
-> | `ers_instance_number` | The instance number of ERS | Optional | |
> | `scs_server_sku` | Defines the virtual machine SKU to use | Optional | | > | `scs_server_image` | Defines the virtual machine image to use | Required | | > | `scs_server_zones` | Defines the availability zones of the SCS servers | Optional | |
The application tier defines the infrastructure for the application tier, which
> | `scs_server_use_avset` | Controls if the SCS servers are placed in proximity placement groups | Optional | | > | `scs_server_tags` | Defines a list of tags to be applied to the SCS servers | Optional | |
-### Application server parameters
+## Application server parameters
> [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | Notes |
The application tier defines the infrastructure for the application tier, which
> | `application_server_use_avset` | Controls if application servers are placed in proximity placement groups | Optional | | > | `application_server_tags` | Defines a list of tags to be applied to the application servers | Optional | |
-### Web dispatcher parameters
+## Web dispatcher parameters
> [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | Notes |
This section defines the parameters used for defining the key vault information.
> [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | Notes | > | -- | | | -- |
-> | `user_keyvault_id` | Azure resource identifier for existing system credentials key vault | Optional | |
-> | `spn_keyvault_id` | Azure resource identifier for existing deployment credentials (SPNs) key vault | Optional | |
+> | `user_keyvault_id` | Azure resource identifier for existing system credentials key vault | Optional | |
+> | `spn_keyvault_id` | Azure resource identifier for existing deployment credentials (SPNs) key vault | Optional | |
> | `enable_purge_control_for_keyvaults` | Disables the purge protection for Azure key vaults | Optional | Only use for test environments. | ### Anchor virtual machine parameters
By default, the SAP system deployment uses the credentials from the SAP workload
> [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | > | - | - | -- |
-> | `use_msi_for_clusters` | If defined, configures the Pacemaker cluster by using managed identities. | Optional |
-> | `resource_offset` | Provides an offset for resource naming. The offset number for resource naming when creating multiple resources. The default value is 0, which creates a naming pattern of disk0, disk1, and so on. An offset of 1 creates a naming pattern of disk1, disk2, and so on. | Optional |
-> | `disk_encryption_set_id` | The disk encryption key to use for encrypting managed disks by using customer-provided keys. | Optional |
-> | `use_loadbalancers_for_standalone_deployments` | Controls if load balancers are deployed for standalone installations. | Optional |
> | `license_type` | Specifies the license type for the virtual machines. | Possible values are `RHEL_BYOS` and `SLES_BYOS`. For Windows, the possible values are `None`, `Windows_Client`, and `Windows_Server`. | > | `use_zonal_markers` | Specifies if zonal virtual machines will include a zonal identifier: `xooscs_z1_00l###` versus `xooscs00l###`.| Default value is true. |
-> | `proximityplacementgroup_names` | Specifies the names of the proximity placement groups. | |
-> | `proximityplacementgroup_arm_ids` | Specifies the Azure resource identifiers of existing proximity placement groups. | |
-> | `use_simple_mount` | Specifies if simple mounts are used (applicable for SLES 15 SP# or newer). | Optional |
## NFS support
By default, the SAP system deployment uses the credentials from the SAP workload
> [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | Notes | > | -- | --| -- | |
+> | `ANF_HANA_use_AVG` | Use Application Volume Group for the volumes. | Optional | |
+> | `ANF_HANA_use_Zones` | Deploy the Azure NetApp Files volume zonally. | Optional | |
+> | | | | |
> | `ANF_HANA_data` | Create Azure NetApp Files volume for HANA data. | Optional | | > | `ANF_HANA_data_use_existing_volume` | Use existing Azure NetApp Files volume for HANA data. | Optional | Use for pre-created volumes. | > | `ANF_HANA_data_volume_name` | Azure NetApp Files volume name for HANA data. | Optional | |
These parameters need to be updated in the *sap-parameters.yaml* file when you d
> | `ora_version` | Version of Oracle, for example, 19.0.0 | Mandatory | | > | `oracle_sbp_patch` | Oracle SBP patch file name, for example, SAP19P_2202-70004508.ZIP | Mandatory | Must be part of the Bill of Materials |
+You can use the `configuration_settings` variable to let Terraform add them to the sap-parameters.yaml file.
+
+```terraform
+configuration_settings = {
+ ora_release = "19",
+ ora_version = "19.0.0",
+ oracle_sbp_patch = "SAP19P_2202-70004508.ZIP",
+ oraclegrid_sbp_patch = "GIRU19P_2202-70004508.ZIP",
+ }
+```
+ ## Terraform parameters This section contains the Terraform parameters. These parameters need to be entered manually if you're not using the deployment scripts.
The high-availability configuration for the database tier and the SCS tier is co
High-availability configurations use Pacemaker with Azure fencing agents.
+### Cluster parameters
+
+This section contains the parameters related to the cluster configuration.
+
+> [!div class="mx-tdCol2BreakAll "]
+> | Variable | Description | Type |
+> | - | | - |
+> | `database_cluster_disk_lun` | Specifies the LUN of the shared disk for the database cluster. | Optional |
+> | `database_cluster_disk_size` | The size of the shared disk for the Database cluster. | Optional |
+> | `database_cluster_type` | Cluster quorum type; AFA (Azure Fencing Agent), ASD (Azure Shared Disk), ISCSI | Optional |
+> | `fencing_role_name` | Specifies the Azure role assignment to assign to enable fencing. | Optional |
+> | `scs_cluster_disk_lun` | Specifies the LUN of the shared disk for the Central Services cluster. | Optional |
+> | `scs_cluster_disk_size` | The size of the shared disk for the Central Services cluster. | Optional |
+> | `scs_cluster_type` | Cluster quorum type; AFA (Azure Fencing Agent), ASD (Azure Shared Disk), ISCSI | Optional |
+> | `use_msi_for_clusters` | If defined, configures the Pacemaker cluster by using managed identities. | Optional |
+> | `use_simple_mount` | Specifies if simple mounts are used (applicable for SLES 15 SP# or newer). | Optional |
+> | `idle_timeout_scs_ers` | Sets the idle timeout for the SCS and ERS load balancer. | Optional |
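+
+As an illustration, a sketch of a cluster configuration in a tfvars file; the values are examples only, and the disk sizes are in GB.
+
+```terraform
+scs_cluster_type      = "ASD" # Azure Shared Disk quorum
+scs_cluster_disk_size = 128
+
+database_cluster_type = "AFA" # Azure Fencing Agent quorum
+
+use_msi_for_clusters = true
+idle_timeout_scs_ers = 30
+```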
+ > [!NOTE] > The highly available central services deployment requires using a shared file system for `sap_mnt`. You can use Azure Files or Azure NetApp Files by using the `NFS_provider` attribute. The default is Azure Files. To use Azure NetApp Files, set the `NFS_provider` attribute to `ANF`.
sap Configure Workload Zone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/configure-workload-zone.md
This table contains the parameters that define the environment settings.
> | `environment` | Identifier for the workload zone (max five characters) | Mandatory | For example, `PROD` for a production environment and `QA` for a Quality Assurance environment. | > | `location` | The Azure region in which to deploy | Required | | > | `name_override_file` | Name override file | Optional | See [Custom naming](naming-module.md). |
+> | `tags` | A dictionary of tags to associate with all resources. | Optional | |
## Resource group parameters
This table defines the parameters used for defining the key vault information.
> [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | Notes | > | | | | -- |
-> | `user_keyvault_id` | Azure resource identifier for existing system credentials key vault | Optional | |
-> | `spn_keyvault_id` | Azure resource identifier for existing deployment credentials (SPNs) key vault | Optional | |
-> | `enable_purge_control_for_keyvaults` | Disables the purge protection for Azure key vaults | Optional | Use only for test environments. |
> | `additional_users_to_add_to_keyvault_policies` | A list of user object IDs to add to the deployment key vault access policies | Optional | |
+> | `enable_purge_control_for_keyvaults` | Disables the purge protection for Azure key vaults | Optional | Use only for test environments. |
+> | `spn_keyvault_id` | Azure resource identifier for existing deployment credentials (SPNs) key vault | Optional | |
+> | `user_keyvault_id` | Azure resource identifier for existing system credentials key vault | Optional | |
## Private DNS
This table defines the parameters used for defining the key vault information.
> [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | Notes | > | - | -- | -- | |
+> | `create_transport_storage` | If defined, create storage for the transport directories. | Optional | |
+> | `export_install_path` | If provided, export mount path for the installation media. | Optional | |
+> | `export_transport_path` | If provided, export mount path for the transport share. | Optional | |
+> | `install_private_endpoint_id` | Azure resource ID for the `install` private endpoint. | Optional | For existing endpoints|
+> | `install_volume_size` | Defines the size (in GB) for the `install` volume. | Optional | |
> | `NFS_provider` | Defines what NFS back end to use. The options are `AFS` for Azure Files NFS or `ANF` for Azure NetApp Files, `NONE` for NFS from the SCS server, or `NFS` for an external NFS solution. | Optional | |
-> | `install_volume_size` | Defines the size (in GB) for the `install` volume. | Optional | |
-> | `install_private_endpoint_id` | Azure resource ID for the `install` private endpoint. | Optional | For existing endpoints|
-> | `transport_volume_size` | Defines the size (in GB) for the `transport` volume. | Optional | |
-> | `transport_private_endpoint_id` | Azure resource ID for the `transport` private endpoint. | Optional | For existing endpoints|
+> | `transport_volume_size` | Defines the size (in GB) for the `transport` volume. | Optional | |
+> | `use_AFS_for_installation_media` | If provided, uses AFS for the installation media. | Optional | |
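+
+As an illustration, a sketch of these parameters in a workload zone tfvars file; the values are examples only.
+
+```terraform
+NFS_provider             = "AFS"
+create_transport_storage = true
+install_volume_size      = 1024
+transport_volume_size    = 128
+```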
### Azure Files NFS support
ANF_service_level = "Ultra"
> [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | > | -- | | -- |
-> | `use_custom_dns_a_registration` | Use an existing private DNS zone. | Optional |
-> | `management_dns_subscription_id` | Subscription ID for the subscription that contains the private DNS zone. | Optional |
-> | `management_dns_resourcegroup_name` | Resource group that contains the private DNS zone. | Optional |
> | `dns_label` | DNS name of the private DNS zone. | Optional |
+> | `management_dns_resourcegroup_name` | Resource group that contains the private DNS zone. | Optional |
+> | `management_dns_subscription_id` | Subscription ID for the subscription that contains the private DNS zone. | Optional |
+> | `use_custom_dns_a_registration` | Use an existing private DNS zone. | Optional |
## Other parameters > [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | Notes | > | | - | -- | - |
-> | `place_delete_lock_on_resources` | Places delete locks on the key vaults and the virtual network | Optional | |
-> | `enable_purge_control_for_keyvaults` | If purge control is enabled on the key vault. | Optional | Use only for test deployments. |
> | `diagnostics_storage_account_arm_id` | The Azure resource identifier for the diagnostics storage account. | Required | For brown-field deployments. |
+> | `enable_purge_control_for_keyvaults` | If purge control is enabled on the key vault. | Optional | Use only for test deployments. |
+> | `place_delete_lock_on_resources` | Places delete locks on the key vaults and the virtual network | Optional | |
> | `witness_storage_account_arm_id` | The Azure resource identifier for the witness storage account. | Required | For brown-field deployments. | ## iSCSI parameters
ANF_service_level = "Ultra"
> [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | Notes | > | -- | - | | -- |
-> | `iscsi_subnet_name` | The name of the `iscsi` subnet | Optional | |
-> | `iscsi_subnet_address_prefix` | The address range for the `iscsi` subnet | Mandatory | For green-field deployments |
-> | `iscsi_subnet_arm_id` | The Azure resource identifier for the `iscsi` subnet | Mandatory | For brown-field deployments |
-> | `iscsi_subnet_nsg_name` | The name of the `iscsi` network security group | Optional | |
-> | `iscsi_subnet_nsg_arm_id` | The Azure resource identifier for the `iscsi` network security group | Mandatory | For brown-field deployments |
-> | `iscsi_count` | The number of iSCSI virtual machines | Optional | |
-> | `iscsi_use_DHCP` | Controls whether to use dynamic IP addresses provided by the Azure subnet | Optional | |
-> | `iscsi_image` | Defines the virtual machine image to use (next table) | Optional | |
> | `iscsi_authentication_type` | Defines the default authentication for the iSCSI virtual machines | Optional | |
-> | `iscsi__authentication_username` | Administrator account name | Optional | |
+> | `iscsi_authentication_username` | Administrator account name | Optional | |
+> | `iscsi_count` | The number of iSCSI virtual machines | Optional | |
+> | `iscsi_image` | Defines the virtual machine image to use (next table) | Optional | |
> | `iscsi_nic_ips` | IP addresses for the iSCSI virtual machines | Optional | Ignored if `iscsi_use_DHCP` is defined |
+> | `iscsi_subnet_address_prefix` | The address range for the `iscsi` subnet | Mandatory | For green-field deployments |
+> | `iscsi_subnet_arm_id` | The Azure resource identifier for the `iscsi` subnet | Mandatory | For brown-field deployments |
+> | `iscsi_subnet_name` | The name of the `iscsi` subnet | Optional | |
+> | `iscsi_subnet_nsg_arm_id` | The Azure resource identifier for the `iscsi` network security group | Mandatory | For brown-field deployments |
+> | `iscsi_subnet_nsg_name` | The name of the `iscsi` network security group | Optional | |
+> | `iscsi_use_DHCP` | Controls whether to use dynamic IP addresses provided by the Azure subnet | Optional | |
+> | `iscsi_vm_zones` | Availability zones for the iSCSI Virtual Machines | Optional | |
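+
+As an illustration, a sketch of the iSCSI parameters in a tfvars file; the values are examples only.
+
+```terraform
+iscsi_count               = 3
+iscsi_use_DHCP            = true
+iscsi_vm_zones            = ["1", "2", "3"]
+iscsi_authentication_type = "key"
+```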
## Utility VM parameters
ANF_service_level = "Ultra"
> | Variable | Description | Type | Notes | > | -- | - | | - | > | `utility_vm_count` | Defines the number of utility virtual machines to deploy | Optional | Use the utility virtual machine to host SAPGui |
-> | `utility_vm_size` | Defines the SKU for the utility virtual machines | Optional | Default: Standard_D4ds_v4 |
-> | `utility_vm_useDHCP` | Defines if Azure subnet provided IPs should be used | Optional | |
> | `utility_vm_image` | Defines the virtual machine image to use | Optional | Default: Windows Server 2019 | > | `utility_vm_nic_ips` | Defines the IP addresses for the virtual machines | Optional | |
+> | `utility_vm_size` | Defines the SKU for the utility virtual machines | Optional | Default: Standard_D4ds_v4 |
+> | `utility_vm_useDHCP` | Defines if Azure subnet provided IPs should be used | Optional | |
## Terraform parameters
sap Extensibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/extensibility.md
+
+ Title: Extensibility for the SAP Deployment Automation Framework
+description: Describes how to extend the SAP Deployment Automation Framework.
+ Last updated : 10/29/2023
+# Extending the SAP Deployment Automation Framework
++
+Within the SAP Deployment Automation Framework (SDAF), we recognize the importance of adaptability and customization to meet the unique needs of various deployments. This document describes the ways to extend the framework's capabilities, ensuring that it aligns with your specific requirements.
+
+Forking the source code repository: One method of extending SDAF is to fork the source code repository. This approach gives you the flexibility to make tailored modifications in your own fork of the code. By doing so, you gain control over the framework's core functionality and can adapt it precisely to your deployment objectives.
+
+Adding stages to the SAP configuration pipeline: Another way to customize SDAF is to add stages to the SAP configuration pipeline. This approach allows you to integrate processes or steps that are integral to your deployment workflows into the automation pipeline.
+
+Streamlined extensibility: This capability allows you to incorporate your existing Ansible playbooks directly into SDAF. By using this feature, you can integrate your Ansible automation scripts with the framework, further enhancing its versatility.
+
+Configuration extensibility: This feature allows you to extend the framework's configuration capabilities by adding custom repositories, packages, kernel parameters, logical volumes, mounts, and exports without writing any code.
+
+Throughout this documentation, we provide comprehensive guidance on each of these extensibility options, ensuring that you have the knowledge and tools needed to tailor the SAP Deployment Automation Framework to your specific deployment needs.
+
+> [!NOTE]
+> If you fork the source code repository, you must maintain your fork of the code. You must also merge the changes from the source code repository into your fork of the code whenever there is a new release of the SDAF codebase.
+
+## Executing your own Ansible playbooks as part of the Azure DevOps orchestration
+
+You can implement your own Ansible playbooks, which are automatically called as part of the Azure DevOps 'OS Configuration and SAP Installation' pipeline.
+
+The Ansible playbooks must be located in a folder called 'Ansible' in the root of your configuration repository. They're called with the same parameter files as the SDAF playbooks, so you have access to all the configuration.
++
+The Ansible playbooks must be named according to the following naming convention:
+
+Use 'Playbook name_pre' for playbooks that run before the corresponding SDAF playbook and 'Playbook name_post' for playbooks that run after it.
+
+| Playbook name | Playbook name for 'pre' tasks | Playbook name for 'post' tasks | Description |
+| -- | | - | -- |
+| `playbook_01_os_base_config.yaml` | `playbook_01_os_base_config_pre.yaml` | `playbook_01_os_base_config_post.yaml` | Base operating system configuration |
+| `playbook_02_os_sap_specific_config.yaml` | `playbook_02_os_sap_specific_config_pre.yaml` | `playbook_02_os_sap_specific_config_post.yaml` | SAP specific configuration |
+| `playbook_03_bom_processing.yaml` | `playbook_03_bom_processing_pre.yaml` | `playbook_03_bom_processing_post.yaml` | Bill of Material processing |
+| `playbook_04_00_00_db_install.yaml` | `playbook_04_00_00_db_install_pre.yaml` | `playbook_04_00_00_db_install_post.yaml` | Database server installation |
+| `playbook_04_00_01_db_ha.yaml` | `playbook_04_00_01_db_ha_pre.yaml` | `playbook_04_00_01_db_ha_post.yaml` | Database High Availability configuration |
+| `playbook_05_00_00_sap_scs_install.yaml` | `playbook_05_00_00_sap_scs_install_pre.yaml` | `playbook_05_00_00_sap_scs_install_post.yaml` | Central Services Installation and High Availability configuration |
+| `playbook_05_01_sap_dbload.yaml` | `playbook_05_01_sap_dbload_pre.yaml` | `playbook_05_01_sap_dbload_post.yaml` | Database load |
+| `playbook_05_02_sap_pas_install.yaml` | `playbook_05_02_sap_pas_install_pre.yaml` | `playbook_05_02_sap_pas_install_post.yaml` | Primary Application Server installation |
+| `playbook_05_03_sap_app_install.yaml` | `playbook_05_03_sap_app_install_pre.yaml` | `playbook_05_03_sap_app_install_post.yaml` | Additional Application Server installation |
+| `playbook_05_04_sap_web_install.yaml` | `playbook_05_04_sap_web_install_pre.yaml` | `playbook_05_04_sap_web_install_post.yaml` | Web dispatcher installation |
++
+### Sample Ansible playbook
+
+```yaml
+
+# /*---------------------------------------------------------------------------8
+# |                                                                            |
+# |                     Run commands on all remote hosts                       |
+# |                                                                            |
+# +------------------------------------4--------------------------------------*/
+
+- hosts: "{{ sap_sid | upper }}_DB :
+ {{ sap_sid | upper }}_SCS :
+ {{ sap_sid | upper }}_ERS :
+ {{ sap_sid | upper }}_PAS :
+ {{ sap_sid | upper }}_APP :
+ {{ sap_sid | upper }}_WEB"
+
+ name: "Examples on how to run commands on remote hosts"
+ gather_facts: true
+ tasks:
+
+ - name: "Calculate information about the OS distribution"
+ ansible.builtin.set_fact:
+ distro_family: "{{ ansible_os_family | upper }}"
+ distribution_id: "{{ ansible_distribution | lower ~ ansible_distribution_major_version }}"
+ distribution_full_id: "{{ ansible_distribution | lower ~ ansible_distribution_version }}"
+
+ - name: "Show information"
+ ansible.builtin.debug:
+ msg:
+ - "Distro family: {{ distro_family }}"
+ - "Distribution id: {{ distribution_id }}"
+ - "Distribution full id: {{ distribution_full_id }}"
+
+    - name: "Show how to run a command on all remote hosts"
+ ansible.builtin.command: "whoami"
+ register: whoami_results
+
+ - name: "Show results"
+ ansible.builtin.debug:
+ var: whoami_results
+ verbosity: 0
+
+ - name: "Show how to run a command on just the 'SCS' and 'ERS' hosts"
+ ansible.builtin.command: "whoami"
+ register: whoami_results
+ when:
+ - "'scs' in supported_tiers or 'ers' in supported_tiers "
+...
+
+```
+
+## Updating the user and group IDs (Linux)
+
+If you want to change the user and group IDs used by the framework, you can add the following section to the sap-parameters.yaml file.
+
+```yaml
+# User and group IDs
+sapadm_uid: "3000"
+sidadm_uid: "3100"
+sapinst_gid: "300"
+sapsys_gid: "400"
+
+```
+
+You can use the `configuration_settings` variable to let Terraform add them to the sap-parameters.yaml file.
+
+```terraform
+configuration_settings = {
+ sapadm_uid = "3000",
+ sidadm_uid = "3100",
+ sapinst_gid = "300",
+ sapsys_gid = "400"
+}
+```
+
+## Adding custom repositories (Linux)
+
+If you need to register extra Linux package repositories to the Virtual Machines deployed by the framework, you can add the following section to the sap-parameters.yaml file.
+
+In this example, the repository 'epel' is registered on all the hosts in your SAP deployment that are running RedHat 8.2.
+
+```yaml
+
+custom_repos:
+ redhat8.2:
+ - { tier: 'ha', repo: 'epel', url: 'https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm', state: 'present' }
+
+```
+
+## Adding custom packages (Linux)
+
+If you need to install more Linux packages to the Virtual Machines deployed by the framework, you can add the following section to the sap-parameters.yaml file.
+
+In this example, the package 'openssl' is installed on all the hosts in your SAP deployment that are running SUSE Enterprise Linux for SAP Applications version 15.3.
+
+```yaml
+
+custom_packages:
+ sles_sap15.3:
+ - { tier: 'os', package: 'openssl', node_tier: 'all', state: 'present' }
+
+```
+
+If you want to install a package on a specific server type (`app`, `ers`, `pas`, `scs`, `hana`) you can add the following section to the sap-parameters.yaml file.
+
+```yaml
+
+custom_packages:
+ sles_sap15.3:
+ - { tier: 'ha', package: 'pacemaker', node_tier: 'hana', state: 'present' }
+
+```
+
+## Adding custom kernel parameters (Linux)
+
+You can extend the SAP Deployment Automation Framework by adding custom kernel parameters to the SDAF installation.
+
+When you add the following section to the sap-parameters.yaml file, the parameter 'fs.suid_dumpable' is set to 0 on all the hosts in your SAP deployment.
+
+```yaml
+
+custom_parameters:
+ common:
+ - { tier: 'os', node_tier: 'all', name: 'fs.suid_dumpable', value: '0', state: 'present' }
+
+```
+
+## Adding custom services (Linux)
+
+If you need to manage additional services on the Virtual Machines deployed by the framework, you can add the following section to the sap-parameters.yaml file.
+
+In this example, the 'firewalld' service is stopped and disabled on all the hosts in your SAP deployment that are running RedHat 7.x.
+
+```yaml
+
+custom_services:
+ redhat7:
+ - { tier: 'os', service: 'firewalld', node_tier: 'all', state: 'stopped' }
+ - { tier: 'os', service: 'firewalld', node_tier: 'all', state: 'disabled' }
+
+```
+
+## Adding custom logical volumes (Linux)
+
+You can extend the SAP Deployment Automation Framework by adding logical volumes based on additional disks in your SDAF installation.
+
+When you add the following section to the sap-parameters.yaml file, a logical volume 'lv_custom' is created on all virtual machines in your SAP deployment that have a disk named 'custom'. A filesystem is mounted on the logical volume and made available at '/custompath'.
++
+```yaml
+
+custom_logical_volumes:
+ - tier: 'sapos'
+ node_tier: 'all'
+ vg: 'vg_custom'
+ lv: 'lv_custom'
+ size: '100%FREE'
+ fstype: 'xfs'
+ path: '/custompath'
+```
+
+> [!NOTE]
+> To use this functionality, you need to add an additional disk named 'custom' to one or more of your virtual machines. For more information, see [Custom disk sizing](configure-extra-disks.md).
+
+You can use the `configuration_settings` variable to let Terraform add them to the sap-parameters.yaml file.
+
+```terraform
+configuration_settings = {
+ custom_logical_volumes = [
+ {
+      tier      = "sapos"
+      node_tier = "all"
+      vg        = "vg_custom"
+      lv        = "lv_custom"
+      size      = "100%FREE"
+      fstype    = "xfs"
+      path      = "/custompath"
+ }
+ ]
+}
+```
+
+## Adding custom mount (Linux)
+
+You can extend the SAP Deployment Automation Framework by mounting additional mount points in your installation.
+
+When you add the following section to the sap-parameters.yaml file, a filesystem '/usr/custom' is mounted from an NFS share on "xxxxxxxxx.file.core.windows.net:/xxxxxxxxx/custom".
+
+```yaml
+
+custom_mounts:
+ - path: "/usr/custom"
+ opts: "vers=4,minorversion=1,sec=sys"
+ mount: "xxxxxxxxx.file.core.windows.net:/xxxxxxxx/custom"
+ target_nodes: "scs,pas,app"
+```
+
+The `target_nodes` attribute defines which nodes have the mount defined. Use 'all' if you want all nodes to have the mount defined.
+
+You can use the `configuration_settings` variable to let Terraform add them to the sap-parameters.yaml file.
+
+```terraform
+configuration_settings = {
+ custom_mounts = [
+ {
+ path = "/usr/custom",
+ opts = "vers=4,minorversion=1,sec=sys",
+ mount = "xxxxxxxxx.file.core.windows.net:/xxxxxxxx/custom",
+ target_nodes = "scs,pas,app"
+ }
+ ]
+}
+```
+
+## Adding custom export (Linux)
+
+You can extend the SAP Deployment Automation Framework by adding additional folders to be exported from the Central Services virtual machine.
+
+When you add the following section to the sap-parameters.yaml file, a filesystem '/usr/custom' will be exported from the Central Services virtual machine and available via NFS.
+
+```yaml
+
+custom_exports:
+ - path: "/usr/custom"
+
+```
+
+You can use the `configuration_settings` variable to let Terraform add them to the sap-parameters.yaml file.
+
+```terraform
+configuration_settings = {
+  custom_exports = [
+ {
+ path = "/usr/custom",
+ }
+ ]
+}
+```
+
+> [!NOTE]
+> This applies only to deployments where `NFS_provider` is set to 'NONE', because that configuration makes the Central Services server an NFS server.
+
+## Next step
+
+> [!div class="nextstepaction"]
+> [Configure custom naming](naming-module.md)
sap Soft Stop Sap And Hana Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/soft-stop-sap-and-hana-database.md
+
+ Title: Soft stop individual SAP instances and HANA database
+description: Learn how to soft stop SAP system and HANA database through the Virtual Instance for SAP solutions (VIS) resource in Azure Center for SAP solutions.
+ Last updated : 10/25/2023
+#Customer intent: As a developer, I want to stop SAP systems by draining existing connections gracefully when using Azure Center for SAP solutions.
+
+# Soft stop SAP systems, application server instances and HANA database
+
+In this how-to guide, you'll learn to soft stop your SAP systems, individual instances, and HANA database through the Virtual Instance for SAP solutions (VIS) resource in Azure Center for SAP solutions. A soft stop shuts the system down smoothly by making sure that existing user connections, batch processes, and so on are drained first.
+
+Using the [Azure PowerShell](/powershell/module/az.workloads) and [REST API](/rest/api/workloads) interfaces, you can:
+
+- Soft stop the entire SAP system, that is, the application server instances and the central services instance.
+- Soft stop specific SAP application server instances.
+- Soft stop HANA database.
++
+## Prerequisites
+
+- An SAP system that you've [created in Azure Center for SAP solutions](prepare-network.md) or [registered with Azure Center for SAP solutions](register-existing-system.md).
+- Check that your Azure account has **Azure Center for SAP solutions administrator** or equivalent role access on the Virtual Instance for SAP solutions resources. For more information, see [how to use granular permissions that govern start and stop actions on the VIS, individual SAP instances and HANA databases](manage-with-azure-rbac.md#start-sap-system).
+- For HA deployments, the HA interface cluster connector for SAP (`sap_vendor_cluster_connector`) must be installed on the ASCS instance. For more information, see the [SUSE connector specifications](https://www.suse.com/c/sap-netweaver-suse-cluster-integration-new-sap_suse_cluster_connector-version-3-0-0/) and [RHEL connector specifications](https://access.redhat.com/solutions/3606101).
+- For the HANA database, the stop operation is initiated only when the cluster maintenance mode is in **Disabled** status.
++
+## Soft stop SAP system
+
+Currently, you can initiate a soft stop operation from the Azure PowerShell and REST API interfaces. To initiate a soft stop, use the stop operation together with a soft stop timeout value in seconds. After you initiate a soft stop on the VIS and the operation is successfully triggered on the SAP system, monitor the health and status of the VIS to check whether the system has stopped.
+
+> [!NOTE]
+> When attempting to soft stop an SAP system or application server instance using Azure Center for SAP solutions, the soft stop timeout value must be greater than 0 and less than 82800 seconds.
++
+### Soft stop system in PowerShell
+Use the [Stop-AzWorkloadsSapVirtualInstance](/powershell/module/az.workloads/Stop-AzWorkloadsSapVirtualInstance) command:
+
+```powershell
+Stop-AzWorkloadsSapVirtualInstance -InputObject /subscriptions/sub1/resourceGroups/rg1/providers/Microsoft.Workloads/sapVirtualInstances/DB0 -SoftStopTimeoutSecond 300
+```
+
+### Soft stop system using REST API
+Use this [sample payload](/rest/api/workloads/2023-04-01/sap-virtual-instances/stop?tabs=HTTP#sapvirtualinstances_stop) to soft stop an SAP system. You can specify the soft stop timeout value in seconds.
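+
+As an illustration, the soft stop request body is a small JSON document like the following sketch; `softStopTimeoutSeconds` is the property used in the linked sample, and the value shown is an example only.
+
+```json
+{
+  "softStopTimeoutSeconds": 300
+}
+```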
+
+## Soft stop SAP Application server instance
+You can soft stop a specific application server in Azure Center for SAP solutions by using the Azure PowerShell and REST API interfaces. After you initiate a soft stop on the application server and the operation is successfully triggered, monitor the health and status of the application server instance to check whether it has stopped.
+
+To soft stop an application server represented as an *App server instance for SAP solutions* resource:
++
+### Using PowerShell
+Use the [Stop-AzWorkloadsSapApplicationInstance](/powershell/module/az.workloads/stop-azworkloadssapapplicationinstance) command:
+
+```powershell
+Stop-AzWorkloadsSapApplicationInstance -InputObject /subscriptions/Sub1/resourceGroups/RG1/providers/Microsoft.Workloads/sapVirtualInstances/DB0/applicationInstances/app0 -SoftStopTimeoutSecond 300
+```
+
+### Using REST API
+Use this [sample payload](/rest/api/workloads/2023-04-01/sap-application-server-instances/stop-instance?tabs=HTTP#stop-the-sap-application-server-instance) to soft stop an application server instance. You can specify the soft stop timeout value in seconds.
+
+## Soft stop HANA database
+You can soft stop the HANA database so that it stops gracefully after all running statements finish. You can use the Azure PowerShell and REST API interfaces to soft stop the database. After you initiate a soft stop on the HANA database and the operation is successfully triggered on the database instance, monitor the status of the database instance on the VIS to check whether it has stopped.
+
+> [!NOTE]
+> When attempting to soft stop a HANA database instance using Azure Center for SAP solutions, the soft stop timeout value must be greater than 0 and less than 1800 seconds.
++
+### Using PowerShell
+Use the [Stop-AzWorkloadsSapDatabaseInstance](/powershell/module/az.workloads/stop-azworkloadssapdatabaseinstance) command:
+
+```powershell
+Stop-AzWorkloadsSapDatabaseInstance -InputObject /subscriptions/Sub1/resourceGroups/RG1/providers/Microsoft.Workloads/sapVirtualInstances/DB0/databaseInstances/ab0 -SoftStopTimeoutSecond 300
+```
+
+### Using REST API
+Use this [sample payload](/rest/api/workloads/2023-04-01/sap-database-instances/stop-instance?tabs=HTTP#stop-the-database-instance-of-the-sap-system.) to soft stop HANA database. You can specify the soft stop timeout value in seconds.
sap Start Stop Sap Systems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/start-stop-sap-systems.md
Title: Start and stop SAP systems
-description: Learn how to start or stop an SAP system through the Virtual Instance for SAP solutions (VIS) resource in Azure Center for SAP solutions through the Azure portal.
+ Title: Start and stop SAP systems, instances, and HANA database
+description: Learn how to start or stop an SAP system, specific instances, and the HANA database through the Virtual Instance for SAP solutions (VIS) resource in Azure Center for SAP solutions by using the Azure portal, PowerShell, or the CLI.
#Customer intent: As a developer, I want to start and stop SAP systems in Azure Center for SAP solutions so that I can control instances through the Virtual Instance for SAP resource.
-# Start and stop SAP systems
+# Start and stop SAP systems, instances, and HANA database
In this how-to guide, you'll learn to start and stop your SAP systems through the *Virtual Instance for SAP solutions (VIS)* resource in *Azure Center for SAP solutions*.
-Through the Azure portal, you can start and stop:
+Through the Azure portal, [Azure PowerShell](/powershell/module/az.workloads), [CLI](/cli/azure/workloads/sap-virtual-instance), and [REST API](/rest/api/workloads) interfaces, you can start and stop:
- Entire SAP Application tier in one go, which includes ABAP SAP Central Services (ASCS) and Application Server instances.-- Individual SAP instances, which include Central Services and Application server instances.
+- Specific SAP instance, such as the application server instance.
- HANA Database

You can start and stop instances and the HANA database in the following types of deployments:
- Single-Server
The following scenarios are supported when Starting and Stopping SAP systems:
- Stopping the HANA Database from the VIS resource results in the entire HANA instance being stopped. In the case of HANA MDC with multiple tenant DBs, the entire instance is stopped, not the specific tenant DB.
- For highly available (HA) HANA databases, start and stop operations through the Virtual Instance for SAP solutions resource are supported only when a cluster management solution is in place. Any other HANA database high-availability configuration without a cluster is not currently supported when starting and stopping using the Virtual Instance for SAP solutions resource.
+> [!NOTE]
+> When multiple application server instances run on a single virtual machine and you intend to stop all these instances, you can currently stop them only one instance at a time. If you attempt to stop them in parallel, only one stop request is accepted and the others fail.
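+
+For example, a minimal PowerShell sketch of starting and stopping a VIS with the Az.Workloads cmdlets (resource names are placeholders):
+
+```powershell
+# Start the whole SAP system represented by the VIS resource.
+Start-AzWorkloadsSapVirtualInstance -Name X00 -ResourceGroupName test-rg
+
+# Stop it again when it's no longer needed.
+Stop-AzWorkloadsSapVirtualInstance -Name X00 -ResourceGroupName test-rg
+```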
+ ## Stop SAP system
To stop an SAP system in the VIS resource:
sap Stop Start Sap And Underlying Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/stop-start-sap-and-underlying-vm.md
+
+ Title: Start and stop SAP and underlying VMs
+description: Learn how to stop and start SAP and underlying VMs through the Virtual Instance for SAP solutions (VIS) resource in Azure Center for SAP solutions.
+++ Last updated : 10/25/2023++
+#Customer intent: As a developer, I want to start and stop SAP systems including VMs when they are not needed to be run.
++
+# Start and stop SAP systems, instances, HANA database, and their underlying virtual machines
+In this how-to guide, you'll learn how to start and stop SAP systems and their underlying virtual machines through the Virtual Instance for SAP solutions (VIS) resource in Azure Center for SAP solutions. This capability simplifies stopping and starting SAP systems by shutting down or bringing up the underlying infrastructure and the SAP application in one command.
+
+Using the [REST API](/rest/api/workloads) interfaces, you can:
+
+- Start and stop the entire SAP application tier and its virtual machines, which includes ABAP SAP Central Services (ASCS) and Application Server instances.
+- Start and stop a specific SAP instance, such as the application server instance, and its virtual machines.
+- Start and stop the HANA database instance and its virtual machines.
+
+> [!IMPORTANT]
+> The ability to start and stop virtual machines of an SAP system is available from API Version 2023-10-01.
+
+> [!NOTE]
+> You can schedule the stop and start of SAP systems and HANA databases at scale for your SAP landscapes by using the [ARM template](https://aka.ms/SnoozeSAPSystems). This ARM template can be customized to suit your requirements.
+
+## Prerequisites
+- An SAP system that you've [created in Azure Center for SAP solutions](prepare-network.md) or [registered with Azure Center for SAP solutions](register-existing-system.md).
+- Check that your Azure account has **Azure Center for SAP solutions administrator** or equivalent role access on the Virtual Instance for SAP solutions resources. You can learn more about the granular permissions that govern Start and Stop actions on the VIS, individual SAP instances and HANA Database [in this article](manage-with-azure-rbac.md#start-sap-system).
+- Check that the **User Assigned Managed Identity** associated with the VIS resource has **Virtual Machine Contributor** or equivalent role access. This access is needed to start and stop VMs.
+
+## Unsupported scenarios
+The following scenarios are not currently supported when starting and stopping SAP systems, individual SAP instances, HANA databases, and their underlying VMs:
+
+- Starting and stopping systems when multiple SIDs run on the same set of virtual machines.
+- Starting and stopping HANA databases with MCOS (Multiple Components in One System) architecture, where multiple HANA instances run on the same set of virtual machines.
+- Starting and stopping SAP application server or central services instances where instances of multiple SIDs or multiple instances of the same SID run on the same virtual machine.
+
+> [!IMPORTANT]
+> For single-server deployments, when you want to stop SAP, the HANA DB, and the VM, use the stop VIS action to stop the SAP application tier, and then stop the HANA database with 'deallocateVm' set to true. This sequence ensures that the SAP application and HANA database are both stopped before the VM is stopped.
+
+> [!NOTE]
+> When stopping a VIS or an instance with the 'DeallocateVm' option set to true, only that VIS or instance is stopped and then the virtual machine is shut down. SAP instances of other SIDs are not stopped. Use the virtual machine stop option only after all instances running on the VM are stopped.
++
+## Start and stop SAP system and underlying virtual machines
+You can start and stop the entire SAP application tier and underlying VMs using [REST API version 2023-10-01](/rest/api/workloads).
+
+### Start SAP system and its VMs
+To start the virtual machines and the SAP application running on them, use the following REST API call with the `startVm` parameter set to `true`. This command starts the VMs associated with the Central services instance and Application server instances.
+
+```http
+POST https://management.azure.com/subscriptions/Sub1/resourceGroups/test-rg/providers/Microsoft.Workloads/sapVirtualInstances/X00/start?api-version=2023-10-01-preview
+
+{
+ "startVm": true
+}
+```
+
+### Stop SAP system and its VMs
+To stop the SAP application and its VMs, use the following REST API call with the `deallocateVm` parameter set to `true`.
+
+```http
+POST https://management.azure.com/subscriptions/Sub1/resourceGroups/test-rg/providers/Microsoft.Workloads/sapVirtualInstances/X00/stop?api-version=2023-10-01-preview
+
+{
+ "deallocateVm": true
+}
+```
+
+## Start and stop HANA database and its VMs
+You can start and stop the HANA database and its underlying VMs using [REST API version 2023-10-01](/rest/api/workloads).
+
+### Start HANA database and its VMs
+To start the virtual machines and the HANA database running on them, use the following REST API call with the `startVm` parameter set to `true`.
+
+```http
+POST https://management.azure.com/subscriptions/Sub1/resourceGroups/test-rg/providers/Microsoft.Workloads/sapVirtualInstances/X00/databaseInstances/db0/start?api-version=2023-10-01-preview
+
+{
+  "startVm": true
+}
+```
+
+### Stop HANA database and its VMs
+To stop HANA database and its underlying VMs, use the following REST API with `deallocateVm` parameter set to `true`.
+
+```http
+POST https://management.azure.com/subscriptions/Sub1/resourceGroups/test-rg/providers/Microsoft.Workloads/sapVirtualInstances/X00/databaseInstances/db0/stop?api-version=2023-10-01-preview
+
+{
+  "deallocateVm": true
+}
+```
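+
+The instance-level support called out earlier also covers application server instances. A hedged sketch, assuming the instance-level stop accepts the same body (resource names are placeholders):
+
+```http
+POST https://management.azure.com/subscriptions/Sub1/resourceGroups/test-rg/providers/Microsoft.Workloads/sapVirtualInstances/X00/applicationInstances/app0/stop?api-version=2023-10-01-preview
+
+{
+  "deallocateVm": true
+}
+```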
search Monitor Azure Cognitive Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/monitor-azure-cognitive-search.md
For REST calls, use an [admin API key](search-security-api-keys.md) and [Postman
In Azure Cognitive Search, [**activity logs**](../azure-monitor/data-sources.md#azure-activity-log) reflect control plane activity, such as service creation and configuration, or API key usage or management.
-Activity logs are collected [free of charge](../azure-monitor/usage-estimated-costs.md#pricing-model), with no configuration required. Data retention is 90 days, but you can configure durable storage for longer retention.
+Activity logs are collected [free of charge](../azure-monitor/cost-usage.md#pricing-model), with no configuration required. Data retention is 90 days, but you can configure durable storage for longer retention.
1. In the Azure portal, find your search service. From the menu on the left, select **Activity logs** to view the logs for your search service. See [Azure Monitor activity log](../azure-monitor/essentials/activity-log.md) for general guidance on working with activity logs.
The following screenshot shows the activity log signals that can be configured i
In Azure Cognitive Search, [**platform metrics**](../azure-monitor/essentials/data-platform-metrics.md) measure query performance, indexing volume, and skillset invocation.
-Metrics are collected [free of charge](../azure-monitor/usage-estimated-costs.md#pricing-model), with no configuration required. Platform metrics are stored for 93 days. However, in the portal you can only query a maximum of 30 days' worth of platform metrics data on any single chart.
+Metrics are collected [free of charge](../azure-monitor/cost-usage.md#pricing-model), with no configuration required. Platform metrics are stored for 93 days. However, in the portal you can only query a maximum of 30 days' worth of platform metrics data on any single chart.
In the Azure portal, find your search service. From the menu on the left, under Monitoring, select **Metrics** to open metrics explorer.
The following links provide more information about working with platform metrics
## Set up alerts
-Alerts help you to identify and address issues before they become a problem for application users. You can set alerts on [metrics](../azure-monitor/alerts/alerts-metric-overview.md), [resource logs](../azure-monitor/alerts/alerts-unified-log.md), and [activity logs](../azure-monitor/alerts/activity-log-alerts.md). Alerts are billable (see the [Pricing model](../azure-monitor/usage-estimated-costs.md#pricing-model) for details).
+Alerts help you to identify and address issues before they become a problem for application users. You can set alerts on [metrics](../azure-monitor/alerts/alerts-metric-overview.md), [resource logs](../azure-monitor/alerts/alerts-unified-log.md), and [activity logs](../azure-monitor/alerts/activity-log-alerts.md). Alerts are billable (see the [Pricing model](../azure-monitor/cost-usage.md#pricing-model) for details).
1. In the Azure portal, find your search service. From the menu on the left, under Monitoring, select **Alerts** to open metrics explorer.
In Azure Cognitive Search, [**resource logs**](../azure-monitor/essentials/resou
Resource Logs aren't collected and stored until you create a diagnostic setting. A diagnostic setting specifies data collection and storage. You can create multiple settings if you want to keep metrics and log data separate, or if you want more than one of each type of destination.
-Resource logging is billable (see the [Pricing model](../azure-monitor/usage-estimated-costs.md#pricing-model) for details), starting when you create a diagnostic setting. See [Diagnostic settings in Azure Monitor](../azure-monitor/essentials/diagnostic-settings.md) for general guidance.
+Resource logging is billable (see the [Pricing model](../azure-monitor/cost-usage.md#pricing-model) for details), starting when you create a diagnostic setting. See [Diagnostic settings in Azure Monitor](../azure-monitor/essentials/diagnostic-settings.md) for general guidance.
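+
+If you prefer to script this step, here's a hedged Azure PowerShell sketch (assumes the Az.Monitor module; the resource IDs and the `OperationLogs` category are placeholders to adapt):
+
+```powershell
+# Route the search service's operation logs to a Log Analytics workspace.
+$log = New-AzDiagnosticSettingLogSettingsObject -Category OperationLogs -Enabled $true
+
+New-AzDiagnosticSetting -Name "send-to-workspace" `
+    -ResourceId "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Search/searchServices/<service>" `
+    -WorkspaceId "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>" `
+    -Log $log
+```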
1. In the Azure portal, find your search service. From the menu on the left, under Monitoring, select **Diagnostic settings**.
security Threat Modeling Tool Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-authentication.md
| **Applicable Technologies** | Generic | | **Attributes** | N/A | | **References** | N/A |
-| Details | <p>Authentication is the process where an entity proves its identity, typically through credentials, such as a user name and password. There are multiple authentication protocols available which may be considered. Some of them are listed below:</p><ul><li>Client certificates</li><li>Windows based</li><li>Forms based</li><li>Federation - ADFS</li><li>Federation - Microsoft Entra ID</li><li>Federation - Identity Server</li></ul><p>Consider using a standard authentication mechanism to identify the source process</p>|
+| Details | <p>Authentication is the process where an entity proves its identity, typically through credentials, such as a user name and password. There are multiple authentication protocols available which might be considered. Some of them are listed below:</p><ul><li>Client certificates</li><li>Windows based</li><li>Forms based</li><li>Federation - ADFS</li><li>Federation - Microsoft Entra ID</li><li>Federation - Identity Server</li></ul><p>Consider using a standard authentication mechanism to identify the source process</p>|
## <a id="handle-failed-authn"></a>Applications must handle failed authentication scenarios securely
| **Applicable Technologies** | Generic | | **Attributes** | N/A | | **References** | N/A |
-| Details | <p>Password and account policy in compliance with organizational policy and best practices should be implemented.</p><p>To defend against brute-force and dictionary based guessing: Strong password policy must be implemented to ensure that users create complex password (e.g., 12 characters minimum length, alphanumeric and special characters).</p><p>Account lockout policies may be implemented in the following manner:</p><ul><li>**Soft lock-out:** This can be a good option for protecting your users against brute force attacks. For example, whenever the user enters a wrong password three times the application could lock down the account for a minute in order to slow down the process of brute forcing their password making it less profitable for the attacker to proceed. If you were to implement hard lock-out countermeasures for this example you would achieve a "DoS" by permanently locking out accounts. Alternatively, application may generate an OTP (One Time Password) and send it out-of-band (through email, sms etc.) to the user. Another approach may be to implement CAPTCHA after a threshold number of failed attempts is reached.</li><li>**Hard lock-out:** This type of lockout should be applied whenever you detect a user attacking your application and counter them by means of permanently locking out their account until a response team had time to do their forensics. After this process you can decide to give the user back their account or take further legal actions against them. This type of approach prevents the attacker from further penetrating your application and infrastructure.</li></ul><p>To defend against attacks on default and predictable accounts, verify that all keys and passwords are replaceable, and are generated or replaced after installation time.</p><p>If the application has to auto-generate passwords, ensure that the generated passwords are random and have high entropy.</p>|
+| Details | <p>Password and account policy in compliance with organizational policy and best practices should be implemented.</p><p>To defend against brute-force and dictionary based guessing: A strong password policy must be implemented to ensure that users create complex passwords (e.g., 12 characters minimum length, alphanumeric and special characters).</p><p>Account lockout policies might be implemented in the following manner:</p><ul><li>**Soft lock-out:** This can be a good option for protecting your users against brute force attacks. For example, whenever the user enters a wrong password three times the application could lock down the account for a minute in order to slow down the process of brute forcing their password, making it less profitable for the attacker to proceed. If you were to implement hard lock-out countermeasures for this example you would achieve a "DoS" by permanently locking out accounts. Alternatively, the application might generate an OTP (One Time Password) and send it out-of-band (through email, sms etc.) to the user. Another approach might be to implement CAPTCHA after a threshold number of failed attempts is reached.</li><li>**Hard lock-out:** This type of lockout should be applied whenever you detect a user attacking your application and counter them by means of permanently locking out their account until a response team has had time to do their forensics. After this process you can decide to give the user back their account or take further legal actions against them. This type of approach prevents the attacker from further penetrating your application and infrastructure.</li></ul><p>To defend against attacks on default and predictable accounts, verify that all keys and passwords are replaceable, and are generated or replaced after installation time.</p><p>If the application has to auto-generate passwords, ensure that the generated passwords are random and have high entropy.</p>|
## <a id="controls-username-enum"></a>Implement controls to prevent username enumeration
| -- | | | **Component** | Database | | **SDL Phase** | Build |
-| **Applicable Technologies** | OnPrem |
+| **Applicable Technologies** | On-premises |
| **Attributes** | SQL Version - All | | **References** | [SQL Server - Choose an Authentication Mode](/sql/relational-databases/security/choose-an-authentication-mode) | | **Steps** | Windows Authentication uses Kerberos security protocol, provides password policy enforcement with regard to complexity validation for strong passwords, provides support for account lockout, and supports password expiration.|
| -- | | | **Component** | Database | | **SDL Phase** | Build |
-| **Applicable Technologies** | OnPrem, SQL Azure |
+| **Applicable Technologies** | On-premises, SQL Azure |
| **Attributes** | SQL Version - MSSQL2012, SQL Version - V12 | | **References** | [Security Best Practices with Contained Databases](/sql/relational-databases/databases/security-best-practices-with-contained-databases) |
-| **Steps** | The absence of an enforced password policy may increase the likelihood of a weak credential being established in a contained database. Leverage Windows Authentication. |
+| **Steps** | The absence of an enforced password policy might increase the likelihood of a weak credential being established in a contained database. Leverage Windows Authentication. |
## <a id="authn-sas-tokens"></a>Use per device authentication credentials using SaS tokens | Title | Details | | -- | |
-| **Component** | Azure Event Hub |
+| **Component** | Azure Event Hubs |
| **SDL Phase** | Build | | **Applicable Technologies** | Generic | | **Attributes** | N/A |
The `<netMsmqBinding/>` element of the WCF configuration file below instructs WC
| **Applicable Technologies** | Generic | | **Attributes** | N/A | | **References** | [Authentication and Authorization in ASP.NET Web API](https://www.asp.net/web-api/overview/security/authentication-and-authorization-in-aspnet-web-api), [External Authentication Services with ASP.NET Web API (C#)](https://www.asp.net/web-api/overview/security/external-authentication-services) |
-| **Steps** | <p>Authentication is the process where an entity proves its identity, typically through credentials, such as a user name and password. There are multiple authentication protocols available which may be considered. Some of them are listed below:</p><ul><li>Client certificates</li><li>Windows based</li><li>Forms based</li><li>Federation - ADFS</li><li>Federation - Microsoft Entra ID</li><li>Federation - Identity Server</li></ul><p>Links in the references section provide low-level details on how each of the authentication schemes can be implemented to secure a Web API.</p>|
+| **Steps** | <p>Authentication is the process where an entity proves its identity, typically through credentials, such as a user name and password. There are multiple authentication protocols available which might be considered. Some of them are listed below:</p><ul><li>Client certificates</li><li>Windows based</li><li>Forms based</li><li>Federation - ADFS</li><li>Federation - Microsoft Entra ID</li><li>Federation - Identity Server</li></ul><p>Links in the references section provide low-level details on how each of the authentication schemes can be implemented to secure a Web API.</p>|
## <a id="authn-aad"></a>Use standard authentication scenarios supported by Microsoft Entra ID
The `<netMsmqBinding/>` element of the WCF configuration file below instructs WC
| **SDL Phase** | Build | | **Applicable Technologies** | Generic | | **Attributes** | N/A |
-| **References** | [Authentication Scenarios for Microsoft Entra ID](../../active-directory/develop/authentication-vs-authorization.md), [Microsoft Entra code Samples](/azure/active-directory/azuread-dev/sample-v1-code), [Microsoft Entra developer's guide](../../active-directory/develop/index.yml) |
+| **References** | [Authentication Scenarios for Microsoft Entra ID](/entra/identity-platform/authentication-vs-authorization), [Microsoft Entra code Samples](/entra/identity-platform/sample-v2-code), [Microsoft Entra developer's guide](/entra/identity-platform/index) |
| **Steps** | <p>Microsoft Entra ID simplifies authentication for developers by providing identity as a service, with support for industry-standard protocols such as OAuth 2.0 and OpenID Connect. Below are the five primary application scenarios supported by Microsoft Entra ID:</p><ul><li>Web Browser to Web Application: A user needs to sign in to a web application that is secured by Microsoft Entra ID</li><li>Single Page Application (SPA): A user needs to sign in to a single page application that is secured by Microsoft Entra ID</li><li>Native Application to Web API: A native application that runs on a phone, tablet, or PC needs to authenticate a user to get resources from a web API that is secured by Microsoft Entra ID</li><li>Web Application to Web API: A web application needs to get resources from a web API secured by Microsoft Entra ID</li><li>Daemon or Server Application to Web API: A daemon application or a server application with no web user interface needs to get resources from a web API secured by Microsoft Entra ID</li></ul><p>Please refer to the links in the references section for low-level implementation details</p>| ## <a id="msal-distributed-cache"></a>Override the default MSAL token cache with a distributed cache
The `<netMsmqBinding/>` element of the WCF configuration file below instructs WC
| **SDL Phase** | Build | | **Applicable Technologies** | Generic | | **Attributes** | N/A |
-| **References** | [Token cache serialization in MSAL.NET](../../active-directory/develop/msal-net-token-cache-serialization.md) |
+| **References** | [Token cache serialization in MSAL.NET](/entra/msal/dotnet/how-to/token-cache-serialization) |
| **Steps** | <p>The default cache that MSAL (Microsoft Authentication Library) uses is an in-memory cache, and is scalable. However, there are different options available that you can use as an alternative, such as a distributed token cache. These have L1/L2 mechanisms, where L1 is in memory and L2 is the distributed cache implementation. These can be configured accordingly to limit L1 memory, encrypt, or set eviction policies. Other alternatives include Redis, SQL Server, or Azure Cosmos DB caches. An implementation of a distributed token cache can be found in the following [Tutorial: Get started with ASP.NET Core MVC](/aspnet/core/tutorials/first-mvc-app/start-mvc).</p>| ## <a id="tokenreplaycache-msal"></a>Ensure that TokenReplayCache is used to prevent the replay of MSAL authentication tokens
OpenIdConnectOptions openIdConnectOptions = new OpenIdConnectOptions
} ```
-Please note that to test the effectiveness of this configuration, login into your local OIDC-protected application and capture the request to `"/signin-oidc"` endpoint in fiddler. When the protection is not in place, replaying this request in fiddler will set a new session cookie. When the request is replayed after the TokenReplayCache protection is added, the application will throw an exception as follows: `SecurityTokenReplayDetectedException: IDX10228: The securityToken has previously been validated, securityToken: 'eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6Ik1uQ19WWmNBVGZNNXBPWWlKSE1iYTlnb0VLWSIsImtpZCI6Ik1uQ1......`
+Please note that to test the effectiveness of this configuration, sign in to your local OIDC-protected application and capture the request to the `"/signin-oidc"` endpoint in Fiddler. When the protection is not in place, replaying this request in Fiddler will set a new session cookie. When the request is replayed after the TokenReplayCache protection is added, the application will throw an exception as follows: `SecurityTokenReplayDetectedException: IDX10228: The securityToken has previously been validated, securityToken: 'eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6Ik1uQ19WWmNBVGZNNXBPWWlKSE1iYTlnb0VLWSIsImtpZCI6Ik1uQ1......`
## <a id="msal-oauth2"></a>Use MSAL libraries to manage token requests from OAuth2 clients to Microsoft Entra ID (or on-premises AD)
Please note that to test the effectiveness of this configuration, login into you
| **SDL Phase** | Build | | **Applicable Technologies** | Generic | | **Attributes** | N/A |
-| **References** | [MSAL](../../active-directory/develop/msal-overview.md) |
+| **References** | [MSAL](/entra/identity-platform/msal-overview) |
| **Steps** | <p>The Microsoft Authentication Library (MSAL) enables developers to acquire security tokens from the Microsoft identity platform to authenticate users and access secured web APIs. It can be used to provide secure access to Microsoft Graph, other Microsoft APIs, third-party web APIs, or your own web API. MSAL supports many different application architectures and platforms including .NET, JavaScript, Java, Python, Android, and iOS. MSAL gives you many ways to get tokens, with a consistent API for many platforms. There is no need to directly use the OAuth libraries or code against the protocol in your application, and it can acquire tokens on behalf of a user or application (when applicable to the platform).
await deviceClient.SendEventAsync(message);
| **Applicable Technologies** | Generic | | **Attributes** | StorageType - Blob | | **References** | [Manage anonymous read access to containers and blobs](../../storage/blobs/anonymous-read-access-configure.md), [Shared Access Signatures, Part 1: Understanding the SAS model](../../storage/common/storage-sas-overview.md) |
-| **Steps** | <p>By default, a container and any blobs within it may be accessed only by the owner of the storage account. To give anonymous users read permissions to a container and its blobs, one can set the container permissions to allow public access. Anonymous users can read blobs within a publicly accessible container without authenticating the request.</p><p>Containers provide the following options for managing container access:</p><ul><li>Full public read access: Container and blob data can be read via anonymous request. Clients can enumerate blobs within the container via anonymous request, but cannot enumerate containers within the storage account.</li><li>Public read access for blobs only: Blob data within this container can be read via anonymous request, but container data is not available. Clients cannot enumerate blobs within the container via anonymous request</li><li>No public read access: Container and blob data can be read by the account owner only</li></ul><p>Anonymous access is best for scenarios where certain blobs should always be available for anonymous read access. For finer-grained control, one can create a shared access signature, which enables to delegate restricted access using different permissions and over a specified time interval. Ensure that containers and blobs, which may potentially contain sensitive data, are not given anonymous access accidentally</p>|
+| **Steps** | <p>By default, a container and any blobs within it can be accessed only by the owner of the storage account. To give anonymous users read permissions to a container and its blobs, one can set the container permissions to allow public access. Anonymous users can read blobs within a publicly accessible container without authenticating the request.</p><p>Containers provide the following options for managing container access:</p><ul><li>Full public read access: Container and blob data can be read via anonymous request. Clients can enumerate blobs within the container via anonymous request, but cannot enumerate containers within the storage account.</li><li>Public read access for blobs only: Blob data within this container can be read via anonymous request, but container data is not available. Clients cannot enumerate blobs within the container via anonymous request</li><li>No public read access: Container and blob data can be read by the account owner only</li></ul><p>Anonymous access is best for scenarios where certain blobs should always be available for anonymous read access. For finer-grained control, one can create a shared access signature, which enables you to delegate restricted access using different permissions and over a specified time interval. Ensure that containers and blobs, which might potentially contain sensitive data, are not given anonymous access accidentally</p>|
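+
+As an illustration, here's a hedged PowerShell sketch that issues a short-lived, read-only SAS for a single blob (assumes the Az.Storage module; account, container, and blob names are placeholders):
+
+```powershell
+# Build a storage context from the account key.
+$key = (Get-AzStorageAccountKey -ResourceGroupName "rg1" -Name "mystorageacct")[0].Value
+$ctx = New-AzStorageContext -StorageAccountName "mystorageacct" -StorageAccountKey $key
+
+# Read-only SAS for one blob, valid for one hour.
+New-AzStorageBlobSASToken -Container "private-container" -Blob "report.pdf" `
+    -Permission r -ExpiryTime (Get-Date).AddHours(1) -Context $ctx -FullUri
+```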
## <a id="limited-access-sas"></a>Grant limited access to objects in Azure storage using SAS or SAP
sentinel Best Practices Workspace Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/best-practices-workspace-architecture.md
Consider the following when working with multiple regions:
- Bandwidth costs vary depending on the source and destination region and collection method. For more information, see: - [Bandwidth pricing](https://azure.microsoft.com/pricing/details/bandwidth/)
- - [Data transfers charges using Log Analytics ](../azure-monitor/usage-estimated-costs.md#data-transfer-charges).
+ - [Data transfers charges using Log Analytics ](../azure-monitor/cost-usage.md#data-transfer-charges).
- Use templates for your analytics rules, custom queries, workbooks, and other resources to make your deployments more efficient. Deploy the templates instead of manually deploying each resource in each region. -- Connectors that are based on diagnostics settings don't incur in-bandwidth costs. For more information, see [Data transfers charges using Log Analytics](../azure-monitor/usage-estimated-costs.md#data-transfer-charges).
+- Connectors that are based on diagnostics settings do not incur in-bandwidth costs. For more information, see [Data transfers charges using Log Analytics](../azure-monitor/cost-usage.md#data-transfer-charges).
For example, if you decide to collect logs from Virtual Machines in East US and send them to a Microsoft Sentinel workspace in West US, you'll be charged ingress costs for the data transfer. Since the Log Analytics agent compresses the data in transit, the size charged for the bandwidth might be lower than the size of the logs in Microsoft Sentinel.
sentinel Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/billing.md
If you're billed at classic Pay-As-You-Go rate, this table shows how Microsoft S
# [Free data meters](#tab/free-data-meters/simplified)
-This table shows how Microsoft Sentinel and Log Analytics no charge costs appear in the **Service name** and **Meter** columns of your Azure bill for free data services when billing is at a simplified pricing tier. For more information, see [View Data Allocation Benefits](../azure-monitor/usage-estimated-costs.md#view-data-allocation-benefits).
+This table shows how Microsoft Sentinel and Log Analytics no charge costs appear in the **Service name** and **Meter** columns of your Azure bill for free data services when billing is at a simplified pricing tier. For more information, see [View Data Allocation Benefits](../azure-monitor/cost-usage.md#view-data-allocation-benefits).
Cost description | Service name | Meter | |--|--|--|
This table shows how Microsoft Sentinel and Log Analytics no charge costs appear
# [Free data meters](#tab/free-data-meters/classic)
-This table shows how Microsoft Sentinel and Log Analytics no charge costs appear in the **Service name** and **Meter** columns of your Azure bill for free data services when billing is at a classic pricing tier. For more information, see [View Data Allocation Benefits](../azure-monitor/usage-estimated-costs.md#view-data-allocation-benefits).
+This table shows how Microsoft Sentinel and Log Analytics no charge costs appear in the **Service name** and **Meter** columns of your Azure bill for free data services when billing is at a classic pricing tier. For more information, see [View Data Allocation Benefits](../azure-monitor/cost-usage.md#view-data-allocation-benefits).
Cost description | Service name | Meter | |--|--|--|
sentinel Entities Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/entities-reference.md
description: This article displays the Microsoft Sentinel entity types and their
Previously updated : 05/29/2023 Last updated : 10/15/2023 # Microsoft Sentinel entity types reference
+This document contains two sets of information regarding entities and entity types in Microsoft Sentinel.
+- The [**Entity types and identifiers**](#entity-types-and-identifiers) table shows the different types of entities that can be used in [entity mapping](map-data-fields-to-entities.md) in both [analytics rules](detect-threats-custom.md) and [hunting](hunting.md). The table also shows, for each entity type, the different identifiers that can be used to identify an entity.
+- The [**Entity schema**](#entity-type-schemas) section shows the data structure and schema for entities in general and for each entity type in particular, including some types that are not represented in the entity mapping feature.
+ ## Entity types and identifiers
-The following table shows the **entity types** currently available for mapping in Microsoft Sentinel, and the **attributes** available as **identifiers** for each entity type. These attributes appear in the **Identifiers** drop-down list in the [entity mapping](map-data-fields-to-entities.md) section of the [analytics rule wizard](detect-threats-custom.md).
+The following table shows the **entity types** currently available for mapping in Microsoft Sentinel, and the **attributes** available as **identifiers** for each entity type. Nearly all of these attributes appear in the **Identifiers** drop-down list in the [entity mapping](map-data-fields-to-entities.md) section of the [analytics rule wizard](detect-threats-custom.md) (see footnotes for exceptions).
-Each one of the identifiers in the **required identifiers** column is necessary to identify its entity. However, a required identifier might not, by itself, be sufficient to provide *unique* identification. The more identifiers used, the greater the likelihood of unique identification. You can use up to three identifiers for a single entity mapping.
+You can use up to three identifiers for a single entity mapping. **Strong identifiers** alone are sufficient to uniquely identify an entity, whereas **weak identifiers** can do so only in combination with other identifiers.
-For best results&mdash;for guaranteed unique identification&mdash;you should use identifiers from the **strongest identifiers** column whenever possible. The use of multiple strong identifiers enables correlation between strong identifiers from varying data sources and schemas. This correlation in turn allows Microsoft Sentinel to provide more comprehensive insights for a given entity.
+Learn more about [strong and weak identifiers](entities.md#strong-and-weak-identifiers).
-| Entity type | Identifiers | Required identifiers | Strongest identifiers |
+| Entity type | Identifiers | Strong identifiers | Weak identifiers |
| - | - | - | - |
-| [**User account**](#user-account)<br>*(Account)* | Name<br>FullName<br>NTDomain<br>DnsDomain<br>UPNSuffix<br>Sid<br>AadTenantId<br>AadUserId<br>PUID<br>IsDomainJoined<br>DisplayName<br>ObjectGuid | FullName<br>Sid<br>Name<br>AadUserId<br>PUID<br>ObjectGuid | Name + NTDomain<br>Name + UPNSuffix<br>AADUserId<br>Sid |
-| [**Host**](#host) | DnsDomain<br>NTDomain<br>HostName<br>FullName<br>NetBiosName<br>AzureID<br>OMSAgentID<br>OSFamily<br>OSVersion<br>IsDomainJoined | FullName<br>HostName<br>NetBiosName<br>AzureID<br>OMSAgentID | HostName + NTDomain<br>HostName + DnsDomain<br>NetBiosName + NTDomain<br>NetBiosName + DnsDomain<br>AzureID<br>OMSAgentID |
-| [**IP address**](#ip-address)<br>*(IP)* | Address | Address | |
-| [**Malware**](#malware) | Name<br>Category | Name | |
-| [**File**](#file) | Directory<br>Name | Name | |
-| [**Process**](#process) | ProcessId<br>CommandLine<br>ElevationToken<br>CreationTimeUtc | CommandLine<br>ProcessId | |
-| [**Cloud application**](#cloud-application)<br>*(CloudApplication)* | AppId<br>Name<br>InstanceName | AppId<br>Name | |
-| [**Domain name**](#domain-name)<br>*(DNS)* | DomainName | DomainName | |
+| [**Account**](#account) | Name<br>*FullName \**<br>NTDomain<br>DnsDomain<br>UPNSuffix<br>Sid<br>AadTenantId<br>AadUserId<br>PUID<br>IsDomainJoined<br>*DisplayName \**<br>ObjectGuid | Name+UPNSuffix<br>AADUserId<br>Sid [\*\*](#strong-identifiers-of-an-account-entity)<br>Sid+*Host* [\*\*](#strong-identifiers-of-an-account-entity)<br>Name+*Host*+NTDomain [\*\*](#strong-identifiers-of-an-account-entity)<br>Name+NTDomain [\*\*](#strong-identifiers-of-an-account-entity)<br>Name+DnsDomain<br>PUID<br>ObjectGuid | Name |
+| [**Host**](#host) | DnsDomain<br>NTDomain<br>HostName<br>*FullName \**<br>NetBiosName<br>AzureID<br>OMSAgentID<br>OSFamily<br>OSVersion<br>IsDomainJoined | HostName+NTDomain<br>HostName+DnsDomain<br>NetBiosName+NTDomain<br>NetBiosName+DnsDomain<br>AzureID<br>OMSAgentID | HostName<br>NetBiosName |
+| [**IP**](#ip) | Address<br>AddressScope | Address [\*\*](#strong-identifiers-of-an-ip-entity)<br>Address+AddressScope [\*\*](#strong-identifiers-of-an-ip-entity) | |
+| [**URL**](#url) | Url | Url *(if absolute URL)* [\*\*](#strong-identifiers-of-a-url-entity) | Url *(if relative URL)* [\*\*](#strong-identifiers-of-a-url-entity) |
| [**Azure resource**](#azure-resource) | ResourceId | ResourceId | |
-| [**File hash**](#file-hash)<br>*(FileHash)* | Algorithm<br>Value | Algorithm + Value | |
-| [**Registry key**](#registry-key) | Hive<br>Key | Hive<br>Key | Hive + Key |
-| [**Registry value**](#registry-value) | Name<br>Value<br>ValueType | Name | |
+| [**Cloud application**](#cloud-application)<br>*(CloudApplication)* | AppId<br>Name<br>InstanceName | AppId<br>Name<br>AppId+InstanceName<br>Name+InstanceName | |
+| [**DNS Resolution**](#dns-resolution) | DomainName | DomainName+*DnsServerIp*+*HostIpAddress* | DomainName+*HostIpAddress* |
+| [**File**](#file) | Directory<br>Name | Directory+Name | |
+| [**File hash**](#file-hash)<br>*(FileHash)* | Algorithm<br>Value | Algorithm+Value | |
+| [**Malware**](#malware) | Name<br>Category | Name+Category | |
+| [**Process**](#process) | ProcessId<br>CommandLine<br>ElevationToken<br>CreationTimeUtc | *Host*+ProcessID+CreationTimeUtc<br>*Host*+*ParentProcessId*+<br>&nbsp;&nbsp;&nbsp;CreationTimeUtc+CommandLine<br>*Host*+ProcessId+<br>&nbsp;&nbsp;&nbsp;CreationTimeUtc+*ImageFile*<br>*Host*+ProcessId+<br>&nbsp;&nbsp;&nbsp;CreationTimeUtc+*ImageFile*+<br>&nbsp;&nbsp;&nbsp;*FileHash* | ProcessId+CreationTimeUtc+<br>&nbsp;&nbsp;&nbsp;CommandLine (no Host)<br>ProcessId+CreationTimeUtc+<br>&nbsp;&nbsp;&nbsp;*ImageFile* (no Host) |
+| [**Registry key**](#registry-key) | Hive<br>Key | Hive+Key | |
+| [**Registry value**](#registry-value) | Name<br>Value<br>ValueType<br> | *Key*+Name | Name (no Key) |
| [**Security group**](#security-group) | DistinguishedName<br>SID<br>ObjectGuid | DistinguishedName<br>SID<br>ObjectGuid | |
-| [**URL**](#url) | Url | Url | |
-| [**IoT device**](#iot-device) | IoTHub<br>DeviceId<br>DeviceName<br>IoTSecurityAgentId<br>DeviceType<br>Source<br>SourceRef<br>Manufacturer<br>Model<br>OperatingSystem<br>IpAddress<br>MacAddress<br>Protocols<br>SerialNumber | IoTHub<br>DeviceId | IoTHub + DeviceId |
| [**Mailbox**](#mailbox) | MailboxPrimaryAddress<br>DisplayName<br>Upn<br>ExternalDirectoryObjectId<br>RiskLevel | MailboxPrimaryAddress | |
-| [**Mail cluster**](#mail-cluster) | NetworkMessageIds<br>CountByDeliveryStatus<br>CountByThreatType<br>CountByProtectionStatus<br>Threats<br>Query<br>QueryTime<br>MailCount<br>IsVolumeAnomaly<br>Source<br>ClusterSourceIdentifier<br>ClusterSourceType<br>ClusterQueryStartTime<br>ClusterQueryEndTime<br>ClusterGroup | Query<br>Source | Query + Source |
-| [**Mail message**](#mail-message) | Recipient<br>Urls<br>Threats<br>Sender<br>P1Sender<br>P1SenderDisplayName<br>P1SenderDomain<br>SenderIP<br>P2Sender<br>P2SenderDisplayName<br>P2SenderDomain<br>ReceivedDate<br>NetworkMessageId<br>InternetMessageId<br>Subject<br>BodyFingerprintBin1<br>BodyFingerprintBin2<br>BodyFingerprintBin3<br>BodyFingerprintBin4<br>BodyFingerprintBin5<br>AntispamDirection<br>DeliveryAction<br>DeliveryLocation<br>Language<br>ThreatDetectionMethods | NetworkMessageId<br>Recipient | NetworkMessageId + Recipient |
-| [**Submission mail**](#submission-mail) | SubmissionId<br>SubmissionDate<br>Submitter<br>NetworkMessageId<br>Timestamp<br>Recipient<br>Sender<br>SenderIp<br>Subject<br>ReportType | SubmissionId<br>NetworkMessageId<br>Recipient<br>Submitter | |
+| [**Mail cluster**](#mail-cluster) | NetworkMessageIds<br>CountByDeliveryStatus<br>CountByThreatType<br>CountByProtectionStatus<br>Threats<br>Query<br>QueryTime<br>MailCount<br>IsVolumeAnomaly<br>Source<br>*ClusterSourceIdentifier \**<br>*ClusterSourceType \**<br>*ClusterQueryStartTime \**<br>*ClusterQueryEndTime \**<br>*ClusterGroup \** | Query+Source | |
+| [**Mail message**](#mail-message) | Recipient<br>Urls<br>Threats<br>Sender<br>*P1Sender \**<br>*P1SenderDisplayName \**<br>*P1SenderDomain \**<br>SenderIP<br>*P2Sender \**<br>*P2SenderDisplayName \**<br>*P2SenderDomain \**<br>ReceivedDate<br>NetworkMessageId<br>InternetMessageId<br>Subject<br>*BodyFingerprintBin1 \**<br>*BodyFingerprintBin2 \**<br>*BodyFingerprintBin3 \**<br>*BodyFingerprintBin4 \**<br>*BodyFingerprintBin5 \**<br>AntispamDirection<br>DeliveryAction<br>DeliveryLocation<br>*Language \**<br>*ThreatDetectionMethods \** | NetworkMessageId+Recipient | |
+| [**Submission mail**](#submission-mail) | NetworkMessageId<br>Timestamp<br>Recipient<br>Sender<br>SenderIp<br>Subject<br>ReportType<br>SubmissionId<br>SubmissionDate<br>Submitter | SubmissionId+NetworkMessageId+<br>&nbsp;&nbsp;&nbsp;Recipient+Submitter | |
| [**Sentinel entities**](#sentinel-entities) | Entities | Entities | |
+**Table footnotes:**
+- \* These identifiers appear in the list of identifiers that can be used in entity mapping, but strictly speaking they are not part of the entity schema.
+- \*\* These identifiers are considered strong only under certain conditions. Follow the asterisks' links to see the conditions that apply, under the relevant entity's listing in the [entity schemas section below](#entity-type-schemas).
+- *Italicized identifier names* (without an asterisk) represent internal entities, which means that one entity type can have other entity types as attributes (see the [entity schemas section below](#entity-type-schemas)). Follow the identifier's link to see the internal entity's own schema.
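+
+To illustrate how these identifiers are used together, here's a hedged sketch of an entity mapping as it might appear in a scheduled analytics rule's ARM template, combining Name and UPNSuffix into a strong account identifier (the column names are illustrative):
+
+```json
+"entityMappings": [
+  {
+    "entityType": "Account",
+    "fieldMappings": [
+      { "identifier": "Name", "columnName": "AccountName" },
+      { "identifier": "UPNSuffix", "columnName": "AccountUPNSuffix" }
+    ]
+  }
+]
+```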
+ ## Entity type schemas
-The following section contains a more in-depth look at the full schemas of each entity type. You'll notice that many of these schemas include links to other entity types&mdash;for example, the User account schema includes a link to the Host entity type, since one attribute of a user account is the host it's defined on. These externally linked entities can't be used as identifiers for the purpose of entity mapping, but they are very useful in giving a complete picture of entities on entity pages and the investigation graph.
+The following section contains a more in-depth look at the full schemas of each entity type. You'll notice that many of these schemas include links to other entity types. For example, the Account schema includes a link to the Host entity type, since one attribute of a user account is the host it's defined on. These entities-as-attributes are known as "internal entities", and they can't be used as identifiers for entity mapping, but they are very useful in giving a complete picture of entities on entity pages and the investigation graph.
> [!NOTE] > A question mark following the value in the **Type** column indicates the field is nullable.
-## User account
-
-*Entity name: Account*
+### List of entity type schemas
+
+- [Account](#account)
+- [Host](#host)
+- [IP](#ip)
+- [Malware](#malware)
+- [File](#file)
+- [Process](#process)
+- [Cloud application](#cloud-application)
+- [DNS resolution](#dns-resolution)
+- [Azure resource](#azure-resource)
+- [File hash](#file-hash)
+- [Registry key](#registry-key)
+- [Registry value](#registry-value)
+- [Security group](#security-group)
+- [URL](#url)
+- [IoT device](#iot-device)
+- [Mailbox](#mailbox)
+- [Mail cluster](#mail-cluster)
+- [Mail message](#mail-message)
+- [Submission mail](#submission-mail)
+- [Sentinel entities](#sentinel-entities)
+
+### Account
| Field | Type | Description | | -- | - | -- |
-| Type | String | 'account' |
-| Name | String | The name of the account. This field should hold only the name without any domain added to it. |
-| *FullName* | *N/A* | *Not part of schema, included for backward compatibility with old version of entity mapping.*
-| NTDomain | String | The NETBIOS domain name as it appears in the alert format – domain\username. Examples: Finance, NT AUTHORITY |
-| DnsDomain | String | The fully qualified domain DNS name. Examples: finance.contoso.com |
-| UPNSuffix | String | The user principal name suffix for the account. In some cases this is also the domain name. Examples: contoso.com |
-| Host | Entity | The host which contains the account, if it's a local account. |
-| Sid | String | The account security identifier, such as S-1-5-18. |
-| AadTenantId | Guid? | The Microsoft Entra tenant ID, if known. |
-| AadUserId | Guid? | The Microsoft Entra account object ID, if known. |
-| PUID | Guid? | The Microsoft Entra Passport User ID, if known. |
-| IsDomainJoined | Bool? | Determines whether this is a domain account. |
-| DisplayName | String | The display name of the account. |
-| ObjectGuid | Guid? | The objectGUID attribute is a single-value attribute that is the unique identifier for the object, assigned by Active Directory. |
-
-Strong identifiers of an account entity:
--- Name + UPNSuffix-- AadUserId-- Sid + Host (required for SIDs of builtin accounts)-- Sid (except for SIDs of builtin accounts)-- Name + NTDomain (unless NTDomain is a builtin domain, for example "Workgroup")-- Name + Host (if NTDomain is a builtin domain, for example "Workgroup")-- Name + DnsDomain-- PUID-- ObjectGuid-
-Weak identifiers of an account entity:
-
+| **Type** | String | 'account' |
+| **Name** | String | The name of the account. This field should hold only the name without any domain added to it. |
+| ***FullName*** | -- | *Not part of schema, included for backward compatibility with old version of entity mapping.* |
+| **NTDomain** | String | The NETBIOS domain name as it appears in the alert format&mdash;domain\username. Examples: Finance, NT AUTHORITY |
+| **DnsDomain** | String | The fully qualified domain DNS name. Examples: finance.contoso.com |
+| **UPNSuffix** | String | The user principal name suffix for the account. In many cases the UPN Suffix is also the domain name. Examples: contoso.com |
+| **Host** | Entity ([Host](#host)) | The host that contains the account, if it's a local account. |
+| **Sid** | String | The account's security identifier. |
+| **AadTenantId** | Guid? | The Microsoft Entra tenant ID, if known. |
+| **AadUserId** | Guid? | The Microsoft Entra account object ID, if known. |
+| **PUID** | Guid? | The Microsoft Entra Passport User ID, if known. |
+| **IsDomainJoined** | Bool? | Indicates whether the account is a domain account. |
+| ***DisplayName*** | -- | *Not part of schema, included for backward compatibility with old version of entity mapping.* |
+| **ObjectGuid** | Guid? | The objectGUID attribute is a single-value attribute that is the unique identifier for the object, assigned by Active Directory. |
+| **CloudAppAccountId** | String | The AccountID in alerts from the CloudApp provider. Refers to account IDs in third-party apps that are not supported in other Microsoft products. |
+| **IsAnonymized** | Bool? | Indicates whether the user name is anonymized. Optional. Default value: `false`. |
+| **Stream** | Stream | The source of discovery logs related to the specific account. Optional. |
+
+#### Strong identifiers of an account entity
+
+- **Name + UPNSuffix**
+- **AadUserId**
+- **Sid**
+\*\* This identifier is strong as long as the account **is not** one of the built-in accounts listed in the **Note** below.
+- **Sid + [*Host*](#host)**
+\*\* When the account is one of the built-in accounts listed in the **Note** below, the Host component is required to make this identifier a strong one.
+- **Name + NTDomain**
+\*\* This combination is a strong identifier when the account is a domain account, since NTDomain is not a built-in domain/workgroup and is different from the host name. In this case, this is a strong identifier even without the Host component.
+- **Name + NTDomain + [*Host*](#host)**
+\*\* The Host component is necessary to create a strong identifier when the account is a local account, meaning that the NTDomain is a built-in domain/workgroup.
+- **Name + DnsDomain**
+- **PUID**
+- **ObjectGuid**
+
+#### Weak identifiers of an account entity
- Name

> [!NOTE]
Weak identifiers of an account entity:
> - LOCALSYSTEM
> - NETWORK SERVICE
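+
+For illustration, a hedged sketch of how an account entity carrying a strong Name + UPNSuffix identifier might appear in an alert's entities payload (all values are placeholders):
+
+```json
+{
+  "Type": "account",
+  "Name": "jdoe",
+  "UPNSuffix": "contoso.com",
+  "AadUserId": "00000000-0000-0000-0000-000000000000"
+}
+```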
-## Host
+[Back to list of entity type schemas](#list-of-entity-type-schemas) | [Back to entity identifiers table](#entity-types-and-identifiers)
+
+### Host
| Field | Type | Description | | -- | - | -- |
-| Type | String | 'host' |
-| DnsDomain | String | The DNS domain that this host belongs to. Should contain the complete DNS suffix for the domain, if known. |
-| NTDomain | String | The NT domain that this host belongs to. |
-| HostName | String | The hostname without the domain suffix. |
-| *FullName* | *N/A* | *Not part of schema, included for backward compatibility with old version of entity mapping.*
-| NetBiosName | String | The host name (pre-Windows 2000). |
-| IoTDevice | Entity | The IoT Device entity (if this host represents an IoT Device). |
-| AzureID | String | The Azure resource ID of the VM, if known. |
-| OMSAgentID | String | The OMS agent ID, if the host has OMS agent installed. |
-| OSFamily | Enum? | One of the following values: <li>Linux<li>Windows<li>Android<li>IOS |
-| OSVersion | String | A free-text representation of the operating system.<br>This field is meant to hold specific versions the are more fine-grained than OSFamily, or future values not supported by OSFamily enumeration. |
-| IsDomainJoined | Bool | Determines whether this host belongs to a domain. |
-
-Strong identifiers of a host entity:
-- HostName + NTDomain-- HostName + DnsDomain-- NetBiosName + NTDomain-- NetBiosName + DnsDomain-- AzureID-- OMSAgentID-- IoTDevice (not supported for entity mapping)-
-Weak identifiers of a host entity:
+| **Type** | String | 'host' |
+| **IpInterfaces** | List<Entity ([Ip](#ip))> | List of all IP interfaces on the host machine. |
+| **DnsDomain** | String | The DNS domain that this host belongs to. Should contain the complete DNS suffix for the domain, if known. |
+| **NTDomain** | String | The NT domain that this host belongs to. |
+| **HostName** | String | The hostname without the domain suffix. |
+| **NetBiosName** | String | The host name (pre-Windows 2000). |
+| **IoTDevice** | Entity ([IoT Device](#iot-device)) | The IoT Device entity (if this host represents an IoT Device). |
+| **AzureID** | String | The Azure resource ID of the VM, if known. |
+| **OMSAgentID** | String | The OMS agent ID, if the host has OMS agent installed. |
+| **OSFamily** | Enum? | One of the following values: <li>Linux<li>Windows<li>Android<li>IOS<li>Mac |
+| **OSVersion** | String | A free-text representation of the operating system.<br>This field is meant to hold specific versions that are more fine-grained than OSFamily, or future values not supported by OSFamily enumeration. |
+| **IsDomainJoined** | Bool | Indicates whether this host belongs to a domain. |
+
+#### Strong identifiers of a host entity
+
+- **HostName + NTDomain**
+- **HostName + DnsDomain**
+- **NetBiosName + NTDomain**
+- **NetBiosName + DnsDomain**
+- **AzureID**
+- **OMSAgentID**
+- ***IoTDevice***
+
+#### Weak identifiers of a host entity
+ - HostName - NetBiosName
-## IP address
+[Back to list of entity type schemas](#list-of-entity-type-schemas) | [Back to entity identifiers table](#entity-types-and-identifiers)
-*Entity name: IP*
+### IP
| Field | Type | Description | | -- | - | -- |
-| Type | String | 'ip' |
-| Address | String | The IP address as string, e.g. 127.0.0.1 (either in IPv4 or IPv6). |
-| Location | GeoLocation | The geo-location context attached to the IP entity. <br><br>For more information, see also [Enrich entities in Microsoft Sentinel with geolocation data via REST API (Public preview)](geolocation-data-api.md). |
+| **Type** | String | 'ip' |
+| **Address** | String | The IP address as a string, for example, 127.0.0.1 (either in IPv4 or IPv6). |
+| **AddressScope** | String | Name of the host, subnet, or private network for private, non-global IP addresses. Null or empty for global IP addresses (default). |
+| **Location** | GeoLocation | The geo-location context attached to the IP entity. <br><br>For more information, see also [Enrich entities in Microsoft Sentinel with geolocation data via REST API (Public preview)](geolocation-data-api.md). |
+| **Stream** | Stream | The source of discovery logs related to the specific IP. Optional. |
-Strong identifiers of an IP entity:
-- Address
+#### Strong identifiers of an IP entity
-## Malware
+- **Address**
+\*\* Address alone is a unique, strong identifier when the IP address is a global address.
+- **Address + AddressScope**
+\*\* For private/internal, non-global IP addresses, the AddressScope component is required to make this a strong identifier.
+
+[Back to list of entity type schemas](#list-of-entity-type-schemas) | [Back to entity identifiers table](#entity-types-and-identifiers)
+
+### Malware
| Field | Type | Description | | -- | - | -- |
-| Type | String | 'malware' |
-| Name | String | The malware name by the vendor, such as `Win32/Toga!rfn`. |
-| Category | String | The malware category by the vendor, e.g. Trojan. |
-| Files | List\<Entity> | List of linked file entities on which the malware was found. Can contain the File entities inline or as reference.<br>See the [File](#file) entity for more details on structure. |
-| Processes | List\<Entity> | List of linked process entities on which the malware was found. This would often be used when the alert triggered on fileless activity.<br>See the [Process](#process) entity for more details on structure. |
+| **Type** | String | 'malware' |
+| **Name** | String | The malware name assigned by the detection vendor, such as `Win32/Toga!rfn`. |
+| **Category** | String | The malware category assigned by the detection vendor, for example, Trojan. |
+| **Files** | List\<Entity ([File](#file))> | List of linked file entities on which the malware was found. Can contain the File entities inline or as reference.<br>See the [File](#file) entity for more details on structure. |
+| **Processes** | List\<Entity ([Process](#process))> | List of linked process entities on which the malware was found. This would often be used when the alert triggered on fileless activity.<br>See the [Process](#process) entity for more details on structure. |
+
+#### Strong identifiers of a malware entity
-Strong identifiers of a malware entity:
+- **Name + Category**
-- Name + Category
+[Back to list of entity type schemas](#list-of-entity-type-schemas) | [Back to entity identifiers table](#entity-types-and-identifiers)
-## File
+### File
| Field | Type | Description |
| -- | - | -- |
-| Type | String | 'file' |
-| Directory | String | The full path to the file. |
-| Name | String | The file name without the path (some alerts might not include path). |
-| Host | Entity | The host on which the file was stored. |
-| FileHashes | List&lt;Entity&gt; | The file hashes associated with this file. |
+| **Type** | String | 'file' |
+| **Directory** | String | The full path to the file. |
+| **Name** | String | The file name without the path (some alerts might not include path). |
+| **AlternateDataStreamName** | String | The file stream name in the NTFS file system (null for the main stream). |
+| **Host** | Entity ([Host](#host)) | The host on which the file was stored. |
+| **HostUrl** | Entity ([URL](#url)) | URL where the file was downloaded from <br>([Mark of the Web](/deployedge/per-site-configuration-by-policy)). |
+| **WindowsSecurityZoneType** | WindowsSecurityZone | Windows Security Zone to which the URL belongs <br>([Mark of the Web](/deployedge/per-site-configuration-by-policy)). |
+| **ReferrerUrl** | Entity ([URL](#url)) | Referrer URL of the file download HTTP request <br>([Mark of the Web](/deployedge/per-site-configuration-by-policy)). |
+| **SizeInBytes** | Long? | The size of the file in bytes. |
+| **FileHashes** | List\<Entity ([FileHash](#file-hash))> | The file hashes associated with this file. |
-Strong identifiers of a file entity:
-- Name + Directory
-- Name + FileHash
-- Name + Directory + FileHash
+#### Strong identifiers of a file entity
-## Process
+- **Name + Directory**
+- **Name + *FileHash***
+- **Name + Directory + *FileHash***
+
+[Back to list of entity type schemas](#list-of-entity-type-schemas) | [Back to entity identifiers table](#entity-types-and-identifiers)
+
+### Process
| Field | Type | Description |
| -- | - | -- |
-| Type | String | 'process' |
-| ProcessId | String | The process ID. |
-| CommandLine | String | The command line used to create the process. |
-| ElevationToken | Enum? | The elevation token associated with the process.<br>Possible values:<li>TokenElevationTypeDefault<li>TokenElevationTypeFull<li>TokenElevationTypeLimited |
-| CreationTimeUtc | DateTime? | The time when the process started to run. |
-| ImageFile | Entity (File) | Can contain the File entity inline or as reference.<br>See the [File](#file) entity for more details on structure. |
-| Account | Entity | The account running the processes.<br>Can contain the Account entity inline or as reference.<br>See the [Account](#user-account) entity for more details on structure. |
-| ParentProcess | Entity (Process) | The parent process entity. <br>Can contain partial data, i.e. only the PID. |
-| Host | Entity | The host on which the process was running. |
-| LogonSession | Entity (HostLogonSession) | The session in which the process was running. |
-
-Strong identifiers of a process entity:
-
-- Host + ProcessId + CreationTimeUtc
-- Host + ParentProcessId + CreationTimeUtc + CommandLine
-- Host + ProcessId + CreationTimeUtc + ImageFile
-- Host + ProcessId + CreationTimeUtc + ImageFile.FileHash
-
-Weak identifiers of a process entity:
+| **Type** | String | 'process' |
+| **ProcessId** | String | The process ID. |
+| **CommandLine** | String | The command line used to create the process. |
+| **ElevationToken** | Enum? | The elevation token associated with the process.<br>Possible values:<li>TokenElevationTypeDefault<li>TokenElevationTypeFull<li>TokenElevationTypeLimited |
+| **CreationTimeUtc** | DateTime? | The time when the process started to run. |
+| **ImageFile** | Entity ([File](#file)) | Can contain the File entity inline or as reference.<br>See the [File](#file) entity for more details on structure. |
+| **Account** | Entity ([Account](#account)) | The account running the processes.<br>Can contain the Account entity inline or as reference.<br>See the [Account](#account) entity for more details on structure. |
+| **ParentProcess** | Entity ([Process](#process)) | The parent process entity. <br>Can contain partial data, for example, only the PID. |
+| **Host** | Entity ([Host](#host)) | The host on which the process was running. |
+| **LogonSession** | Entity (HostLogonSession) | The session in which the process was running. |
+
+#### Strong identifiers of a process entity
+
+- ***Host* + ProcessId + CreationTimeUtc**
+- ***Host* + *ParentProcessId* + CreationTimeUtc + CommandLine**
+- ***Host* + ProcessId + CreationTimeUtc + *ImageFile***
+- ***Host* + ProcessId + CreationTimeUtc + *ImageFile.FileHash***
+
+#### Weak identifiers of a process entity
- ProcessId + CreationTimeUtc + CommandLine (and no Host)
-- ProcessId + CreationTimeUtc + ImageFile (and no Host)
+- ProcessId + CreationTimeUtc + *ImageFile* (and no Host)
+
+[Back to list of entity type schemas](#list-of-entity-type-schemas) | [Back to entity identifiers table](#entity-types-and-identifiers)
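
The ImageFile and Host fields accept their linked entities either inline or by reference. As a rough, hypothetical sketch of the inline case, which is what makes the Host + ProcessId + CreationTimeUtc + ImageFile strong identifier resolvable from a single record:

```powershell
# A hypothetical sketch of a process entity carrying its image file inline.
# Field names follow the schema above; all values are placeholders.
$processEntity = @{
    Type            = 'process'
    ProcessId       = '4728'
    CommandLine     = 'powershell.exe -NoProfile'
    CreationTimeUtc = '2023-10-31T09:15:00Z'
    Host            = @{ Type = 'host'; HostName = 'WS-01'; DnsDomain = 'contoso.com' }
    ImageFile       = @{ Type = 'file'; Name = 'powershell.exe'; Directory = 'C:\Windows\System32\WindowsPowerShell\v1.0' }
}
```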
-## Cloud application
+### Cloud application
*Entity name: CloudApplication*

| Field | Type | Description |
| -- | - | -- |
-| Type | String | 'cloud-application' |
-| AppId | Int | The technical identifier of the application. This should be one of the values defined in the list of [cloud application identifiers](#cloud-application-identifiers). The value for AppId field is optional. |
-| Name | String | The name of the related cloud application. The value of application name is optional. |
-| InstanceName | String | The user-defined instance name of the cloud application. It is often used to distinguish between several applications of the same type that a customer has. |
+| **Type** | String | 'cloud-application' |
+| **AppId** | Int | Deprecated; use SaasId field instead. The technical identifier of the application. Possible values are those defined in the list of [cloud application identifiers](#cloud-application-identifiers). Value optional. Should not contain InstanceId. |
+| **SaasId** | Int | Replaces deprecated AppId field. The technical identifier of the application. Possible values are those defined in the list of [cloud application identifiers](#cloud-application-identifiers). Value optional. Should not contain InstanceId. |
+| **Name** | String | The name of the related cloud application. Value optional. |
+| **InstanceName** | String | The user-defined instance name of the cloud application. It is often used to distinguish between several applications of the same type that a customer has. |
+| **InstanceId** | Int | The identifier of the specific session of the application. This is a zero-based running number. Value optional. |
+| **Risk** | AppRisk? | Lets you filter apps by risk score so that you can focus on, for example, reviewing only highly risky apps. Possible values: Low, Medium, High, or Unknown. |
+| **Stream** | Stream | The source of discovery logs related to the specific cloud app. Optional. |
-Strong identifiers of a cloud application entity:
+#### Strong identifiers of a cloud application entity
-## Domain name
+- **AppId (without InstanceName)**
+- **Name (without InstanceName)**
+- **AppId + InstanceName**
+- **Name + InstanceName**
+
+[List of cloud application identifiers](#cloud-application-identifiers)
+
+[Back to list of entity type schemas](#list-of-entity-type-schemas) | [Back to entity identifiers table](#entity-types-and-identifiers)
+
+### DNS resolution
*Entity name: DNS*

| Field | Type | Description |
| -- | - | -- |
-| Type | String | 'dns' |
-| DomainName | String | The name of the DNS record associated with the alert. |
-| IpAddress | List&lt;Entity (IP)&gt; | Entities corresponding to the resolved IP addresses. |
-| DnsServerIp | Entity (IP) | An entity representing the DNS server resolving the request. |
-| HostIpAddress | Entity (IP) | An entity representing the DNS request client. |
+| **Type** | String | 'dns' |
+| **DomainName** | String | The name of the DNS record associated with the alert. |
+| **IpAddress** | List\<Entity ([IP](#ip))> | Entities corresponding to the resolved IP addresses. |
+| **DnsServerIp** | Entity ([IP](#ip)) | An entity representing the DNS server resolving the request. |
+| **HostIpAddress** | Entity ([IP](#ip)) | An entity representing the DNS request client. |
+
+#### Strong identifiers of a DNS entity
+
+- **DomainName + *DnsServerIp* + *HostIpAddress***
+
+#### Weak identifiers of a DNS entity
-Strong identifiers of a DNS entity:
-- DomainName + DnsServerIp + HostIpAddress
+- DomainName + *HostIpAddress*
-Weak identifiers of a DNS entity:
-- DomainName + HostIpAddress
+[Back to list of entity type schemas](#list-of-entity-type-schemas) | [Back to entity identifiers table](#entity-types-and-identifiers)
-## Azure resource
+### Azure resource
| Field | Type | Description |
| -- | - | -- |
-| Type | String | 'azure-resource' |
-| ResourceId | String | The Azure resource ID of the resource. |
-| SubscriptionId | String | The subscription ID of the resource. |
-| TryGetResourceGroup | Bool | The resource group value if it exists. |
-| TryGetProvider | Bool | The provider value if it exists. |
-| TryGetName | Bool | The name value if it exists. |
+| **Type** | String | 'azure-resource' |
+| **ResourceId** | String | The Azure resource ID of the resource. Mandatory. |
+| **SubscriptionId** | String | The subscription ID of the resource. |
+| **ActiveContacts** | List\<ActiveContact> | Active contacts associated with the resource. |
+| **ResourceType** | String | The type of the resource. |
+| **ResourceName** | String | The name of the resource. |
-Strong identifiers of an Azure resource entity:
-- ResourceId
+#### Strong identifiers of an Azure resource entity
-## File hash
+- **ResourceId**
+
+[Back to list of entity type schemas](#list-of-entity-type-schemas) | [Back to entity identifiers table](#entity-types-and-identifiers)
+
+### File hash
*Entity name: FileHash*

| Field | Type | Description |
| -- | - | -- |
-| Type | String | 'filehash' |
-| Algorithm | Enum | The hash algorithm type. Possible values:<li>Unknown<li>MD5<li>SHA1<li>SHA256<li>SHA256AC |
-| Value | String | The hash value. |
+| **Type** | String | 'filehash' |
+| **Algorithm** | Enum | The hash algorithm type. Mandatory. Possible values:<li>Unknown<li>MD5<li>SHA1<li>SHA256<li>SHA256AC |
+| **Value** | String | The hash value. Mandatory. |
+
+#### Strong identifiers of a file hash entity
-Strong identifiers of a file hash entity:
-- Algorithm + Value
+- **Algorithm + Value**
-## Registry key
+[Back to list of entity type schemas](#list-of-entity-type-schemas) | [Back to entity identifiers table](#entity-types-and-identifiers)
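
Because Algorithm + Value is the strong identifier, both fields must be populated together. As a rough illustration, the built-in `Get-FileHash` cmdlet yields both pieces; the entity hashtable below is a hypothetical sketch, not a documented API payload:

```powershell
# Compute a SHA256 hash and shape the result like the file hash entity above.
# The file path is an example placeholder.
$hash = Get-FileHash -Path 'C:\temp\sample.exe' -Algorithm SHA256

$fileHashEntity = @{
    Type      = 'filehash'
    Algorithm = $hash.Algorithm   # 'SHA256'
    Value     = $hash.Hash        # The hex digest string
}
```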
+
+### Registry key
*Entity name: RegistryKey*

| Field | Type | Description |
| -- | - | -- |
-| Type | String | 'registry-key' |
-| Hive | Enum? | One of the following values:<li>HKEY_LOCAL_MACHINE<li>HKEY_CLASSES_ROOT<li>HKEY_CURRENT_CONFIG<li>HKEY_USERS<li>HKEY_CURRENT_USER_LOCAL_SETTINGS<li>HKEY_PERFORMANCE_DATA<li>HKEY_PERFORMANCE_NLSTEXT<li>HKEY_PERFORMANCE_TEXT<li>HKEY_A<li>HKEY_CURRENT_USER |
-| Key | String | The registry key path. |
+| **Type** | String | 'registry-key' |
+| **Hive** | Enum? | One of the following values:<li>HKEY_LOCAL_MACHINE<li>HKEY_CLASSES_ROOT<li>HKEY_CURRENT_CONFIG<li>HKEY_USERS<li>HKEY_CURRENT_USER_LOCAL_SETTINGS<li>HKEY_PERFORMANCE_DATA<li>HKEY_PERFORMANCE_NLSTEXT<li>HKEY_PERFORMANCE_TEXT<li>HKEY_A<li>HKEY_CURRENT_USER |
+| **Key** | String | The registry key path. |
+
+#### Strong identifiers of a registry key entity
+
+- **Hive + Key**
-Strong identifiers of a registry key entity:
-- Hive + Key
+[Back to list of entity type schemas](#list-of-entity-type-schemas) | [Back to entity identifiers table](#entity-types-and-identifiers)
-## Registry value
+### Registry value
*Entity name: RegistryValue*

| Field | Type | Description |
| -- | - | -- |
-| Type | String | 'registry-value' |
-| Key | Entity (RegistryKey) | The registry key entity. |
-| Name | String | The registry value name. |
-| Value | String | String-formatted representation of the value data. |
-| ValueType | Enum? | One of the following values:<li>String<li>Binary<li>DWord<li>Qword<li>MultiString<li>ExpandString<li>None<li>Unknown<br>Values should conform to Microsoft.Win32.RegistryValueKind enumeration. |
+| **Type** | String | 'registry-value' |
+| **Host** | Entity ([Host](#host)) | The host that the registry belongs to. |
+| **Key** | Entity ([RegistryKey](#registry-key)) | The registry key entity. |
+| **Name** | String | The registry value name. |
+| **Value** | String | String-formatted representation of the value data. |
+| **ValueType** | Enum? | One of the following values:<li>String<li>Binary<li>DWord<li>Qword<li>MultiString<li>ExpandString<li>None<li>Unknown<br>Values should conform to Microsoft.Win32.RegistryValueKind enumeration. |
-Strong identifiers of a registry value entity:
-- Key + Name
+#### Strong identifiers of a registry value entity
+
+- ***Key* + Name**
+
+#### Weak identifiers of a registry value entity
-Weak identifiers of a registry value entity:
- Name (without Key)
-## Security group
+[Back to list of entity type schemas](#list-of-entity-type-schemas) | [Back to entity identifiers table](#entity-types-and-identifiers)
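
For orientation, the strong identifier Key + Name maps directly onto how Windows itself addresses a value. Here's a minimal PowerShell sketch that reads one value and shapes it like the schema above; the key path is an example and the entity hashtable is hypothetical:

```powershell
# Read a registry value and shape it like the registry value entity above.
$keyPath   = 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion'
$valueName = 'ProductName'

$registryValueEntity = @{
    Type  = 'registry-value'
    Key   = @{ Type = 'registry-key'; Hive = 'HKEY_LOCAL_MACHINE'; Key = 'SOFTWARE\Microsoft\Windows NT\CurrentVersion' }
    Name  = $valueName
    Value = (Get-ItemProperty -Path $keyPath -Name $valueName).$valueName
}
```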
+
+### Security group
*Entity name: SecurityGroup*

| Field | Type | Description |
| -- | - | -- |
-| Type | String | 'security-group' |
-| DistinguishedName | String | The group distinguished name. |
-| SID | String | The SID attribute is a single-value attribute that specifies the security identifier (SID) of the group. |
-| ObjectGuid | Guid? | The objectGUID attribute is a single-value attribute that is the unique identifier for the object, assigned by Active Directory. |
+| **Type** | String | 'security-group' |
+| **DistinguishedName** | String | The group distinguished name. |
+| **SID** | String | A single-value attribute that specifies the security identifier (SID) of the group. |
+| **ObjectGuid** | Guid? | A single-value attribute that is the unique identifier for the object, assigned by Active Directory. |
-Strong identifiers of a security group entity:
+#### Strong identifiers of a security group entity
-## URL
+- **DistinguishedName**
+- **SID**
+- **ObjectGuid**
+
+[Back to list of entity type schemas](#list-of-entity-type-schemas) | [Back to entity identifiers table](#entity-types-and-identifiers)
+
+### URL
| Field | Type | Description |
| -- | - | -- |
| Type | String | 'url' |
-| Url | Uri | A full URL the entity points to. |
+| Url | Uri | A full URL the entity points to. Mandatory. |
+
+#### Strong identifiers of a URL entity
-Strong identifiers of a URL entity:
-- Url (when an absolute URL)
+- **Url** (\*\* This identifier is strong when the URL is an absolute URL.)
-Weak identifiers of a URL entity:
-- Url (when a relative URL)
+#### Weak identifiers of a URL entity
-## IoT device
+- Url (\*\* This identifier is weak when the URL is a relative URL.)
+
+[Back to list of entity type schemas](#list-of-entity-type-schemas) | [Back to entity identifiers table](#entity-types-and-identifiers)
+
+### IoT device
*Entity name: IoTDevice*

| Field | Type | Description |
| -- | - | -- |
-| Type | String | 'iotdevice' |
-| IoTHub | Entity (AzureResource) | The AzureResource entity representing the IoT Hub the device belongs to. |
-| DeviceId | String | The ID of the device in the context of the IoT Hub. |
-| DeviceName | String | The friendly name of the device. |
-| IoTSecurityAgentId | Guid? | The ID of the *Defender for IoT* agent running on the device. |
-| DeviceType | String | The type of the device ('temperature sensor', 'freezer', 'wind turbine' etc.). |
-| Source | String | The source (Microsoft/Vendor) of the device entity. |
-| SourceRef | Entity (Url) | A URL reference to the source item where the device is managed. |
-| Manufacturer | String | The manufacturer of the device. |
-| Model | String | The model of the device. |
-| OperatingSystem | String | The operating system the device is running. |
-| IpAddress | Entity (IP) | The current IP address of the device. |
-| MacAddress | String | The MAC address of the device. |
-| Protocols | List&lt;String&gt; | A list of protocols that the device supports. |
-| SerialNumber | String | The serial number of the device. |
-
-Strong identifiers of an IoT device entity:
-- IoTHub + DeviceId
-
-Weak identifiers of an IoT device entity:
+| **Type** | String | 'iotdevice' |
+| **IoTHub** | Entity ([AzureResource](#azure-resource)) | The AzureResource entity representing the IoT Hub the device belongs to. |
+| **DeviceId** | String | The ID of the device in the context of the IoT Hub. Mandatory. |
+| **DeviceName** | String | The friendly name of the device. |
+| **Owners** | List\<String> | The owners for the device. |
+| **IoTSecurityAgentId** | Guid? | The ID of the *Defender for IoT* agent running on the device. |
+| **DeviceType** | String | The type of the device ('temperature sensor', 'freezer', 'wind turbine' etc.). |
+| **DeviceTypeId** | String | A unique ID to identify each device type according to the device type schema, as the device type itself is a display name and not reliable in comparisons.<br><br>Possible values:<br>Unclassified = 0<br>Miscellaneous = 1<br>Network Device = 2<br>Printer = 3<br>Audio and Video = 4<br>Media and Surveillance = 5<br>Communication = 7<br>Smart Appliance = 9<br>Workstation = 10<br>Server = 11<br>Mobile = 12<br>Smart Facility = 13<br>Industrial = 14<br>Operational Equipment = 15 |
+| **Source** | String | The source (Microsoft/Vendor) of the device entity. |
+| **SourceRef** | Entity ([Url](#url)) | A URL reference to the source item where the device is managed. |
+| **Manufacturer** | String | The manufacturer of the device. |
+| **Model** | String | The model of the device. |
+| **OperatingSystem** | String | The operating system the device is running. |
+| **IpAddress** | Entity ([IP](#ip)) | The current IP address of the device. |
+| **MacAddress** | String | The MAC address of the device. |
+| **Nics** | Entity (Nic) | The current NICs on the device. |
+| **Protocols** | List\<String> | A list of protocols that the device supports. |
+| **SerialNumber** | String | The serial number of the device. |
+| **Site** | String | The site location of the device. |
+| **Zone** | String | The zone location of the device within a site. |
+| **Sensor** | String | The sensor monitoring the device. |
+| **Importance** | Enum? | One of the following values:<li>Low<li>Normal<li>High |
+| **PurdueLayer** | String | The Purdue Layer of the device. |
+| **IsProgramming** | Bool? | Indicates whether the device is classified as a programming device. |
+| **IsAuthorized** | Bool? | Indicates whether the device is classified as an authorized device. |
+| **IsScanner** | Bool? | Indicates whether the device is classified as a scanner device. |
+| **DevicePageLink** | Entity ([Url](#url)) | A URL to the device page in the Defender for IoT portal. |
+| **DeviceSubType** | String | The name of the device subtype. |
+
+#### Strong identifiers of an IoT device entity
+
+- **IoTHub + DeviceId**
+
+#### Weak identifiers of an IoT device entity
+
+- DeviceId (without IoTHub)
-## Mailbox
+[Back to list of entity type schemas](#list-of-entity-type-schemas) | [Back to entity identifiers table](#entity-types-and-identifiers)
+
+### Mailbox
| Field | Type | Description |
| -- | - | -- |
-| Type | String | 'mailbox' |
-| MailboxPrimaryAddress | String | The mailbox's primary address. |
-| DisplayName | String | The mailbox's display name. |
-| Upn | String | The mailbox's UPN. |
-| RiskLevel | Enum? | The risk level of this mailbox. Possible values:<li>None<li>Low<li>Medium<li>High |
-| ExternalDirectoryObjectId | Guid? | The AzureAD identifier of mailbox. Similar to AadUserId in the Account entity, but this property is specific to mailbox object on the Office side. |
+| **Type** | String | 'mailbox' |
+| **MailboxPrimaryAddress** | String | The mailbox's primary address. |
+| **DisplayName** | String | The mailbox's display name. |
+| **Upn** | String | The mailbox's UPN. |
+| **AadId** | String | The Azure AD identifier of the mailbox's user. |
+| **RiskLevel** | RiskLevel? | The risk level of this mailbox. Possible values:<li>None<li>Low<li>Medium<li>High |
+| **ExternalDirectoryObjectId** | Guid? | The Azure AD identifier of the mailbox. Similar to AadUserId in the Account entity, but this property is specific to the mailbox object on the Office side. |
-Strong identifiers of a mailbox entity:
-- MailboxPrimaryAddress
+#### Strong identifiers of a mailbox entity
-## Mail cluster
+- **MailboxPrimaryAddress**
-*Entity name: MailCluster*
+[Back to list of entity type schemas](#list-of-entity-type-schemas) | [Back to entity identifiers table](#entity-types-and-identifiers)
-> [!NOTE]
-> **Microsoft Defender for Office 365** was formerly known as Office 365 Advanced Threat Protection (O365 ATP).
+### Mail cluster
+
+*Entity name: MailCluster*
| Field | Type | Description |
| -- | - | -- |
-| Type | String | 'mail-cluster' |
-| NetworkMessageIds | IList&lt;String&gt; | The mail message IDs that are part of the mail cluster. |
-| CountByDeliveryStatus | IDictionary&lt;String,Int&gt; | Count of mail messages by DeliveryStatus string representation. |
-| CountByThreatType | IDictionary&lt;String,Int&gt; | Count of mail messages by ThreatType string representation. |
-| CountByProtectionStatus | IDictionary&lt;String,long&gt; | Count of mail messages by Threat Protection status. |
-| Threats | IList&lt;String&gt; | The threats of mail messages that are part of the mail cluster. |
-| Query | String | The query that was used to identify the messages of the mail cluster. |
-| QueryTime | DateTime? | The query time. |
-| MailCount | Int? | The number of mail messages that are part of the mail cluster. |
-| IsVolumeAnomaly | Bool? | Determines whether this is a volume anomaly mail cluster. |
-| Source | String | The source of the mail cluster (default is 'O365 ATP'). |
-| ClusterSourceIdentifier | String | The network message ID of the mail that is the source of this mail cluster. |
-| ClusterSourceType | String | The source type of the mail cluster. This maps to the MailClusterSourceType setting from Microsoft Defender for Office 365 (see note above). |
-| ClusterQueryStartTime | DateTime? | Cluster start time - used as start time for cluster counts query. Usually dates to the End time minus DaysToLookBack setting from Microsoft Defender for Office 365 (see note above). |
-| ClusterQueryEndTime | DateTime? | Cluster end time - used as end time for cluster counts query. Usually the mail data's received time. |
-| ClusterGroup | String | Corresponds to the Kusto query key used on Microsoft Defender for Office 365 (see note above). |
-
-Strong identifiers of a mail cluster entity:
-- Query + Source
-
-## Mail message
+| **Type** | String | 'mail-cluster' |
+| **NetworkMessageIds** | IList\<String> | The mail message IDs that are part of the mail cluster. |
+| **CountByDeliveryStatus** | IDictionary\<String,Int> | Count of mail messages by DeliveryStatus string representation. |
+| **CountByThreatType** | IDictionary\<String,Int> | Count of mail messages by ThreatType string representation. |
+| **CountByProtectionStatus** | IDictionary\<String,long> | Count of mail messages by Protection status string representation. |
+| **CountByDeliveryLocation** | IDictionary\<String,long> | Count of mail messages by Delivery location string representation. |
+| **Threats** | IList\<String> | The threats of mail messages that are part of the mail cluster. |
+| **Query** | String | The query that was used to identify the messages of the mail cluster. |
+| **QueryTime** | DateTime? | The query time. |
+| **MailCount** | Int? | The number of mail messages that are part of the mail cluster. |
+| **IsVolumeAnomaly** | Bool? | Indicates whether the mail cluster is a volume anomaly mail cluster. |
+| **Source** | String | The source of the mail cluster (default is `O365 ATP`). |
+
+#### Strong identifiers of a mail cluster entity
+
+- **Query + Source**
+
+[Back to list of entity type schemas](#list-of-entity-type-schemas) | [Back to entity identifiers table](#entity-types-and-identifiers)
+
+### Mail message
*Entity name: MailMessage*

| Field | Type | Description |
| -- | - | -- |
-| Type | String | 'mail-message' |
-| Files | IList&lt;File&gt; | The File entities of this mail message's attachments. |
-| Recipient | String | The recipient of this mail message. In the case of multiple recipients, the mail message is copied, and each copy has one recipient. |
-| Urls | IList&lt;String&gt; | The URLs contained in this mail message. |
-| Threats | IList&lt;String&gt; | The threats contained in this mail message. |
-| Sender | String | The sender's email address. |
-| P1Sender | String | Email ID of (delegated) user who sent this mail "on-behalf of P2 (primary) user". If email not sent by delegate, this value is equal to P2Sender. |
-| P1SenderDisplayName | String | Display name of the (delegated) user who sent this mail "on behalf of P2 (primary) user". Represented in email header by "OnbehalfofSenderDisplayName" property. |
-| P1SenderDomain | String | Email domain of the (delegated) user who sent this mail "on behalf of P2 (primary) user". If email not sent by delegate, this value is equal to P2SenderDomain. |
-| P2Sender | String | Email of the (primary) user on behalf of whom this email was sent. |
-| P2SenderDisplayName | String | Display name of the (primary) user on behalf of whom this email was sent. If email not sent by delegate, this represents the display name of the sender. |
-| P2SenderDomain | String | Email domain of the (primary) user on behalf of whom this email was sent. If email not sent by delegate, this represents the domain of the sender. |
-| SenderIP | String | The sender's IP address. |
-| ReceivedDate | DateTime | The received date of this message. |
-| NetworkMessageId | Guid? | The network message ID of this mail message. |
-| InternetMessageId | String | The internet message ID of this mail message. |
-| Subject | String | The subject of this mail message. |
-| BodyFingerprintBin1<br>BodyFingerprintBin2<br>BodyFingerprintBin3<br>BodyFingerprintBin4<br>BodyFingerprintBin5 | UInt? | Used by Microsoft Defender for Office 365 to find matching or similar mail messages. |
-| AntispamDirection | Enum? | The directionality of this mail message. Possible values:<li>Unknown<li>Inbound<li>Outbound<li>Intraorg (internal) |
-| DeliveryAction | Enum? | The delivery action of this mail message. Possible values:<li>Unknown<li>DeliveredAsSpam<li>Delivered<li>Blocked<li>Replaced |
-| DeliveryLocation | Enum? | The delivery location of this mail message. Possible values:<li>Unknown<li>Inbox<li>JunkFolder<li>DeletedFolder<li>Quarantine<li>External<li>Failed<li>Dropped<li>Forwarded |
-| Language | String | The language in which the contents of the mail are written. |
-| ThreatDetectionMethods | IList&lt;String&gt; | The list of Threat Detection Methods applied on this mail. |
-
-Strong identifiers of a mail message entity:
-- NetworkMessageId + Recipient
-
-## Submission mail
+| **Type** | String | 'mail-message' |
+| **Files** | IList\<Entity ([File](#file))> | The File entities of this mail message's attachments. |
+| **Recipient** | String | The recipient of this mail message. In the case of multiple recipients, the mail message is copied, and each copy has one recipient. |
+| **Urls** | IList\<String> | The URLs contained in this mail message. |
+| **Threats** | IList\<String> | The threats contained in this mail message. |
+| **Sender** | String | The sender's email address. |
+| **SenderIP** | String | The sender's IP address. |
+| **ReceivedDate** | DateTime | The received date of this message. |
+| **NetworkMessageId** | Guid? | The network message ID of this mail message. |
+| **InternetMessageId** | String | The internet message ID of this mail message. |
+| **Subject** | String | The subject of this mail message. |
+| **AntispamDirection** | Enum? | The directionality of this mail message. Possible values:<li>Unknown<li>Inbound<li>Outbound<li>Intraorg (internal) |
+| **DeliveryAction** | Enum? | The delivery action of this mail message. Possible values:<li>Unknown<li>DeliveredAsSpam<li>Delivered<li>Blocked<li>Replaced |
+| **DeliveryLocation** | Enum? | The delivery location of this mail message. Possible values:<li>Unknown<li>Inbox<li>JunkFolder<li>DeletedFolder<li>Quarantine<li>External<li>Failed<li>Dropped<li>Forwarded |
+| **CampaignId** | String | The identifier of the campaign in which this mail message is present. |
+| **SuspiciousRecipients** | IList\<String> | The list of recipients who were detected as suspicious. |
+| **ForwardedRecipients** | IList\<String> | The list of all recipients on the forwarded mail. |
+| **ForwardingType** | IList\<String> | The forwarding type of the mail, such as SMTP, ETR, etc. |
+
+#### Strong identifiers of a mail message entity
+
+- **NetworkMessageId + Recipient**
+
+[Back to list of entity type schemas](#list-of-entity-type-schemas) | [Back to entity identifiers table](#entity-types-and-identifiers)
+
+### Submission mail
*Entity name: SubmissionMail*

| Field | Type | Description |
| -- | - | -- |
-| Type | String | 'SubmissionMail' |
-| SubmissionId | Guid? | The Submission ID. |
-| SubmissionDate | DateTime? | Reported Date time for this submission. |
-| Submitter | String | The submitter email address. |
-| NetworkMessageId | Guid? | The network message ID of email to which submission belongs. |
-| Timestamp | DateTime? | The Time stamp when the message is received (Mail). |
-| Recipient | String | The recipient of the mail. |
-| Sender | String | The sender of the mail. |
-| SenderIp | String | The sender's IP. |
-| Subject | String | The subject of submission mail. |
-| ReportType | String | The submission type for the given instance. This maps to Junk, Phish, Malware or NotJunk. |
-
-Strong identifiers of a SubmissionMail entity:
-- SubmissionId, Submitter, NetworkMessageId, Recipient
-
-## Sentinel entities
+| **Type** | String | 'SubmissionMail' |
+| **SubmissionId** | Guid? | The Submission ID. |
+| **SubmissionDate** | DateTime? | The reported date and time of this submission. |
+| **Submitter** | String | The submitter email address. |
+| **NetworkMessageId** | Guid? | The network message ID of email to which submission belongs. |
+| **Timestamp** | DateTime? | The timestamp when the message was received (Mail). |
+| **Recipient** | String | The recipient of the mail. |
+| **Sender** | String | The sender of the mail. |
+| **SenderIp** | String | The sender's IP. |
+| **Subject** | String | The subject of the submission mail. |
+| **ReportType** | String | The submission type for the given instance. Possible values are Junk, Phish, Malware, or NotJunk. |
+
+#### Strong identifiers of a SubmissionMail entity
+
+- **SubmissionId, Submitter, NetworkMessageId, Recipient**
+
+[Back to list of entity type schemas](#list-of-entity-type-schemas) | [Back to entity identifiers table](#entity-types-and-identifiers)
+
+### Sentinel entities
| Field | Type | Description |
| -- | - | -- |
-| Entities | String | A list of the entities identified in the alert. This list is the **entities** column from the SecurityAlert schema ([see documentation](security-alert-schema.md)). |
+| **Entities** | String | A list of the entities identified in the alert. This list is the **entities** column from the SecurityAlert schema ([see documentation](security-alert-schema.md)). |
+
+[Back to list of entity type schemas](#list-of-entity-type-schemas) | [Back to entity identifiers table](#entity-types-and-identifiers)
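
Since the **entities** column arrives as a JSON string, consumers typically deserialize it before inspecting individual entities. A minimal sketch, assuming `$alert` is one row from the SecurityAlert table with an `Entities` string property:

```powershell
# Deserialize the JSON entities column of an alert row into objects.
$entities = $alert.Entities | ConvertFrom-Json

# Group the parsed entities by their Type field ('ip', 'host', and so on).
$entities | Group-Object -Property Type | Select-Object Name, Count
```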
## Cloud application identifiers
sentinel Geographical Availability Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/geographical-availability-data-residency.md
Microsoft Sentinel can run on workspaces in the following regions:
|North America |South America |Asia |Europe |Australia |Africa |
|--|--|--|--|--|--|
-|**US**<br><br>• Central US<br>• Central US EUAP<br>• East US<br>• East US 2<br>• East US 2 EUAP<br>• North Central US<br>• South Central US<br>• West US<br>• West US 2<br>• West US 3<br>• West Central US<br>• USNat East<br>• USNat West<br>• USSec East<br>• USSec West<br><br>**Azure government**<br><br>• USGov Non-Regional<br>• USGov Arizona<br>• USGov Texas<br>• USGov Virginia<br><br>**Canada**<br><br>• Canada Central<br>• Canada East |• Brazil South<br>• Brazil Southeast |• East Asia<br>• Southeast Asia<br>• Qatar Central<br><br>**Japan**<br><br>• Japan East<br>• Japan West<br><br>**China 21Vianet**<br><br>• China East 2<br><br>**India**<br><br>• Central India<br>• South India<br>• West India<br>• Jio India West<br>• Jio India Central<br><br>**Korea**<br><br>• Korea Central<br>• Korea South<br><br>**Malaysia**<br><br>• Malaysia South<br><br>**UAE**<br><br>• UAE Central<br>• UAE North |• North Europe<br>• West Europe<br><br>**France**<br><br>• France Central<br>• France South<br><br>**Germany**<br><br>• Germany West Central<br>• Germany North<br><br>**Norway**<br><br>• Norway East<br>• Norway West<br><br>**Switzerland**<br><br>• Switzerland North<br>• Switzerland West<br><br>**UK**<br><br>• UK South<br>• UK West |• Australia Central<br>• Australia Central 2<br>• Australia East<br>• Australia Southeast |• South Africa North<br>• South Africa West |
+|**US**<br><br>• Central US<br>• Central US EUAP<br>• East US<br>• East US 2<br>• East US 2 EUAP<br>• North Central US<br>• South Central US<br>• West US<br>• West US 2<br>• West US 3<br>• West Central US<br>• USNat East<br>• USNat West<br>• USSec East<br>• USSec West<br><br>**Azure government**<br><br>• USGov Non-Regional<br>• USGov Arizona<br>• USGov Texas<br>• USGov Virginia<br><br>**Canada**<br><br>• Canada Central<br>• Canada East |• Brazil South<br>• Brazil Southeast |• East Asia<br>• Southeast Asia<br>• Qatar Central<br><br>**Japan**<br><br>• Japan East<br>• Japan West<br><br>**China 21Vianet**<br><br>• China East 2<br><br>**India**<br><br>• Central India<br>• South India<br>• West India<br>• Jio India West<br>• Jio India Central<br><br>**Korea**<br><br>• Korea Central<br>• Korea South<br><br>**UAE**<br><br>• UAE Central<br>• UAE North |• North Europe<br>• West Europe<br><br>**France**<br><br>• France Central<br>• France South<br><br>**Germany**<br><br>• Germany West Central<br>• Germany North<br><br>**Norway**<br><br>• Norway East<br>• Norway West<br><br>**Switzerland**<br><br>• Switzerland North<br>• Switzerland West<br><br>**UK**<br><br>• UK South<br>• UK West |• Australia Central<br>• Australia Central 2<br>• Australia East<br>• Australia Southeast |• South Africa North<br>• South Africa West |
sentinel Respond Threats During Investigation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/respond-threats-during-investigation.md
Last updated 01/17/2023
This article shows you how to take response actions against threat actors on the spot, during the course of an incident investigation or threat hunt, without pivoting or context switching out of the investigation or hunt. You accomplish this using playbooks based on the new entity trigger. The entity trigger currently supports the following entity types:
-- [Account](entities-reference.md#user-account)
+- [Account](entities-reference.md#account)
- [Host](entities-reference.md#host)
-- [IP](entities-reference.md#ip-address)
+- [IP](entities-reference.md#ip)
- [URL](entities-reference.md#url)
-- [DNS](entities-reference.md#domain-name)
+- [DNS](entities-reference.md#dns-resolution)
- [FileHash](entities-reference.md#file-hash)

> [!IMPORTANT]
sentinel Upload Indicators Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/upload-indicators-api.md
An upload indicators API call has five components:
## Register your client application with Microsoft Entra ID
-In order to authenticate to Microsoft Sentinel, the request to the upload indicators API requires a valid Microsoft Entra access token. For more information on application registration, see [Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md) or see the basic steps as part of the [upload indicators API data connector](connect-threat-intelligence-upload-api.md#register-an-azure-ad-application) setup.
+In order to authenticate to Microsoft Sentinel, the request to the upload indicators API requires a valid Microsoft Entra access token. For more information on application registration, see [Register an application with the Microsoft identity platform](/entra/identity-platform/quickstart-register-app) or see the basic steps as part of the [upload indicators API data connector](connect-threat-intelligence-upload-api.md#register-an-azure-ad-application) setup.
## Permissions
This section covers the first three of the five components discussed earlier. Yo
### Acquire an access token
-Acquire a Microsoft Entra access token with [OAuth 2.0 authentication](../active-directory/fundamentals/auth-oauth2.md). [V1.0 and V2.0](../active-directory/develop/access-tokens.md#token-formats) are valid tokens accepted by the API.
+Acquire a Microsoft Entra access token with [OAuth 2.0 authentication](../active-directory/fundamentals/auth-oauth2.md). [V1.0 and V2.0](/entra/identity-platform/access-tokens#token-formats) are valid tokens accepted by the API.
-To get a v1.0 token, use [ADAL](/azure/active-directory/azuread-dev/active-directory-authentication-libraries) or send requests to the REST API in the following format:
-- POST `https://login.microsoftonline.com/{{tenantId}}/oauth2/token`
-- Headers for using Microsoft Entra App:
-- grant_type: "client_credentials"
-- client_id: {Client ID of Microsoft Entra App}
-- client_secret: {Client secret of Microsoft Entra App}
-- resource: `"https://management.azure.com/"`
+The version of the token (v1.0 or v2.0) that your application receives is determined by the `accessTokenAcceptedVersion` property in the [app manifest](/entra/identity-platform/reference-app-manifest#manifest-reference) of the API that your application is calling. If `accessTokenAcceptedVersion` is set to 1, then your application will receive a v1.0 token.
-To get a v2.0 token, use Microsoft Authentication Library [MSAL](../active-directory/develop/msal-overview.md) or send requests to the REST API in the following format:
+Use Microsoft Authentication Library [MSAL](/entra/identity-platform/msal-overview) to acquire either a v1.0 or v2.0 access token. Or, send requests to the REST API in the following format:
- POST `https://login.microsoftonline.com/{{tenantId}}/oauth2/v2.0/token`
- Headers for using Microsoft Entra App:
- grant_type: "client_credentials"
To get a v2.0 token, use Microsoft Authentication Library [MSAL](../active-direc
- client_secret: {secret of Microsoft Entra App}
- scope: `"https://management.azure.com/.default"`
+If `accessTokenAcceptedVersion` in the app manifest is set to 1, your application will receive a v1.0 access token even though it's calling the v2 token endpoint.
+
The resource/scope value is the audience of the token. This API only accepts the following audiences:
- `https://management.core.windows.net/`
- `https://management.core.windows.net`
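
Putting the request format above into practice, here's a minimal PowerShell sketch of the client credentials call; the tenant ID, client ID, and secret are placeholders for your own app registration values:

```powershell
# A minimal sketch of the v2.0 client credentials token request shown above.
# Replace the placeholder values with your own app registration details.
$tenantId = "<tenant-id>"
$body = @{
    grant_type    = "client_credentials"
    client_id     = "<client-ID>"
    client_secret = "<client-secret>"
    scope         = "https://management.azure.com/.default"
}

$response = Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token" `
    -Body $body

# The bearer token to pass in the Authorization header of the upload call.
$accessToken = $response.access_token
```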
Create the array of indicators using the STIX 2.1 indicator format specification
|`name` (optional)| string | A name used to identify the indicator.<br><br>Producers *should* provide this property to help products and analysts understand what this indicator actually does.|
|`description` (optional) | string | A description that provides more details and context about the indicator, potentially including its purpose and its key characteristics.<br><br>Producers *should* provide this property to help products and analysts understand what this indicator actually does. |
|`indicator_types` (optional) | list of strings | A set of categorizations for this indicator.<br><br>The values for this property *should* come from the [indicator-type-ov](https://docs.oasis-open.org/cti/stix/v2.1/cs01/stix-v2.1-cs01.html#_cvhfwe3t9vuo) |
-|`pattern` (required) | string | The detection pattern for this indicator *may* be expressed as a [STIX Patterning](https://docs.oasis-open.org/cti/stix/v2.1/cs01/stix-v2.1-cs01.html#_e8slinrhxcc9) or another appropriate language such as SNORT, YARA, etc. |
+|`pattern` (required) | string | The detection pattern for this indicator *might* be expressed as a [STIX Patterning](https://docs.oasis-open.org/cti/stix/v2.1/cs01/stix-v2.1-cs01.html#_e8slinrhxcc9) or another appropriate language such as SNORT, YARA, etc. |
|`pattern_type` (required) | string | The pattern language used in this indicator.<br><br>The value for this property *should* come from [pattern types](https://docs.oasis-open.org/cti/stix/v2.1/cs01/stix-v2.1-cs01.html#_9lfdvxnyofxw).<br><br>The value of this property *must* match the type of pattern data included in the pattern property.|
|`pattern_version` (optional) | string | The version of the pattern language used for the data in the pattern property, which *must* match the type of pattern data included in the pattern property.<br><br>For patterns that don't have a formal specification, the build or code version that the pattern is known to work with *should* be used.<br><br>For the STIX pattern language, the specification version of the object determines the default value.<br><br>For other languages, the default value *should* be the latest version of the patterning language at the time of this object's creation.|
|`valid_from` (required) | timestamp | The time from which this indicator is considered a valid indicator of the behaviors it's related to or represents.|
Create the array of indicators using the STIX 2.1 indicator format specification
|`revoked` (optional) | boolean | Revoked objects are no longer considered valid by the object creator. Revoking an object is permanent; future versions of the object with this `id` *must not* be created.<br><br>The default value of this property is false.|
|`labels` (optional) | list of strings | The `labels` property specifies a set of terms used to describe this object. The terms are user-defined or trust-group defined. These labels will display as **Tags** in Microsoft Sentinel.|
|`confidence` (optional) | integer | The `confidence` property identifies the confidence that the creator has in the correctness of their data. The confidence value *must* be a number in the range of 0-100.<br><br>[Appendix A](https://docs.oasis-open.org/cti/stix/v2.1/cs01/stix-v2.1-cs01.html#_1v6elyto0uqg) contains a table of normative mappings to other confidence scales that *must* be used when presenting the confidence value in one of those scales.<br><br>If the confidence property is not present, then the confidence of the content is unspecified.|
-|`lang` (optional) | string | The `lang` property identifies the language of the text content in this object. When present, it *must* be a language code conformant to [RFC5646](https://docs.oasis-open.org/cti/stix/v2.1/cs01/stix-v2.1-cs01.html#kix.yoz409d7eis1). If the property isn't present, then the language of the content is `en` (English).<br><br>This property *should* be present if the object type contains translatable text properties (for example, name, description).<br><br>The language of individual fields in this object *may* override the `lang` property in granular markings (see section [7.2.3](https://docs.oasis-open.org/cti/stix/v2.1/cs01/stix-v2.1-cs01.html#_robezi5egfdr)).|
-|`object_marking_refs` (optional, including TLP) | list of strings | The `object_marking_refs` property specifies a list of ID properties of marking-definition objects that apply to this object. For example, use the Traffic Light Protocol (TLP) marking definition ID to designate the sensitivity of the indicator source. For details of what marking-definition IDs to use for TLP content, see section [7.2.1.4](https://docs.oasis-open.org/cti/stix/v2.1/os/stix-v2.1-os.html#_yd3ar14ekwrs)<br><br>In some cases, though uncommon, marking definitions themselves may be marked with sharing or handling guidance. In this case, this property *must not* contain any references to the same Marking Definition object (that is, it can't contain any circular references).<br><br>See section [7.2.2](https://docs.oasis-open.org/cti/stix/v2.1/cs01/stix-v2.1-cs01.html#_bnienmcktc0n) for further definition of data markings.|
+|`lang` (optional) | string | The `lang` property identifies the language of the text content in this object. When present, it *must* be a language code conformant to [RFC5646](https://docs.oasis-open.org/cti/stix/v2.1/cs01/stix-v2.1-cs01.html#kix.yoz409d7eis1). If the property isn't present, then the language of the content is `en` (English).<br><br>This property *should* be present if the object type contains translatable text properties (for example, name, description).<br><br>The language of individual fields in this object *might* override the `lang` property in granular markings (see section [7.2.3](https://docs.oasis-open.org/cti/stix/v2.1/cs01/stix-v2.1-cs01.html#_robezi5egfdr)).|
+|`object_marking_refs` (optional, including TLP) | list of strings | The `object_marking_refs` property specifies a list of ID properties of marking-definition objects that apply to this object. For example, use the Traffic Light Protocol (TLP) marking definition ID to designate the sensitivity of the indicator source. For details of what marking-definition IDs to use for TLP content, see section [7.2.1.4](https://docs.oasis-open.org/cti/stix/v2.1/os/stix-v2.1-os.html#_yd3ar14ekwrs)<br><br>In some cases, though uncommon, marking definitions themselves might be marked with sharing or handling guidance. In this case, this property *must not* contain any references to the same Marking Definition object (that is, it can't contain any circular references).<br><br>See section [7.2.2](https://docs.oasis-open.org/cti/stix/v2.1/cs01/stix-v2.1-cs01.html#_bnienmcktc0n) for further definition of data markings.|
|`external_references` (optional) | list of object | The `external_references` property specifies a list of external references which refers to non-STIX information. This property is used to provide one or more URLs, descriptions, or IDs to records in other systems.|
-|`granular_markings` (optional) | list of [granular-marking](https://docs.oasis-open.org/cti/stix/v2.1/cs01/stix-v2.1-cs01.html#_robezi5egfdr) | The `granular_markings` property helps define parts of the indicator differently. For example, the indicator language is English, `en` but the description is German, `de`.<br><br>In some cases, though uncommon, marking definitions themselves may be marked with sharing or handling guidance. In this case, this property *must not* contain any references to the same Marking Definition object (i.e., it can't contain any circular references).<br><br>See section [7.2.3](https://docs.oasis-open.org/cti/stix/v2.1/cs01/stix-v2.1-cs01.html#_robezi5egfdr) for further definition of data markings.|
+|`granular_markings` (optional) | list of [granular-marking](https://docs.oasis-open.org/cti/stix/v2.1/cs01/stix-v2.1-cs01.html#_robezi5egfdr) | The `granular_markings` property helps define parts of the indicator differently. For example, the indicator language is English, `en` but the description is German, `de`.<br><br>In some cases, though uncommon, marking definitions themselves might be marked with sharing or handling guidance. In this case, this property *must not* contain any references to the same Marking Definition object (i.e., it can't contain any circular references).<br><br>See section [7.2.3](https://docs.oasis-open.org/cti/stix/v2.1/cs01/stix-v2.1-cs01.html#_robezi5egfdr) for further definition of data markings.|
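
To tie the required properties together, the following sketch builds one minimal STIX 2.1 indicator object and serializes it to JSON; the indicator ID, pattern value, and timestamp are hypothetical placeholders:

```powershell
# A minimal sketch of one STIX 2.1 indicator for the request's indicators array.
# The id, pattern value, and timestamp are hypothetical placeholders.
$indicator = @{
    type         = "indicator"
    spec_version = "2.1"
    id           = "indicator--00000000-0000-0000-0000-000000000000"
    name         = "Sample IP watchlist entry"
    pattern      = "[ipv4-addr:value = '203.0.113.5']"
    pattern_type = "stix"
    valid_from   = "2023-10-31T00:00:00Z"
    confidence   = 75
    labels       = @("sample-tag")
}

$indicator | ConvertTo-Json -Depth 4
```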
### Process the response message
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
The listed features were released in the last three months. For information abou
[!INCLUDE [reference-to-feature-availability](includes/reference-to-feature-availability.md)]

## October 2023
+- [Microsoft Applied Skill - Configure SIEM security operations using Microsoft Sentinel](#microsoft-applied-skill-available-for-microsoft-sentinel)
- [Changes to the documentation table of contents](#changes-to-the-documentation-table-of-contents)
+### Microsoft Applied Skill available for Microsoft Sentinel
+
+This month Microsoft Worldwide Learning announced [Applied Skills](https://techcommunity.microsoft.com/t5/microsoft-learn-blog/announcing-microsoft-applied-skills-the-new-credentials-to/ba-p/3775645) to help you acquire the technical skills you need to reach your full potential. Microsoft Sentinel is included in the initial set of credentials offered! This credential is based on the learning path with the same name.
+- **Learning path** - [Configure SIEM security operations using Microsoft Sentinel](/training/paths/configure-security-information-event-management-operations-using-microsoft-sentinel/)
+ <br>Learn at your own pace, and the modules require you to have your own Azure subscription.
+- **Applied Skill** - [Configure SIEM security operations using Microsoft Sentinel](/credentials/applied-skills/configure-siem-security-operations-using-microsoft-sentinel/)
+ <br>A 2-hour assessment is contained in a sandbox virtual desktop. You're provided an Azure subscription with some features already configured.
+
### Changes to the documentation table of contents

We've made some significant changes in how the Microsoft Sentinel documentation is organized in the table of contents on the left-hand side of the library. Two important things to know:
service-bus-messaging Monitor Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/monitor-service-bus.md
Start with the article [Monitoring Azure resources with Azure Monitor](../azure-
The following sections build on this article by describing the specific data gathered for Azure Service Bus. These sections also provide examples for configuring data collection and analyzing this data with Azure tools.

> [!TIP]
-> To understand costs associated with Azure Monitor, see [Usage and estimated costs](../azure-monitor//usage-estimated-costs.md). To understand the time it takes for your data to appear in Azure Monitor, see [Log data ingestion time](../azure-monitor/logs/data-ingestion-time.md).
+> To understand costs associated with Azure Monitor, see [Azure Monitor cost and usage](../azure-monitor/cost-usage.md). To understand the time it takes for your data to appear in Azure Monitor, see [Log data ingestion time](../azure-monitor/logs/data-ingestion-time.md).
## Monitoring data from Azure Service Bus

Azure Service Bus collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data).
service-connector How To Integrate App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-app-configuration.md
Use the connection details below to connect compute services to Azure App Config
| -- | -- | -- |
| AZURE_APPCONFIGURATION_ENDPOINT | App Configuration endpoint | `https://<App-Configuration-name>.azconfig.io` |
+#### Sample code
Refer to the steps and code below to connect to Azure App Configuration using a system-assigned managed identity.

[!INCLUDE [code sample for app config](./includes/code-appconfig-me-id.md)]
Refer to the steps and code below to connect to Azure App Configuration using a
| AZURE_APPCONFIGURATION_ENDPOINT | App Configuration endpoint | `https://<App-Configuration-name>.azconfig.io` |
| AZURE_APPCONFIGURATION_CLIENTID | Your client ID | `<client-ID>` |
+#### Sample code
Refer to the steps and code below to connect to Azure App Configuration using a user-assigned managed identity.

[!INCLUDE [code sample for app config](./includes/code-appconfig-me-id.md)]
Refer to the steps and code below to connect to Azure App Configuration using a
| AZURE_APPCONFIGURATION_CLIENTSECRET | Your client secret | `<client-secret>` |
| AZURE_APPCONFIGURATION_TENANTID | Your tenant ID | `<tenant-ID>` |
+#### Sample code
Refer to the steps and code below to connect to Azure App Configuration using a service principal.

[!INCLUDE [code sample for app config](./includes/code-appconfig-me-id.md)]
service-connector How To Integrate Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-service-bus.md
Last updated 08/11/2022
# Integrate Service Bus with Service Connector
-This page shows the supported authentication types and client types of Azure Service Bus using Service Connector. You might still be able to connect to Service Bus in other programming languages without using Service Connector. This page also shows default environment variable names and values or Spring Boot configuration you get when you create service connections. You can learn more about the [Service Connector environment variable naming convention](concept-service-connector-internals.md).
+This page shows supported authentication methods and clients, and shows sample code you can use to connect Azure Service Bus to other cloud services using Service Connector. You might still be able to connect to Service Bus in other programming languages without using Service Connector.
## Supported compute services
This page shows the supported authentication types and client types of Azure Ser
Supported authentication and clients for App Service, Container Apps and Azure Spring Apps:
-### [Azure App Service](#tab/app-service)
-
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret/connection string | Service principal |
-|--|::|::|::|::|
-| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Go | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java - Spring Boot | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| None | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-
-### [Azure Container Apps](#tab/container-apps)
| Client type | System-assigned managed identity | User-assigned managed identity | Secret/connection string | Service principal |
|--|::|::|::|::|
| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
Supported authentication and clients for App Service, Container Apps and Azure S
| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
| None | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-### [Azure Spring Apps](#tab/spring-apps)
-
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret/connection string | Service principal |
-|--|::|::|::|::|
-| .NET | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Go | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java - Spring Boot | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Node.js | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Python | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| None | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
- ## Default environment variable names or application properties
-Use the connection details below to connect compute services to Service Bus. For each example below, replace the placeholder texts `<Service-Bus-namespace>`, `<access-key-name>`, `<access-key-value>` `<client-ID>`, `<client-secret>`, and `<tenant-id>` with your own Service Bus namespace, shared access key name, shared access key value, client ID, client secret and tenant ID.
+Use the connection details below to connect compute services to Service Bus. This page also shows default environment variable names and values or Spring Boot configuration you get when you create service connections, as well as sample code. For each example below, replace the placeholder text `<Service-Bus-namespace>`, `<access-key-name>`, `<access-key-value>`, `<client-ID>`, `<client-secret>`, and `<tenant-id>` with your own Service Bus namespace, shared access key name, shared access key value, client ID, client secret, and tenant ID. For more information about naming conventions, check the [Service Connector internals](concept-service-connector-internals.md#configuration-naming-convention) article.
-### Azure App Service and Azure Container Apps
+### System-assigned managed identity
-#### Secret/connection string
+#### SpringBoot client type
-> [!div class="mx-tdBreakAll"]
-> |Default environment variable name | Description | Sample value |
-> | -- | -- | |
-> | AZURE_SERVICEBUS_CONNECTIONSTRING | Service Bus connection string | `Endpoint=sb://<Service-Bus-namespace>.servicebus.windows.net/;SharedAccessKeyName=<access-key-name>;SharedAccessKey=<access-key-value>` |
+| Default environment variable name | Description | Sample value |
+|--|--|--|
+| spring.cloud.azure.servicebus.namespace | Service Bus namespace | `<Service-Bus-namespace>.servicebus.windows.net` |
-#### System-assigned managed identity
+#### Other client types
| Default environment variable name | Description | Sample value |
| -- | -- | -- |
| AZURE_SERVICEBUS_FULLYQUALIFIEDNAMESPACE | Service Bus namespace | `<Service-Bus-namespace>.servicebus.windows.net` |
-#### User-assigned managed identity
+#### Sample code
+Refer to the steps and code below to connect to Service Bus using a system-assigned managed identity.
+
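+The following is a minimal, illustrative Java sketch of this pattern (not the official Service Connector sample). It assumes the `azure-messaging-servicebus` and `azure-identity` libraries and a placeholder queue named `sample-queue`:
+
+```java
+import com.azure.identity.DefaultAzureCredentialBuilder;
+import com.azure.messaging.servicebus.ServiceBusClientBuilder;
+import com.azure.messaging.servicebus.ServiceBusMessage;
+import com.azure.messaging.servicebus.ServiceBusSenderClient;
+
+public class SendWithSystemAssignedIdentity {
+    public static void main(String[] args) {
+        // Service Connector sets this environment variable on the compute service.
+        String namespace = System.getenv("AZURE_SERVICEBUS_FULLYQUALIFIEDNAMESPACE");
+
+        // On Azure, DefaultAzureCredential resolves to the system-assigned managed identity.
+        ServiceBusSenderClient sender = new ServiceBusClientBuilder()
+                .credential(namespace, new DefaultAzureCredentialBuilder().build())
+                .sender()
+                .queueName("sample-queue") // placeholder queue name
+                .buildClient();
+
+        sender.sendMessage(new ServiceBusMessage("Hello from a system-assigned identity"));
+        sender.close();
+    }
+}
+```
+
+Because the credential is resolved at runtime, the same code runs unchanged locally (with your developer sign-in) and on Azure (with the managed identity).
+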
+### User-assigned managed identity
+
+#### SpringBoot client type
+
+| Default environment variable name | Description | Sample value |
+|--|--|--|
+| spring.cloud.azure.servicebus.namespace | Service Bus namespace | `<Service-Bus-namespace>.servicebus.windows.net` |
+| spring.cloud.azure.client-id | Your client ID | `<client-ID>` |
+
+#### Other client types
| Default environment variable name | Description | Sample value |
|--|--|--|
| AZURE_SERVICEBUS_FULLYQUALIFIEDNAMESPACE | Service Bus namespace | `<Service-Bus-namespace>.servicebus.windows.net` |
| AZURE_SERVICEBUS_CLIENTID | Your client ID | `<client-ID>` |
-#### Service principal
-
-| Default environment variable name | Description | Sample value |
-| --| | -- |
-| AZURE_SERVICEBUS_FULLYQUALIFIEDNAMESPACE | Service Bus namespace | `<Service-Bus-namespace>.servicebus.windows.net` |
-| AZURE_SERVICEBUS_CLIENTID | Your client ID | `<client-ID>` |
-| AZURE_SERVICEBUS_CLIENTSECRET | Your client secret | `<client-secret>` |
-| AZURE_SERVICEBUS_TENANTID | Your tenant ID | `<tenant-id>` |
+#### Sample code
+Refer to the steps and code below to connect to Service Bus using a user-assigned managed identity.
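+
+As an illustrative sketch only (not the official Service Connector sample), the following Java snippet passes the identity's client ID from the `AZURE_SERVICEBUS_CLIENTID` variable above to `DefaultAzureCredential`; the queue name `sample-queue` is a placeholder:
+
+```java
+import com.azure.identity.DefaultAzureCredentialBuilder;
+import com.azure.messaging.servicebus.ServiceBusClientBuilder;
+import com.azure.messaging.servicebus.ServiceBusMessage;
+import com.azure.messaging.servicebus.ServiceBusSenderClient;
+
+public class SendWithUserAssignedIdentity {
+    public static void main(String[] args) {
+        // Tell DefaultAzureCredential which user-assigned identity to use.
+        ServiceBusSenderClient sender = new ServiceBusClientBuilder()
+                .credential(System.getenv("AZURE_SERVICEBUS_FULLYQUALIFIEDNAMESPACE"),
+                        new DefaultAzureCredentialBuilder()
+                                .managedIdentityClientId(System.getenv("AZURE_SERVICEBUS_CLIENTID"))
+                                .build())
+                .sender()
+                .queueName("sample-queue") // placeholder queue name
+                .buildClient();
+
+        sender.sendMessage(new ServiceBusMessage("Hello from a user-assigned identity"));
+        sender.close();
+    }
+}
+```
+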
-### Azure Spring Apps
+### Connection string
-#### Spring Boot secret/connection string
+#### SpringBoot client type
> [!div class="mx-tdBreakAll"]
> | Default environment variable name | Description | Sample value |
> | -- | -- | -- |
> | spring.cloud.azure.servicebus.connection-string | Service Bus connection string | `Endpoint=sb://<Service-Bus-namespace>.servicebus.windows.net/;SharedAccessKeyName=<access-key-name>;SharedAccessKey=<access-key-value>` |
-#### Spring Boot system-assigned managed identity
+#### Other client types
-| Default environment variable name | Description | Sample value |
-|--|--|--|
-| spring.cloud.azure.servicebus.namespace | Service Bus namespace | `<Service-Bus-namespace>.servicebus.windows.net` |
+> [!div class="mx-tdBreakAll"]
+> |Default environment variable name | Description | Sample value |
+> | -- | -- | |
+> | AZURE_SERVICEBUS_CONNECTIONSTRING | Service Bus connection string | `Endpoint=sb://<Service-Bus-namespace>.servicebus.windows.net/;SharedAccessKeyName=<access-key-name>;SharedAccessKey=<access-key-value>` |
-#### Spring Boot user-assigned managed identity
+#### Sample code
+Refer to the steps and code below to connect to Service Bus using a connection string.
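+
+As an illustrative sketch only (not the official Service Connector sample), the following Java snippet builds the client directly from the `AZURE_SERVICEBUS_CONNECTIONSTRING` variable above; the queue name `sample-queue` is a placeholder:
+
+```java
+import com.azure.messaging.servicebus.ServiceBusClientBuilder;
+import com.azure.messaging.servicebus.ServiceBusMessage;
+import com.azure.messaging.servicebus.ServiceBusSenderClient;
+
+public class SendWithConnectionString {
+    public static void main(String[] args) {
+        // The connection string already contains the endpoint and shared access key.
+        ServiceBusSenderClient sender = new ServiceBusClientBuilder()
+                .connectionString(System.getenv("AZURE_SERVICEBUS_CONNECTIONSTRING"))
+                .sender()
+                .queueName("sample-queue") // placeholder queue name
+                .buildClient();
+
+        sender.sendMessage(new ServiceBusMessage("Hello from a connection string"));
+        sender.close();
+    }
+}
+```
+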
-| Default environment variable name | Description | Sample value |
-|--|--|--|
-| spring.cloud.azure.servicebus.namespace | Service Bus namespace | `<Service-Bus-namespace>.servicebus.windows.net` |
-| spring.cloud.azure.client-id | Your client ID | `<client-ID>` |
+### Service principal
-#### Spring Boot service principal
+#### SpringBoot client type
| Default environment variable name | Description | Sample value |
|--|--|--|
Use the connection details below to connect compute services to Service Bus. For
| spring.cloud.azure.tenant-id | Your tenant ID | `<tenant-id>` |
| spring.cloud.azure.client-secret | Your client secret | `<client-secret>` |
+#### Other client types
+
+| Default environment variable name | Description | Sample value |
+| -- | -- | -- |
+| AZURE_SERVICEBUS_FULLYQUALIFIEDNAMESPACE | Service Bus namespace | `<Service-Bus-namespace>.servicebus.windows.net` |
+| AZURE_SERVICEBUS_CLIENTID | Your client ID | `<client-ID>` |
+| AZURE_SERVICEBUS_CLIENTSECRET | Your client secret | `<client-secret>` |
+| AZURE_SERVICEBUS_TENANTID | Your tenant ID | `<tenant-id>` |
+
+#### Sample code
+Refer to the steps and code below to connect to Service Bus using a service principal.
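+
+As an illustrative sketch only (not the official Service Connector sample), the following Java snippet builds a `ClientSecretCredential` from the service principal variables above; the queue name `sample-queue` is a placeholder:
+
+```java
+import com.azure.core.credential.TokenCredential;
+import com.azure.identity.ClientSecretCredentialBuilder;
+import com.azure.messaging.servicebus.ServiceBusClientBuilder;
+import com.azure.messaging.servicebus.ServiceBusMessage;
+import com.azure.messaging.servicebus.ServiceBusSenderClient;
+
+public class SendWithServicePrincipal {
+    public static void main(String[] args) {
+        // Build a credential from the service principal values set by Service Connector.
+        TokenCredential credential = new ClientSecretCredentialBuilder()
+                .tenantId(System.getenv("AZURE_SERVICEBUS_TENANTID"))
+                .clientId(System.getenv("AZURE_SERVICEBUS_CLIENTID"))
+                .clientSecret(System.getenv("AZURE_SERVICEBUS_CLIENTSECRET"))
+                .build();
+
+        ServiceBusSenderClient sender = new ServiceBusClientBuilder()
+                .credential(System.getenv("AZURE_SERVICEBUS_FULLYQUALIFIEDNAMESPACE"), credential)
+                .sender()
+                .queueName("sample-queue") // placeholder queue name
+                .buildClient();
+
+        sender.sendMessage(new ServiceBusMessage("Hello from a service principal"));
+        sender.close();
+    }
+}
+```
+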
+
## Next step

Follow the tutorial listed below to learn more about Service Connector.
service-connector Tutorial Passwordless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-passwordless.md
Previously updated : 07/17/2023 Last updated : 09/28/2023 ms.devlang: azurecli zone_pivot_group_filename: service-connector/zone-pivot-groups.json
In this tutorial, you use the Azure CLI to complete the following tasks:
* An Azure account with an active subscription. [Create an Azure account for free](https://azure.microsoft.com/free). * An app deployed to [Azure App Service](../app-service/overview.md) in a [region supported by Service Connector](./concept-region-support.md).
-### Set up environment
+### Set up your environment
#### Account

Sign in with the Azure CLI via `az login`. If you're using Azure Cloud Shell or are already logged in, confirm your authenticated account with `az account show`.
-#### Network connectivity
--
-If your database server is in Virtual Network, ensure your environment that runs the Azure CLI command can access the server in the Virtual Network.
---
-If your database server is in Virtual Network, ensure your environment that runs the Azure CLI command can access the server in the Virtual Network.
--
-If your database server disallows public access, ensure your environment that runs the Azure CLI command can access the server through the private endpoint.
-
-### Install the Service Connector passwordless extension
--
-## Create passwordless connection
-
-Next, we use Azure App Service as an example to create a connection using managed identity.
-
-If you use:
-
-* Azure Spring Apps, use `az spring connection create` instead. For more examples, see [Connect Azure Spring Apps to the Azure database](/azure/developer/java/spring-framework/deploy-passwordless-spring-database-app#connect-azure-spring-apps-to-the-azure-database).
-* Azure Container Apps, use `az containerapp connection create` instead. For more examples, see [Create and connect a PostgreSQL database with identity connectivity](../container-apps/tutorial-java-quarkus-connect-managed-identity-postgresql-database.md?tabs=flexible#5-create-and-connect-a-postgresql-database-with-identity-connectivity).
-
-> [!NOTE]
-> If you use the Azure portal, go to the **Service Connector** blade of [Azure App Service](./quickstart-portal-app-service-connection.md), [Azure Spring Apps](./quickstart-portal-spring-cloud-connection.md), or [Azure Container Apps](./quickstart-portal-container-apps.md), and select **Create** to create a connection. The Azure portal will automatically compose the command for you and trigger the command execution on Cloud Shell.
--
-The following Azure CLI commands use a `--client-type` parameter. Run the `az webapp connection create postgres-flexible -h` to get the supported client types, and choose the one that matches your application.
-
-### [User-assigned managed identity](#tab/user)
-
-```azurecli
-az webapp connection create postgres-flexible \
- --resource-group $RESOURCE_GROUP \
- --name $APPSERVICE_NAME \
- --target-resource-group $RESOURCE_GROUP \
- --server $POSTGRESQL_HOST \
- --database $DATABASE_NAME \
- --user-identity client-id=XX subs-id=XX \
- --client-type java
-```
-
-### [System-assigned managed identity](#tab/system)
-
-```azurecli
-az webapp connection create postgres-flexible \
- --resource-group $RESOURCE_GROUP \
- --name $APPSERVICE_NAME \
- --target-resource-group $RESOURCE_GROUP \
- --server $POSTGRESQL_HOST \
- --database $DATABASE_NAME \
- --system-identity \
- --client-type java
-```
-
-### [Service principal](#tab/sp)
-
-```azurecli
-az webapp connection create postgres-flexible \
- --resource-group $RESOURCE_GROUP \
- --name $APPSERVICE_NAME \
- --target-resource-group $RESOURCE_GROUP \
- --server $POSTGRESQL_HOST \
- --database $DATABASE_NAME \
- --service-principal client-id=XX secret=XX\
- --client-type java
-```
----
-Azure Database for MySQL - Flexible Server requires a user-assigned managed identity to enable Microsoft Entra authentication. For more information, see [Set up Microsoft Entra authentication for Azure Database for MySQL - Flexible Server](../mysql/flexible-server/how-to-azure-ad.md). You can use the following command to create a user-assigned managed identity:
-
-```azurecli
-USER_IDENTITY_NAME=<YOUR_USER_ASSIGNED_MANAGEMED_IDENTITY_NAME>
-IDENTITY_RESOURCE_ID=$(az identity create \
- --name $USER_IDENTITY_NAME \
- --resource-group $RESOURCE_GROUP \
- --query id \
- --output tsv)
-```
-
-> [!IMPORTANT]
-> After creating the user-assigned managed identity, ask your *Global Administrator* or *Privileged Role Administrator* to grant the following permissions for this identity:
-
-* `User.Read.All`
-* `GroupMember.Read.All`
-* `Application.Read.All`
-
-For more information, see the [Permissions](../mysql/flexible-server/concepts-azure-ad-authentication.md#permissions) section of [Active Directory authentication](../mysql/flexible-server/concepts-azure-ad-authentication.md).
-
-Then, connect your app to a MySQL database with a system-assigned managed identity using Service Connector.
-
-The following Azure CLI commands use a `--client-type` parameter. Run the `az webapp connection create mysql-flexible -h` to get the supported client types, and choose the one that matches your application.
-
-### [User-assigned managed identity](#tab/user)
-
-```azurecli
-az webapp connection create mysql-flexible \
- --resource-group $RESOURCE_GROUP \
- --name $APPSERVICE_NAME \
- --target-resource-group $RESOURCE_GROUP \
- --server $MYSQL_HOST \
- --database $DATABASE_NAME \
- --user-identity client-id=XX subs-id=XX mysql-identity-id=$IDENTITY_RESOURCE_ID \
- --client-type java
-```
-
-### [System-assigned managed identity](#tab/system)
-
-```azurecli
-az webapp connection create mysql-flexible \
- --resource-group $RESOURCE_GROUP \
- --name $APPSERVICE_NAME \
- --target-resource-group $RESOURCE_GROUP \
- --server $MYSQL_HOST \
- --database $DATABASE_NAME \
- --system-identity mysql-identity-id=$IDENTITY_RESOURCE_ID \
- --client-type java
-```
-
-### [Service principal](#tab/sp)
-
-```azurecli
-az webapp connection create mysql-flexible \
- --resource-group $RESOURCE_GROUP \
- --name $APPSERVICE_NAME \
- --target-resource-group $RESOURCE_GROUP \
- --server $MYSQL_HOST \
- --database $DATABASE_NAME \
- --service-principal client-id=XX secret=XX mysql-identity-id=$IDENTITY_RESOURCE_ID \
- --client-type java
-```
----
-The following Azure CLI commands use a `--client-type` parameter. Run the `az webapp connection create sql -h` to get the supported client types, and choose the one that matches your application.
+## Deploy the application to an Azure hosting service
-### [User-assigned managed identity](#tab/user)
+Finally, deploy your application to an Azure hosting service. That source service can use a managed identity to connect to the target database on Azure.
-```azurecli
-az webapp connection create sql \
- --resource-group $RESOURCE_GROUP \
- --name $APPSERVICE_NAME \
- --target-resource-group $RESOURCE_GROUP \
- --server $SQL_HOST \
- --database $DATABASE_NAME \
- --user-identity client-id=XX subs-id=XX \
- --client-type dotnet
-```
-
-### [System-assigned managed identity](#tab/system)
+### [App Service](#tab/appservice)
-```azurecli
-az webapp connection create sql \
- --resource-group $RESOURCE_GROUP \
- --name $APPSERVICE_NAME \
- --target-resource-group $RESOURCE_GROUP \
- --server $SQL_HOST \
- --database $DATABASE_NAME \
- --system-identity \
- --client-type dotnet
-```
+For Azure App Service, you can deploy the application code via the `az webapp deploy` command. For more information, see [Quickstart: Deploy an ASP.NET web app](../app-service/quickstart-dotnetcore.md).
-### [Service principal](#tab/sp)
+### [Spring Apps](#tab/springapp)
-```azurecli
-az webapp connection create sql \
- --resource-group $RESOURCE_GROUP \
- --name $APPSERVICE_NAME \
- --target-resource-group $RESOURCE_GROUP \
- --server $SQL_HOST \
- --database $DATABASE_NAME \
- --service-principal client-id=XX secret=XX \
- --client-type dotnet
-```
+For Azure Spring Apps, you can deploy the application code via the `az spring app deploy` command. For more information, see [Quickstart: Deploy your first application to Azure Spring Apps](../spring-apps/quickstart.md).
+### [Container Apps](#tab/containerapp)
-This Service Connector command completes the following tasks in the background:
+For Azure Container Apps, you can deploy the application code via the `az containerapp create` command. For more information, see [Quickstart: Deploy your first container app](../container-apps/get-started.md).
-- Enable system-assigned managed identity, or assign a user identity for the app `$APPSERVICE_NAME` hosted by Azure App Service/Azure Spring Apps/Azure Container Apps.-- Set the Microsoft Entra admin to the current signed-in user.-- Add a database user for the system-assigned managed identity, user-assigned managed identity, or service principal. Grant all privileges of the database `$DATABASE_NAME` to this user. The username can be found in the connection string in preceding command output.-- Set configurations named `AZURE_MYSQL_CONNECTIONSTRING`, `AZURE_POSTGRESQL_CONNECTIONSTRING`, or `AZURE_SQL_CONNECTIONSTRING` to the Azure resource based on the database type.
- - For App Service, the configurations are set in the **App Settings** blade.
- - For Spring Apps, the configurations are set when the application is launched.
- - For Container Apps, the configurations are set to the environment variables. You can get all configurations and their values in the **Service Connector** blade in the Azure portal.
+
+Then you can check the log or call the application to see if it can connect to the Azure database successfully.
### Troubleshooting

#### Permission
-If you encounter any permission-related errors, confirm the Azure CLI signed-in user with the command `az account show`. Make sure you log in with the correct account. Next, confirm that you have the following permissions that may be required to create a passwordless connection with Service Connector.
+If you encounter any permission-related errors, confirm the Azure CLI signed-in user with the command `az account show`. Make sure you log in with the correct account. Next, confirm that you have the following permissions that might be required to create a passwordless connection with Service Connector.
::: zone pivot="postgresql"
Service Connector needs to access Microsoft Entra ID to get information of your
az ad signed-in-user show
```
-If you don't log in interactively, you may also get the error and `Interactive authentication is needed`. To resolve the error, log in with the `az login` command.
-
+If you don't log in interactively, you might also get the error `Interactive authentication is needed`. To resolve the error, log in with the `az login` command.
<a name='connect-to-database-with-azure-active-directory-authentication'></a>
-## Connect to database with Microsoft Entra authentication
-
-After creating the connection, you can use the connection string in your application to connect to the database with Microsoft Entra authentication. For example, you can use the following solutions to connect to the database with Microsoft Entra authentication.
--------------
+#### Network connectivity
-## Deploy the application to an Azure hosting service
+If your database server is in a Virtual Network, ensure that the environment running the Azure CLI command can access the server in the Virtual Network.
-Finally, deploy your application to an Azure hosting service. That source service can use managed identity to connect to the target database on Azure.
-### [App Service](#tab/appservice)
-For Azure App Service, you can deploy the application code via the `az webapp deploy` command. For more information, see [Quickstart: Deploy an ASP.NET web app](../app-service/quickstart-dotnetcore.md).
+If your database server is in a Virtual Network, ensure that the environment running the Azure CLI command can access the server in the Virtual Network.
-### [Spring Apps](#tab/springapp)
-For Azure Spring Apps, you can deploy the application code via the `az spring app deploy` command. For more information, see [Quickstart: Deploy your first application to Azure Spring Apps](../spring-apps/quickstart.md).
-### [Container Apps](#tab/containerapp)
+If your database server disallows public access, ensure that the environment running the Azure CLI command can access the server through the private endpoint.
-For Azure Container Apps, you can deploy the application code via the `az containerapp create` command. For more information, see [Quickstart: Deploy your first container app](../container-apps/get-started.md).
-
-Then you can check the log or call the application to see if it can connect to the database on Azure successfully.
## Next steps
site-recovery Concepts Public Ip Address With Site Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/concepts-public-ip-address-with-site-recovery.md
Previously updated : 04/08/2019 Last updated : 10/31/2023

# Set up public IP addresses after failover
-Public IP addresses allow Internet resources to communicate inbound to Azure resources. Public IP addresses also enable Azure resources to communicate outbound to Internet and public-facing Azure services with an IP address assigned to the resource.
-- Inbound communication from the Internet to the resource, such as Azure Virtual Machines (VM), Azure Application Gateways, Azure Load Balancers, Azure VPN Gateways, and others. You can still communicate with some resources, such as VMs, from the Internet, if a VM doesn't have a public IP address assigned to it, as long as the VM is part of a load balancer back-end pool, and the load balancer is assigned a public IP address.
+Public IP addresses serve two purposes in Azure. First, they allow inbound communication from Internet resources to Azure resources. Second, they enable Azure resources to communicate outbound to the Internet and to public-facing Azure services, with an IP address assigned to the resource.
+
+- Allow inbound communication from the Internet to the resource, such as Azure Virtual Machines (VM), Azure Application Gateways, Azure Load Balancers, Azure VPN Gateways, and others. You can still communicate with some resources, such as VMs, from the Internet, if a VM doesn't have a public IP address assigned to it, as long as the VM is part of a load balancer back-end pool, and the load balancer is assigned a public IP address.
- Outbound connectivity to the Internet using a predictable IP address. For example, a virtual machine can communicate outbound to the Internet without a public IP address assigned to it, but its address is network address translated by Azure to an unpredictable public address, by default. Assigning a public IP address to a resource enables you to know which IP address is used for the outbound connection. Though predictable, the address can change, depending on the assignment method chosen. For more information, see [Create a public IP address](../virtual-network/ip-services/virtual-network-public-ip-address.md#create-a-public-ip-address). To learn more about outbound connections from Azure resources, see [Understand outbound connections](../load-balancer/load-balancer-outbound-connections.md?toc=%2fazure%2fvirtual-network%2ftoc.json).

In Azure Resource Manager, a Public IP address is a resource that has its own properties. Some of the resources you can associate a public IP address resource with are:
This article describes how you can use Public IP addresses with Site Recovery.
Public IP address of the production application **cannot be retained on failover**. Workloads brought up as part of the failover process must be assigned an Azure Public IP resource available in the target region. This step can be done either manually or automated with recovery plans. A recovery plan gathers machines into recovery groups and helps you to define a systematic recovery process. You can use a recovery plan to impose order, and automate the actions needed at each step, using Azure Automation runbooks for failover to Azure, or scripts. The setup is as follows:
+
- Create a [recovery plan](../site-recovery/site-recovery-create-recovery-plans.md#create-a-recovery-plan) and group your workloads as necessary into the plan.
- Customize the plan by adding a step to attach a public IP address, using [Azure Automation runbooks](../site-recovery/site-recovery-runbook-automation.md#customize-the-recovery-plan) scripts, to the failed-over VM.
Read more about failover scenarios with Traffic
2. [Azure to Azure failover](../site-recovery/concepts-traffic-manager-with-site-recovery.md#azure-to-azure-failover) with Traffic Manager

The setup is as follows:
+
- Create a [Traffic Manager profile](../traffic-manager/quickstart-create-traffic-manager-profile.md).
- Utilizing the **Priority** routing method, create two endpoints – **Primary** for the source and **Failover** for Azure. **Primary** is assigned Priority 1 and **Failover** is assigned Priority 2.
- The **Primary** endpoint can be [Azure](../traffic-manager/traffic-manager-endpoint-types.md#azure-endpoints) or [External](../traffic-manager/traffic-manager-endpoint-types.md#external-endpoints), depending on whether your source environment is inside or outside Azure.
- The **Failover** endpoint is created as an **Azure** endpoint. Use a **static public IP address**, because this is the external-facing endpoint for Traffic Manager in a disaster event.

## Next steps
-- Learn more about [Traffic Manager with Azure Site Recovery](../site-recovery/concepts-traffic-manager-with-site-recovery.md)
-- Learn more about Traffic Manager [routing methods](../traffic-manager/traffic-manager-routing-methods.md).
+
+- Learn about [Traffic Manager with Azure Site Recovery](../site-recovery/concepts-traffic-manager-with-site-recovery.md)
+- Learn about Traffic Manager [routing methods](../traffic-manager/traffic-manager-routing-methods.md).
- Learn more about [recovery plans](site-recovery-create-recovery-plans.md) to automate application failover.
site-recovery Migrate Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/migrate-overview.md
Title: Compare Azure Migrate and Site Recovery for migration to Azure
-description: Summarizes the advantages of using Azure Migrate for migration, instead of Site Recovery.
+description: Summarizes the advantages of using Azure Migrate for migration instead of Site Recovery.
Previously updated : 12/12/2022 Last updated : 10/31/2023
# Migrating to Azure
-For migration, we recommend that you use the Azure Migrate service to migrate VMs and servers to Azure, rather than the Azure Site Recovery service. [Learn more](../migrate/migrate-services-overview.md) about Azure Migrate.
-
+For migration, we recommend that you use the Azure Migrate service to migrate your VMs and servers to Azure, instead of using the Azure Site Recovery service. Learn about [Azure Migrate](../migrate/migrate-services-overview.md).
## Why use Azure Migrate?
-Using Azure Migrate for migration provides a number of advantages:
-
+Using Azure Migrate for migration provides many advantages:
- Azure Migrate provides a centralized hub for discovery, assessment, and migration to Azure.
- Using Azure Migrate provides interoperability and future extensibility with Azure Migrate tools, other Azure services, and third-party tools.
- The Migration and modernization tool is purpose-built for server migration to Azure. It's optimized for migration. You don't need to learn about concepts and scenarios that aren't directly relevant to migration.
+- Azure Migrate can be used to identify modernization opportunities and migration previews.
+- Some key features, such as OS upgrade, are only available with Azure Migrate.
- There are no tool usage charges for migration for 180 days, from the time replication is started for a VM. This gives you time to complete migration. You only pay for the storage and network resources used in replication, and for compute charges consumed during test migrations.
- Azure Migrate supports all migration scenarios supported by Site Recovery. In addition, for VMware VMs, Azure Migrate provides an agentless migration option.
- We're prioritizing new migration features for the Migration and modernization tool only. These features aren't targeted for Site Recovery.
Site Recovery should be used:
- For disaster recovery of on-premises machines to Azure.
- For disaster recovery of Azure VMs, between Azure regions.
-Although we recommend using Azure Migrate to migrate on-premises servers to Azure, if you've already started your migration journey with Site Recovery, you can continue using it to complete your migration.
+## Which service to use for migration?
+
+We recommend using Azure Migrate to migrate on-premises servers to Azure. However, if you've already started your migration journey with Site Recovery, consider the following details:
+
+- If you're already using Azure Site Recovery to replicate your servers, you don't need to deploy a Migrate appliance. Remove the BCDR protection, and replicate with a new appliance.
+- However, there are benefits to conducting assessment, dependency analysis, and business case review with the Azure Migrate discovery appliance even for workloads that are already replicating.
+- There could be architecture changes required to support the workload in the long term. In this case, address the requirements while continuing to use Azure Site Recovery to replicate so that you don't lose protections.
++
+## Conclusion
+
+Suggestions to choose between Azure Migrate and Site Recovery:
+
+- **For new migration**: If you're beginning a new migration and don't have either Azure Site Recovery or Migrate in place, we recommend that you use Azure Migrate.
+- **For disaster recovery of on-premises machines to Azure**: We recommend that you use Azure Site Recovery. You can also use this service to migrate machines to Azure once you've determined that they should be moved off-premises.
+- **For disaster recovery of Azure VMs between Azure regions**: We recommend that you use Azure Site Recovery, although you can use Azure Migrate to initially move the VMs into Azure.
+- **If you're already using Azure Site Recovery**: If you're currently using Azure Site Recovery to actively protect your machines, continue to use it for replication. However, consider using Azure Migrate for conducting business cases and dependency analysis.
## Next steps
site-recovery Site Recovery Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-whats-new.md
Previously updated : 08/01/2023 Last updated : 10/31/2023
spring-apps Concept App Customer Responsibilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/concept-app-customer-responsibilities.md
The following sections describe the version support that applies to the Enterpri
You can deploy polyglot applications to the Enterprise plan with source code. To enjoy the best stability, use SDKs with LTS versions that are officially supported.
-When you deploy your polyglot applications to the Enterprise plan, assign specific LTS versions for the SDKs. Otherwise, the default SDK version might change during the regular upgrades for builder components. For more information about deploying polygot apps, see [How to deploy polyglot apps in the Azure Spring Apps Enterprise plan](how-to-enterprise-deploy-polyglot-apps.md).
-
-| Type | Support policy |
-|--|-|
-| Java | [Java support on Azure](/azure/developer/java/fundamentals/java-support-on-azure) |
-| Tomcat | [Tomcat versions](https://tomcat.apache.org/whichversion.html) |
-| .NET | [.NET and .NET core support policy](https://dotnet.microsoft.com/en-us/platform/support/policy/dotnet-core) |
-| Python | [Status of Python versions](https://devguide.python.org/versions/) |
-| Go | [Go release history](https://go.dev/doc/devel/release) |
-| NodeJS | [Nodejs releases](https://nodejs.dev/en/about/releases/) |
-| PHP | [PHP supported versions](https://www.php.net/supported-versions.php) |
+When you deploy your polyglot applications to the Enterprise plan, assign specific LTS versions for the SDKs. Otherwise, the default SDK version might change during the regular upgrades for builder components. For more information about deploying polyglot apps, see [How to deploy polyglot apps in the Azure Spring Apps Enterprise plan](how-to-enterprise-deploy-polyglot-apps.md).
+
+| Type | Support policy |
+|--|-|
+| Java | [Java support on Azure](/azure/developer/java/fundamentals/java-support-on-azure) |
+| Tomcat | [Tomcat versions](https://tomcat.apache.org/whichversion.html) |
+| .NET | [.NET and .NET core support policy](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) |
+| Python | [Status of Python versions](https://devguide.python.org/versions/) |
+| Go | [Go release history](https://go.dev/doc/devel/release) |
+| NodeJS | [Nodejs releases](https://nodejs.org/en/about/previous-releases/) |
+| PHP | [PHP supported versions](https://www.php.net/supported-versions.php) |
### Stack image support
spring-apps How To Start Stop Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-start-stop-service.md
az spring show \
## Next steps

- [Monitor app lifecycle events using Azure Activity log and Azure Service Health](./monitor-app-lifecycle-events.md)
-- [Azure Monitor cost and usage](../azure-monitor/usage-estimated-costs.md)
+- [Azure Monitor cost and usage](../azure-monitor/cost-usage.md)
spring-apps How To Use Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-use-managed-identities.md
For user-assigned managed identities, see [How to assign and remove user-assigne
An application can use its managed identity to get tokens to access other resources protected by Microsoft Entra ID, such as Azure Key Vault. These tokens represent the application accessing the resource, not any specific user of the application.
-You may need to configure the target resource to allow access from your application. For more information, see [Assign a managed identity access to a resource by using the Azure portal](../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md). For example, if you request a token to access Key Vault, be sure you have added an access policy that includes your application's identity. Otherwise, your calls to Key Vault will be rejected, even if they include the token. To learn more about which resources support Microsoft Entra tokens, see [Azure services that support Microsoft Entra authentication](../active-directory/managed-identities-azure-resources/services-azure-active-directory-support.md).
+You can configure the target resource to allow access from your application. For more information, see [Assign a managed identity access to a resource by using the Azure portal](/entra/identity/managed-identities-azure-resources/howto-assign-access-portal). For example, if you request a token to access Key Vault, be sure you have added an access policy that includes your application's identity. Otherwise, your calls to Key Vault will be rejected, even if they include the token. To learn more about which resources support Microsoft Entra tokens, see [Azure services that support Microsoft Entra authentication](/entra/identity/managed-identities-azure-resources/services-id-authentication-support).
-Azure Spring Apps shares the same endpoint for token acquisition with Azure Virtual Machines. We recommend using Java SDK or Spring Boot starters to acquire a token. For various code and script examples and guidance on important topics such as handling token expiration and HTTP errors, see [How to use managed identities for Azure resources on an Azure VM to acquire an access token](../active-directory/managed-identities-azure-resources/how-to-use-vm-token.md).
+Azure Spring Apps shares the same endpoint for token acquisition with Azure Virtual Machines. We recommend using Java SDK or Spring Boot starters to acquire a token. For various code and script examples, as well as guidance on important topics like handling token expiration and HTTP errors, see [How to use managed identities for Azure resources on an Azure VM to acquire an access token](/entra/identity/managed-identities-azure-resources/how-to-use-vm-token).
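+
+For example, here's a minimal, illustrative Java sketch (not this article's official sample) that acquires a token through `DefaultAzureCredential` and reads a Key Vault secret; the vault URL and secret name are placeholder assumptions:
+
+```java
+import com.azure.identity.DefaultAzureCredentialBuilder;
+import com.azure.security.keyvault.secrets.SecretClient;
+import com.azure.security.keyvault.secrets.SecretClientBuilder;
+
+public class ReadSecretWithManagedIdentity {
+    public static void main(String[] args) {
+        // On Azure Spring Apps, DefaultAzureCredential resolves to the app's managed identity.
+        SecretClient client = new SecretClientBuilder()
+                .vaultUrl("https://<your-key-vault-name>.vault.azure.net") // placeholder vault URL
+                .credential(new DefaultAzureCredentialBuilder().build())
+                .buildClient();
+
+        // Succeeds only if the identity has been granted access to read secrets.
+        System.out.println(client.getSecret("<your-secret-name>").getValue()); // placeholder name
+    }
+}
+```
+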
## Examples of connecting Azure services in application code
The following table provides links to articles that contain examples:
## Best practices when using managed identities
-We highly recommend that you use system-assigned and user-assigned managed identities separately unless you have a valid use case. If you use both kinds of managed identity together, failure might happen if an application is using system-assigned managed identity and the application gets the token without specifying the client ID of that identity. This scenario may work fine until one or more user-assigned managed identities are assigned to that application, then the application may fail to get the correct token.
+We highly recommend that you use system-assigned and user-assigned managed identities separately unless you have a valid use case. If you use both kinds of managed identity together, failure might happen if an application is using system-assigned managed identity and the application gets the token without specifying the client ID of that identity. This scenario might work fine until one or more user-assigned managed identities are assigned to that application, then the application might fail to get the correct token.
## Limitations
The following table shows the mappings between concepts in Managed Identity scop
## Next steps

-- [Learn more about managed identities for Azure resources](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/active-directory/managed-identities-azure-resources/overview.md)
+- [Learn more about managed identities for Azure resources](/entra/identity/managed-identities-azure-resources/overview)
- [How to use managed identities with Java SDK](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples)
spring-apps Quickstart Automate Deployments Github Actions Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-automate-deployments-github-actions-enterprise.md
The automation associated with the sample application requires a Storage account
## Automate with GitHub Actions
-Now you can run GitHub Actions in your repository. The [provision workflow](https://github.com/Azure-Samples/acme-fitness-store/blob/Azure/.github/workflows/provision.yml) provisions all resources necessary to run the example application. The following screenshot shows an example run:
+Now you can run GitHub Actions in your repository. The [provision workflow](https://github.com/Azure-Samples/acme-fitness-store/blob/HEAD/.github/workflows/provision.yml) provisions all resources necessary to run the example application. The following screenshot shows an example run:
:::image type="content" source="media/quickstart-automate-deployments-github-actions-enterprise/provision.png" alt-text="Screenshot of GitHub showing output from the provision workflow." lightbox="media/quickstart-automate-deployments-github-actions-enterprise/provision.png"
-Each application has a [deploy workflow](https://github.com/Azure-Samples/acme-fitness-store/blob/Azure/.github/workflows/catalog.yml) that will redeploy the application when changes are made to that application. The following screenshot shows some example output from the catalog service:
+Each application has a [deploy workflow](https://github.com/Azure-Samples/acme-fitness-store/blob/HEAD/.github/workflows/catalog.yml) that will redeploy the application when changes are made to that application. The following screenshot shows some example output from the catalog service:
:::image type="content" source="media/quickstart-automate-deployments-github-actions-enterprise/deploy-catalog.png" alt-text="Screenshot of GitHub showing output from the Deploy Catalog workflow." lightbox="media/quickstart-automate-deployments-github-actions-enterprise/deploy-catalog.png"
-The [cleanup workflow](https://github.com/Azure-Samples/acme-fitness-store/blob/Azure/.github/workflows/cleanup.yml) can be manually run to delete all resources created by the `provision` workflow. The following screenshot shows the output:
+The [cleanup workflow](https://github.com/Azure-Samples/acme-fitness-store/blob/HEAD/.github/workflows/cleanup.yml) can be manually run to delete all resources created by the `provision` workflow. The following screenshot shows the output:
:::image type="content" source="media/quickstart-automate-deployments-github-actions-enterprise/cleanup.png" alt-text="Screenshot of GitHub showing output from the cleanup workflow." lightbox="media/quickstart-automate-deployments-github-actions-enterprise/cleanup.png"
spring-apps Quickstart Deploy Restful Api App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-restful-api-app.md
Last updated 10/02/2023 -+
+zone_pivot_groups: spring-apps-enterprise-or-consumption-plan-selection
# Quickstart: Deploy RESTful API application to Azure Spring Apps
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-**This article applies to:** ✔️ Standard consumption and dedicated (Preview)
- This article describes how to deploy a RESTful API application protected by [Microsoft Entra ID](../active-directory/fundamentals/active-directory-whatis.md) to Azure Spring Apps. The sample project is a simplified version based on the [Simple Todo](https://github.com/Azure-Samples/ASA-Samples-Web-Application) web application, which only provides the backend service and uses Microsoft Entra ID to protect the RESTful APIs. These RESTful APIs are protected by applying role-based access control (RBAC). Anonymous users can't access any data and aren't allowed to control access for different users. Anonymous users only have the following three permissions:
The following diagram shows the architecture of the system:
## 1. Prerequisites
-### [Azure portal](#tab/Azure-portal)
+
+- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+- [Git](https://git-scm.com/downloads).
+- [Java Development Kit (JDK)](/java/azure/jdk/), version 17.
+- A Microsoft Entra tenant. For instructions on creating one, see [Quickstart: Create a new tenant in Microsoft Entra ID](../active-directory/fundamentals/create-new-tenant.md).
+++
+### [Azure portal + Maven plugin](#tab/Azure-portal-maven-plugin)
- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
- [Git](https://git-scm.com/downloads).
- [Java Development Kit (JDK)](/java/azure/jdk/), version 17.
- A Microsoft Entra tenant. For instructions on creating one, see [Quickstart: Create a new tenant in Microsoft Entra ID](../active-directory/fundamentals/create-new-tenant.md).
-- [curl](https://curl.se/download.html).

### [Azure Developer CLI](#tab/Azure-Developer-CLI)
The following diagram shows the architecture of the system:
- [Java Development Kit (JDK)](/java/azure/jdk/), version 17.
- A Microsoft Entra tenant. For instructions on creating one, see [Quickstart: Create a new tenant in Microsoft Entra ID](../active-directory/fundamentals/create-new-tenant.md).
- [Azure Developer CLI (AZD)](https://aka.ms/azd-install), version 1.0.2 or higher.
-- [curl](https://curl.se/download.html).
+
+[!INCLUDE [deploy-restful-api-app-with-consumption-plan](includes/quickstart-deploy-restful-api-app/deploy-restful-api-app-with-consumption-plan.md)]
+
## 5. Validate the app
-Now, you can access the RESTful API to see if it works.
+You can now access the RESTful API to see if it works.
-### Request an access token
+### 5.1. Request an access token
The RESTful APIs act as a resource server, which is protected by Microsoft Entra ID. Before acquiring an access token, you're required to register another application in Microsoft Entra ID and grant permissions to the client application, which is named `ToDoWeb`.
Use the following steps to update the OAuth2 configuration for Swagger UI author
1. Under **Manage**, select **Authentication**, select **Add a platform**, and then select **Single-page application**.
-1. Use the format `<your-app-exposed-application-url-or-endpoint>/swagger-ui/oauth2-redirect.html` as the OAuth2 redirect URL in the **Redirect URIs** field - for example, `https://simple-todo-api.xxxxxxxx-xxxxxxxx.xxxxxx.azurecontainerapps.io/swagger-ui/oauth2-redirect.html` - and then select **Configure**.
+1. Use the format `<your-app-exposed-application-url-or-endpoint>/swagger-ui/oauth2-redirect.html` as the OAuth2 redirect URL in the **Redirect URIs** field, and then select **Configure**.
:::image type="content" source="media/quickstart-deploy-restful-api-app/single-page-app-authentication.png" alt-text="Screenshot of the Azure portal that shows the Authentication page for Microsoft Entra ID." lightbox="media/quickstart-deploy-restful-api-app/single-page-app-authentication.png":::
Use the following steps to use [OAuth 2.0 authorization code flow](../active-dir
After completing the sign in with the previous user, you're returned to the **Available authorizations** window.
-### Access the RESTful APIs
+### 5.2. Access the RESTful APIs
Use the following steps to access the RESTful APIs of the `ToDo` app in the Swagger UI:
spring-apps Quickstart Deploy Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-web-app.md
This article provides the following options for deploying to Azure Spring Apps:
This article provides the following options for deploying to Azure Spring Apps:
-- The Azure portal is the easiest and fastest way to create resources and deploy applications with a single click. This option is suitable for Spring developers who want to quickly deploy applications to Azure cloud services.
-- The Azure CLI is a powerful command line tool to manage Azure resources. This option is suitable for Spring developers who are familiar with Azure cloud services.
+- The **Azure portal** option is the easiest and the fastest way to create resources and deploy applications with a single click. This option is suitable for Spring developers who want to quickly deploy applications to Azure cloud services.
+- The **Azure portal + Maven plugin** option provides a more conventional way to create resources and deploy applications step by step. This option is suitable for Spring developers using Azure cloud services for the first time.
+- The **Azure CLI** option is a powerful command line tool to manage Azure resources. This option is suitable for Spring developers who are familiar with Azure cloud services.
::: zone-end
This article provides the following options for deploying to Azure Spring Apps:
- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+### [Azure portal + Maven plugin](#tab/Azure-portal-maven-plugin-ent)
+
+- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+- [Git](https://git-scm.com/downloads).
+- [Java Development Kit (JDK)](/java/azure/jdk/), version 17.
+
### [Azure CLI](#tab/Azure-CLI)

- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
This article provides the following options for deploying to Azure Spring Apps:
::: zone-end
::: zone-end
::: zone-end
::: zone-end
Use the following steps to validate:
1. Check the details for each resource deployment, which are useful for investigating any deployment issues.
+### [Azure portal + Maven plugin](#tab/Azure-portal-maven-plugin-ent)
+
+Access the application by using the output application URL. The page should appear as it did when you ran the application locally.
+ ### [Azure CLI](#tab/Azure-CLI) Use the following steps to validate:
Be sure to delete the resources you created in this article when you no longer n
[!INCLUDE [clean-up-resources-via-resource-group](includes/quickstart-deploy-web-app/clean-up-resources-via-resource-group.md)]
+### [Azure portal + Maven plugin](#tab/Azure-portal-maven-plugin-ent)
+
+
### [Azure CLI](#tab/Azure-CLI)

Use the following command to delete the entire resource group, including the newly created service:
spring-apps Quickstart Integrate Azure Database And Redis Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-integrate-azure-database-and-redis-enterprise.md
The following instructions describe how to provision an Azure Cache for Redis an
[!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)]
-You can find the template used in this quickstart in the [fitness store sample GitHub repository](https://github.com/Azure-Samples/acme-fitness-store/blob/Azure/azure-spring-apps-enterprise/resources/json/deploy/azuredeploy.json).
+You can find the template used in this quickstart in the [fitness store sample GitHub repository](https://github.com/Azure-Samples/acme-fitness-store/blob/HEAD/azure-spring-apps-enterprise/resources/json/deploy/azuredeploy.json).
To deploy this template, follow these steps:
spring-apps Quickstart Setup Config Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-setup-config-server.md
Set up your Config Server with the location of the git repository for the projec
az spring config-server git set -n <service instance name> --uri https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples --search-paths steeltoe-sample/config
```
-This command tells Config Server to find the configuration data in the [steeltoe-sample/config](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples/tree/master/steeltoe-sample/config) folder of the sample app repository. Since the name of the app that gets the configuration data is `planet-weather-provider`, the file that's used is [planet-weather-provider.yml](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples/blob/master/steeltoe-sample/config/planet-weather-provider.yml).
+This command tells Config Server to find the configuration data in the [steeltoe-sample/config](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples/tree/master/steeltoe-sample/config) folder of the sample app repository. Since the name of the app that gets the configuration data is `planet-weather-provider`, the file that's used is [planet-weather-provider.yml](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples/blob/HEAD/steeltoe-sample/config/planet-weather-provider.yml).
::: zone-end
storage Soft Delete Blob Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/soft-delete-blob-manage.md
Previously updated : 02/16/2023 Last updated : 10/31/2023 ms.devlang: csharp
When blobs or directories are soft-deleted, they are invisible in the Azure port
> [!div class="mx-imgBorder"]
> ![Screenshot showing how to list soft-deleted blobs in Azure portal (hierarchical namespace enabled accounts).](media/soft-delete-blob-manage/soft-deleted-blobs-list-portal-hns.png)
-> [!NOTE]
-> If you rename a directory that contains soft-deleted items (subdirectories and blobs), those soft-deleted items become disconnected from the directory, so they won't appear in the Azure portal when you toggle the **Show deleted blobs** setting. If you want to view them in the Azure portal, you'll have to revert the name of the directory back to its original name or create a separate directory that uses the original directory name.
+There are two reasons why soft-deleted blobs and directories might not appear in the Azure portal when you toggle the **Show deleted blobs** setting.
+
+- Soft-deleted blobs and directories won't appear if your security principal relies only on access control list (ACL) entries for authorization.
+
+ For these items to appear, you must either be the owner of the account or your security principal must be assigned the role of [Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner), [Storage Blob Data Contributor](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor) or [Storage Blob Data Reader](../../role-based-access-control/built-in-roles.md#storage-blob-data-reader).
+
+- If you rename a directory that contains soft-deleted items (subdirectories and blobs), those soft-deleted items become disconnected from the directory, so they won't appear.
+
+ If you want to view them in the Azure portal, you'll have to revert the directory to its original name or create a separate directory that uses the original directory name.
-Next, select the deleted directory or blob from the list display its properties. Under the **Overview** tab, notice that the status is set to **Deleted**. The portal also displays the number of days until the blob is permanently deleted.
+You can display the properties of a soft-deleted blob or directory by selecting it from the list. Under the **Overview** tab, notice that the status is set to **Deleted**. The portal also displays the number of days until the blob is permanently deleted.
> [!div class="mx-imgBorder"]
> ![Screenshot showing properties of soft-deleted blob in Azure portal (hierarchical namespace enabled accounts).](media/soft-delete-blob-manage/soft-deleted-blob-properties-portal-hns.png)
storage Komprise Tiering Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/solution-integration/validated-partners/data-management/komprise-tiering-guide.md
+
+ Title: Optimize file storage with Komprise Intelligent Tiering for Azure
+
+description: Getting started guide to implement Komprise Tiering. This guide shows how to tier your data to Azure Blob storage using Komprise's Intelligent Tiering.
++ Last updated : 10/30/2023++++
+# Optimize File Storage with Komprise Intelligent Tiering for Azure
+
+Businesses and public sector enterprises often retain file data for decades because it contains business value such as customer insights, genomic patterns, machine learning training data, and compliance information. Because of its large volume, variety, and velocity of growth, file data can become expensive to store, back up, and manage. Most IT organizations spend 30% to 50% of their budgets on file data storage and backups, as shown in the [2023 Komprise State of Unstructured Data Management Report](https://www.komprise.com/resource/the-2023-komprise-unstructured-data-management-report/).
+
+When looking at the storage cost of file data, consider that the total cost of file data is at least 3X higher than the cost of the file storage itself. File data can't just be stored; it also needs to be protected with backups and replicated for disaster recovery.
++
+| Item | Cost |
+|--|-|
+| File Storage | 1x |
+| Snapshots | 0.3x |
+| Mirror for DR (second site copy of FS + Snapshots) | 1.3x |
+| 2 to 3 backups (cheaper storage + backup software) | 1x+ |
+| TOTAL | 3.6x+ |
+
+## Use the Cloud to Cut File Data Costs: Comparison of Alternatives
+
+As file data growth accelerates, managing data growth while cutting the costs of file data storage and backup has become an enterprise IT priority. Most (typically up to 80%) of file data is cold, meaning infrequently accessed. Therefore, organizations can save significantly by tiering on-premises file data to cost-effective object storage tiers such as Azure Blob, which can be 1/10th to 1/100th the cost of file storage. Here's a breakdown of Azure Blob Archive costs compared to higher performance on-premises and cloud file storage options.
+
+| Item | Cost/GB/month | Relative Cost to Azure Blob Archive |
+|--|--|--|
+| On-premises Flash/SSD | $0.12 | 123x |
+| On-premises Disk | $0.066 | 66x |
+| Azure Blob Cool | $0.015 | 15x |
+| Azure Blob Archive | $0.00099 | 1x |
+
+Read the Azure Storage blog posts:
+- [The True Cost of Traditional File Storage](https://techcommunity.microsoft.com/t5/azure-storage-blog/the-true-cost-of-traditional-file-storage/ba-p/3797945)
+- [Leverage the Cloud to Cut File Data Costs: Comparison of Alternatives](https://techcommunity.microsoft.com/t5/azure-storage-blog/leverage-the-cloud-to-cut-file-data-costs-comparison-of/ba-p/3799614)
+
+#### Komprise Intelligent Tiering for Azure
+
+Komprise Intelligent Tiering for Azure analyzes data across your file and object storage systems, identifying inactive/cold data. Qualifying files, based on your policies, are moved to Azure Blob. Komprise Transparent Move Technology (TMT™) offloads entire files from the data storage, snapshot, backup and DR footprints without any change to your processes or user behavior. This approach to file tiering maximizes your data storage and backup savings, reduces cloud egress costs by preventing unnecessary rehydration when data is accessed from the cloud, and lets you set different policies for different data types and groups. Users can access the tiered data exactly as before as files from the original location, and they can directly access the objects in Azure Blob without any lock-in. You can also use all the native Azure data services.
++
+Komprise analyzes all your file and object data. You can set policies to transparently tier cold files to Azure and instantly visualize savings based on your costs and policies.
++
+Files moved by Komprise TMT appear just as they did before, without users noticing any difference, and open like normal files on the desktop.
+No stubs. No agents. The figure shows a PDF file before and after it's tiered by Komprise.
+
+Read the Azure blog post: [How to Save 70% on File Data Costs](https://techcommunity.microsoft.com/t5/azure-storage-blog/how-to-save-70-on-file-data-costs/ba-p/3799616)
+
+## Getting Started with Komprise Intelligent Tiering for Azure
+
+Your local Komprise team will assist with setting up your Komprise Director console and on-premises grid. It's an easy process that includes a preinstallation review, a deployment phase, and initial training. Expect about an hour from power-up to seeing the first analysis results.
+
+[Komprise Intelligent Tiering for Azure on the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.komprise_tiering_transactable_license?tab=Overview)
+
+#### Komprise Grid
+
+Komprise software is easy to set up, and it runs in virtual environments for complete resource flexibility. The Komprise Grid is configurable for optimum performance, flexibility, and cost to meet the needs of each customer's unique environment. Komprise data managers (Observers) and data movers (Windows Proxies) can be deployed on-premises or in the cloud to fit your requirements.
+
+- __Director:__ The administration console for the Komprise Grid. It's used to configure the environment, set policies, monitor activities, and view reports and graphs.
+- __Observers:__ The heart of the system. Observers communicate with the Director, analyze storage systems, summarize reports, execute tiering Plans that identify what data to tier, move data to Azure Blob, and then provide continuous, transparent end-user access to the tiered files.
+- __Windows Proxies:__ Scalable data movers that simplify and accelerate SMB/CIFS data flow. It's easy to add Windows Proxy data movers to meet the changing performance requirements of growing environments or tight timelines.
++
+#### Share Discovery and Analysis
+
+Adding shares for Komprise Analysis and Tiering:
+1. Start on the Data Stores page.
+2. Use "Add data store" to identify the storage system and shares for analysis.
++
+3. When adding a NAS share, select "Standard" for the "Storage use" and choose the appropriate platform or provider of the storage system to add. Komprise supports standard NAS protocols. Use the "Other NAS" option for storage systems not named in the list.
++
+4. Provide the necessary system information to enable Komprise to access the storage system. This is typically a hostname or IP address. Larger NAS systems sometimes require an admin port and sign-in. See [Komprise Admin Guide: Introduction](https://komprise.freshdesk.com/support/solutions/articles/17000048966-komprise-admin-guide-introduction) for more information.
+
+5. Enable shares for Analysis and Tiering.
++
+6. Then, choose "Skip adding to Plan and close" to complete the Add Data Store wizard.
+
+#### Add Azure Blob Target
+
+Add a target to send the tiered data.
+
+1. On the Data Stores page, select "Add data store" to add the tiering Plan Target. Select "Plan Target" for the "Storage use" and choose Azure Blob Storage as the Provider.
++
+2. Enter the appropriate information required to enable Komprise to connect to the Azure Blob container.
++
+#### Create a New Plan for Tiering
+
+Komprise Plans define the policies for tiering, including identifying the shares to tier from and the desired file inactivity age. At periodic transfer intervals (the default is seven days), Komprise scans the specified shares and tiers any file that meets the Plan policy (inactivity age) to the specified Azure Blob container.
+
+1. Create a new Plan using "All actions" at the upper right of the screen.
++
+2. In the Plan Editor, assign a new Group name for the first group of shares. After providing the Group name, press Enter to open the Group for editing.
+
+3. Enable Move Operations for Tiering.
++
+4. Select "Add Shares" to specify the on-premises shares for tiering.
++
+5. Select "Files filtered by Age" to specify the age policy for tiering inactive files. The heatmap (donut chart) on the right adds a purple hashed area showing the amount and number of files that Komprise estimates will move according to the Plan policy. Hover over the purple hashed area to see the values.
++
+6. You can customize the analysis to ensure accurate cost savings estimates by selecting "Edit Cost Model" at the lower right when viewing your Plan. Enter on-premises storage cost information on the left and target storage cost information (for Azure Blob) on the right.
++
+7. These financial values are applied to the data identified for tiering and represented in the 3-Year Savings numbers and graphs below the heatmap. The following images highlight the growth rates and costs after one year. The yellow line shows estimated costs if no data is tiered, whereas the blue line represents reduced costs including tiering by Komprise.
++
+8. Back in the Plan Editor, add the tiering Target for the tiered, aged files. For the "To Target" field, choose the Azure Blob Target added earlier.
++
+9. Select "Done Editing" to complete the Plan configuration.
+
+10. Next, select "Activate or Test" to begin tiering operations. If you test your Plan first (recommended), Komprise generates a list of files that it would have moved from your selected NAS shares to Azure Blob, without actually moving them. If you choose to activate the Plan, Komprise begins moving files that meet the criteria defined in the Plan to Azure Blob.
++
+11. Once activated, the new Plan Activity tab opens showing the tiering operation results.
++
+During any copy or move operations, Komprise performs full MD5 checksums for NFS and SHA-1 for SMB on your files to ensure full data integrity during the data transfers. A single Plan can span multiple NAS servers, even from different vendors. You can create different policy groups in Komprise for different departments, for example. This feature is useful when a central IT department is managing data for different business units.
+
+## How Komprise Delivers Transparent, Native Data Access
+
+Komprise tiers files from a source NAS share to Azure Blob and leaves behind a symbolic link to the file in its original source location. When a user or application attempts to read or write a file that you have tiered, they access the file through the symbolic link. The link points to a Komprise Grid Observer, which tracks where the file is stored. The Observer retrieves the file behind the scenes from Azure and fulfills the read or write request within seconds. This way, users and applications continue accessing these files in the same location, without refactoring applications to use object storage or specifying a different file share location.
+
+By default, Komprise moves files in their native format with their contents unchanged. As files are moved to Azure, you can access them as objects natively within Azure. This opens many new and exciting scenarios for customers. For example, customers can use this data as the foundation for a data lake, with the potential to query and explore data using services such as [Azure Synapse Analytics](https://azure.microsoft.com/products/synapse-analytics/) or [Azure Machine Learning](https://azure.microsoft.com/products/machine-learning/). The possibilities are endless, and many customers are unlocking the value of their cold data in ways that they might never have considered before.
+
+#### Other Komprise Licensing Options
+
+Komprise Intelligent Tiering for Azure includes Komprise Elastic Data Migration, which is available for Azure customers at no cost with the Azure Migration Program. It doesn't include Komprise Deep Analytics, which is a powerful search-like interface that unlocks the data management and data workflow capabilities of the Komprise Global File Index and Intelligent Data Management Platform. [Contact Komprise](https://www.komprise.com/contact/) to learn more about these features and upgrades.
+
+#### Why Komprise Intelligent Tiering for Azure?
+
+In addition to overall cost savings, some of the benefits of hybrid cloud tiering using Komprise Intelligent Tiering for Azure include:
+
+- __Leverage Existing Storage Investment:__ You can continue to use your existing NAS storage while Komprise transparently tiers cold files to Azure. Users and applications continue to see and access the files as if they were still on-premises.
+- __Leverage Azure Data Services:__ Because tiered files are stored in Azure Blob in their native format, you can use native Azure data services, such as Azure Synapse Analytics and Azure Machine Learning, directly on the tiered data.
+- __Works Across Heterogeneous Vendor Storage:__ Komprise works across all file and object storage to analyze and transparently tier data to Azure file and object tiers.
+- __Ongoing Lifecycle Management in Azure:__ Komprise continues to manage data lifecycles in Azure. As data gets colder it can move from Azure Blob Cool to Azure Blob Archive based on policies you set.
+
+To learn more and get a customized assessment of your savings, visit [Komprise Intelligent Tiering for Azure on the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.komprise_tiering_transactable_license?tab=Overview).
++
stream-analytics Monitor Azure Stream Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/monitor-azure-stream-analytics.md
Start with the article [Monitoring Azure resources with Azure Monitor](../azure-
The following sections build on this article by describing the specific data gathered for Azure Stream Analytics. These sections also provide examples for configuring data collection and analyzing this data with Azure tools. > [!TIP]
-> To understand costs associated with Azure Monitor, see [Usage and estimated costs](../azure-monitor//usage-estimated-costs.md). To understand the time it takes for your data to appear in Azure Monitor, see [Log data ingestion time](../azure-monitor/logs/data-ingestion-time.md).
+> To understand costs associated with Azure Monitor, see [Azure Monitor cost and usage](../azure-monitor/cost-usage.md). To understand the time it takes for your data to appear in Azure Monitor, see [Log data ingestion time](../azure-monitor/logs/data-ingestion-time.md).
## Monitoring overview page in Azure portal Azure Stream Analytics provides plenty of metrics that you can use to monitor and troubleshoot your query and job performance. You can view data from these metrics on the **Overview** page of the Azure portal, in the **Monitoring** section.
synapse-analytics Apache Spark Secure Credentials With Tokenlibrary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-secure-credentials-with-tokenlibrary.md
Previously updated : 06/28/2023 Last updated : 10/31/2023 zone_pivot_groups: programming-languages-spark-all-minus-sql-r
zone_pivot_groups: programming-languages-spark-all-minus-sql-r
Accessing data from external sources is a common pattern. Unless the external data source allows anonymous access, chances are you need to secure your connection with a credential, secret, or connection string.
-Azure Synapse Analytics uses Microsoft Entra passthrough by default for authentication between resources. If you need to connect to a resource using other credentials, use the mssparkutils directly. The mssparkutils simplifies the process of retrieving SAS tokens, Microsoft Entra tokens, connection strings, and secrets stored in a linked service or from an Azure Key Vault.
+Azure Synapse Analytics uses Microsoft Entra passthrough by default for authentication between resources. If you need to connect to a resource using other credentials, use the mssparkutils directly. The [mssparkutils](microsoft-spark-utilities.md) package simplifies the process of retrieving SAS tokens, Microsoft Entra tokens, connection strings, and secrets stored in a linked service or from an Azure Key Vault.
Microsoft Entra passthrough uses permissions assigned to you as a user in Microsoft Entra ID, rather than permissions assigned to Synapse or a separate service principal. For example, if you want to use Microsoft Entra passthrough to access a blob in a storage account, then you should go to that storage account and assign blob contributor role to yourself.
df.show()
::: zone-end
-#### ADLS Gen2 storage (without linked services)
+#### ADLS Gen2 storage without linked services
-Connect to ADLS Gen2 storage directly by using a SAS key use the **ConfBasedSASProvider** and provide the SAS key to the **spark.storage.synapse.sas** configuration setting.
+Connect to ADLS Gen2 storage directly by using a SAS key. Use the `ConfBasedSASProvider` and provide the SAS key to the `spark.storage.synapse.sas` configuration setting. SAS tokens can be set at the container level, at the account level, or globally. We don't recommend setting SAS keys at the global level, because the job won't be able to read or write from more than one storage account.
+
+**SAS configuration per storage container**
++
+```scala
+%%spark
+sc.hadoopConfiguration.set("fs.azure.account.auth.type.<ACCOUNT>.dfs.core.windows.net", "SAS")
+sc.hadoopConfiguration.set("fs.azure.sas.token.provider.type", "com.microsoft.azure.synapse.tokenlibrary.ConfBasedSASProvider")
+spark.conf.set("spark.storage.synapse.<CONTAINER>.<ACCOUNT>.dfs.core.windows.net.sas", "<SAS KEY>")
+
+val df = spark.read.csv("abfss://<CONTAINER>@<ACCOUNT>.dfs.core.windows.net/<FILE PATH>")
+
+display(df.limit(10))
+```
+++
+```python
+%%pyspark
+
+sc._jsc.hadoopConfiguration().set("fs.azure.account.auth.type.<ACCOUNT>.dfs.core.windows.net", "SAS")
+sc._jsc.hadoopConfiguration().set("fs.azure.sas.token.provider.type", "com.microsoft.azure.synapse.tokenlibrary.ConfBasedSASProvider")
+spark.conf.set("spark.storage.synapse.<CONTAINER>.<ACCOUNT>.dfs.core.windows.net.sas", "<SAS KEY>")
+
+df = spark.read.csv('abfss://<CONTAINER>@<ACCOUNT>.dfs.core.windows.net/<FILE PATH>')
+
+display(df.limit(10))
+```
++
+**SAS configuration per storage account**
++
+```scala
+%%spark
+sc.hadoopConfiguration.set("fs.azure.account.auth.type.<ACCOUNT>.dfs.core.windows.net", "SAS")
+sc.hadoopConfiguration.set("fs.azure.sas.token.provider.type", "com.microsoft.azure.synapse.tokenlibrary.ConfBasedSASProvider")
+spark.conf.set("spark.storage.synapse.<ACCOUNT>.dfs.core.windows.net.sas", "<SAS KEY>")
+
+val df = spark.read.csv("abfss://<CONTAINER>@<ACCOUNT>.dfs.core.windows.net/<FILE PATH>")
+
+display(df.limit(10))
+```
+++
+```python
+%%pyspark
+
+sc._jsc.hadoopConfiguration().set("fs.azure.account.auth.type.<ACCOUNT>.dfs.core.windows.net", "SAS")
+sc._jsc.hadoopConfiguration().set("fs.azure.sas.token.provider.type", "com.microsoft.azure.synapse.tokenlibrary.ConfBasedSASProvider")
+spark.conf.set("spark.storage.synapse.<ACCOUNT>.dfs.core.windows.net.sas", "<SAS KEY>")
+
+df = spark.read.csv('abfss://<CONTAINER>@<ACCOUNT>.dfs.core.windows.net/<FILE PATH>')
+
+display(df.limit(10))
+```
++
+**SAS configuration of all storage accounts**
::: zone pivot = "programming-language-scala"
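+The following block is a sketch for the global case. It follows the per-account pattern above and assumes, per the naming earlier in this section, that the global SAS setting key is `spark.storage.synapse.sas` and that the auth type is set without an account suffix; verify both keys against your Synapse runtime.
+
+```scala
+%%spark
+// Assumed global form of the auth setting (no account suffix) - verify for your runtime.
+sc.hadoopConfiguration.set("fs.azure.account.auth.type", "SAS")
+sc.hadoopConfiguration.set("fs.azure.sas.token.provider.type", "com.microsoft.azure.synapse.tokenlibrary.ConfBasedSASProvider")
+// A global SAS key applies to every storage account the job touches.
+spark.conf.set("spark.storage.synapse.sas", "<SAS KEY>")
+
+val df = spark.read.csv("abfss://<CONTAINER>@<ACCOUNT>.dfs.core.windows.net/<FILE PATH>")
+
+display(df.limit(10))
+```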
While Azure Synapse Analytics supports a variety of linked service connections (
| Azure Database for MySQL | `AzureOSSDB` |
| Azure Database for MariaDB | `AzureOSSDB` |
| Azure Database for PostgreSQL | `AzureOSSDB` |
+
#### Unsupported linked service access from the Spark runtime

The following methods of accessing the linked services are not supported from the Spark runtime:

- Passing arguments to parameterized linked service
- Connections with user-assigned managed identities (UAMI)
+ - System-assigned managed identities aren't supported on the Key Vault resource
+ - For Azure Cosmos DB connections, only key-based access is supported. Token-based access isn't supported.
-While running a notebook or a Spark job, requests to get a token / secret using a linked service may fail with an error message that indicates 'BadRequest'. This is often caused by a configuration issue with the linked service. If you see this error message, please check the configuration of your linked service. If you have any questions, please contact Microsoft Azure Support at the [Azure portal](https://portal.azure.com).
+While running a notebook or a Spark job, requests to get a token or secret using a linked service might fail with an error message that indicates 'BadRequest'. This error is often caused by a configuration issue with the linked service. If you see this error message, check the configuration of your linked service. If you have any questions, contact Microsoft Azure Support at the [Azure portal](https://portal.azure.com).
-## Next steps
+## Related content
- [Write to dedicated SQL pool](./synapse-spark-sql-pool-import-export.md)
+- [Apache Spark in Azure Synapse Analytics](apache-spark-overview.md)
+- [Introduction to Microsoft Spark Utilities](microsoft-spark-utilities.md)
synapse-analytics What Is A Data Warehouse Unit Dwu Cdwu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/what-is-a-data-warehouse-unit-dwu-cdwu.md
Title: Data Warehouse Units (DWUs) for dedicated SQL pool (formerly SQL DW)
description: Recommendations on choosing the ideal number of data warehouse units (DWUs) to optimize price and performance, and how to change the number of units. Previously updated : 11/22/2019 Last updated : 10/30/2023 - # Data Warehouse Units (DWUs) for dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics
Increasing DWUs:
## Service Level Objective
-The Service Level Objective (SLO) is the scalability setting that determines the cost and performance level of your data warehouse. The service levels for Gen2 are measured in compute data warehouse units (cDWU), for example DW2000c. Gen1 service levels are measured in DWUs, for example DW2000.
- The Service Level Objective (SLO) is the scalability setting that determines the cost and performance level of your dedicated SQL pool (formerly SQL DW). The service levels for Gen2 dedicated SQL pool (formerly SQL DW) are measured in data warehouse units (DWU), for example DW2000c. > [!NOTE]
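
For example, you can change the service level of an existing dedicated SQL pool (formerly SQL DW) with the `Set-AzSqlDatabase` cmdlet from the Az.Sql PowerShell module. This is a sketch; the resource names below are placeholders.

```powershell
# Scale the dedicated SQL pool (formerly SQL DW) to service level DW300c.
# Resource names are placeholders; replace them with your own.
Set-AzSqlDatabase -ResourceGroupName "myResourceGroup" `
    -ServerName "myserver" `
    -DatabaseName "mySampleDataWarehouse" `
    -RequestedServiceObjectiveName "DW300c"
```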
time-series-insights How To Monitor Tsi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/how-to-monitor-tsi.md
Start with the article [Monitoring Azure resources with Azure Monitor](../azure-
The following sections build on this article by describing the specific data gathered for Azure Time Series Insights. These sections also provide examples for configuring data collection and analyzing this data with Azure tools. > [!TIP]
-> To understand costs associated with Azure Monitor, see [Usage and estimated costs](../azure-monitor//usage-estimated-costs.md). To understand the time it takes for your data to appear in Azure Monitor, see [Log data ingestion time](../azure-monitor/logs/data-ingestion-time.md).
+> To understand costs associated with Azure Monitor, see [Azure Monitor cost and usage](../azure-monitor/cost-usage.md). To understand the time it takes for your data to appear in Azure Monitor, see [Log data ingestion time](../azure-monitor/logs/data-ingestion-time.md).
## Monitoring data from Azure Time Series Insights
virtual-desktop Insights Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/insights-costs.md
Learn more about Azure Virtual Desktop Insights at these articles:
- [Use Azure Virtual Desktop Insights to monitor your deployment](insights.md). - Use the [glossary](insights-glossary.md) to learn more about terms and concepts. - If you encounter a problem, check out our [troubleshooting guide](troubleshoot-insights.md) for help.-- Check out [Monitoring usage and estimated costs in Azure Monitor](../azure-monitor/usage-estimated-costs.md) to learn more about managing your monitoring costs.
+- Check out [Azure Monitor cost and usage](../azure-monitor/cost-usage.md) to learn more about managing your monitoring costs.
virtual-desktop Service Architecture Resilience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/service-architecture-resilience.md
You're responsible for creating and managing session hosts, including any operat
This high-level diagram shows the components and responsibilities: ## User connections
The feed discovery process is as follows:
Here's a high-level diagram showing the feed discovery process in a single Azure region:
- :::image type="content" source="media/service-architecture/feed-discovery-single.svg" border="false" alt-text="A diagram showing the feed discovery process in a single Azure region." lightbox="media/service-architecture/feed-discovery-single.svg":::
+ :::image type="content" source="media/service-architecture-resilience/feed-discovery-single.svg" border="false" alt-text="A diagram showing the feed discovery process in a single Azure region." lightbox="media/service-architecture-resilience/feed-discovery-single.svg":::
The geographical database only contains the information required for desktops and apps from host pools in the same Azure regions covered by the geography. If the user is assigned to desktops or apps from a host pool that is covered by a different geography, the resource directory tells the web service to connect to the broker service and geographical database in the correct Azure region. Here's a high-level diagram showing the feed discovery process for a host pool in an Azure region that's covered by a different geography:
- :::image type="content" source="media/service-architecture/feed-discovery-multi.svg" border="false" alt-text="A diagram showing the feed discovery process for a host pool in an Azure region that's covered by a different geography." lightbox="media/service-architecture/feed-discovery-multi.svg":::
+ :::image type="content" source="media/service-architecture-resilience/feed-discovery-multi.svg" border="false" alt-text="A diagram showing the feed discovery process for a host pool in an Azure region that's covered by a different geography." lightbox="media/service-architecture-resilience/feed-discovery-multi.svg":::
### RDP connection
When a user connects to a desktop or app from their feed, the RDP connection is
Here's a high-level diagram showing the RDP connection process: > [!TIP] > You can find more detailed technical information about network connectivity at [Understanding Azure Virtual Desktop network connectivity](network-connectivity.md) and [RDP Shortpath for Azure Virtual Desktop](rdp-shortpath.md).
The Microsoft-managed components of Azure Virtual Desktop are currently located
Here's a high-level diagram showing how the Microsoft-managed components are interconnected: The other Azure services on which Azure Virtual Desktop relies are themselves designed to be resilient and reliable. For more information, see [Azure Traffic Manager](../traffic-manager/traffic-manager-overview.md) and [Azure Front Door](../frontdoor/front-door-overview.md).
Azure Virtual Desktop is a service that can help organizations adapt to the dema
Here's a map demonstrating the global reach of Azure Virtual Desktop: ## Related content
virtual-desktop Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new.md
Title: What's new in Azure Virtual Desktop? - Azure
description: New features and product updates for Azure Virtual Desktop. Previously updated : 10/09/2023 Last updated : 10/31/2023
Make sure to check back here often to keep up with new updates.
> [!TIP] > See [What's new in documentation](whats-new-documentation.md), where we highlight new and updated articles for Azure Virtual Desktop.
+## October 2023
+
+Here's what changed in October 2023:
+
+### New article about Azure Virtual Desktop service architecture and resilience
+
+We've published a new article about the service architecture for Azure Virtual Desktop and how it provides a resilient, reliable, and secure service for organizations and users. Most components are Microsoft-managed, but some are customer-managed.
+
+You can learn more at [Azure Virtual Desktop service architecture and resilience](service-architecture-resilience.md).
+
+### OneDrive with RemoteApp in preview
+
+You can now use Microsoft OneDrive alongside a RemoteApp in preview. You can use this feature to access and synchronize your files while using a RemoteApp. When you connect to a RemoteApp, OneDrive can automatically launch as a companion to the RemoteApp.
+
+For more information about prerequisites and configuration, see [Use Microsoft OneDrive with a RemoteApp in Azure Virtual Desktop (preview)](onedrive-remoteapp.md).
+
+### Administrative template for FSLogix now available in Intune settings catalog
+
+The [administrative template for FSLogix](/fslogix/how-to-use-group-policy-templates) is now available in the [Intune settings catalog](/mem/intune/configuration/administrative-templates-windows). This template enables you to configure FSLogix settings centrally for [session hosts that are enrolled in Intune](management.md#microsoft-intune).
+
+## September 2023
+
+Here's what changed in September 2023:
+
+### Azure Virtual Desktop (classic) deprecation
+
+Azure Virtual Desktop (classic) now blocks users from creating new tenants. Customers should deploy the current version of Azure Virtual Desktop for any new workloads. Although Azure Virtual Desktop (classic) blocks new tenants, you can still access all other ongoing operation and management processes. Support for Azure Virtual Desktop (classic) ends in September 2026, so we highly recommend you migrate from classic to Azure Virtual Desktop before then.
+
+For more information about the Azure Virtual Desktop (classic) retirement, see [Azure Virtual Desktop (classic) retirement](./virtual-desktop-fall-2019/classic-retirement.md).
+
+### Updates to Azure Virtual Desktop overview page in the Azure portal
+
+We've updated the overview page in the Azure Virtual Desktop administrator portal to include new visuals and tile links. These updates make it easier to navigate to documentation, find the forums for collaboration and discussion, submit feedback, and locate release notes for Azure Virtual Desktop.
+
+### The latest version of FSLogix is now included in Windows Enterprise multi-session images
+
+We added the latest version of FSLogix to Windows 10 and 11 Enterprise multi-session images in the Azure Marketplace. As of September 12, 2023, all images come preinstalled with the latest version of FSLogix.
+
+For more information about what's new in FSLogix, see the [FSLogix Release Notes](/fslogix/overview-release-notes?context=%2Fazure%2Fvirtual-desktop%2Fcontext%2Fcontext).
+
+### Azure Virtual Desktop Insights support for the Azure Monitor Agent is now generally available
+
+Azure Virtual Desktop Insights is a dashboard built on Azure Monitor workbooks that helps you understand your Azure Virtual Desktop environments. Azure Virtual Desktop Insights support for the Azure Monitor agent is now generally available. For more information, see [Use Azure Virtual Desktop Insights to monitor your deployment](insights.md?tabs=monitor).
+
+The Log Analytics agent for Azure Monitor will be retired on August 31, 2024. We recommend you migrate monitoring of your virtual machines (VMs) and servers to Azure Monitor Agent before that date. For more information about how to migrate, see [Migrate to Azure Monitor Agent from Log Analytics agent](../azure-monitor/agents/azure-monitor-agent-migration.md).
+
+### Custom Image Template feature is now generally available
+
+Azure Virtual Desktop just made it easier for you to create your golden image with the new Custom Image Template feature. You can use this new management option in the Azure portal to include built-in or custom scripts in your template that you can reuse. For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/announcing-general-availability-of-azure-virtual-desktop-custom/ba-p/3909907).
+
+## August 2023
+
+Here's what changed in August 2023:
+
+### Updated Group Policy templates for FSLogix
+
+The [FSLogix 2210 hotfix 2](/fslogix/overview-release-notes#fslogix-2210-hotfix-2-29861260056) release includes updates to the Group Policy templates. Before this release, the Group Policy template files had some unique behaviors that made it difficult to find the correct policy name based on the list of configuration settings for Profiles, Office Data File Containers (ODFC), and Cloud Cache.
+
+For more information about FSLogix Group Policy Template Files, see [How to Use FSLogix Group Policy Template Files for FSLogix](/fslogix/how-to-use-group-policy-templates).
+
+### Improvements in custom image templates
+
+We've updated the text, tooltips, and links for custom image templates in the Azure portal to make them easier to use. You can also now go to the built-in customization settings and remove Clipchamp through the Remove AppX package list.
+
+We built the custom image templates feature using [Azure Image Builder](../virtual-machines/image-builder-overview.md) for you to use with Azure Virtual Desktop. For more information, see [Custom image templates](custom-image-templates.md).
+ ## July 2023 Here's what changed in July 2023:
virtual-desktop Windows 11 Language Packs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/windows-11-language-packs.md
Title: Install language packs on Windows 11 Enterprise VMs in Azure Virtual Desk
description: How to install language packs for Windows 11 Enterprise VMs in Azure Virtual Desktop. Previously updated : 08/23/2022 Last updated : 10/20/2023
Before you can add languages to a Windows 11 Enterprise VM, you'll need to have
- A Language and Optional Features ISO and Inbox Apps ISO of the OS version the image uses. You can download them here: - Language and Optional Features ISO: - [Windows 11, version 21H2 Language and Optional Features ISO](https://software-download.microsoft.com/download/sg/22000.1.210604-1628.co_release_amd64fre_CLIENT_LOF_PACKAGES_OEM.iso)
- - [Windows 11, version 22H2 Language and Optional Features ISO](https://software-static.download.prss.microsoft.com/dbazure/988969d5-f34g-4e03-ac9d-1f9786c66749/22621.1.220506-1250.ni_release_amd64fre_CLIENT_LOF_PACKAGES_OEM.iso)
+ - [Windows 11, version 22H2 and 23H2 Language and Optional Features ISO](https://software-static.download.prss.microsoft.com/dbazure/988969d5-f34g-4e03-ac9d-1f9786c66749/22621.1.220506-1250.ni_release_amd64fre_CLIENT_LOF_PACKAGES_OEM.iso)
- Inbox Apps ISO: - [Windows 11, version 21H2 Inbox Apps ISO](https://software-static.download.prss.microsoft.com/dbazure/888969d5-f34g-4e03-ac9d-1f9786c66749/22000.2003.230512-1746.co_release_svc_prod3_amd64fre_InboxApps.iso)
- - [Windows 11, version 22H2 Inbox Apps ISO](https://software-static.download.prss.microsoft.com/dbazure/888969d5-f34g-4e03-ac9d-1f9786c66749/22621.1778.230511-2102.ni_release_svc_prod3_amd64fre_InboxApps.iso)
+ - [Windows 11, version 22H2 and 23H2 Inbox Apps ISO](https://software-static.download.prss.microsoft.com/dbazure/888969d5-f34g-4e03-ac9d-1f9786c66749/22621.2501.231009-1937.ni_release_svc_prod3_amd64fre_InboxApps.iso)
- An Azure Files share or a file share on a Windows File Server VM >[!NOTE]
virtual-network-manager Concept Security Admins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-security-admins.md
# Security admin rules in Azure Virtual Network Manager
-In this article, you'll learn about security admin rules in Azure Virtual Network Manager. Security admin rules are used to define global network security rules that apply to all virtual networks within a [network group](concept-network-groups.md). You learn about what security admin rules are, how they work, and when to use them.
+In this article, you learn about security admin rules in Azure Virtual Network Manager. Security admin rules are used to define global network security rules that apply to all virtual networks within a [network group](concept-network-groups.md). You learn about what security admin rules are, how they work, and when to use them.
[!INCLUDE [virtual-network-manager-preview](../../includes/virtual-network-manager-preview.md)] ## What is a security admin rule?
-Security admin rules are global network security rules that enforce security policies defined in the rule collection on virtual networks. These rules can be used to Allow, Always Allow, or Deny traffic across virtual networks within your targeted network groups. These network groups can only consist of virtual networks within the scope of your network manager instance; thus, security admin rules cannot apply to virtual networks not managed by a network manager.
+Security admin rules are global network security rules that enforce security policies defined in the rule collection on virtual networks. These rules can be used to Allow, Always Allow, or Deny traffic across virtual networks within your targeted network groups. These network groups can only consist of virtual networks within the scope of your network manager instance; thus, security admin rules can't apply to virtual networks not managed by a network manager.
Here are some scenarios where security admin rules can be used:
Based on the industry study and suggestions from Microsoft, we recommend custome
| **Port** | **Protocol** | **Description** | | | - | - |
-| **20** | TCP | Unencrypted FTP Traffic |
-| **21** | TCP | Unencrypted FTP Traffic |
-| **22** | TCP | SSH. Potential brute force attacks |
-| **23** | TCP | TFTP allows unauthenticated and/or unencrypted traffic |
-| **69** | UDP | TFTP allows unauthenticated and/or unencrypted traffic |
-| **111** | TCP/UDP | RPC. Unencrypted authentication allowed |
-| **119** | TCP | NNTP for unencrypted authentication |
-| **135** | TCP/UDP | End Point Mapper, multiple remote management services |
-| **161** | TCP | SNMP for unsecure / no authentication |
-| **162** | TCP/UDP | SNMP Trap - unsecure / no authentication |
-| **445** | TCP | SMB - well known attack vector |
-| **512** | TCP | Rexec on Linux - remote commands without encryption authentication |
-| **514** | TCP | Remote Shell - remote commands without authentication or encryption |
-| **593** | TCP/UDP | HTTP RPC EPMAP - unencrypted remote procedure call |
-| **873** | TCP | Rsync - unencrypted file transfer |
-| **2049** | TCP/UDP | Network File System |
-| **3389** | TCP | RDP - Common brute force attack port |
-| **5800** | TCP | VNC Remote Frame Buffer over HTTP |
-| **5900** | TCP | VNC Remote Frame Buffer over HTTP |
+| **20** | TCP | Unencrypted FTP Traffic |
+| **21** | TCP | Unencrypted FTP Traffic |
+| **22** | TCP | SSH. Potential brute force attacks |
+| **23** | TCP | TFTP allows unauthenticated and/or unencrypted traffic |
+| **69** | UDP | TFTP allows unauthenticated and/or unencrypted traffic |
+| **111** | TCP/UDP | RPC. Unencrypted authentication allowed |
+| **119** | TCP | NNTP for unencrypted authentication |
+| **135** | TCP/UDP | End Point Mapper, multiple remote management services |
+| **161** | TCP | SNMP for unsecure / no authentication |
+| **162** | TCP/UDP | SNMP Trap - unsecure / no authentication |
+| **445** | TCP | SMB - well known attack vector |
+| **512** | TCP | Rexec on Linux - remote commands without encryption authentication |
+| **514** | TCP | Remote Shell - remote commands without authentication or encryption |
+| **593** | TCP/UDP | HTTP RPC EPMAP - unencrypted remote procedure call |
+| **873** | TCP | Rsync - unencrypted file transfer |
+| **2049** | TCP/UDP | Network File System |
+| **3389** | TCP | RDP - Common brute force attack port |
+| **5800** | TCP | VNC Remote Frame Buffer over HTTP |
+| **5900** | TCP | VNC Remote Frame Buffer over HTTP |
| **11211** | UDP | Memcached | ### Management at scale
New resources are protected along with existing resources. For example, if you a
When new security risks are identified, you can deploy them at scale by creating a security admin rule to protect against the new risk and applying it to your network groups. Once this new rule is deployed, all resources in the scope of the network groups will be protected now and in the future.
+## Nonapplication of security admin rules
+
+In most instances, security admin rules are applied to all virtual networks and subnets within the scope of a network group's applied security configuration. However, there are some services that don't apply security admin rules due to the network requirements of the service. These requirements are enforced by the service's network intent policy.
+
+### Nonapplication of security admin rules at virtual network level
+
+By default, security admin rules aren't applied to a virtual network containing any of the following services:
+
+- [Azure SQL Managed Instances](/azure/azure-sql/managed-instance/connectivity-architecture-overview#mandatory-security-rules-with-service-aided-subnet-configuration)
+- Azure Databricks
+
+When a virtual network contains these services, the security admin rules skip this virtual network. If you want *Allow* rules applied to this virtual network, create your security configuration with the `AllowRulesOnly` field set in the [securityConfiguration.properties.applyOnNetworkIntentPolicyBasedServices](/dotnet/api/microsoft.azure.management.network.models.networkintentpolicybasedservice?view=azure-dotnet) .NET class. When set, *Allow* rules in your security configuration are applied to this virtual network, and *Deny* rules aren't. Virtual networks without these services can continue using *Allow* and *Deny* rules.
+
+You can create a security configuration with *Allow* rules only and deploy it to your virtual networks with [Azure PowerShell](/powershell/module/az.network/new-aznetworkmanagersecurityadminconfiguration#example-1) and [Azure CLI](/cli/azure/network/manager/security-admin-config#az-network-manager-security-admin-config-create-examples).
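+
+As a sketch based on the linked PowerShell reference, an Allow-rules-only configuration might look like the following. The resource names are placeholders, and you should verify the parameter name against your installed Az.Network module version.
+
+```powershell
+# Placeholder names; replace with your own resource group, network manager, and configuration name.
+New-AzNetworkManagerSecurityAdminConfiguration `
+    -ResourceGroupName "myResourceGroup" `
+    -NetworkManagerName "myNetworkManager" `
+    -Name "myAllowOnlyConfig" `
+    -ApplyOnNetworkIntentPolicyBasedService @("AllowRulesOnly")
+```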
+
+> [!NOTE]
+> When multiple Azure Virtual Network Manager instances apply different settings in the `securityConfiguration.properties.applyOnNetworkIntentPolicyBasedServices` class to the same virtual network, the setting of the network manager instance with the highest scope will be used.
+> Let's say you have two virtual network managers. The first network manager is scoped to the root management group and has a security configuration set to *AllowRulesOnly* in the `securityConfiguration.properties.applyOnNetworkIntentPolicyBasedServices` class. The second virtual network manager is scoped to a subscription under the root management group and uses the default value of *None* in its security configuration. When both configurations apply security admin rules to the same virtual network, the *AllowRulesOnly* setting is applied to the virtual network.
+
+### Nonapplication of security admin rules at subnet level
+
+Similarly, there are some services that don't apply security admin rules at the subnet level when those subnets' virtual networks are a part of a network group targeted by a security admin configuration. Those services include:
+
+- Azure Application Gateway
+- Azure Bastion
+- Azure Firewall
+- Azure Route Server
+- Azure VPN Gateway
+- Azure Virtual WAN
+- Azure ExpressRoute Gateway
+
+In this case, security admin rules aren't applied to the resources in the subnet with these services; however, security admin rules are still applied to other subnets.
+
+> [!NOTE]
+> If you want to apply security admin rules on subnets containing an Azure Application Gateway, ensure each subnet only contains gateways that have been provisioned with [network isolation](../application-gateway/application-gateway-private-deployment.md) enabled. If a subnet contains an Azure Application Gateway without network isolation, security admin rules won't be applied to this subnet.
+ ## Security admin fields When you define a security admin rule, there are required and optional fields. + ### Required fields #### Priority
virtual-network-manager Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/faq.md
You can view Azure Virtual Network Manager settings under **Network Manager** fo
Should a regional outage occur, all configurations applied to current resources managed persist, and you can't modify existing configurations, or create new configuration.
-### Can a virtual network managed by Azure Virtual Network Manager be peered to a non-managed virtual network?
+### Can a virtual network managed by Azure Virtual Network Manager be peered to a nonmanaged virtual network?
Yes, Azure Virtual Network Manager is fully compatible with pre-existing hub and spoke topology deployments using peering. This means that you won't need to delete any existing peered connections between the spokes and the hub. The migration occurs without any downtime to your network. ### Can I migrate an existing hub and spoke topology to Azure Virtual Network Manager?
-Yes, migrating existing VNets to AVNMΓÇÖs hub and spoke topology is very easy and requires no down time. Customers can [create a hub and spoke topology connectivity configuration](how-to-create-hub-and-spoke.md) of the desired topology. When the deployment of this configuration is deployed, virtual network manager will automatically create the necessary peerings. Any pre-existing peerings set up by users will remain intact, ensuring there's no downtime.
+Yes, migrating existing VNets to AVNM's hub and spoke topology is easy and requires no downtime. Customers can [create a hub and spoke topology connectivity configuration](how-to-create-hub-and-spoke.md) of the desired topology. When this configuration is deployed, Virtual Network Manager automatically creates the necessary peerings. Any pre-existing peerings set up by users remain intact, ensuring there's no downtime.
### How do connected groups differ from virtual network peering regarding establishing connectivity between virtual networks?
-In Azure, VNet peering and connected groups are two methods of establishing connectivity between virtual networks (VNets). While VNet peering works by creating a 1:1 mapping between each peered VNet, connected groups use a new construct that establishes connectivity without such a mapping. In a connected group, all virtual networks are connected without individual peering relationships. For example, if VNetA, VNetB, and VNetC are part of the same connected group, connectivity is enabled between each VNet without the need for individual peering relationships.
+In Azure, virtual network peering and connected groups are two methods of establishing connectivity between virtual networks (VNets). While virtual network peering works by creating a 1:1 mapping between each peered virtual network, connected groups use a new construct that establishes connectivity without such a mapping. In a connected group, all virtual networks are connected without individual peering relationships. For example, if VNetA, VNetB, and VNetC are part of the same connected group, connectivity is enabled between each virtual network without the need for individual peering relationships.
### Do security admin rules apply to Azure Private Endpoints?
No, an Azure Virtual WAN hub isn't supported as the hub in a hub and spoke topol
### My Virtual Network isn't getting the configurations I'm expecting. How do I troubleshoot?
-#### Have you deployed your configuration to the VNet's region?
+#### Have you deployed your configuration to the virtual network's region?
Configurations in Azure Virtual Network Manager don't take effect until they're deployed. Make a deployment to the virtual networks region with the appropriate configurations.
Configurations in Azure Virtual Network Manager don't take effect until they're
A network manager is only delegated enough access to apply configurations to virtual networks within your scope. Even if a resource is in your network group but out of scope, it doesn't receive any configurations.
-#### Are you applying security rules to a VNet containing Azure SQL Managed Instances?
+#### Are you applying security rules to a virtual network containing Azure SQL Managed Instances?
Azure SQL Managed Instance has some network requirements. These are enforced through high priority Network Intent Policies, whose purpose conflicts with Security Admin Rules. By default, Admin rule application is skipped on VNets containing any of these Intent Policies. Since *Allow* rules pose no risk of conflict, you can opt to apply *Allow Only* rules. If you only wish to use Allow rules, you can set AllowRulesOnly on `securityConfiguration.properties.applyOnNetworkIntentPolicyBasedServices`.
+#### Are you applying security rules to a virtual network or subnet that contains services blocking security configuration rules?
+
+Certain services, such as Azure SQL Managed Instance, Azure Databricks, and Azure Application Gateway, have specific network requirements to function properly. By default, security admin rule application is skipped on [VNets and subnets containing any of these services](./concept-security-admins.md#nonapplication-of-security-admin-rules). Since *Allow* rules pose no risk of conflict, you can opt to apply *Allow Only* rules by setting the `AllowRulesOnly` field in the security configuration's `securityConfiguration.properties.applyOnNetworkIntentPolicyBasedServices` .NET class.
+ ## Limits ### What are the service limitations of Azure Virtual Network Manager?
Azure SQL Managed Instance has some network requirements. These are enforced thr
* Azure Virtual Network Manager policies don't support the standard policy compliance evaluation cycle. For more information, see [Evaluation triggers](../governance/policy/how-to/get-compliance-data.md#evaluation-triggers).
-* The current preview of connected group has a limitation where traffic from a connected group cannot communicate with a private endpoint in this connected group if it has NSG enabled on it. However, this limitation will be removed once the feature is generally available.
+* The current preview of connected groups has a limitation where resources in a connected group can't communicate with a private endpoint in the connected group if the private endpoint has an NSG enabled on it. This limitation will be removed once the feature is generally available.
## Next steps Create an [Azure Virtual Network Manager](create-virtual-network-manager-portal.md) instance using the Azure portal.
virtual-wan Create Bgp Peering Hub Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/create-bgp-peering-hub-portal.md
description: Learn how to create a BGP peering with Virtual WAN hub router.
Previously updated : 09/06/2022 Last updated : 10/30/2023
vpn-gateway Vpn Gateway Vpn Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-vpn-faq.md
No.
A VPN gateway is a type of virtual network gateway. A VPN gateway sends encrypted traffic between your virtual network and your on-premises location across a public connection. You can also use a VPN gateway to send traffic between virtual networks. When you create a VPN gateway, you use the -GatewayType value 'Vpn'. For more information, see [About VPN Gateway configuration settings](vpn-gateway-about-vpn-gateway-settings.md).
+### Can I create new virtual network gateways in the classic deployment model?
+
+No. The classic deployment model is a legacy deployment model. You can't create new classic virtual network gateways. Existing classic virtual network gateways are supported until August 31, 2024. To migrate to the Resource Manager deployment model, see [VPN Gateway classic to Resource Manager migration](vpn-gateway-classic-resource-manager-migration.md).
+ ### Why can't I specify policy-based and route-based VPN types? As of Oct 1, 2023, you no longer need to specify VPN type. All new VPN gateways will automatically be created as route-based gateways. If you already have a policy-based gateway, you don't need to upgrade your gateway to route-based.