Updates from: 02/15/2024 02:13:08
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/best-practices.md
Previously updated : 01/11/2024 Last updated : 02/05/2024
The following best practices and recommendations cover some of the primary aspec
| Best practice | Description |
|--|--|
+| Create emergency access account | This emergency access account helps you gain access to your Azure AD B2C tenant in circumstances such as the only administrator is unreachable when the credential is needed. [Learn how to create an emergency access account](tenant-management-emergency-access-account.md#create-emergency-access-account) |
| Choose user flows for most scenarios | The Identity Experience Framework of Azure AD B2C is the core strength of the service. Policies fully describe identity experiences such as sign-up, sign-in, or profile editing. To help you set up the most common identity tasks, the Azure AD B2C portal includes predefined, configurable policies called user flows. With user flows, you can create great user experiences in minutes, with just a few clicks. [Learn when to use user flows vs. custom policies](user-flow-overview.md#comparing-user-flows-and-custom-policies).|
| App registrations | Every application (web, native) and API that is being secured must be registered in Azure AD B2C. If an app has both a web and native version of iOS and Android, you can register them as one application in Azure AD B2C with the same client ID. Learn how to [register OIDC, SAML, web, and native apps](./tutorial-register-applications.md?tabs=applications). Learn more about [application types that can be used in Azure AD B2C](./application-types.md). |
| Move to monthly active users billing | Azure AD B2C has moved from monthly active authentications to monthly active users (MAU) billing. Most customers will find this model cost-effective. [Learn more about monthly active users billing](https://azure.microsoft.com/updates/mau-billing/). |
ai-services Video Retrieval https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/video-retrieval.md
Title: Do video retrieval using vectorization - Image Analysis 4.0
-description: Learn how to call the Spatial Analysis Video Retrieval APIs to vectorize video frames and search terms.
+description: Learn how to call the Video Retrieval APIs to vectorize video frames and search terms.
#
# Do video retrieval using vectorization (version 4.0 preview)
-Azure AI Spatial Analysis Video Retrieval APIs are part of Azure AI Vision and enable developers to create an index, add documents (videos and images) to it, and search with natural language. Developers can define metadata schemas for each index and ingest metadata to the service to help with retrieval. Developers can also specify what features to extract from the index (vision, speech) and filter their search based on features.
+Azure AI Video Retrieval APIs are part of Azure AI Vision and enable developers to create an index, add documents (videos and images) to it, and search with natural language. Developers can define metadata schemas for each index and ingest metadata to the service to help with retrieval. Developers can also specify what features to extract from the index (vision, speech) and filter their search based on features.
## Prerequisites
Azure AI Spatial Analysis Video Retrieval APIs are part of Azure AI Vision and e
## Input requirements
-### Supported formats
-
-| File format | Description |
-| -- | -- |
-| `asf` | ASF (Advanced / Active Streaming Format) |
-| `avi` | AVI (Audio Video Interleaved) |
-| `flv` | FLV (Flash Video) |
-| `matroskamm`, `webm` | Matroska / WebM |
-| `mov`,`mp4`,`m4a`,`3gp`,`3g2`,`mj2` | QuickTime / MOV |
-
-### Supported video codecs
-
-| Codec | Format |
-| -- | -- |
-| `h264` | H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 |
-| `h265` | H.265/HEVC |
-| `libvpx-vp9` | libvpx VP9 (codec vp9) |
-| `mpeg4` | MPEG-4 part 2 |
-
-### Supported audio codecs
-
-| Codec | Format |
-| -- | -- |
-| `aac` | AAC (Advanced Audio Coding) |
-| `mp3` | MP3 (MPEG audio layer 3) |
-| `pcm` | PCM (uncompressed) |
-| `vorbis` | Vorbis |
-| `wmav2` | Windows Media Audio 2 |
## Call the Video Retrieval APIs
-To use the Spatial Analysis Video Retrieval APIs in a typical pattern, you would do the following steps:
+To use the Video Retrieval APIs in a typical pattern, you would do the following steps:
1. Create an index using **PUT - Create an index**.
2. Add video documents to the index using **PUT - CreateIngestion**.
To use the Spatial Analysis Video Retrieval APIs in a typical pattern, you would
### Use Video Retrieval APIs for metadata-based search
-The Spatial Analysis Video Retrieval APIs allows a user to add metadata to video files. Metadata is additional information associated with video files such as "Camera ID," "Timestamp," or "Location" that can be used to organize, filter, and search for specific videos. This example demonstrates how to create an index, add video files with associated metadata, and perform searches using different features.
+The Video Retrieval APIs allow a user to add metadata to video files. Metadata is additional information associated with video files, such as "Camera ID," "Timestamp," or "Location," that can be used to organize, filter, and search for specific videos. This example demonstrates how to create an index, add video files with associated metadata, and perform searches using different features.
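The search call that this example builds toward might look like the following minimal sketch. The `:queryByText` route and the `queryText`, `stringFilters`, and `featureFilters` field names are assumptions based on the preview Video Retrieval API, and the index name and filter values are placeholders.

```bash
# Hedged sketch: search the index with natural language, filtered on the "cameraId" metadata field.
curl.exe -v -X POST "https://<YOUR_ENDPOINT_URL>/computervision/retrieval/indexes/my-video-index:queryByText?api-version=2023-05-01-preview" -H "Ocp-Apim-Subscription-Key: <YOUR_SUBSCRIPTION_KEY>" -H "Content-Type: application/json" --data-ascii "
{
  'queryText': 'person walking through the lobby',
  'filters': {
    'stringFilters': [
      {
        'fieldName': 'cameraId',
        'values': [ 'camera1' ]
      }
    ],
    'featureFilters': [ 'vision' ]
  }
}"
```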
### Step 1: Create an Index
ai-services Intro To Spatial Analysis Public Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/intro-to-spatial-analysis-public-preview.md
Title: What is Spatial Analysis?
+ Title: What is Video Analysis?
-description: This document explains the basic concepts and features of the Azure Spatial Analysis container.
+description: This document explains the basic concepts and features of Azure Spatial Analysis and Video Retrieval.
# Previously updated : 01/19/2024 Last updated : 02/12/2024+
-# What is Spatial Analysis?
+# What is Video Analysis?
-You can use Azure AI Vision Spatial Analysis to detect the presence and movements of people in video. Ingest video streams from cameras, extract insights, and generate events to be used by other systems. The service can do things like count the number of people entering a space or measure compliance with face mask and social distancing guidelines. By processing video streams from physical spaces, you're able to learn how people use them and maximize the space's value to your organization.
+Video Analysis includes video-related features like Spatial Analysis and Video Retrieval.
+
+## Spatial Analysis
+
+You can use Azure AI Vision Spatial Analysis to detect the presence and movements of people in video. Ingest video streams from cameras, extract insights, and generate events to be used by other systems. The service can do things like count the number of people entering a space or measure compliance with face mask and social distancing guidelines. By processing video streams from physical spaces, you're able to learn how people use them and maximize the space's value to your organization.
Try out the capabilities of Spatial Analysis quickly and easily in your browser using Vision Studio.

> [!div class="nextstepaction"]
> [Try Vision Studio](https://portal.vision.cognitive.azure.com/)
-<!--This documentation contains the following types of articles:
-* The [quickstarts](./quickstarts-sdk/analyze-image-client-library.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time.
-* The [how-to guides](./how-to/call-analyze-image.md) contain instructions for using the service in more specific or customized ways.
-* The [conceptual articles]() provide in-depth explanations of the service's functionality and features.
-* The [tutorials](./tutorials/storage-lab-tutorial.md) are longer guides that show you how to use this service as a component in broader business solutions.-->
-
-## What it does
-Spatial Analysis ingests video then detects people in the video. After people are detected, the system tracks the people as they move around over time then generates events as people interact with regions of interest. All operations give insights from a single camera's field of view.
### People counting

This operation counts the number of people in a specific zone over time using the PersonCount operation. It generates an independent count for each frame processed without attempting to track people across frames. This operation can be used to estimate the number of people in a space or generate an alert when a person appears.
Spatial Analysis can also be configured to detect if a person is wearing a prote
## Video Retrieval
-Spatial Analysis Video Retrieval is a service that lets you create a search index, add documents (videos and images) to it, and search with natural language. Developers can define metadata schemas for each index and ingest metadata to the service to help with retrieval. Developers can also specify what features to extract from the index (vision, speech) and filter their search based on features.
+Video Retrieval is a service that lets you create a search index, add documents (videos and images) to it, and search with natural language. Developers can define metadata schemas for each index and ingest metadata to the service to help with retrieval. Developers can also specify what features to extract from the index (vision, speech) and filter their search based on features.
-[Call the Video Retrieval APIs](./how-to/video-retrieval.md)
+> [!div class="nextstepaction"]
+> [Call the Video Retrieval APIs](./how-to/video-retrieval.md)
## Input requirements
+#### [Spatial Analysis](#tab/sa)
+
Spatial Analysis works on videos that meet the following requirements:
* The video must be in RTSP, rawvideo, MP4, FLV, or MKV format.
* The video codec must be H.264, HEVC(H.265), rawvideo, VP9, or MPEG-4.
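If a source file doesn't meet these requirements, one common approach is to transcode it with FFmpeg before use. This is a general-purpose sketch, not a step from this article; the input and output file names are placeholders.

```bash
# Transcode an arbitrary source clip to H.264 video and AAC audio in an MP4 container,
# a combination that satisfies the container and codec requirements listed above.
ffmpeg -i input.avi -c:v libx264 -c:a aac output.mp4
```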
-## Get started
+#### [Video Retrieval](#tab/vr)
+++
-Follow the [quickstart](spatial-analysis-container.md) to set up the Spatial Analysis container and begin analyzing video.
## Responsible use of Spatial Analysis technology
-To learn how to use Spatial Analysis technology responsibly, see the [transparency note](/legal/cognitive-services/computer-vision/transparency-note-spatial-analysis?context=%2fazure%2fcognitive-services%2fComputer-vision%2fcontext%2fcontext). Microsoft's transparency notes help you understand how our AI technology works and the choices system owners can make that influence system performance and behavior. They focus on the importance of thinking about the whole system including the technology, people, and environment.
+To learn how to use Spatial Analysis technology responsibly, see the [Transparency note](/legal/cognitive-services/computer-vision/transparency-note-spatial-analysis?context=%2fazure%2fcognitive-services%2fComputer-vision%2fcontext%2fcontext). Microsoft's transparency notes help you understand how our AI technology works and the choices system owners can make that influence system performance and behavior. They focus on the importance of thinking about the whole system including the technology, people, and environment.
## Next steps
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/overview.md
Azure's Azure AI Vision service gives you access to advanced algorithms that pro
| [Optical Character Recognition (OCR)](overview-ocr.md)|The Optical Character Recognition (OCR) service extracts text from images. You can use the new Read API to extract printed and handwritten text from photos and documents. It uses deep-learning-based models and works with text on various surfaces and backgrounds. These include business documents, invoices, receipts, posters, business cards, letters, and whiteboards. The OCR APIs support extracting printed text in [several languages](./language-support.md). Follow the [OCR quickstart](quickstarts-sdk/client-library.md) to get started.|
|[Image Analysis](overview-image-analysis.md)| The Image Analysis service extracts many visual features from images, such as objects, faces, adult content, and auto-generated text descriptions. Follow the [Image Analysis quickstart](quickstarts-sdk/image-analysis-client-library-40.md) to get started.|
| [Face](overview-identity.md) | The Face service provides AI algorithms that detect, recognize, and analyze human faces in images. Facial recognition software is important in many different scenarios, such as identification, touchless access control, and face blurring for privacy. Follow the [Face quickstart](quickstarts-sdk/identity-client-library.md) to get started. |
-| [Spatial Analysis](intro-to-spatial-analysis-public-preview.md)| The Spatial Analysis service analyzes the presence and movement of people on a video feed and produces events that other systems can respond to. Install the [Spatial Analysis container](spatial-analysis-container.md) to get started.|
+| [Video Analysis](intro-to-spatial-analysis-public-preview.md)| Video Analysis includes video-related features like Spatial Analysis and Video Retrieval. Spatial Analysis analyzes the presence and movement of people on a video feed and produces events that other systems can respond to. Install the [Spatial Analysis container](spatial-analysis-container.md) to get started. [Video Retrieval](/azure/ai-services/computer-vision/how-to/video-retrieval) lets you create an index of videos that you can search with natural language.|
## Azure AI Vision for digital asset management
ai-services Reference Video Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/reference-video-search.md
-+ Last updated 11/15/2023
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/language-support.md
Title: Language support - Immersive Reader
+ Title: Language support for Immersive Reader
-description: Learn more about the human languages that are available with Immersive Reader.
+description: Learn more about the human languages that Immersive Reader supports.
# Previously updated : 11/15/2021 Last updated : 02/07/2024
-# Language support for Immersive Reader
-
-This article lists supported human languages for Immersive Reader features.
+# Language support for Azure AI Immersive Reader
+Immersive Reader supports the following human languages.
## Text to speech
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/overview.md
Title: What is Azure AI Immersive Reader?
-description: Immersive Reader is a tool that is designed to help people with learning differences or help new readers and language learners with reading comprehension.
+description: Learn how you can use Immersive Reader to help people with learning differences or help new readers and language learners improve reading comprehension.
# Previously updated : 11/15/2021 Last updated : 02/12/2024 keywords: readers, language learners, display pictures, improve reading, read content, translate #Customer intent: As a developer, I want to learn more about the Immersive Reader, which is a new offering in Azure AI services, so that I can embed this package of content into a document to accommodate users with reading differences.
keywords: readers, language learners, display pictures, improve reading, read co
[!INCLUDE [Azure AI services rebrand](../includes/rebrand-note.md)]
-[Immersive Reader](https://www.onenote.com/learningtools) is part of [Azure AI services](../../ai-services/what-are-ai-services.md), and is an inclusively designed tool that implements proven techniques to improve reading comprehension for new readers, language learners, and people with learning differences such as dyslexia. With the Immersive Reader client library, you can leverage the same technology used in Microsoft Word and Microsoft One Note to improve your web applications.
+[Immersive Reader](https://www.onenote.com/learningtools), part of [Azure AI services](../../ai-services/what-are-ai-services.md), is an inclusively designed tool that implements proven techniques to improve reading comprehension for new readers, language learners, and people with learning differences such as dyslexia. With the Immersive Reader client library, you can leverage the same technology used in Microsoft Word and Microsoft OneNote to improve your web applications.
This documentation contains the following types of articles:
-* **[Quickstarts](quickstarts/client-libraries.md)** are getting-started instructions to guide you through making requests to the service.
+* **[Quickstart guides](quickstarts/client-libraries.md)** provide instructions to help you get started making requests to the service.
* **[How-to guides](how-to-create-immersive-reader.md)** contain instructions for using the service in more specific or customized ways.
-## Use Immersive Reader to improve reading accessibility
+## Use Immersive Reader to improve reading accessibility
-Immersive Reader is designed to make reading easier and more accessible for everyone. Let's take a look at a few of Immersive Reader's core features.
+Immersive Reader is designed to make reading easier and more accessible for everyone. Take a look at a few of Immersive Reader's core features.
### Isolate content for improved readability
-Immersive Reader isolates content to improve readability.
+Immersive Reader isolates content to improve readability.
- ![Isolate content for improved readability with Immersive Reader](./media/immersive-reader.png)
### Display pictures for common words
-For commonly used terms, the Immersive Reader will display a picture.
+Immersive Reader displays pictures for commonly used terms.
- ![Picture Dictionary with Immersive Reader](./media/picture-dictionary.png)
### Highlight parts of speech
-Immersive Reader can be use to help learners understand parts of speech and grammar by highlighting verbs, nouns, pronouns, and more.
+Immersive Reader can help learners understand parts of speech and grammar by highlighting verbs, nouns, pronouns, and more.
- ![Show parts of speech with Immersive Reader](./media/parts-of-speech.png)
### Read content aloud
-Speech synthesis (or text-to-speech) is baked into the Immersive Reader service, which lets your readers select text to be read aloud.
+Speech synthesis, or text to speech, is baked into the Immersive Reader service. Readers can select text to be read aloud.
- ![Read text aloud with Immersive Reader](./media/read-aloud.png)
### Translate content in real-time
-Immersive Reader can translate text into many languages in real-time. This is helpful to improve comprehension for readers learning a new language.
+Immersive Reader can translate text into many languages in real time, which helps to improve comprehension for readers learning a new language.
- ![Translate text with Immersive Reader](./media/translation.png)
### Split words into syllables
-With Immersive Reader you can break words into syllables to improve readability or to sound out new words.
+With Immersive Reader, you can break words into syllables to improve readability or to sound out new words.
- ![Break words into syllables with Immersive Reader](./media/syllabification.png)
## How does Immersive Reader work?
-Immersive Reader is a standalone web application. When invoked using the Immersive Reader client library is displayed on top of your existing web application in an `iframe`. When your web application calls the Immersive Reader service, you specify the content to show the reader. The Immersive Reader client library handles the creation and styling of the `iframe` and communication with the Immersive Reader backend service. The Immersive Reader service processes the content for parts of speech, text to speech, translation, and more.
+Immersive Reader is a standalone web application. When it's invoked, the Immersive Reader client library displays on top of your existing web application in an `iframe`. When your web application calls the Immersive Reader service, you specify the content to show the reader. The Immersive Reader client library handles the creation and styling of the `iframe` and communication with the Immersive Reader backend service. The Immersive Reader service processes the content for parts of speech, text to speech, translation, and more.
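When it launches the reader, the client library also needs an Azure AD access token and the resource subdomain. As a rough sketch only (not the article's own sample), a backend service could obtain that token with a client-credentials request; the tenant, client ID, and secret are placeholders, and the token endpoint shown is an assumption about the authentication flow used by the SDK samples.

```bash
# Hedged sketch: request an Azure AD token for the Cognitive Services resource that backs Immersive Reader.
# Keep these secrets server-side (for example, behind an Azure Function), never in client-side code.
curl -X POST "https://login.windows.net/<TENANT_ID>/oauth2/token" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=client_credentials&client_id=<CLIENT_ID>&client_secret=<CLIENT_SECRET>&resource=https://cognitiveservices.azure.com/"
```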
-## Get started with Immersive Reader
+## Next step
-The Immersive Reader client library is available in C#, JavaScript, Java (Android), Kotlin (Android), and Swift (iOS). Get started with:
+The Immersive Reader client library is available in C#, JavaScript, Java (Android), Kotlin (Android), and Swift (iOS). Get started with:
* [Quickstart: Use the Immersive Reader client library](quickstarts/client-libraries.md)
ai-services Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/release-notes.md
Title: "Immersive Reader SDK Release Notes"
+ Title: "Immersive Reader JavaScript SDK release notes"
-description: Learn more about what's new in the Immersive Reader JavaScript SDK.
+description: Learn about what's new in the Immersive Reader JavaScript SDK.
#
Previously updated : 11/15/2021 Last updated : 02/07/2024
-# Immersive Reader JavaScript SDK Release Notes
+# Release notes for Immersive Reader JavaScript SDK
## Version 1.4.0
-This release contains new feature, security vulnerability fixes, and updates to code samples.
+This release contains new features, security vulnerability fixes, and updates to code samples.
-#### New Features
+#### New features
* Subdomain regex validation updated to allow private links

#### Improvements
-* Update code samples to use v1.4.0
+* Updated code samples to use v1.4.0
## Version 1.3.0

This release contains new features, security vulnerability fixes, and updates to code samples.
-#### New Features
+#### New features
* Added the capability for the Immersive Reader iframe to request microphone permissions for Reading Coach

#### Improvements
-* Update code samples to use v1.3.0
-* Update code samples to demonstrate the usage of latest options from v1.2.0
+* Updated code samples to use v1.3.0
+* Updated code samples to demonstrate the usage of latest options from v1.2.0
## Version 1.2.0

This release contains new features, security vulnerability fixes, bug fixes, updates to code samples, and configuration options.
-#### New Features
+#### New features
-* Add option to set the theme to light or dark
-* Add option to set the parent node where the iframe/webview container is placed
-* Add option to disable the Grammar experience
-* Add option to disable the Translation experience
-* Add option to disable Language Detection
+* Added option to set the theme to light or dark
+* Added option to set the parent node where the iframe/webview container is placed
+* Added option to disable the Grammar experience
+* Added option to disable the Translation experience
+* Added option to disable Language Detection
#### Improvements
-* Add title and aria modal attributes to the iframe
+* Added title and aria modal attributes to the iframe
* Set isLoading to false when exiting
-* Update code samples to use v1.2.0
-* Adds React code sample
-* Adds Ember code sample
-* Adds Azure function code sample
-* Adds C# code sample demonstrating how to call the Azure Function for authentication
-* Adds Android Kotlin code sample demonstrating how to call the Azure Function for authentication
-* Updates the Swift code sample to be Objective C compliant
-* Updates Advanced C# code sample to demonstrate the usage of new options: parent node, disableGrammar, disableTranslation, and disableLanguageDetection
+* Updated code samples to use v1.2.0
+* Added React code sample
+* Added Ember code sample
+* Added Azure function code sample
+* Added C# code sample demonstrating how to call the Azure Function for authentication
+* Added Android Kotlin code sample demonstrating how to call the Azure Function for authentication
+* Updated the Swift code sample to be Objective C compliant
+* Updated Advanced C# code sample to demonstrate the usage of new options: parent node, disableGrammar, disableTranslation, and disableLanguageDetection
#### Fixes
-* Fixes multiple security vulnerabilities by upgrading TypeScript packages
-* Fixes bug where renderButton rendered a duplicate icon and label in the button
+* Fixed multiple security vulnerabilities by upgrading TypeScript packages
+* Fixed bug where renderButton rendered a duplicate icon and label in the button
## Version 1.1.0

This release contains new features, security vulnerability fixes, bug fixes, updates to code samples, and configuration options.
-#### New Features
+#### New features
-* Enable saving and loading user preferences across different browsers and devices
-* Enable configuring default display options
-* Add option to set the translation language, enable word translation, and enable document translation when launching Immersive Reader
-* Add support for configuring Read Aloud via options
-* Add ability to disable first run experience
-* Add ImmersiveReaderView for UWP
+* Enabled saving and loading user preferences across different browsers and devices
+* Enabled configuring default display options
+* Added option to set the translation language, enable word translation, and enable document translation when launching Immersive Reader
+* Added support for configuring Read Aloud via options
+* Added ability to disable first run experience
+* Added ImmersiveReaderView for UWP
#### Improvements
-* Update the Android code sample HTML to work with the latest SDK
-* Update launch response to return the number of characters processed
-* Update code samples to use v1.1.0
-* Do not allow launchAsync to be called when already loading
-* Check for invalid content by ignoring messages where the data is not a string
-* Wrap call to window in an if clause to check browser support of Promise
+* Updated the Android code sample HTML to work with the latest SDK
+* Updated launch response to return the number of characters processed
+* Updated code samples to use v1.1.0
+* Doesn't allow launchAsync to be called when already loading
+* Checked for invalid content by ignoring messages where the data isn't a string
+* Wrapped call to window in an if clause to check browser support of Promise
#### Fixes
-* Fix dependabot by removing yarn.lock from gitignore
-* Fix security vulnerability by upgrading pug to v3.0.0 in quickstart-nodejs code sample
-* Fix multiple security vulnerabilities by upgrading Jest and TypeScript packages
-* Fix a security vulnerability by upgrading Microsoft.IdentityModel.Clients.ActiveDirectory to v5.2.0
+* Fixed dependabot by removing yarn.lock from gitignore
+* Fixed security vulnerability by upgrading pug to v3.0.0 in quickstart-nodejs code sample
+* Fixed multiple security vulnerabilities by upgrading Jest and TypeScript packages
+* Fixed a security vulnerability by upgrading Microsoft.IdentityModel.Clients.ActiveDirectory to v5.2.0
<br>
This release contains new features, security vulnerability fixes, bug fixes, upd
This release contains breaking changes, new features, code sample improvements, and bug fixes.
-#### Breaking Changes
+#### Breaking changes
* Requires an Azure AD token and subdomain, and deprecates tokens used in previous versions.
* Sets CookiePolicy to disabled. Retention of user preferences is disabled by default. The Reader launches with default settings every time, unless the CookiePolicy is set to enabled.
-#### New Features
+#### New features
-* Add support to enable or disable cookies
-* Add Android Kotlin quick start code sample
-* Add Android Java quick start code sample
-* Add Node quick start code sample
+* Added support to enable or disable cookies
+* Added Android Kotlin quick start code sample
+* Added Android Java quick start code sample
+* Added Node quick start code sample
#### Improvements
-* Update Node.js advanced README.md
-* Change Python code sample from advanced to quick start
-* Move iOS Swift code sample into js/samples
-* Update code samples to use v1.0.0
+* Updated Node.js advanced README.md
+* Changed Python code sample from advanced to quick start
+* Moved iOS Swift code sample into js/samples
+* Updated code samples to use v1.0.0
#### Fixes
-* Fix for Node.js advanced code sample
-* Add missing files for advanced-csharp-multiple-resources
-* Remove en-us from hyperlinks
+* Fixed the Node.js advanced code sample
+* Added missing files for advanced-csharp-multiple-resources
+* Removed en-us from hyperlinks
<br>
This release contains breaking changes, new features, code sample improvements,
This release contains new features, improvements to code samples, security vulnerability fixes, and bug fixes.
-#### New Features
+#### New features
-* Add iOS Swift code sample
-* Add C# advanced code sample demonstrating use of multiple resources
-* Add support to disable the full screen toggle feature
-* Add support to hide the Immersive Reader application exit button
-* Add a callback function that may be used by the host application upon exiting the Immersive Reader
-* Update code samples to use Azure Active Directory Authentication
+* Added iOS Swift code sample
+* Added C# advanced code sample demonstrating use of multiple resources
+* Added support to disable the full screen toggle feature
+* Added support to hide the Immersive Reader application exit button
+* Added a callback function that may be used by the host application upon exiting the Immersive Reader
+* Updated code samples to use Azure Active Directory Authentication
#### Improvements
-* Update C# advanced code sample to include Word document
-* Update code samples to use v0.0.3
+* Updated C# advanced code sample to include Word document
+* Updated code samples to use v0.0.3
#### Fixes
-* Upgrade lodash to version 4.17.14 to fix security vulnerability
-* Update C# MSAL library to fix security vulnerability
-* Upgrade mixin-deep to version 1.3.2 to fix security vulnerability
-* Upgrade jest, webpack and webpack-cli which were using vulnerable versions of set-value and mixin-deep to fix security vulnerability
+* Upgraded lodash to version 4.17.14 to fix security vulnerability
+* Updated C# MSAL library to fix security vulnerability
+* Upgraded mixin-deep to version 1.3.2 to fix security vulnerability
+* Upgraded jest, webpack and webpack-cli which were using vulnerable versions of set-value and mixin-deep to fix security vulnerability
<br>
This release contains new features, improvements to code samples, security vulne
This release contains new features, improvements to code samples, security vulnerability fixes, and bug fixes.
-#### New Features
+#### New features
-* Add Python advanced code sample
-* Add Java quick start code sample
-* Add simple code sample
+* Added Python advanced code sample
+* Added Java quick start code sample
+* Added simple code sample
#### Improvements
-* Rename resourceName to cogSvcsSubdomain
-* Move secrets out of code and use environment variables
-* Update code samples to use v0.0.2
+* Renamed resourceName to cogSvcsSubdomain
+* Moved secrets out of code and used environment variables
+* Updated code samples to use v0.0.2
#### Fixes
-* Fix Immersive Reader button accessibility bugs
-* Fix broken scrolling
-* Upgrade handlebars package to version 4.1.2 to fix security vulnerability
-* Fixes bugs in SDK unit tests
-* Fixes JavaScript Internet Explorer 11 compatibility bugs
-* Updates SDK urls
+* Fixed Immersive Reader button accessibility bugs
+* Fixed broken scrolling
+* Upgraded handlebars package to version 4.1.2 to fix security vulnerability
+* Fixed bugs in SDK unit tests
+* Fixed JavaScript Internet Explorer 11 compatibility bugs
+* Updated SDK urls
<br>
This release contains new features, improvements to code samples, security vulne
The initial release of the Immersive Reader JavaScript SDK.
-* Add Immersive Reader JavaScript SDK
-* Add support to specify the UI language
-* Add a timeout to determine when the launchAsync function should fail with a timeout error
-* Add support to specify the z-index of the Immersive Reader iframe
-* Add support to use a webview tag instead of an iframe, for compatibility with Chrome Apps
-* Add SDK unit tests
-* Add Node.js advanced code sample
-* Add C# advanced code sample
-* Add C# quick start code sample
-* Add package configuration, Yarn and other build files
-* Add git configuration files
-* Add README.md files to code samples and SDK
-* Add MIT License
-* Add Contributor instructions
-* Add static icon button SVG assets
-
-## Next steps
-
-Get started with Immersive Reader:
-
-* Read the [Immersive Reader client library Reference](./reference.md)
-* Explore the [Immersive Reader client library on GitHub](https://github.com/microsoft/immersive-reader-sdk)
+* Added Immersive Reader JavaScript SDK
+* Added support to specify the UI language
+* Added a timeout to determine when the launchAsync function should fail with a timeout error
+* Added support to specify the z-index of the Immersive Reader iframe
+* Added support to use a webview tag instead of an iframe, for compatibility with Chrome Apps
+* Added SDK unit tests
+* Added Node.js advanced code sample
+* Added C# advanced code sample
+* Added C# quick start code sample
+* Added package configuration, Yarn and other build files
+* Added git configuration files
+* Added README.md files to code samples and SDK
+* Added MIT License
+* Added Contributor instructions
+* Added static icon button SVG assets
+
+## Related content
+
+* [Immersive Reader client library reference](./reference.md)
+* [Immersive Reader client library on GitHub](https://github.com/microsoft/immersive-reader-sdk)
ai-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/models.md
The following Embeddings models are available with [Azure Government](/azure/azu
For Assistants you need a combination of a supported model and a supported region. Certain tools and capabilities require the latest models. For example, [parallel function](../how-to/assistant-functions.md) calling requires the latest 1106 models.
-| Region | `gpt-35-turbo (1106)` | `gpt-4 (1106-preview)` | `gpt-4 (0613)` | `gpt-4 (0314)` | `gpt-35-turbo (0301)` | `gpt-35-turbo (0613)` | `gpt-35-turbo-16k (0613)` | `gpt-4-32k (0314)` | `gpt-4-32k (0613)` |
-|||||||||||
-| Sweden Central | ✅|✅|✅|✅|✅|✅|✅||✅|
-| East US 2 ||✅|✅|||✅|||✅|
-| Australia East |✅|✅|✅|||✅|||✅|
+
+| Region | `gpt-35-turbo (0613)` | `gpt-35-turbo (1106)` | `gpt-4 (0613)` | `gpt-4 (1106)` |
+|--|||||
+| Australia East | ✅ | ✅ | ✅ |✅ |
+| East US 2 | ✅ | ⬜| ✅ |✅ |
+| Sweden Central | ✅ |✅ |✅ |✅|
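To use Assistants in one of these regions, the Azure OpenAI resource needs a deployment of one of the supported models. The following Azure CLI sketch deploys `gpt-4` version `1106-Preview` to an existing resource; the resource names, deployment name, capacity, and exact model-version string are assumptions to adapt to your environment.

```azurecli
# Hedged sketch: deploy a supported model to an Azure OpenAI resource in a supported region.
az cognitiveservices account deployment create \
  --resource-group myResourceGroup \
  --name myAzureOpenAIResource \
  --deployment-name gpt-4-1106 \
  --model-name gpt-4 \
  --model-version "1106-Preview" \
  --model-format OpenAI \
  --sku-capacity 10 \
  --sku-name "Standard"
```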
## Next steps
ai-services Gpt With Vision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/gpt-with-vision.md
Every response includes a `"finish_details"` field. It has the following possibl
GPT-4 Turbo with Vision provides exclusive access to Azure AI Services tailored enhancements. The **video prompt** integration uses Azure AI Vision video retrieval to sample a set of frames from a video and create a transcript of the speech in the video. It enables the AI model to give summaries and answers about video content.
+Follow these steps to set up a video retrieval system and integrate it with your AI chat model.
+ > [!IMPORTANT]
-> To use Vision enhancement, you need a Computer Vision resource. It must be in the paid (S0) tier and in the same Azure region as your GPT-4 Turbo with Vision resource.
+> To use Vision enhancement, you need an Azure AI Vision resource. It must be in the paid (S0) tier and in the same Azure region as your GPT-4 Turbo with Vision resource.
> [!CAUTION]
> Azure AI enhancements for GPT-4 Turbo with Vision will be billed separately from the core functionalities. Each specific Azure AI enhancement for GPT-4 Turbo with Vision has its own distinct charges. For details, see the [special pricing information](../concepts/gpt-with-vision.md#special-pricing-information).
-### Set up video retrieval
+> [!TIP]
+> If you prefer, you can carry out the below steps using a Jupyter notebook instead: [Video chat completions notebook](https://github.com/Azure-Samples/azureai-samples/blob/main/scenarios/GPT-4V/video/video_chatcompletions_example_restapi.ipynb).
+
+### Create a video retrieval index
-Follow these steps to set up a video retrieval system to integrate with your AI chat model:
1. Get an Azure AI Vision resource in the same region as the Azure OpenAI resource you're using.
-1. Follow the instructions in [Do video retrieval using vectorization](/azure/ai-services/computer-vision/how-to/video-retrieval) to create a video retrieval index. Return to this guide once your index is created.
-1. Save the index name, the `documentId` values of your videos, and the blob storage SAS URLs of your videos to a temporary location. You'll need these values the next steps.
+1. Create an index to store and organize the video files and their metadata. The example command below demonstrates how to create an index named `my-video-index` using the **[Create Index](/azure/ai-services/computer-vision/reference-video-search)** API. Save the index name to a temporary location; you'll need it in later steps.
+
+ > [!TIP]
+ > For more detailed instructions on creating a video index, see [Do video retrieval using vectorization](/azure/ai-services/computer-vision/how-to/video-retrieval).
+
+ ```bash
+ curl.exe -v -X PUT "https://<YOUR_ENDPOINT_URL>/computervision/retrieval/indexes/my-video-index?api-version=2023-05-01-preview" -H "Ocp-Apim-Subscription-Key: <YOUR_SUBSCRIPTION_KEY>" -H "Content-Type: application/json" --data-ascii "
+ {
+ 'metadataSchema': {
+ 'fields': [
+ {
+ 'name': 'cameraId',
+ 'searchable': false,
+ 'filterable': true,
+ 'type': 'string'
+ },
+ {
+ 'name': 'timestamp',
+ 'searchable': false,
+ 'filterable': true,
+ 'type': 'datetime'
+ }
+ ]
+ },
+ 'features': [
+ {
+ 'name': 'vision',
+ 'domain': 'surveillance'
+ },
+ {
+ 'name': 'speech'
+ }
+ ]
+ }"
+ ```
-### Call the Chat Completion API
+1. Add video files to the index with their associated metadata. The example below demonstrates how to add two video files to the index using SAS URLs with the **[Create Ingestion](/azure/ai-services/computer-vision/reference-video-search)** API. Save the SAS URLs and `documentId` values to a temporary location; you'll need them in later steps.
+
+ ```bash
+ curl.exe -v -X PUT "https://<YOUR_ENDPOINT_URL>/computervision/retrieval/indexes/my-video-index/ingestions/my-ingestion?api-version=2023-05-01-preview" -H "Ocp-Apim-Subscription-Key: <YOUR_SUBSCRIPTION_KEY>" -H "Content-Type: application/json" --data-ascii "
+ {
+ 'videos': [
+ {
+ 'mode': 'add',
+ 'documentId': '02a504c9cd28296a8b74394ed7488045',
+ 'documentUrl': 'https://example.blob.core.windows.net/videos/02a504c9cd28296a8b74394ed7488045.mp4?sas_token_here',
+ 'metadata': {
+ 'cameraId': 'camera1',
+ 'timestamp': '2023-06-30 17:40:33'
+ }
+ },
+ {
+ 'mode': 'add',
+ 'documentId': '043ad56daad86cdaa6e493aa11ebdab3',
+ 'documentUrl': 'https://example.blob.core.windows.net/videos/043ad56daad86cdaa6e493aa11ebdab3.mp4?sas_token_here',
+ 'metadata': {
+ 'cameraId': 'camera2'
+ }
+ }
+ ]
+ }"
+ ```
+
+1. After you add video files to the index, the ingestion process starts. It might take some time depending on the size and number of files. To ensure the ingestion is complete before performing searches, you can use the **[Get Ingestion](/azure/ai-services/computer-vision/reference-video-search)** API to check the status. Wait for this call to return `"state" = "Completed"` before proceeding to the next step.
+
+ ```bash
+ curl.exe -v -X GET "https://<YOUR_ENDPOINT_URL>/computervision/retrieval/indexes/my-video-index/ingestions?api-version=2023-05-01-preview&$top=20" -H "ocp-apim-subscription-key: <YOUR_SUBSCRIPTION_KEY>"
+ ```
+
+### Integrate your video index with GPT-4 Turbo with Vision
#### [REST](#tab/rest)
Follow these steps to set up a video retrieval system to integrate with your AI
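As a rough sketch of what the REST request can look like (not the article's exact sample), the following curl command calls the extensions chat completions endpoint with the `enhancements` and `dataSources` fields described for the Python tab below. The API version, the `computerVisionBaseUrl`/`computerVisionApiKey`/`videoUrls` parameter names, and the `acv_document_id` content type are assumptions to verify against the current API reference.

```bash
# Hedged sketch: chat completion that references an ingested video through the Video Retrieval index.
curl.exe -v -X POST "https://<YOUR_AOAI_ENDPOINT>/openai/deployments/<YOUR_GPT4V_DEPLOYMENT>/extensions/chat/completions?api-version=2023-12-01-preview" -H "api-key: <YOUR_AOAI_KEY>" -H "Content-Type: application/json" --data-ascii "
{
  'enhancements': { 'video': { 'enabled': true } },
  'dataSources': [
    {
      'type': 'AzureComputerVisionVideoIndex',
      'parameters': {
        'computerVisionBaseUrl': 'https://<YOUR_VISION_ENDPOINT>/computervision',
        'computerVisionApiKey': '<YOUR_VISION_KEY>',
        'indexName': 'my-video-index',
        'videoUrls': [ '<YOUR_VIDEO_SAS_URL>' ]
      }
    }
  ],
  'messages': [
    { 'role': 'system', 'content': 'You are a helpful assistant.' },
    {
      'role': 'user',
      'content': [
        { 'type': 'acv_document_id', 'acv_document_id': '02a504c9cd28296a8b74394ed7488045' },
        { 'type': 'text', 'text': 'Describe this video.' }
      ]
    }
  ],
  'max_tokens': 100
}"
```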
#### [Python](#tab/python)
-Call the client's **create** method as in the previous sections, but include the *extra_body* parameter. Here, it contains the `enhancements` and `dataSources` fields. `enhancements` represents the specific Vision enhancement features requested in the chat. It has a `video` field, which has a boolean `enabled` property. Use this to request the video retrieval service.
+In your Python script, call the client's **create** method as in the previous sections, but include the *extra_body* parameter. Here, it contains the `enhancements` and `dataSources` fields. `enhancements` represents the specific Vision enhancement features requested in the chat. It has a `video` field, which has a boolean `enabled` property. Use this to request the video retrieval service.
`dataSources` represents the external resource data that's needed for Vision enhancement. It has a `type` field which should be `"AzureComputerVisionVideoIndex"` and a `parameters` field.
print(response)
The chat responses you receive from the model should include information about the video. The API response should look like the following. - ```json { "id": "chatcmpl-8V4J2cFo7TWO7rIfs47XuDzTKvbct",
ai-services V3 0 Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/reference/v3-0-reference.md
Previously updated : 09/19/2023 Last updated : 02/14/2024
To force the request to be handled within a specific geography, use the desired
|Europe| api-eur.cognitive.microsofttranslator.com|North Europe, West Europe|
|United States| api-nam.cognitive.microsofttranslator.com|East US, South Central US, West Central US, and West US 2|
-<sup>`1`</sup> Customers with a resource located in Switzerland North or Switzerland West can ensure that their Text API requests are served within Switzerland. To ensure that requests are handled in Switzerland, create the Translator resource in the 'Resource region' 'Switzerland North' or `Switzerland West`, then use the resource's custom endpoint in your API requests. For example: If you create a Translator resource in Azure portal with 'Resource region' as 'Switzerland North' and your resource name is `my-swiss-n`, then your custom endpoint is `https&#8203;://my-swiss-n.cognitiveservices.azure.com`. And a sample request to translate is:
+<sup>`1`</sup> Customers with a resource located in Switzerland North or Switzerland West can ensure that their Text API requests are served within Switzerland. To ensure that requests are handled in Switzerland, create the Translator resource in the `Resource region` `Switzerland North` or `Switzerland West`, then use the resource's custom endpoint in your API requests.
+
+For example: If you create a Translator resource in Azure portal with `Resource region` as `Switzerland North` and your resource name is `my-swiss-n`, then your custom endpoint is `https&#8203;://my-swiss-n.cognitiveservices.azure.com`. And a sample request to translate is:
```curl // Pass secret key and region using headers to a custom endpoint
aks Ingress Basic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-basic.md
helm install ingress-nginx ingress-nginx/ingress-nginx `
--set controller.service.externalTrafficPolicy=Local ```
-> [!NOTE]
-> In this tutorial, "service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path" is being set to "/healthz". This means if the response code of the requests to "/healthz" is not "200", the whole ingress controller will be down. You can modify the value to other URI in your own scenario. You cannot delete this part or unset the value, or the ingress controller will still be down.
-> The package "ingress-nginx" used in this tutorial, which is provided by [Kubernetes official](https://github.com/kubernetes/ingress-nginx), will always return "200" response code if requesting "/healthz", as it is designed as "[default backend](https://kubernetes.github.io/ingress-nginx/user-guide/default-backend/)" for users to have a quick start, unless it is being overwritten by ingress rules.
-
+> [!NOTE]
+> In this tutorial, `service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path` is being set to `/healthz`. This means that if requests to `/healthz` don't return a `200` response code, the entire ingress controller is considered down. You can modify the value to another URI for your own scenario. You can't delete this annotation or unset the value, or the ingress controller will be considered down.
+> The `ingress-nginx` package used in this tutorial, provided by the official [Kubernetes project](https://github.com/kubernetes/ingress-nginx), always returns a `200` response code for requests to `/healthz`, because it's designed as a [default backend](https://kubernetes.github.io/ingress-nginx/user-guide/default-backend/) to give users a quick start, unless it's overridden by ingress rules.
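To spot-check that the health probe path actually returns `200`, you can query the controller's public service directly. This is a hedged sketch; the `ingress-basic` namespace and `ingress-nginx-controller` service name assume the defaults used with the Helm install above.

```bash
# Look up the ingress controller's external IP address (namespace and service name are assumptions).
kubectl get service ingress-nginx-controller --namespace ingress-basic

# The default backend should answer the health probe path with HTTP 200.
curl -I http://<EXTERNAL-IP>/healthz
```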
+
## Customized configuration

As an alternative to the basic configuration presented in the above section, the next set of steps will show how to deploy a customized ingress controller. You'll have the option of using an internal static IP address, or using a dynamic public IP address.
aks Quick Kubernetes Deploy Azd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-azd.md
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. In this quickstart, you learn to:

-- Deploy an AKS cluster using the Azure CLI.
+- Deploy an AKS cluster using the Azure Developer CLI (AZD).
- Run a sample multi-container application with a group of microservices simulating a retail app.
+- Tear down and clean up containers using AZD.
> [!NOTE]
> To get started with quickly provisioning an AKS cluster, this article includes steps to deploy a cluster with default settings for evaluation purposes only. Before deploying a production-ready cluster, we recommend that you familiarize yourself with our [baseline reference architecture][baseline-reference-architecture] to consider how it aligns with your business requirements.
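The end-to-end flow of this quickstart with the Azure Developer CLI looks roughly like the following sketch. The template name is an assumption (the quickstart's sample retail app); substitute the template the article actually references.

```bash
# Hedged sketch of the AZD workflow: sign in, initialize from a template, deploy, and tear down.
azd auth login
azd init --template Azure-Samples/aks-store-demo   # template name is an assumption
azd up                                             # provisions the AKS cluster and deploys the sample app
azd down --purge                                   # tears down and cleans up all provisioned resources
```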
aks Monitor Control Plane Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/monitor-control-plane-metrics.md
This article helps you understand this new feature, how to implement it, and how
- [Private link](../azure-monitor/logs/private-link-security.md) isn't supported. - Only the default [ama-metrics-settings-config-map](../azure-monitor/containers/prometheus-metrics-scrape-configuration.md#configmaps) can be customized. All other customizations are not supported. - The cluster must use [managed identity authentication](use-managed-identity.md).-- This feature is currently available in the following regions: West Central US, East Asia, UK South, East US, Australia Central, Australia East, Brazil South, Canada Central, Central India, East US 2, France Central, and Germany West Central, Israel Central, Italy North, Japan East, JioIndia West, Korea Central, Malaysia South, Mexico Central, North Central US, North Europe, Norway East, Qatar Central, South Africa North, Sweden Central, Switzerland North, Taiwan North, UAE North, UK West, West US 2, Australia Central 2, Austrial South East, Austria East, Belgium Central, Brazil South East, Canada East, Central US, Chile Central, France South, Germany North, Israel North West, Japan West, Jio India Central. ### Install or update the `aks-preview` Azure CLI extension
az provider register --namespace "Microsoft.ContainerService"
You can enable control plane metrics with the Azure Monitor managed service for Prometheus add-on during cluster creation or for an existing cluster. To collect Prometheus metrics from your Kubernetes cluster, see [Enable Prometheus and Grafana for Kubernetes clusters][enable-monitoring-kubernetes-cluster] and follow the steps on the **CLI** tab for an AKS cluster. On the command-line, be sure to include the parameters `--generate-ssh-keys` and `--enable-managed-identity`.
+If your cluster already has the Managed Prometheus add-on deployed, you can simply run `az aks update` to ensure the cluster updates and starts collecting control plane metrics.
+
+```azurecli
+az aks update -n <cluster-name> -g <resource-group>
+```
+
>[!NOTE]
> Unlike the metrics collected from cluster nodes, control plane metrics are collected by a component which isn't part of the **ama-metrics** add-on. Enabling the `AzureMonitorMetricsControlPlanePreview` feature flag and the managed Prometheus add-on ensures control plane metrics are collected. After enabling metric collection, it can take several minutes for the data to appear in the workspace.
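If the `AzureMonitorMetricsControlPlanePreview` feature flag isn't registered on your subscription yet, a minimal Azure CLI sketch for registering it follows; registration can take a few minutes to propagate.

```azurecli
# Register the preview feature flag for control plane metrics, then refresh the resource provider.
az feature register --namespace "Microsoft.ContainerService" --name "AzureMonitorMetricsControlPlanePreview"
az feature show --namespace "Microsoft.ContainerService" --name "AzureMonitorMetricsControlPlanePreview"
az provider register --namespace "Microsoft.ContainerService"
```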
When you enable the add-on, you might have specified an existing workspace that
You can disable control plane metrics at any time, by either disabling the feature flag, disabling managed Prometheus, or by deleting the AKS cluster.
+## Preview flag enabled after Managed Prometheus setup
+If the preview flag (`AzureMonitorMetricsControlPlanePreview`) was enabled on an existing Managed Prometheus cluster, you must force an update for the cluster to start emitting control plane metrics.
+
+You can run `az aks update` to ensure the cluster updates and starts collecting control plane metrics.
+
+```azurecli
+az aks update -n <cluster-name> -g <resource-group>
+```
> [!NOTE]
> This action doesn't remove any existing data stored in your Azure Monitor workspace.
api-management Api Management Howto Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-autoscale.md
-# Mandatory fields. See more on aka.ms/skyeye/meta.
Title: Configure autoscale of an Azure API Management instance | Microsoft Docs
-description: This article describes how to set up autoscale behavior for an Azure API Management instance.
+description: This article describes how to set up rules to control autoscale behavior for an Azure API Management instance.
Previously updated : 03/30/2023 Last updated : 02/06/2024 # Automatically scale an Azure API Management instance
-An Azure API Management service instance can scale automatically based on a set of rules. This behavior can be enabled and configured through [Azure Monitor autoscale](../azure-monitor/autoscale/autoscale-overview.md#supported-services-for-autoscale) and is currently supported only in the **Standard** and **Premium** tiers of the Azure API Management service.
+An Azure API Management service instance can scale automatically based on a set of rules. This behavior can be enabled and configured through [Azure Monitor autoscale](../azure-monitor/autoscale/autoscale-overview.md#supported-services-for-autoscale) and is currently supported only in the **Basic**, **Standard**, and **Premium** tiers of the Azure API Management service.
The article walks through the process of configuring autoscale and suggests optimal configuration of autoscale rules.
To follow the steps from this article, you must:
+ Understand the concept of [capacity](api-management-capacity.md) of an API Management instance.
+ Understand [manual scaling](upgrade-and-scale.md) of an API Management instance, including cost consequences.

## Azure API Management autoscale limitations

Certain limitations and consequences of scaling decisions need to be considered before configuring autoscale behavior.
-+ The pricing tier of your API Management instance determines the [maximum number of units](upgrade-and-scale.md#upgrade-and-scale) you may scale to. For example, the **Standard tier** can be scaled to 4 units. You can add any number of units to the **Premium** tier.
++ The [pricing tier](api-management-features.md) of your API Management instance determines the [maximum number of units](upgrade-and-scale.md#upgrade-and-scale) you may scale to. For example, the **Standard tier** can be scaled to 4 units. You can add any number of units to the **Premium** tier.
+ The scaling process takes at least 20 minutes.
+ If the service is locked by another operation, the scaling request will fail and retry automatically.
+ If your service instance is deployed in multiple regions (locations), only units in the **Primary location** can be autoscaled with Azure Monitor autoscale. Units in other locations can only be scaled manually.
Follow these steps to configure autoscale for an Azure API Management service:
1. Define a new scale-out rule.
- For example, a scale-out rule could trigger addition of 1 API Management unit, when the average capacity metric over the previous 30 minutes exceeds 80%. The following table provides configuration for such a rule.
+ For example, a scale-out rule could trigger addition of 1 API Management unit, when the average capacity metric over the previous 30 minutes exceeds 70%. The following table provides an example configuration for such a rule. Review the preceding [limitations](#azure-api-management-autoscale-limitations) when defining a scale-out rule in your environment.
| Parameter | Value | Notes |
|--|-||
| Metric source | Current resource | Define the rule based on the current API Management resource metrics. |
| *Criteria* | | |
- | Metric name | Capacity | Capacity metric is an API Management metric reflecting usage of resources by an Azure API Management instance. |
+ | Metric name | Capacity | [Capacity metric](api-management-capacity.md) is an API Management metric reflecting usage of resources by an Azure API Management instance. |
| Location | Select the primary location of the API Management instance | |
| Operator | Greater than | |
- | Metric threshold | 80% | The threshold for the averaged capacity metric. |
+ | Metric threshold | 70% | The threshold for the averaged capacity metric. For considerations on setting this threshold, see [Using capacity for scaling decisions](api-management-capacity.md#use-capacity-for-scaling-decisions). |
| Duration (in minutes) | 30 | The timespan to average the capacity metric over is specific to usage patterns. The longer the duration, the smoother the reaction will be. Intermittent spikes will have less effect on the scale-out decision. However, it will also delay the scale-out trigger. |
| Time grain statistic | Average | |
|*Action* | | |
Follow these steps to configure autoscale for an Azure API Management service:
1. Select **Add** to save the rule.
1. To add another rule, select **Add a rule**.
- This time, a scale-in rule needs to be defined. It will ensure resources aren't being wasted, when the usage of APIs decreases.
+ This time, a scale-in rule needs to be defined. It ensures that resources aren't being wasted, when the usage of APIs decreases.
1. Define a new scale-in rule.
- For example, a scale-in rule could trigger a removal of 1 API Management unit when the average capacity metric over the previous 30 minutes has been lower than 35%. The following table provides configuration for such a rule.
+ For example, a scale-in rule could trigger a removal of 1 API Management unit when the average capacity metric over the previous 30 minutes is lower than 35%. The following table provides an example configuration for such a rule.
| Parameter | Value | Notes |
|--|-|--|
Follow these steps to configure autoscale for an Azure API Management service:
:::image type="content" source="media/api-management-howto-autoscale/07.png" alt-text="Screenshot showing how to set instance limits in the portal.":::
-1. Select **Save**. Your autoscale has been configured.
+1. Select **Save**. Your autoscale is configured.
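The same rules can also be created outside the portal. The following Azure CLI sketch mirrors the scale-out and scale-in rules above; the resource names, profile name, and unit limits are placeholders to adapt, and the capacity thresholds match the example values used in this article.

```azurecli
# Hedged sketch: create an autoscale profile for an API Management instance (1-4 units),
# scale out by 1 unit when average Capacity > 70% over 30 minutes,
# and scale in by 1 unit when average Capacity < 35% over 30 minutes.
APIM_ID=$(az apim show --name myApim --resource-group myResourceGroup --query id -o tsv)

az monitor autoscale create --resource-group myResourceGroup --name apim-autoscale \
  --resource "$APIM_ID" --min-count 1 --max-count 4 --count 1

az monitor autoscale rule create --resource-group myResourceGroup --autoscale-name apim-autoscale \
  --condition "Capacity > 70 avg 30m" --scale out 1

az monitor autoscale rule create --resource-group myResourceGroup --autoscale-name apim-autoscale \
  --condition "Capacity < 35 avg 30m" --scale in 1
```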
-## Next steps
+## Related content
- [How to deploy an Azure API Management service instance to multiple Azure regions](api-management-howto-deploy-multi-region.md)-- [Optimize and save on your cloud spending](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn)
+- [Optimize and save on your cloud spending](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn)
api-management Backends https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/backends.md
resource symbolicname 'Microsoft.ApiManagement/service/backends@2023-03-01-previ
name: 'myAPIM/myBackend' properties: { url: 'https://mybackend.com'
- protocol: 'http'
+ protocol: 'https'
circuitBreaker: { rules: [ {
Include a JSON snippet similar to the following in your ARM template for a backe
"name": "myAPIM/myBackend", "properties": { "url": "https://mybackend.com",
- "protocol": "http",
+ "protocol": "https",
"circuitBreaker": { "rules": [ {
app-service Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/security-recommendations.md
Previously updated : 09/02/2021 Last updated : 01/30/2024
application-gateway Application Gateway Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-diagnostics.md
The access log is generated only if you've enabled it on each Application Gatewa
If the application gateway can't complete the request, it stores one of the following reason codes in the error_info field of the access log.
-|4XX Errors |The 4xx error codes indicate that there was an issue with the client's request, and the server can't fulfill it |
+|4XX Errors | The 4xx error codes indicate that there was an issue with the client's request, and the Application Gateway can't fulfill it. |
|||
| ERRORINFO_INVALID_METHOD | The client has sent a request which is non-RFC compliant. Possible reasons: client using HTTP method not supported by server, misspelled method, incompatible HTTP protocol version etc. |
| ERRORINFO_INVALID_REQUEST | The server can't fulfill the request because of incorrect syntax. |
If the application gateway can't complete the request, it stores one of the foll
| ERRORINFO_HTTPS_NO_CERT | Indicates client is not sending a valid and properly configured TLS certificate during Mutual TLS authentication. |
-|5XX Errors |Description |
+|5XX Errors | Description |
|||
| ERRORINFO_UPSTREAM_NO_LIVE | The application gateway is unable to find any active or reachable backend servers to handle incoming requests |
| ERRORINFO_UPSTREAM_CLOSED_CONNECTION | The backend server closed the connection unexpectedly or before the request was fully processed. This could happen due to backend server reaching its limits, crashing etc. |
attestation Policy Signer Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-signer-examples.md
Previously updated : 11/14/2022 Last updated : 01/30/2024
attestation Private Endpoint Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/private-endpoint-powershell.md
Previously updated : 11/14/2022 Last updated : 01/30/2024
attestation Quickstart Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/quickstart-azure-cli.md
Previously updated : 11/14/2022 Last updated : 01/30/2024 ms.devlang: azurecli
attestation Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/quickstart-portal.md
Previously updated : 11/14/2022 Last updated : 01/30/2024
attestation Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/quickstart-powershell.md
Previously updated : 11/14/2022 Last updated : 01/30/2024
attestation Quickstart Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/quickstart-template.md
Previously updated : 01/23/2023 Last updated : 01/30/2024 # Quickstart: Create an Azure Attestation provider with an ARM template
attestation Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/workflow.md
Previously updated : 01/23/2023 Last updated : 01/30/2024
azure-arc Extensions Release https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions-release.md
Title: "Available extensions for Azure Arc-enabled Kubernetes clusters" Previously updated : 02/08/2024 Last updated : 02/14/2024 description: "See which extensions are currently available for Azure Arc-enabled Kubernetes clusters and view release notes."
The most recent version of the Flux v2 extension and the two previous versions (
> [!NOTE] > When a new version of the `microsoft.flux` extension is released, it may take several days for the new version to become available in all regions.
+### 1.8.2 (February 2024)
+
+Flux version: [Release v2.1.2](https://github.com/fluxcd/flux2/releases/tag/v2.1.2)
+
+- source-controller: v1.1.2
+- kustomize-controller: v1.1.1
+- helm-controller: v0.36.2
+- notification-controller: v1.1.0
+- image-automation-controller: v0.36.1
+- image-reflector-controller: v0.30.0
+
+Changes made for this version:
+
+- Improve the identity token generation logic to handle token generation failures
+ ### 1.8.1 (November 2023) Flux version: [Release v2.1.2](https://github.com/fluxcd/flux2/releases/tag/v2.1.2)
Changes made for this version:
- Upgrades Flux to [v2.1.1](https://github.com/fluxcd/flux2/releases/tag/v2.1.1) - Adds support for [AKS clusters with workload identity](tutorial-use-gitops-flux2.md#workload-identity-in-aks-clusters)
-### 1.7.7 (September 2023)
-
-Flux version: [Release v2.0.1](https://github.com/fluxcd/flux2/releases/tag/v2.0.1)
--- source-controller: v1.0.1-- kustomize-controller: v1.0.1-- helm-controller: v0.35.0-- notification-controller: v1.0.0-- image-automation-controller: v0.35.0-- image-reflector-controller: v0.29.1-
-Changes made for this version:
--- Updated SSH key entry to use the [Ed25519 SSH host key](https://bitbucket.org/blog/ssh-host-key-changes) to prevent failures in configurations with `ssh` authentication type for Bitbucket.-
-### 1.7.6 (August 2023)
-
-Flux version: [Release v2.0.1](https://github.com/fluxcd/flux2/releases/tag/v2.0.1)
--- source-controller: v1.0.1-- kustomize-controller: v1.0.1-- helm-controller: v0.35.0-- notification-controller: v1.0.0-- image-automation-controller: v0.35.0-- image-reflector-controller: v0.29.1-
-Changes made for this version:
--- Configurations with `ssh` authentication type were intermittently failing to reconcile with GitHub due to an updated [RSA SSH host key](https://github.blog/2023-03-23-we-updated-our-rsa-ssh-host-key/). This release updates the SSH key entries to match the ones mentioned in [GitHub's SSH key fingerprints documentation](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/githubs-ssh-key-fingerprints).-
-### 1.7.5 (August 2023)
-
-Flux version: [Release v2.0.1](https://github.com/fluxcd/flux2/releases/tag/v2.0.1)
--- source-controller: v1.0.1-- kustomize-controller: v1.0.1-- helm-controller: v0.35.0-- notification-controller: v1.0.0-- image-automation-controller: v0.35.0-- image-reflector-controller: v0.29.1-
-Changes made for this version:
--- Upgrades Flux to [v2.0.1](https://github.com/fluxcd/flux2/releases/tag/v2.0.1)-- Promotes some APIs to v1. This change shouldn't affect any existing Flux configurations that have already been deployed. Previous API versions will still be supported in all `microsoft.flux` v.1.x.x releases. However, we recommend that you update the API versions in your manifests as soon as possible. For more information about the new API versions (breaking changes) and how to update your manifests, see the [Flux v2 release notes](https://github.com/fluxcd/flux2/releases/tag/v2.0.0).-- Adds support for [Helm drift detection](tutorial-use-gitops-flux2.md#helm-drift-detection) and [OOM watch](tutorial-use-gitops-flux2.md#helm-oom-watch).-
-### 1.7.4 (June 2023)
-
-Flux version: [Release v0.41.2](https://github.com/fluxcd/flux2/releases/tag/v0.41.2)
--- source-controller: v0.36.1-- kustomize-controller: v0.35.1-- helm-controller: v0.31.2-- notification-controller: v0.33.0-- image-automation-controller: v0.31.0-- image-reflector-controller: v0.26.1-
-Changes made for this version:
--- Adds support for [`wait`](https://fluxcd.io/flux/components/kustomize/kustomization/#wait) and [`postBuild`](https://fluxcd.io/flux/components/kustomize/kustomization/#post-build-variable-substitution) properties as optional parameters for kustomization. By default, `wait` is set to `true` for all Flux configurations, and `postBuild` is null. ([Example](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/kubernetesconfiguration/resource-manager/Microsoft.KubernetesConfiguration/stable/2023-05-01/examples/CreateFluxConfiguration.json#L55))--- Adds support for optional properties [`waitForReconciliation`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/kubernetesconfiguration/resource-manager/Microsoft.KubernetesConfiguration/stable/2023-05-01/fluxconfiguration.json#L1299C14-L1299C35) and [`reconciliationWaitDuration`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/kubernetesconfiguration/resource-manager/Microsoft.KubernetesConfiguration/stable/2023-05-01/fluxconfiguration.json#L1304).-
- By default, `waitForReconciliation` is set to false, so when creating a flux configuration, the `provisioningState` returns `Succeeded` once the configuration reaches the cluster and the ARM template or Azure CLI command successfully exits. However, the actual state of the objects being deployed as part of the configuration is tracked by `complianceState`, which can be viewed in the portal or by using Azure CLI. Setting `waitForReconciliation` to true and specifying a `reconciliationWaitDuration` means that the template or CLI deployment waits for `complianceState` to reach a terminal state (success or failure) before exiting. ([Example](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/kubernetesconfiguration/resource-manager/Microsoft.KubernetesConfiguration/stable/2023-05-01/examples/CreateFluxConfiguration.json#L72))
- ## Dapr extension for Azure Kubernetes Service (AKS) and Arc-enabled Kubernetes [Dapr](https://dapr.io/) is a portable, event-driven runtime that simplifies building resilient, stateless, and stateful applications that run on the cloud and edge and embrace the diversity of languages and developer frameworks. The Dapr extension eliminates the overhead of downloading Dapr tooling and manually installing and managing the runtime on your clusters.
azure-cache-for-redis Cache Azure Active Directory For Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-azure-active-directory-for-authentication.md
Previously updated : 06/23/2023 Last updated : 02/07/2024
-# Use Microsoft Entra ID for cache authentication
+# Use Microsoft Entra ID (preview) for cache authentication
Azure Cache for Redis offers two methods to authenticate to your cache instance: -- [access key](cache-configure.md#access-keys)
+- [Access keys](cache-configure.md#access-keys)
+- [Microsoft Entra ID (preview)](cache-configure.md#preview-microsoft-entra-authentication)
-- [Microsoft Entra token](/azure/active-directory/develop/access-tokens)
+Although access key authentication is simple, it comes with a set of challenges around security and password management. In contrast, this article shows you how to use a Microsoft Entra token for cache authentication.
-Although access key authentication is simple, it comes with a set of challenges around security and password management. In this article, you learn how to use a Microsoft Entra token for cache authentication.
-
-Azure Cache for Redis offers a password-free authentication mechanism by integrating with [Microsoft Entra ID](/azure/active-directory/fundamentals/active-directory-whatis). This integration also includes [role-based access control](/azure/role-based-access-control/) functionality provided through [access control lists (ACLs)](https://redis.io/docs/management/security/acl/) supported in open source Redis.
+Azure Cache for Redis offers a password-free authentication mechanism by integrating with [Microsoft Entra ID (preview)](/azure/active-directory/fundamentals/active-directory-whatis). This integration also includes [role-based access control](/azure/role-based-access-control/) functionality provided through [access control lists (ACLs)](https://redis.io/docs/management/security/acl/) supported in open source Redis.
To use the ACL integration, your client application must assume the identity of a Microsoft Entra entity, like service principal or managed identity, and connect to your cache. In this article, you learn how to use your service principal or managed identity to connect to your cache, and how to grant your connection predefined permissions based on the Microsoft Entra artifact being used for the connection.
To use the ACL integration, your client application must assume the identity of
## Prerequisites and limitations -- To enable Microsoft Entra token-based authentication for your Azure Cache for Redis instance, at least one Redis user must be configured under the **Data Access Policy** setting in the Resource menu.-- Microsoft Entra ID-based authentication is supported for SSL connections and TLS 1.2 only.-- Microsoft Entra ID-based authentication isn't supported on Azure Cache for Redis instances that run Redis version 4.
+- Microsoft Entra ID-based authentication is supported for SSL connections and TLS 1.2 or higher.
- Microsoft Entra ID-based authentication isn't supported on Azure Cache for Redis instances that [depend on Cloud Services](./cache-faq.yml#caches-with-a-dependency-on-cloud-services--classic). - Microsoft Entra ID based authentication isn't supported in the Enterprise tiers of Azure Cache for Redis Enterprise. - Some Redis commands are blocked. For a full list of blocked commands, see [Redis commands not supported in Azure Cache for Redis](cache-configure.md#redis-commands-not-supported-in-azure-cache-for-redis). > [!IMPORTANT]
-> Once a connection is established using Microsoft Entra token, client applications must periodically refresh Microsoft Entra token before expiry, and send an `AUTH` command to Redis server to avoid disruption of connections. For more information, see [Configure your Redis client to use Microsoft Entra ID](#configure-your-redis-client-to-use-azure-active-directory).
-
-<a name='enable-azure-ad-token-based-authentication-on-your-cache'></a>
+> Once a connection is established using Microsoft Entra token, client applications must periodically refresh Microsoft Entra token before expiry, and send an `AUTH` command to Redis server to avoid disruption of connections. For more information, see [Configure your Redis client to use Microsoft Entra ID](#configure-your-redis-client-to-use-microsoft-entra-id).
-## Enable Microsoft Entra token based authentication on your cache
+## Enable Microsoft Entra ID authentication on your cache
1. In the Azure portal, select the Azure Cache for Redis instance where you'd like to configure Microsoft Entra token-based authentication.
-1. Select **(PREVIEW) Data Access Configuration** from the Resource menu.
+1. Select **Authentication** from the Resource menu.
-1. Select "**Add**" and choose **New Redis User**.
+1. In the working pane, select **(PREVIEW) Enable Microsoft Entra Authentication**.
-1. On the **Access Policy** tab, select one the available policies in the table: **Owner**, **Contributor**, or **Reader**. Then, select the **Next:Redis Users**.
+1. Select **Enable Microsoft Entra Authentication**, and enter the name of a valid user. The user you enter is automatically assigned _Data Owner Access Policy_ by default when you select **Save**. You can also enter a managed identity or service principal to connect to your cache instance.
- :::image type="content" source="media/cache-azure-active-directory-for-authentication/cache-new-redis-user.png" alt-text="Screenshot showing the available Access Policies.":::
+ :::image type="content" source="media/cache-azure-active-directory-for-authentication/cache-enable-microsoft-entra.png" alt-text="Screenshot showing authentication selected in the resource menu and the enable Microsoft Entra authentication checked.":::
-1. Choose either the **User or service principal** or **Managed Identity** to determine how you want to use for authenticate to your Azure Cache for Redis instance.
+1. A dialog box appears, asking if you want to update your configuration and informing you that the update takes several minutes. Select **Yes.**
-1. Then, select **Select members** and select **Select**. Then, select **Next : Review + Design**.
- :::image type="content" source="media/cache-azure-active-directory-for-authentication/cache-select-members.png" alt-text="Screenshot showing members to add as New Redis Users.":::
+ > [!IMPORTANT]
+ > Once the enable operation is complete, the nodes in your cache instance reboot to load the new configuration. We recommend performing this operation during your maintenance window or outside your peak business hours. The operation can take up to 30 minutes.
+
+## Using data access configuration with your cache
+
+If you would like to use a custom access policy instead of Redis Data Owner, go to the **Data Access Configuration** on the Resource menu. For more information, see [Configure a custom data access policy for your application](cache-configure-role-based-access-control.md#configure-a-custom-data-access-policy-for-your-application).
+
+1. In the Azure portal, select the Azure Cache for Redis instance where you'd like to add to the Data Access Configuration.
+
+1. Select **(PREVIEW) Data Access Configuration** from the Resource menu.
-1. From the Resource menu, select **Advanced settings**.
+1. Select **Add** and choose **New Redis User**.
-1. Check the box labeled **(PREVIEW) Enable Microsoft Entra Authorization** and select **OK**. Then, select **Save**.
+1. On the **Access Policy** tab, select one of the available policies in the table: **Data Owner**, **Data Contributor**, or **Data Reader**. Then, select **Next: Redis Users**.
- :::image type="content" source="media/cache-azure-active-directory-for-authentication/cache-azure-ad-access-authorization.png" alt-text="Screenshot of Microsoft Entra ID access authorization.":::
+ :::image type="content" source="media/cache-azure-active-directory-for-authentication/cache-new-redis-user.png" alt-text="Screenshot showing the available Access Policies.":::
+
+1. Choose either the **User or service principal** or **Managed Identity** to determine how to assign access to your Azure Cache for Redis instance. If you select **User or service principal**, and you want to add a _user_, you must first [enable Microsoft Entra Authentication](#enable-microsoft-entra-id-authentication-on-your-cache).
+
+1. Then, select **Select members** and select **Select**. Then, select **Next: Review + Assign**.
+ :::image type="content" source="media/cache-azure-active-directory-for-authentication/cache-select-members.png" alt-text="Screenshot showing members to add as New Redis Users.":::
1. A dialog box appears, notifying you that upgrading is permanent and might cause a brief connection blip. Select **Yes.**

   > [!IMPORTANT]
   > Once the enable operation is complete, the nodes in your cache instance reboot to load the new configuration. We recommend performing this operation during your maintenance window or outside your peak business hours. The operation can take up to 30 minutes.
-<a name='configure-your-redis-client-to-use-azure-active-directory'></a>
- ## Configure your Redis client to use Microsoft Entra ID
-Because most Azure Cache for Redis clients assume that a password/access key is used for authentication, you likely need to update your client workflow to support authentication using Microsoft Entra ID. In this section, you learn how to configure your client applications to connect to Azure Cache for Redis using a Microsoft Entra token.
-
+Because most Azure Cache for Redis clients assume that a password and access key are used for authentication, you likely need to update your client workflow to support authentication using Microsoft Entra ID. In this section, you learn how to configure your client applications to connect to Azure Cache for Redis using a Microsoft Entra token.
-<a name='azure-ad-client-workflow'></a>
+<!-- :::image type="content" source="media/cache-azure-active-directory-for-authentication/azure-ad-token.png" alt-text="Architecture diagram showing the flow of a token from Microsoft Entra ID to a customer application to a cache."::: -->
### Microsoft Entra Client Workflow
-1. Configure your client application to acquire a Microsoft Entra token for scope `https://redis.azure.com/.default` or `acca5fbb-b7e4-4009-81f1-37e38fd66d78/.default` using the [Microsoft Authentication Library (MSAL)](/azure/active-directory/develop/msal-overview).
+1. Configure your client application to acquire a Microsoft Entra token for scope, `https://redis.azure.com/.default` or `acca5fbb-b7e4-4009-81f1-37e38fd66d78/.default`, using the [Microsoft Authentication Library (MSAL)](/azure/active-directory/develop/msal-overview).
- <!-- (ADD code snippet) -->
-
-1. Update your Redis connection logic to use following `UserName` and `Password`:
-
- - `UserName` = Object ID of your managed identity or service principal
+1. Update your Redis connection logic to use the following `User` and `Password` (a minimal sketch follows this list):
+ - `User` = Object ID of your managed identity or service principal
- `Password` = Microsoft Entra token that you acquired using MSAL
- <!-- (ADD code snippet) -->
- 1. Ensure that your client executes a Redis [AUTH command](https://redis.io/commands/auth/) automatically before your Microsoft Entra token expires using:
- - `UserName` = Object ID of your managed identity or service principal
-
+ - `User` = Object ID of your managed identity or service principal
- `Password` = Microsoft Entra token refreshed periodically
- <!-- (ADD code snippet) -->
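
The following minimal Python sketch illustrates the workflow above, using MSAL to acquire a token for a service principal and the `redis-py` client to authenticate. The host name, tenant ID, client ID, client secret, and object ID are placeholders; production code should also refresh the token and reissue `AUTH` before it expires, as noted in the last step.

```python
# Minimal sketch of the Microsoft Entra client workflow above, using MSAL and redis-py.
# Host name, tenant ID, client ID/secret, and object ID below are placeholders.
# Error handling and scheduled token refresh are omitted for brevity.
import msal
import redis

TENANT_ID = "<tenant-id>"
CLIENT_ID = "<service-principal-app-id>"
CLIENT_SECRET = "<service-principal-secret>"
OBJECT_ID = "<service-principal-object-id>"   # used as the Redis user name
CACHE_HOST = "<cache-name>.redis.cache.windows.net"

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)

def get_entra_token() -> str:
    """Acquire a Microsoft Entra token for the Azure Cache for Redis scope."""
    result = app.acquire_token_for_client(scopes=["https://redis.azure.com/.default"])
    return result["access_token"]

# Connect over TLS on port 6380, passing the object ID as User and the token as Password.
r = redis.Redis(
    host=CACHE_HOST,
    port=6380,
    ssl=True,
    username=OBJECT_ID,
    password=get_entra_token(),
)
print(r.ping())

# Before the token expires, refresh it and re-authenticate on the same connection.
r.execute_command("AUTH", OBJECT_ID, get_entra_token())
```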
- ### Client library support The library [`Microsoft.Azure.StackExchangeRedis`](https://www.nuget.org/packages/Microsoft.Azure.StackExchangeRedis) is an extension of `StackExchange.Redis` that enables you to use Microsoft Entra ID to authenticate connections from a Redis client application to an Azure Cache for Redis. The extension manages the authentication token, including proactively refreshing tokens before they expire to maintain persistent Redis connections over multiple days.
The following table includes links to code samples, which demonstrate how to con
| ioredis | Node.js | [ioredis code sample](https://aka.ms/redis/aad/sample-code/js-ioredis) | | node-redis | Node.js | [node-redis code sample](https://aka.ms/redis/aad/sample-code/js-noderedis) |
-<a name='best-practices-for-azure-ad-authentication'></a>
- ### Best practices for Microsoft Entra authentication - Configure private links or firewall rules to protect your cache from a Denial of Service attack.
azure-cache-for-redis Cache Configure Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-configure-role-based-access-control.md
The following list contains some examples of permission strings for various scen
- Allow application to execute only _read_ commands
- Permissions string: `+@read *`
+ Permissions string: `+@read ~*`
- Allow application to execute _read_ command category and set command on keys with prefix `Az`.
The following list contains some examples of permission strings for various scen
1. [Configure Permissions](#permissions-for-your-data-access-policy) as per your requirements.
-1. From the Resource menu, select **Advanced settings**.
+1. To add a user to the access policy using Microsoft Entra ID, you must first enable Microsoft Entra ID by selecting **Authentication** from the Resource menu.
-1. If not checked already, Check the box labeled **(PREVIEW) Enable Microsoft Entra Authorization** and select **OK**. Then, select **Save**.
+1. In the working pane, select the **(PREVIEW) Enable Microsoft Entra Authentication** tab.
- :::image type="content" source="media/cache-azure-active-directory-for-authentication/cache-azure-ad-access-authorization.png" alt-text="Screenshot of Microsoft Entra ID access authorization.":::
+1. If it isn't already selected, check the box labeled **(PREVIEW) Enable Microsoft Entra Authentication** and select **OK**. Then, select **Save**.
-1. A dialog box displays a popup notifying you that upgrading is permanent and might cause a brief connection blip. Select **Yes.**
+ :::image type="content" source="media/cache-azure-active-directory-for-authentication/cache-enable-microsoft-entra.png" alt-text="Screenshot of Microsoft Entra ID access authorization.":::
+
+1. A dialog box appears, asking if you want to update your configuration and informing you that the update takes several minutes. Select **Yes.**
> [!IMPORTANT]
> Once the enable operation is complete, the nodes in your cache instance reboot to load the new configuration. We recommend performing this operation during your maintenance window or outside your peak business hours. The operation can take up to 30 minutes.
-<a name='configure-your-redis-client-to-use-azure-active-directory'></a>
- ## Configure your Redis client to use Microsoft Entra ID
-Now that you have configured Redis User and Data access policy for configuring role based access control, you need to update your client workflow to support authenticating using a specific user/password. To learn how to configure you client application to connect to your cache instance as a specific Redis User, see [Configure your Redis client to use Azure AD.](cache-azure-active-directory-for-authentication.md#configure-your-redis-client-to-use-azure-active-directory)
+Now that you have configured Redis User and Data access policy for configuring role based access control, you need to update your client workflow to support authenticating using a specific user/password. To learn how to configure your client application to connect to your cache instance as a specific Redis User, see [Configure your Redis client to use Microsoft Entra ID](cache-azure-active-directory-for-authentication.md#configure-your-redis-client-to-use-microsoft-entra-id).
## Next steps
azure-cache-for-redis Cache Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-configure.md
You can view and configure the following settings using the **Resource Menu**. T
- [Diagnose and solve problems](#diagnose-and-solve-problems) - [Events](#events) - [Settings](#settings)
- - [Access keys](#access-keys)
+ - [Authentication](#authentication)
- [Advanced settings](#advanced-settings) - [Scale](#scale) - [Cluster size](#cluster-size)
For information on moving resources from one resource group to another, and from
The **Settings** section allows you to access and configure the following settings for your cache. -- [Access keys](#access-keys)
+- [Authentication](#authentication)
+ - [Access keys](#access-keys)
+ - [(Preview) Microsoft Entra Authentication](#preview-microsoft-entra-authentication)
- [Advanced settings](#advanced-settings) - [Scale](#scale) - [Cluster size](#cluster-size)
The **Settings** section allows you to access and configure the following settin
- [Properties](#properties) - [Locks](#locks)
-### Access keys
+### Authentication
+
+You have two options for authentication: access keys and Microsoft Entra Authentication.
+
+#### Access keys
Select **Access keys** to view or regenerate the access keys for your cache. These keys are used by the clients connecting to your cache. +
+#### (Preview) Microsoft Entra Authentication
+
+Select **(Preview) Microsoft Entra Authentication** to use a password-free authentication mechanism by integrating with Microsoft Entra ID. This integration also includes role-based access control functionality provided through access control lists (ACLs) supported in open source Redis.
+++ ### Advanced settings
Use the **Maxmemory policy**, **maxmemory-reserved**, and **maxfragmentationmemo
For more information about `maxmemory` policies, see [Eviction policies](https://redis.io/topics/lru-cache#eviction-policies).
-The **maxmemory-reserved** setting configures the amount of memory in MB per instance in a cluster that is reserved for non-cache operations, such as replication during failover. Setting this value allows you to have a more consistent Redis server experience when your load varies. This value should be set higher for workloads that write large amounts of data. When memory is reserved for such operations, it's unavailable for storage of cached data. The minimum and maximum values on the slider are 10% and 60%, shown in megabytes. You must set the value in that range.
+The **maxmemory-reserved** setting configures the amount of memory in MB per instance in a cluster that is reserved for noncache operations, such as replication during failover. Setting this value allows you to have a more consistent Redis server experience when your load varies. This value should be set higher for workloads that write large amounts of data. When memory is reserved for such operations, it's unavailable for storage of cached data. The minimum and maximum values on the slider are 10% and 60%, shown in megabytes. You must set the value in that range.
The **maxfragmentationmemory-reserved** setting configures the amount of memory in MB per instance in a cluster that is reserved to accommodate for memory fragmentation. When you set this value, the Redis server experience is more consistent when the cache is full or close to full and the fragmentation ratio is high. When memory is reserved for such operations, it's unavailable for storage of cached data. The minimum and maximum values on the slider are 10% and 60%, shown in megabytes. You must set the value in that range.
-When choosing a new memory reservation value (**maxmemory-reserved** or **maxfragmentationmemory-reserved**), consider how this change might affect a cache that is already running with large amounts of data in it. For instance, if you have a 53-GB cache with 49 GB of data, then change the reservation value to 8 GB, this change drops the max available memory for the system down to 45 GB. If either your current `used_memory` or your `used_memory_rss` values are higher than the new limit of 45 GB, then the system will have to evict data until both `used_memory` and `used_memory_rss` are below 45 GB. Eviction can increase server load and memory fragmentation. For more information on cache metrics such as `used_memory` and `used_memory_rss`, see [Create your own metrics](cache-how-to-monitor.md#create-your-own-metrics).
+When choosing a new memory reservation value (**maxmemory-reserved** or **maxfragmentationmemory-reserved**), consider how this change might affect a cache that is already running with large amounts of data in it. For instance, if you have a 53-GB cache with 49 GB of data, then change the reservation value to 8 GB, this change drops the max available memory for the system down to 45 GB. If either your current `used_memory` or your `used_memory_rss` values are higher than the new limit of 45 GB, then the system has to evict data until both `used_memory` and `used_memory_rss` are below 45 GB. Eviction can increase server load and memory fragmentation. For more information on cache metrics such as `used_memory` and `used_memory_rss`, see [Create your own metrics](cache-how-to-monitor.md#create-your-own-metrics).
> [!IMPORTANT] > The **maxmemory-reserved** and **maxfragmentationmemory-reserved** settings are available for Basic, Standard, and Premium caches.
Presently, you can only use managed identities for storage. For more information
### Schedule updates
-The **Schedule updates** section on the left allows you to choose a maintenance window for Redis server updates for your cache.
+The **Schedule updates** section allows you to choose a maintenance window for Redis server updates for your cache.
> [!IMPORTANT] > The maintenance window applies only to Redis server updates, and not to any Azure updates or updates to the operating system of the VMs that host the cache.
For more information and instructions, see [Update channel and Schedule updates]
### Geo-replication
-**Geo-replication**, on the left, provides a mechanism for linking two Premium tier Azure Cache for Redis instances. One cache is named as the primary linked cache, and the other as the secondary linked cache. The secondary linked cache becomes read-only, and data written to the primary cache is replicated to the secondary linked cache. This functionality can be used to replicate a cache across Azure regions.
+**Geo-replication**, on the Resource menu, provides a mechanism for linking two Premium tier Azure Cache for Redis instances. One cache is named as the primary linked cache, and the other as the secondary linked cache. The secondary linked cache becomes read-only, and data written to the primary cache is replicated to the secondary linked cache. This functionality can be used to replicate a cache across Azure regions.
> [!IMPORTANT] > **Geo-replication** is only available for Premium tier caches. For more information and instructions, see [How to configure Geo-replication for Azure Cache for Redis](cache-how-to-geo-replication.md).
You can use import with Redis-compatible RDB files from any Redis server running
- Windows - any cloud provider such as Amazon Web Services and others
-Importing data is an easy way to create a cache with pre-populated data. During the import process, Azure Cache for Redis loads the RDB files from Azure storage into memory, and then inserts the keys into the cache.
+Importing data is an easy way to create a cache with prepopulated data. During the import process, Azure Cache for Redis loads the RDB files from Azure storage into memory, and then inserts the keys into the cache.
Export allows you to export the data stored in Azure Cache for Redis to Redis compatible RDB files. You can use this feature to move data from one Azure Cache for Redis instance to another or to another Redis server. During the export process, a temporary file is created on the VM that hosts the Azure Cache for Redis server instance. The temporary file is uploaded to the designated storage account. When the export operation completes with either a status of success or failure, the temporary file is deleted.
Export allows you to export the data stored in Azure Cache for Redis to Redis co
### Reboot
-The **Reboot** item on the left allows you to reboot the nodes of your cache. This reboot capability enables you to test your application for resiliency if there's a failure of a cache node.
+The **Reboot** item allows you to reboot the nodes of your cache. This reboot capability enables you to test your application for resiliency if there's a failure of a cache node.
:::image type="content" source="media/cache-configure/redis-cache-reboot.png" alt-text="Reboot":::
Use **Insights** to see groups of predefined tiles and charts to use as starting
For more information, see [Use Insights for predefined charts](cache-how-to-monitor.md#use-insights-for-predefined-charts).
-<!-- create link to new content for Insights when it is added by the monitor team -->
- ### Metrics Select **Metrics** to Create your own custom chart to track the metrics you want to see for your cache. For more information, see [Create alerts](cache-how-to-monitor.md#create-alerts).
By default, cache metrics in Azure Monitor are [stored for 30 days](../azure-mon
### Advisor recommendations
-The **Advisor recommendations** on the left displays recommendations for your cache. During normal operations, no recommendations are displayed.
+The **Advisor recommendations** section displays recommendations for your cache. During normal operations, no recommendations are displayed.
:::image type="content" source="media/cache-configure/redis-cache-no-recommendations.png" alt-text="Screenshot that shows where the Advisor recommendations are displayed but there are no current ones.":::
If any conditions occur during the operations of your cache such as imminent cha
Further information can be found on the **Recommendations** in the working pane of the Azure portal. :::image type="content" source="media/cache-configure/redis-cache-recommendations.png" alt-text="Screenshot that shows Advisor recommendations":::
-<!-- How do we trigger an event that causes a good recommendation for the image? -->
You can monitor these metrics on the [Monitoring](cache-how-to-monitor.md) section of the Resource menu.
azure-cache-for-redis Cache How To Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-scale.md
There are fundamentally two ways to scale an Azure Cache for Redis Instance:
- _Scaling out_ divides the cache instance into more nodes of the same size, increasing memory, vCPUs, and network bandwidth through parallelization. Scaling out is also referred to as _horizontal scaling_ or _sharding_. The opposite of scaling out is **Scaling in**. In the Redis community, scaling out is frequently called [_clustering_](https://redis.io/docs/management/scaling/). - ## Scope of availability |Tier | Basic and Standard | Premium | Enterprise and Enterprise Flash |
You can scale out/in with the following restrictions:
:::image type="content" source="media/cache-how-to-scale/scaling-notification.png" alt-text="Screenshot showing the notification of scaling.":::
-1. When scaling is complete, the status changes from **Scaling** to **Running**.
+1. When scaling is complete, the status changes from **Scaling** to **Running**.
> [!NOTE] > When you scale a cache up or down using the portal, both `maxmemory-reserved` and `maxfragmentationmemory-reserved` settings automatically scale in proportion to the cache size.
It takes a while for the cache to create. You can monitor progress on the Azure
> There are some minor differences required in your client application when clustering is configured. For more information, see [Do I need to make any changes to my client application to use clustering?](#do-i-need-to-make-any-changes-to-my-client-application-to-use-clustering) > - For sample code on working with clustering with the StackExchange.Redis client, see the [clustering.cs](https://github.com/rustd/RedisSamples/blob/master/HelloWorld/Clustering.cs) portion of the [Hello World](https://github.com/rustd/RedisSamples/tree/master/HelloWorld) sample. #### Scale a running Premium cache in or out
For more information on scaling with Azure CLI, see [Change settings of an exist
> > + ## How to scale up and out - Enterprise and Enterprise Flash tiers The Enterprise and Enterprise Flash tiers are able to scale up and scale out in one operation. Other tiers require separate operations for each action.
The Enterprise and Enterprise Flash tiers are able to scale up and scale out in
> The Enterprise and Enterprise Flash tiers do not yet support _scale down_ or _scale in_ operations. > - ### Scale using the Azure portal 1. To scale your cache, [browse to the cache](cache-configure.md#configure-azure-cache-for-redis-settings) in the [Azure portal](https://portal.azure.com) and select **Scale** from the Resource menu.
The Enterprise and Enterprise Flash tiers are able to scale up and scale out in
:::image type="content" source="media/cache-how-to-scale/cache-enterprise-notifications.png" alt-text="Screenshot showing notification of scaling an Enterprise cache."::: - 1. When scaling is complete, the status changes from **Scaling** to **Running**. - ### Scale using PowerShell You can scale your Azure Cache for Redis instances with PowerShell by using the [Update-AzRedisEnterpriseCache](/powershell/module/az.redisenterprisecache/update-azredisenterprisecache) cmdlet. You can modify the `Sku` property to scale the instance up. You can modify the `Capacity` property to scale out the instance. The following example shows how to scale a cache named `myCache` to an Enterprise E20 (25 GB) instance with capacity of 4.
No, your cache name and keys are unchanged during a scaling operation.
- When you scale a **Basic** cache to a **Standard** cache, the data in the cache is typically preserved. - When you scale a **Standard**, **Premium**, **Enterprise**, or **Enterprise Flash** cache to a larger size, all data is typically preserved. When you scale a Standard or Premium cache to a smaller size, data can be lost if the data size exceeds the new smaller size when it's scaled down. If data is lost when scaling down, keys are evicted using the [allkeys-lru](https://redis.io/topics/lru-cache) eviction policy.
- ### Can I use all the features of Premium tier after scaling?
+### Can I use all the features of Premium tier after scaling?
No, some features can only be set when you create a cache in Premium tier, and are not available after scaling.
In the Azure portal, you can see the scaling operation in progress. When scaling
### Do I need to make any changes to my client application to use clustering?
-* When clustering is enabled, only database 0 is available. If your client application uses multiple databases and it tries to read or write to a database other than 0, the following exception is thrown: `Unhandled Exception: StackExchange.Redis.RedisConnectionException: ProtocolFailure on GET >` `StackExchange.Redis.RedisCommandException: Multiple databases are not supported on this server; cannot switch to database: 6`
+- When clustering is enabled, only database 0 is available. If your client application uses multiple databases and it tries to read or write to a database other than 0, the following exception is thrown: `Unhandled Exception: StackExchange.Redis.RedisConnectionException: ProtocolFailure on GET >` `StackExchange.Redis.RedisCommandException: Multiple databases are not supported on this server; cannot switch to database: 6`
For more information, see [Redis Cluster Specification - Implemented subset](https://redis.io/topics/cluster-spec#implemented-subset).
-* If you're using [StackExchange.Redis](https://www.nuget.org/packages/StackExchange.Redis/), you must use 1.0.481 or later. You connect to the cache using the same [endpoints, ports, and keys](cache-configure.md#properties) that you use when connecting to a cache where clustering is disabled. The only difference is that all reads and writes must be done to database 0.
+- If you're using [StackExchange.Redis](https://www.nuget.org/packages/StackExchange.Redis/), you must use 1.0.481 or later. You connect to the cache using the same [endpoints, ports, and keys](cache-configure.md#properties) that you use when connecting to a cache where clustering is disabled. The only difference is that all reads and writes must be done to database 0.
Other clients may have different requirements. See [Do all Redis clients support clustering?](#do-all-redis-clients-support-clustering)
-* If your application uses multiple key operations batched into a single command, all keys must be located in the same shard. To locate keys in the same shard, see [How are keys distributed in a cluster?](#how-are-keys-distributed-in-a-cluster)
-* If you're using Redis ASP.NET Session State provider, you must use 2.0.1 or higher. See [Can I use clustering with the Redis ASP.NET Session State and Output Caching providers?](#can-i-use-clustering-with-the-redis-aspnet-session-state-and-output-caching-providers)
+- If your application uses multiple key operations batched into a single command, all keys must be located in the same shard. To locate keys in the same shard, see [How are keys distributed in a cluster?](#how-are-keys-distributed-in-a-cluster)
+- If you're using Redis ASP.NET Session State provider, you must use 2.0.1 or higher. See [Can I use clustering with the Redis ASP.NET Session State and Output Caching providers?](#can-i-use-clustering-with-the-redis-aspnet-session-state-and-output-caching-providers)
> [!IMPORTANT] > When using the Enterprise or Enterprise Flash tiers, you are given the choice of _OSS Cluster Mode_ or _Enterprise Cluster Mode_. OSS Cluster Mode is the same as clustering on the Premium tier and follows the open source clustering specification. Enterprise Cluster Mode can be less performant, but uses Redis Enterprise clustering which doesn't require any client changes to use. For more information, see [Clustering on Enterprise](cache-best-practices-enterprise-tiers.md#clustering-on-enterprise).
In the Azure portal, you can see the scaling operation in progress. When scaling
Per the Redis documentation on [Keys distribution model](https://redis.io/topics/cluster-spec#keys-distribution-model): The key space is split into 16,384 slots. Each key is hashed and assigned to one of these slots, which are distributed across the nodes of the cluster. You can configure which part of the key is hashed to ensure that multiple keys are located in the same shard using hash tags.
-* Keys with a hash tag - if any part of the key is enclosed in `{` and `}`, only that part of the key is hashed for the purposes of determining the hash slot of a key. For example, the following three keys would be located in the same shard: `{key}1`, `{key}2`, and `{key}3` since only the `key` part of the name is hashed. For a complete list of keys hash tag specifications, see [Keys hash tags](https://redis.io/topics/cluster-spec#keys-hash-tags).
-* Keys without a hash tag - the entire key name is used for hashing, resulting in a statistically even distribution across the shards of the cache.
+- Keys with a hash tag - if any part of the key is enclosed in `{` and `}`, only that part of the key is hashed for the purposes of determining the hash slot of a key. For example, the following three keys would be located in the same shard: `{key}1`, `{key}2`, and `{key}3` since only the `key` part of the name is hashed. For a complete list of keys hash tag specifications, see [Keys hash tags](https://redis.io/topics/cluster-spec#keys-hash-tags).
+- Keys without a hash tag - the entire key name is used for hashing, resulting in a statistically even distribution across the shards of the cache.
For best performance and throughput, we recommend distributing the keys evenly. If you're using keys with a hash tag, it's the application's responsibility to ensure the keys are distributed evenly.
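
As a small illustration of the hash tag behavior described above, the following Python sketch (using `redis-py`; the host name, access key, and key names are placeholders) batches a multi-key operation over keys that share a hash tag, so they resolve to the same hash slot and shard.

```python
# Small illustration of hash tags: keys sharing the same {tag} hash to the same slot,
# so multi-key commands such as MGET succeed on a clustered cache.
# The host name, access key, and key names below are placeholders.
import redis

r = redis.Redis(host="<cache-name>.redis.cache.windows.net", port=6380,
                ssl=True, password="<access-key>")

# Only the text inside {} is hashed, so these three keys land on the same shard.
r.mset({"{user:1000}:name": "Ada",
        "{user:1000}:email": "ada@example.com",
        "{user:1000}:plan": "premium"})

print(r.mget("{user:1000}:name", "{user:1000}:email", "{user:1000}:plan"))

# Without a hash tag, each whole key name is hashed and keys spread evenly across shards;
# a multi-key command over such keys can then span slots and fail on a clustered cache.
```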
The Redis clustering protocol requires each client to connect to each shard dire
> ### How do I connect to my cache when clustering is enabled?
-You can connect to your cache using the same [endpoints](cache-configure.md#properties), [ports](cache-configure.md#properties), and [keys](cache-configure.md#access-keys) that you use when connecting to a cache that doesn't have clustering enabled. Redis manages the clustering on the backend so you don't have to manage it from your client.
+You can connect to your cache using the same [endpoints](cache-configure.md#properties), [ports](cache-configure.md#properties), and [keys](cache-configure.md#authentication) that you use when connecting to a cache that doesn't have clustering enabled. Redis manages the clustering on the backend so you don't have to manage it from your client.
### Can I directly connect to the individual shards of my cache?
Clustering is only available for Premium, Enterprise, and Enterprise Flash cache
### Can I use clustering with the Redis ASP.NET Session State and Output Caching providers?
-* **Redis Output Cache provider** - no changes required.
-* **Redis Session State provider** - to use clustering, you must use [RedisSessionStateProvider](https://www.nuget.org/packages/Microsoft.Web.RedisSessionStateProvider) 2.0.1 or higher or an exception is thrown, which is a breaking change. For more information, see [v2.0.0 Breaking Change Details](https://github.com/Azure/aspnet-redis-providers/wiki/v2.0.0-Breaking-Change-Details).
+- **Redis Output Cache provider** - no changes required.
+- **Redis Session State provider** - to use clustering, you must use [RedisSessionStateProvider](https://www.nuget.org/packages/Microsoft.Web.RedisSessionStateProvider) 2.0.1 or higher or an exception is thrown, which is a breaking change. For more information, see [v2.0.0 Breaking Change Details](https://github.com/Azure/aspnet-redis-providers/wiki/v2.0.0-Breaking-Change-Details).
### I'm getting MOVE exceptions when using StackExchange.Redis and clustering, what should I do?+ If you're using StackExchange.Redis and receive `MOVE` exceptions when using clustering, ensure that you're using [StackExchange.Redis 1.1.603](https://www.nuget.org/packages/StackExchange.Redis/) or later. For instructions on configuring your .NET applications to use StackExchange.Redis, see [Configure the cache clients](cache-dotnet-how-to-use-azure-redis-cache.md#configure-the-cache-client). ### What is the difference between OSS Clustering and Enterprise Clustering on Enterprise tier caches?
azure-cache-for-redis Cache Tutorial Vector Similarity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-tutorial-vector-similarity.md
In this tutorial, you learn how to:
To successfully make a call against Azure OpenAI, you need an **endpoint** and a **key**. You also need an **endpoint** and a **key** to connect to Azure Cache for Redis.
-1. Go to your Azure Open AI resource in the Azure portal.
+1. Go to your Azure OpenAI resource in the Azure portal.
1. Locate **Endpoint and Keys** in the **Resource Management** section. Copy your endpoint and access key as you'll need both for authenticating your API calls. An example endpoint is: `https://docs-test-001.openai.azure.com`. You can use either `KEY1` or `KEY2`.
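
As a brief sketch of how the copied endpoint and key are typically used, the following Python snippet creates an Azure OpenAI client with the `openai` package and requests an embedding; the environment variable names, deployment name, and API version are placeholders and may differ from the tutorial's own code.

```python
# Sketch only: use the Azure OpenAI endpoint and key copied above to generate an embedding.
# The deployment name and api_version are placeholders; adjust them for your resource.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://docs-test-001.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_KEY"],              # KEY1 or KEY2
    api_version="2023-05-15",
)

response = client.embeddings.create(
    model="<your-embedding-deployment>",  # name of your text-embedding deployment
    input="Example text to vectorize",
)
vector = response.data[0].embedding
print(len(vector))
```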
azure-functions Durable Functions Instance Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-instance-management.md
Title: Manage instances in Durable Functions - Azure
description: Learn how to manage instances in the Durable Functions extension for Azure Functions. Previously updated : 12/07/2022 Last updated : 02/13/2024 ms.devlang: csharp # ms.devlang: csharp, java, javascript, python
Functions can send instances of these objects to external systems to monitor or
public static void SendInstanceInfo( [ActivityTrigger] IDurableActivityContext ctx, [DurableClient] IDurableOrchestrationClient client,
- [DocumentDB(
+ [CosmosDB(
databaseName: "MonitorDB",
- collectionName: "HttpManagementPayloads",
- ConnectionStringSetting = "CosmosDBConnection")]out dynamic document)
+ containerName: "HttpManagementPayloads",
+ Connection = "CosmosDBConnectionSetting")]out dynamic document)
{ HttpManagementPayload payload = client.CreateHttpManagementPayload(ctx.InstanceId);
azure-functions Functions Add Output Binding Cosmos Db Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-add-output-binding-cosmos-db-vs-code.md
Title: Connect Azure Functions to Azure Cosmos DB using Visual Studio Code description: Learn how to connect Azure Functions to an Azure Cosmos DB account by adding an output binding to your Visual Studio Code project. Previously updated : 02/09/2023 Last updated : 02/13/2024 zone_pivot_groups: programming-languages-set-functions-temp ms.devlang: csharp
In the [previous quickstart article](./create-first-function-vs-code-csharp.md),
|Prompt| Selection| |--|--|
- |**Enter new app setting name**| Type `CosmosDbConnectionString`.|
- |**Enter value for "CosmosDbConnectionString"**| Paste the connection string of your Azure Cosmos DB account you copied.|
+ |**Enter new app setting name**| Type `CosmosDbConnectionSetting`.|
+ |**Enter value for "CosmosDbConnectionSetting"**| Paste the connection string of your Azure Cosmos DB account you copied. You can also configure [Microsoft Entra identity](./functions-bindings-cosmosdb-v2-trigger.md#connections) as an alternative.|
- This creates an application setting named connection `CosmosDbConnectionString` in your function app in Azure. Now, you can download this setting to your local.settings.json file.
+ This creates an application setting named `CosmosDbConnectionSetting` in your function app in Azure. Now, you can download this setting to your local.settings.json file.
1. Press <kbd>F1</kbd> again to open the command palette, then search for and run the command `Azure Functions: Download Remote Settings...`.
Except for HTTP and timer triggers, bindings are implemented as extension packag
# [Isolated worker model](#tab/isolated-process) ```command
-dotnet add package Microsoft.Azure.Functions.Worker.Extensions.CosmosDB --version 3.0.9
+dotnet add package Microsoft.Azure.Functions.Worker.Extensions.CosmosDB
``` # [In-process model](#tab/in-process) ```command
-dotnet add package Microsoft.Azure.WebJobs.Extensions.CosmosDB --version 3.0.10
+dotnet add package Microsoft.Azure.WebJobs.Extensions.CosmosDB
``` ::: zone-end
The `documentsOut` parameter is an `IAsyncCollector<T>` type, which represents a
-Specific attributes specify the name of the container and the name of its parent database. The connection string for your Azure Cosmos DB account is set by the `CosmosDbConnectionString`.
+Specific attributes specify the name of the container and the name of its parent database. The connection string for your Azure Cosmos DB account is set by the `CosmosDbConnectionSetting`.
::: zone-end
To create a binding, right-click (Ctrl+select on macOS) the *function.json* file
| **The Azure Cosmos DB database where data will be written** | `my-database` | The name of the Azure Cosmos DB database containing the target container. |
| **Database collection where data will be written** | `my-container` | The name of the Azure Cosmos DB container where the JSON documents will be written. |
| **If true, creates the Azure Cosmos DB database and collection** | `false` | The target database and container already exist. |
-| **Select setting from "local.setting.json"** | `CosmosDbConnectionString` | The name of an application setting that contains the connection string for the Azure Cosmos DB account. |
+| **Select setting from "local.setting.json"** | `CosmosDbConnectionSetting` | The name of an application setting that contains the connection string for the Azure Cosmos DB account. |
| **Partition key (optional)** | *leave blank* | Only required when the output binding creates the container. | | **Collection throughput (optional)** | *leave blank* | Only required when the output binding creates the container. |
A binding is added to the `bindings` array in your *function.json*, which should
"direction": "out", "name": "outputDocument", "databaseName": "my-database",
- "collectionName": "my-container",
+ "containerName": "my-container",
"createIfNotExists": "false",
- "connectionStringSetting": "CosmosDbConnectionString"
+ "connection": "CosmosDbConnectionSetting"
} ```
To create a binding, right-select (Ctrl+select on macOS) the *function.json* fil
| **The Azure Cosmos DB database where data will be written** | `my-database` | The name of the Azure Cosmos DB database containing the target container. |
| **Database collection where data will be written** | `my-container` | The name of the Azure Cosmos DB container where the JSON documents will be written. |
| **If true, creates the Azure Cosmos DB database and collection** | `false` | The target database and container already exist. |
-| **Select setting from "local.setting.json"** | `CosmosDbConnectionString` | The name of an application setting that contains the connection string for the Azure Cosmos DB account. |
+| **Select setting from "local.setting.json"** | `CosmosDbConnectionSetting` | The name of an application setting that contains the connection string for the Azure Cosmos DB account. |
| **Partition key (optional)** | *leave blank* | Only required when the output binding creates the container. | | **Collection throughput (optional)** | *leave blank* | Only required when the output binding creates the container. |
A binding is added to the `bindings` array in your *function.json*, which should
"direction": "out", "name": "outputDocument", "databaseName": "my-database",
- "collectionName": "my-container",
+ "containerName": "my-container",
"createIfNotExists": "false",
- "connectionStringSetting": "CosmosDbConnectionString"
+ "connection": "CosmosDbConnectionSetting"
} ```
Binding attributes are defined directly in the *function_app.py* file. You use t
```python @app.cosmos_db_output(arg_name="outputDocument", database_name="my-database",
- collection_name="my-container", connection_string_setting="CosmosDbConnectionString")
+ container_name="my-container", connection="CosmosDbConnectionSetting")
```
-In this code, `arg_name` identifies the binding parameter referenced in your code, `database_name` and `collection_name` are the database and collection names that the binding writes to, and `connection_string_setting` is the name of an application setting that contains the connection string for the Storage account, which is in the CosmosDbConnectionString setting in the *local.settings.json* file.
+In this code, `arg_name` identifies the binding parameter referenced in your code, `database_name` and `container_name` are the database and container names that the binding writes to, and `connection` is the name of an application setting that contains the connection string for the Azure Cosmos DB account, which is in the `CosmosDbConnectionSetting` setting in the *local.settings.json* file.
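
A minimal sketch of a function that uses this decorator is shown below, assuming the HTTP route and resource names from this quickstart; your project's trigger and bindings may differ.

```python
# Minimal sketch: an HTTP-triggered function writing one document to Azure Cosmos DB.
# Route, database, and container names follow the quickstart; adjust to your project.
import azure.functions as func

app = func.FunctionApp()

@app.route(route="hello", auth_level=func.AuthLevel.ANONYMOUS)
@app.cosmos_db_output(arg_name="outputDocument", database_name="my-database",
                      container_name="my-container", connection="CosmosDbConnectionSetting")
def hello(req: func.HttpRequest, outputDocument: func.Out[func.Document]) -> func.HttpResponse:
    name = req.params.get("name", "Azure")
    # The bound output document is written to the container when the function returns.
    outputDocument.set(func.Document.from_dict({"id": name}))
    return func.HttpResponse(f"Hello, {name}.")
```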
public static async Task<IActionResult> Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req, [CosmosDB( databaseName: "my-database",
- collectionName: "my-container",
- ConnectionStringSetting = "CosmosDbConnectionString")]IAsyncCollector<dynamic> documentsOut,
+ containerName: "my-container",
+ Connection = "CosmosDbConnectionSetting")]IAsyncCollector<dynamic> documentsOut,
ILogger log) { log.LogInformation("C# HTTP trigger function processed a request.");
app = func.FunctionApp()
@app.function_name(name="HttpTrigger1") @app.route(route="hello", auth_level=func.AuthLevel.ANONYMOUS) @app.queue_output(arg_name="msg", queue_name="outqueue", connection="AzureWebJobsStorage")
-@app.cosmos_db_output(arg_name="outputDocument", database_name="my-database", collection_name="my-container", connection_string_setting="CosmosDbConnectionString")
+@app.cosmos_db_output(arg_name="outputDocument", database_name="my-database", container_name="my-container", connection="CosmosDbConnectionSetting")
def test_function(req: func.HttpRequest, msg: func.Out[func.QueueMessage], outputDocument: func.Out[func.Document]) -> func.HttpResponse: logging.info('Python HTTP trigger function processed a request.')
azure-functions Functions Event Hub Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-event-hub-cosmos-db.md
Title: 'Tutorial: Use Java functions with Azure Cosmos DB and Event Hubs'
description: This tutorial shows you how to consume events from Event Hubs to make updates in Azure Cosmos DB using a function written in Java. Previously updated : 02/13/2024 Last updated : 02/14/2024 ms.devlang: java
az functionapp config appsettings set \
--settings \ AzureWebJobsStorage=$AZURE_WEB_JOBS_STORAGE \ EventHubConnectionString=$EVENT_HUB_CONNECTION_STRING \
- CosmosDBConnectionString=$COSMOS_DB_CONNECTION_STRING
+ CosmosDBConnectionSetting=$COSMOS_DB_CONNECTION_STRING
``` # [Cmd](#tab/cmd)
az functionapp config appsettings set ^
--settings ^ AzureWebJobsStorage=%AZURE_WEB_JOBS_STORAGE% ^ EventHubConnectionString=%EVENT_HUB_CONNECTION_STRING% ^
- CosmosDBConnectionString=%COSMOS_DB_CONNECTION_STRING%
+ CosmosDBConnectionSetting=%COSMOS_DB_CONNECTION_STRING%
```
azure-functions Functions How To Azure Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-how-to-azure-devops.md
steps:
Use one of the following samples to create a YAML file to build an app for a specific Python version. Python is only supported for function apps running on Linux.
-**Version 3.7**
- ```yaml pool: vmImage: ubuntu-latest steps: - task: UsePythonVersion@0
- displayName: "Setting Python version to 3.7 as required by functions"
+ displayName: "Set Python version to 3.9"
inputs:
- versionSpec: '3.7'
+ versionSpec: '3.9'
architecture: 'x64' - bash: | if [ -f extensions.csproj ]
steps:
artifactName: 'drop' ```
-**Version 3.6**
-
-```yaml
-pool:
- vmImage: ubuntu-latest
-steps:
-- task: UsePythonVersion@0
- displayName: "Setting Python version to 3.6 as required by functions"
- inputs:
- versionSpec: '3.6'
- architecture: 'x64'
-- bash: |
- if [ -f extensions.csproj ]
- then
- dotnet build extensions.csproj --output ./bin
- fi
- pip install --target="./.python_packages/lib/python3.6/site-packages" -r ./requirements.txt
-- task: ArchiveFiles@2
- displayName: "Archive files"
- inputs:
- rootFolderOrFile: "$(System.DefaultWorkingDirectory)"
- includeRootFolder: false
- archiveFile: "$(System.DefaultWorkingDirectory)/build$(Build.BuildId).zip"
-- task: PublishBuildArtifacts@1
- inputs:
- PathtoPublish: '$(System.DefaultWorkingDirectory)/build$(Build.BuildId).zip'
- artifactName: 'drop'
-```
- # [PowerShell](#tab/powershell) You can use the following sample to create a YAML file to package a PowerShell app. PowerShell is supported only for Windows Azure Functions.
steps:
Use one of the following samples to create a YAML file to build an app for a specific Python version. Python is only supported for function apps running on Linux.
-**Version 3.7**
```yaml pool: vmImage: ubuntu-latest steps: - task: UsePythonVersion@0
- displayName: "Setting Python version to 3.7 as required by functions"
+ displayName: "Set Python version to 3.9"
inputs:
- versionSpec: '3.7'
+ versionSpec: '3.9'
architecture: 'x64' - bash: | if [ -f extensions.csproj ]
steps:
artifactName: 'drop' ```
-**Version 3.6**
-
-```yaml
-pool:
- vmImage: ubuntu-latest
-steps:
-- task: UsePythonVersion@0
- displayName: "Setting Python version to 3.6 as required by functions"
- inputs:
- versionSpec: '3.6'
- architecture: 'x64'
-- bash: |
- if [ -f extensions.csproj ]
- then
- dotnet build extensions.csproj --output ./bin
- fi
- pip install --target="./.python_packages/lib/python3.6/site-packages" -r ./requirements.txt
-- task: ArchiveFiles@2
- displayName: "Archive files"
- inputs:
- rootFolderOrFile: "$(System.DefaultWorkingDirectory)"
- includeRootFolder: false
- archiveFile: "$(System.DefaultWorkingDirectory)/build$(Build.BuildId).zip"
-- task: PublishBuildArtifacts@1
- inputs:
- PathtoPublish: '$(System.DefaultWorkingDirectory)/build$(Build.BuildId).zip'
- artifactName: 'drop'
-```
- # [PowerShell](#tab/powershell) You can use the following sample to create a YAML file to package a PowerShell app. PowerShell is supported only for Windows Azure Functions.
azure-monitor Azure Monitor Agent Extension Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md
We strongly recommend that you always update to the latest version, or opt in to the
## Version details | Release Date | Release notes | Windows | Linux | |:|:|:|:|
-| January 2024 |**Known Issues**<ul><li>The agent extension code size is beyond the deployment limit set by Arc, thus 1.29.5 will not install on Arc enabled servers. Fix is coming in 1.29.6.</li></ul>**Windows**<ul><li>Added support for Transport Layer Security 1.3</li><li>Reverted a change to enable multiple IIS subscriptions to use same filter. Feature will be redeployed once memory leak is fixed.</li><li>Improved ETW event throughput rate</li></ul>**Linux**<ul><li>Fix Error messages logged intended for mdsd.err went to mdsd.warn instead in 1.29.4 only. Likely error messages: "Exception while uploading to Gig-LA : ...", "Exception while uploading to ODS: ...", "Failed to upload to ODS: ..."</li><li>Syslog time zones incorrect: AMA now uses machine current time when AMA receives an event to populate the TimeGenerated field. The previous behavior parsed the time zone from the Syslog event which caused incorrect times if a device sent an event from a time zone different than the AMA collector machine.</li></ul> | 1.23.0 | 1.29.5 |
-| December 2023 |**Known Issues**<ul><li>The agent extension code size is beyond the deployment limit set by Arc, thus 1.29.4 will not install on Arc enabled servers. Fix is coming in 1.29.6.</li><li>Multiple IIS subscriptions causes a memory leak. feature reverted in 1.23.0.</ul>**Windows** <ul><li>Prevent CPU spikes by not using bookmark when resetting an Event Log subscription</li><li>Added missing fluentbit exe to AMA client setup for Custom Log support</li><li>Updated to latest AzureCredentialsManagementService and DsmsCredentialsManagement package</li><li>Update ME to v2.2023.1027.1417</li></ul>**Linux**<ul><li>Support for TLS V1.3</li><li>Support for nopri in Syslog</li><li>Ability to set disk quota from DCR Agent Settings</li><li>Add ARM64 Ubuntu 22 support</li><li>**Fixes**<ul><li>SysLog</li><ul><li>Parse syslog Palo Alto CEF with multiple space characters following the hostname</li><li>Fix an issue with incorrectly parsing messages containing two '\n' chars in a row</li><li>Improved support for non-RFC compliant devices</li><li>Support infoblox device messages containing both hostname and IP headers</li></ul><li>Fix AMA crash in RHEL 7.2</li><li>Remove dependency on "which" command</li><li>Fix port conflicts due to AMA using 13000 </li><li>Reliability and Performance improvements</li></ul></li></ul>| 1.22.0 | 1.29.4|
-| October 2023| **Windows** <ul><li>Minimize CPU spikes when resetting an Event Log subscription</li><li>Enable multiple IIS subscriptions to use same filter</li><li>Cleanup files and folders for inactive tenants in multi-tenant mode</li><li>AMA installer will not install unnecessary certs</li><li>AMA emits Telemetry table locally</li><li>Update Metric Extension to v2.2023.721.1630</li><li>Update AzureSecurityPack to v4.29.0.4</li><li>Update AzureWatson to v1.0.99</li></ul>**Linux**<ul><li> Add support for Process metrics counters for Log Analytics upload and Azure Monitor Metrics</li><li>Use rsyslog omfwd TCP for improved syslog reliability</li><li>Support Palo Alto CEF logs where hostname is followed by 2 spaces</li><li>Bug and reliability improvements</li></ul> |1.21.0|1.28.11|
-| September 2023| **Windows** <ul><li>Fix issue with high CPU usage due to excessive Windows Event Logs subscription reset</li><li>Reduce fluentbit resource usage by limiting tracked files older than 3 days and limiting logging to errors only</li><li>Fix race-condition where resource_id is unavailable when agent is restarted</li><li>Fix race-condition when vm-extension provision agent (aka GuestAgent) is issuing a disable-vm-extension command to AMA.</li><li>Update MetricExtension version to 2.2023.721.1630</li><li>Update Troubleshooter to v1.5.14 </li></ul>|1.20.0| None |
+| January 2024 |**Known Issues**<ul><li>The agent extension code size is beyond the deployment limit set by Arc, thus 1.29.5 won't install on Arc enabled servers. Fix is coming in 1.29.6.</li></ul>**Windows**<ul><li>Added support for Transport Layer Security 1.3</li><li>Reverted a change to enable multiple IIS subscriptions to use same filter. Feature will be redeployed once memory leak is fixed.</li><li>Improved ETW event throughput rate</li></ul>**Linux**<ul><li>Fix Error messages logged intended for mdsd.err went to mdsd.warn instead in 1.29.4 only. Likely error messages: "Exception while uploading to Gig-LA : ...", "Exception while uploading to ODS: ...", "Failed to upload to ODS: ..."</li><li>Syslog time zones incorrect: AMA now uses machine current time when AMA receives an event to populate the TimeGenerated field. The previous behavior parsed the time zone from the Syslog event which caused incorrect times if a device sent an event from a time zone different than the AMA collector machine.</li></ul> | 1.23.0 | 1.29.5 |
+| December 2023 |**Known Issues**<ul><li>The agent extension code size is beyond the deployment limit set by Arc, thus 1.29.4 won't install on Arc enabled servers. Fix is coming in 1.29.6.</li><li>Multiple IIS subscriptions causes a memory leak. feature reverted in 1.23.0.</ul>**Windows** <ul><li>Prevent CPU spikes by not using bookmark when resetting an Event Log subscription</li><li>Added missing fluentbit exe to AMA client setup for Custom Log support</li><li>Updated to latest AzureCredentialsManagementService and DsmsCredentialsManagement package</li><li>Update ME to v2.2023.1027.1417</li></ul>**Linux**<ul><li>Support for TLS V1.3</li><li>Support for nopri in Syslog</li><li>Ability to set disk quota from DCR Agent Settings</li><li>Add ARM64 Ubuntu 22 support</li><li>**Fixes**<ul><li>SysLog</li><ul><li>Parse syslog Palo Alto CEF with multiple space characters following the hostname</li><li>Fix an issue with incorrectly parsing messages containing two '\n' chars in a row</li><li>Improved support for non-RFC compliant devices</li><li>Support infoblox device messages containing both hostname and IP headers</li></ul><li>Fix AMA crash in RHEL 7.2</li><li>Remove dependency on "which" command</li><li>Fix port conflicts due to AMA using 13000 </li><li>Reliability and Performance improvements</li></ul></li></ul>| 1.22.0 | 1.29.4|
+| October 2023| **Windows** <ul><li>Minimize CPU spikes when resetting an Event Log subscription</li><li>Enable multiple IIS subscriptions to use same filter</li><li>Cleanup files and folders for inactive tenants in multitenant mode</li><li>AMA installer won't install unnecessary certs</li><li>AMA emits Telemetry table locally</li><li>Update Metric Extension to v2.2023.721.1630</li><li>Update AzureSecurityPack to v4.29.0.4</li><li>Update AzureWatson to v1.0.99</li></ul>**Linux**<ul><li> Add support for Process metrics counters for Log Analytics upload and Azure Monitor Metrics</li><li>Use rsyslog omfwd TCP for improved syslog reliability</li><li>Support Palo Alto CEF logs where hostname is followed by 2 spaces</li><li>Bug and reliability improvements</li></ul> |1.21.0|1.28.11|
+| September 2023| **Windows** <ul><li>Fix issue with high CPU usage due to excessive Windows Event Logs subscription reset</li><li>Reduce fluentbit resource usage by limiting tracked files older than 3 days and limiting logging to errors only</li><li>Fix race-condition where resource_id is unavailable when agent is restarted</li><li>Fix race-condition when vm-extension provision agent (also known as GuestAgent) is issuing a disable-vm-extension command to AMA.</li><li>Update MetricExtension version to 2.2023.721.1630</li><li>Update Troubleshooter to v1.5.14 </li></ul>|1.20.0| None |
| August 2023| **Windows** <ul><li>AMA: Allow prefixes in the tag names to handle regression</li><li>Updating package version for AzSecPack 4.28 release</li></ui>|1.19.0| None | | July 2023| **Windows** <ul><li>Fix crash when Event Log subscription callback throws errors.<li>MetricExtension updated to 2.2023.609.2051</li></ui> |1.18.0|None|
-| June 2023| **Windows** <ul><li>Add new file path column to custom logs table</li><li>Config setting to disable custom IMDS endpoint in Tenant.json file</li><li>FluentBit binaries signed with Microsoft customer Code Sign cert</li><li>Minimize number of retries on calls to refresh tokens</li><li>Don't overwrite resource ID with empty string</li><li>AzSecPack updated to version 4.27</li><li>AzureProfiler and AzurePerfCollector updated to version 1.0.0.990</li><li>MetricsExtension updated to version 2.2023.513.10</li><li>Troubleshooter updated to version 1.5.0</li></ul>**Linux** <ul><li>Add new column CollectorHostName to syslog table to identify forwarder/collector machine</li><li>Link OpenSSL dynamically</li><li>**Fixes**<ul><li>Allow uploads soon after AMA start up</li><li>Run LocalSink GC on a dedicated thread to avoid thread pool scheduling issues</li><li>Fix upgrade restart of disabled services</li><li>Handle Linux Hardening where sudo on root is blocked</li><li>CEF processing fixes for noncompliant RFC 5424 logs</li><li>ASA tenant can fail to start up due to config-cache directory permissions</li><li>Fix auth proxy in AMA</li><li>Fix to remove null characters in agentlauncher.log after log rotation</li><li>Fix for authenticated proxy(1.27.3)</li><li>Fix regression in VM Insights(1.27.4)</ul></li></ul>|1.17.0 |1.27.4|
-| May 2023 | **Windows** <ul><li>Enable Large Event support for all regions.</li><li>Update to TroubleShooter 1.4.0.</li><li>Fixed issue when Event Log subscription become invalid an would not resubscribe.</li><li>AMA: Fixed issue with Large Event sending too large data. Also affecting Custom Log.</li></ul> **Linux** <ul><li>Support for CIS and SELinux [hardening](./agents-overview.md)</li><li>Include Ubuntu 22.04 (Jammy) in azure-mdsd package publishing</li><li>Move storage SDK patch to build container</li><li>Add system Telegraf counters to AMA</li><li>Drop msgpack and syslog data if not configured in active configuration</li><li>Limit the events sent to Public ingestion pipeline</li><li>**Fixes** <ul><li>Fix mdsd crash in init when in persistent mode </li><li>Remove FdClosers from ProtocolListeners to avoid a race condition</li><li>Fix sed regex special character escaping issue in rpm macro for Centos 7.3.Maipo</li><li>Fix latency and future timestamp issue</li><li>Install AMA syslog configs only if customer is opted in for syslog in DCR</li><li>Fix heartbeat time check</li><li>Skip unnecessary cleanup in fatal signal handler</li><li>Fix case where fast-forwarding may cause intervals to be skipped</li><li>Fix comma separated custom log paths with fluent</li><li>Fix to prevent events folder growing too large and filling the disk</li><li>hot fix (1.26.3) for Syslog</li></ul><</li><ul> | 1.16.0.0 | 1.26.2-1.26.5<sup>Hotfix</sup>|
+| June 2023| **Windows** <ul><li>Add new FilePath column to custom logs table. You must manually add the column to your custom table.</li><li>Config setting to disable custom IMDS endpoint in Tenant.json file</li><li>FluentBit binaries signed with Microsoft customer Code Sign cert</li><li>Minimize number of retries on calls to refresh tokens</li><li>Don't overwrite resource ID with empty string</li><li>AzSecPack updated to version 4.27</li><li>AzureProfiler and AzurePerfCollector updated to version 1.0.0.990</li><li>MetricsExtension updated to version 2.2023.513.10</li><li>Troubleshooter updated to version 1.5.0</li></ul>**Linux** <ul><li>Add new column CollectorHostName to syslog table to identify forwarder/collector machine</li><li>Link OpenSSL dynamically</li><li>**Fixes**<ul><li>Allow uploads soon after AMA startup</li><li>Run LocalSink GC on a dedicated thread to avoid thread pool scheduling issues</li><li>Fix upgrade restart of disabled services</li><li>Handle Linux Hardening where sudo on root is blocked</li><li>CEF processing fixes for noncompliant RFC 5424 logs</li><li>ASA tenant can fail to start up due to config-cache directory permissions</li><li>Fix auth proxy in AMA</li><li>Fix to remove null characters in agentlauncher.log after log rotation</li><li>Fix for authenticated proxy(1.27.3)</li><li>Fix regression in VM Insights(1.27.4)</ul></li></ul>|1.17.0 |1.27.4|
+| May 2023 | **Windows** <ul><li>Enable Large Event support for all regions.</li><li>Update to TroubleShooter 1.4.0.</li><li>Fixed issue when Event Log subscription became invalid and wouldn't resubscribe.</li><li>AMA: Fixed issue with Large Event sending too large data. Also affecting Custom Log.</li></ul> **Linux** <ul><li>Support for CIS and SELinux [hardening](./agents-overview.md)</li><li>Include Ubuntu 22.04 (Jammy) in azure-mdsd package publishing</li><li>Move storage SDK patch to build container</li><li>Add system Telegraf counters to AMA</li><li>Drop msgpack and syslog data if not configured in active configuration</li><li>Limit the events sent to Public ingestion pipeline</li><li>**Fixes** <ul><li>Fix mdsd crash in init when in persistent mode </li><li>Remove FdClosers from ProtocolListeners to avoid a race condition</li><li>Fix sed regex special character escaping issue in rpm macro for Centos 7.3.Maipo</li><li>Fix latency and future timestamp issue</li><li>Install AMA syslog configs only if customer is opted in for syslog in DCR</li><li>Fix heartbeat time check</li><li>Skip unnecessary cleanup in fatal signal handler</li><li>Fix case where fast-forwarding may cause intervals to be skipped</li><li>Fix comma separated custom log paths with fluent</li><li>Fix to prevent events folder growing too large and filling the disk</li><li>hot fix (1.26.3) for Syslog</li></ul><</li><ul> | 1.16.0.0 | 1.26.2-1.26.5<sup>Hotfix</sup>|
| Apr 2023 | **Windows** <ul><li>AMA: Enable Large Event support based on Region.</li><li>AMA: Upgrade to FluentBit version 2.0.9</li><li>Update Troubleshooter to 1.3.1</li><li>Update ME version to 2.2023.331.1521</li><li>Updating package version for AzSecPack 4.26 release</li></ul>|1.15.0|None| | Mar 2023 | **Windows** <ul><li>Text file collection improvements to handle high rate logging and continuous tailing of longer lines</li><li>VM Insights fixes for collecting metrics from non-English OS</li></ul> | 1.14.0.0 | None | | Feb 2023 | <ul><li>**Linux (hotfix)** Resolved potential data loss due to "Bad file descriptor" errors seen in the mdsd error log with previous version. Upgrade to hotfix version</li><li>**Windows** Reliability improvements in Fluentbit buffering to handle larger text files</li></ul> | 1.13.1 | 1.25.2<sup>Hotfix</sup> |
azure-monitor Azure Monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-manage.md
The following prerequisites must be met prior to installing Azure Monitor Agent.
- `<virtual-machine-region-name>`.handler.control.monitor.azure.com (example: westus.handler.control.monitor.azure.com) - `<log-analytics-workspace-id>`.ods.opinsights.azure.com (example: 12345a01-b1cd-1234-e1f2-1234567g8h99.ods.opinsights.azure.com) (If you use private links on the agent, you must also add the [dce endpoints](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint)).-- **Disk Space**: Required disk space can vary greatly depending upon how an agent is utilized or if the agent is unable to communicate with the destinations where it is instructed to send monitoring data. The following provides guidance for capacity planning:
+- **Disk Space**: Required disk space can vary greatly depending upon how an agent is utilized or if the agent is unable to communicate with the destinations where it is instructed to send monitoring data. By default, the agent requires 10 GB of disk space to run. The following provides guidance for capacity planning:
| Purpose | Environment | Path | Suggested Space | |:|:|:|:|
azure-monitor Prometheus Metrics Scrape Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-scrape-configuration.md
metadata:
Scraping these pods with specific annotations is disabled by default. To enable, in the `ama-metrics-settings-configmap`, add the regex for the namespace(s) of the pods with annotations you wish to scrape as the value of the field `podannotationnamespaceregex`.
-For example, the following setting scrapes pods with annotations only in the namespaces `kube-system` and `default`:
+For example, the following setting scrapes pods with annotations only in the namespaces `kube-system` and `my-namespace`:
```yaml pod-annotation-based-scraping: |-
azure-netapp-files Azure Netapp Files Resize Capacity Pools Or Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resize-capacity-pools-or-volumes.md
For information about monitoring a volumeΓÇÖs capacity, see [Monitor the capacit
* Volume quotas are indexed against `maxfiles` limits. Once a volume has surpassed a `maxfiles` limit, you cannot reduce the volume size below the quota that corresponds to that `maxfiles` limit. For more information and specific limits, see [`maxfiles` limits](azure-netapp-files-resource-limits.md#maxfiles-limits-). * Capacity pools with Basic network features have a minimum size of 4 TiB. For capacity pools with Standard network features, the minimum size is 1 TiB. For more information, see [Resource limits](azure-netapp-files-resource-limits.md)
+* Volume resize operations are nearly instantaneous but not always immediate. There can be a short delay for the volume's updated size to appear in the portal. Verify the size from a host perspective before re-attempting the resize operation.
## Resize the capacity pool using the Azure portal
azure-netapp-files Understand Path Lengths https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/understand-path-lengths.md
+
+ Title: Understand path lengths in Azure NetApp Files
+description: Learn how file path limits and lengths are calculated in Azure NetApp Files.
++++ Last updated : 02/08/2024+++
+# Understand path lengths in Azure NetApp Files
+
+File and path length refers to the number of Unicode characters in a file path, including directories. This limit is a factor of the individual character lengths, which are determined by the size of each character in bytes. For instance, NFS and SMB allow path components of 255 bytes. The ASCII file encoding format uses 8-bit encoding, meaning file path components (such as a file or folder name) in ASCII can be up to 255 characters since ASCII characters are 1 byte in size.
+
+The following table shows the supported component and path lengths in Azure NetApp Files volumes:
+
+| Component | NFS | SMB |
+| - | - | - |
+| Path component size | 255 bytes | 255 bytes |
+| Path length size | Unlimited | Default: 255 bytes <br /> Maximum in later Windows versions: 32,767 bytes |
+| Maximum path size for traversal | 4,096 bytes | 255 bytes |
+
+>[!NOTE]
+>Dual-protocol volumes use the lowest maximum value.
+
+If an SMB share name is `\\SMB-SHARE`, the share name adds 11 Unicode characters to the path length because each character is 1 byte. If the path to a specific file is `\\SMB-SHARE\apps\archive\file`, it's 29 Unicode characters; each character, including the slashes, is 1 byte. For NFS mounts, the same concepts apply. The mount path `/AzureNetAppFiles` is 17 Unicode characters of 1 byte each.
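+
+A quick way to verify these counts yourself is shown in the following sketch. The paths are the same illustrative examples; with ASCII-only names, the character count equals the UTF-8 byte count:
+
+```python
+for path in (r"\\SMB-SHARE", r"\\SMB-SHARE\apps\archive\file", "/AzureNetAppFiles"):
+    # ASCII characters are 1 byte each, so character and byte counts match here.
+    print(path, len(path), "characters,", len(path.encode("utf-8")), "bytes")
+# 11, 29, and 17 characters respectively, each matching its byte count
+```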
+
+Azure NetApp Files supports the same path length for SMB shares that modern Windows servers support: [up to 32,767 bytes](/windows/win32/fileio/maximum-file-path-limitation). However, depending on the version of the Windows client, some applications can't [support paths longer than 260 bytes](/windows/win32/fileio/naming-a-file). Individual path components (the values between slashes, such as file or folder names) support up to 255 bytes. For instance, a file name using the Latin capital "A" (which takes up 1 byte per character) in a file path in Azure NetApp Files can't exceed 255 characters.
+
+```
+# mkdir 256charsaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+
+mkdir: cannot create directory ΓÇÿ256charsaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaΓÇÖ: File name too long
+
+# mkdir 255charsaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+
+# ls | grep 255
+255charsaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+```
+
+## Discerning character sizes
+
+The Linux utility [`uniutils`](https://billposer.org/Software/unidesc.html) can be used to find the byte size of Unicode characters by passing multiple instances of the character to the `uniname` command and viewing the **byte** field.
+
+**Example 1:** The Latin capital A increments by 1 byte each time it's used (using a single hex value of 41, which falls in the single-byte ASCII range).
+
+```
+# printf %b 'AAA' | uniname
+character byte UTF-32 encoded as glyph name
+ 0 0 000041 41 A LATIN CAPITAL LETTER A
+ 1 1 000041 41 A LATIN CAPITAL LETTER A
+ 2 2 000041 41 A LATIN CAPITAL LETTER A
+
+```
+
+**Result 1:** The name AAA uses 3 bytes out of 255.
+
+**Example 2:** The Japanese character 字 adds 3 bytes per instance. This can also be calculated from the 3 separate hex code values (E5, AD, 97) under the **encoded as** field. Each hex value represents 1 byte:
+
+```
+# printf %b '字字字' | uniname
+character byte UTF-32 encoded as glyph name
+ 0 0 005B57 E5 AD 97 字 CJK character Nelson 1281
+ 1 3 005B57 E5 AD 97 字 CJK character Nelson 1281
+ 2 6 005B57 E5 AD 97 字 CJK character Nelson 1281
+```
+
+**Result 2:** A file named 字字字 uses 9 bytes out of 255.
+
+**Example 3:** The letter Ä with diaeresis uses 2 bytes per instance (C3 + 84).
+
+```
+# printf %b 'ÄÄÄ' | uniname
+character byte UTF-32 encoded as glyph name
+ 0 0 0000C4 C3 84 Ä LATIN CAPITAL LETTER A WITH DIAERESIS
+ 1 2 0000C4 C3 84 Ä LATIN CAPITAL LETTER A WITH DIAERESIS
+ 2 4 0000C4 C3 84 Ä LATIN CAPITAL LETTER A WITH DIAERESIS
+```
+
+**Result 3:** A file named ÄÄÄ uses 6 bytes out of 255.
+
+**Example 4:** A special character, such as the 😃 emoji, falls into an undefined range that exceeds the 3-byte size of most Unicode characters. As a result, it uses a surrogate pair for its character encoding. In this case, each instance of the character uses 4 bytes.
+
+```
+# printf %b '😃😃😃' | uniname
+character byte UTF-32 encoded as glyph name
+ 0 0 01F603 F0 9F 98 83 😃 Character in undefined range
+ 1 4 01F603 F0 9F 98 83 😃 Character in undefined range
+ 2 8 01F603 F0 9F 98 83 😃 Character in undefined range
+```
+
+**Result 4:** A file named 😃😃😃 uses 12 bytes out of 255.
+
+Most emojis fall into the 4-byte range but can go up to 7 bytes. Of the more than one thousand standard emojis, approximately 180 are in the [Basic Multilingual Plane (BMP)](https://en.wikipedia.org/wiki/Plane_%28Unicode%29#Basic_Multilingual_Plane), which means they can be displayed as text or emoji in Azure NetApp Files, depending on the client's support for the language type.
+
+For more detailed information on the BMP and other Unicode planes, see [Understand volume languages in Azure NetApp Files](understand-volume-languages.md).
+
+## Character byte impact on path lengths
+
+Although a path length is often thought of as the number of characters in a file or folder name, it's actually the _byte size_ of the path. Because each character adds bytes to a name, different character sets in different languages support different file name lengths.
+
+Consider the following scenarios:
+
+- **A file or folder repeats the Latin alphabet character "A" for its file name.** (for example, AAAAAAAA)
+
+ Since "A" uses 1 byte and 255 bytes is the path component size limit, 255 instances of "A" would be allowed in a file name.
+
+- **A file or folder repeats the Japanese character 字 in its name.**
+
+ Since 字 has a size of 3 bytes, the file name length limit would be 85 instances of 字 (3 bytes * 85 = 255 bytes), or a total of 85 characters.
+
+- **A file or folder repeats the grinning face emoji (😃) in its name.**
+
+ A grinning face emoji (😃) uses 4 bytes, meaning a file name with only that emoji would allow a total of 63 characters (255 bytes/4 bytes, rounded down).
+
+- **A file or folder uses a combination of different characters (for example, Name字😃).**
+
+ When characters with different byte sizes are used in a file or folder name, each character's byte size factors into the file or folder length, as the sketch after this list shows. A file or folder name of Name字😃 would use 1+1+1+1+3+4 bytes (11 bytes) of the total 255-byte length.
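+
+The per-component byte budget can be checked with a short script. The following is a minimal sketch; the character choices and the 255-byte limit mirror the scenarios above, and the script itself isn't part of Azure NetApp Files:
+
+```python
+LIMIT = 255  # per-component byte limit for NFS and SMB
+
+for ch in ("A", "字", "😃"):
+    size = len(ch.encode("utf-8"))  # bytes per character in UTF-8
+    print(f"'{ch}': {size} byte(s); {LIMIT // size} repetitions fit in one name")
+
+# A mixed name counts each character's byte size: 1+1+1+1+3+4 = 11 bytes
+print(len("Name字😃".encode("utf-8")), "bytes out of", LIMIT)
+```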
+
+#### Special emoji concepts
+
+Special emojis, such as a flag emoji, exist under the BMP classification: the emoji renders as text or image depending on client support. When a client doesn't support the image designation, it instead uses regional text-based designations.
+
+For instance, the [United States flag](https://emojipedia.org/flag-united-states) uses the characters "us" (which resemble the Latin characters U and S, but are actually special characters that use different encodings). Uniname shows the differences between the characters.
+
+```
+# printf %b 'US' | uniname
+character byte UTF-32 encoded as glyph name
+ 0 0 000055 55 U LATIN CAPITAL LETTER U
+ 1 1 000053 53 S LATIN CAPITAL LETTER S
+
+# printf %b '🇺🇸' | uniname
+character byte UTF-32 encoded as glyph name
+ 0 0 01F1FA F0 9F 87 BA 🇺 Character in undefined range
+ 1 4 01F1F8 F0 9F 87 B8 🇸 Character in undefined range
+```
+
+Characters designated for the flag emojis translate to flag images in supported systems, but remain as text values in unsupported systems. These characters use 4 bytes per character for a total of 8 bytes when a flag emoji is used. As such, a total of 31 flag emojis are allowed in a file name (255 bytes/8 bytes).
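+
+As a quick check, the following sketch (illustrative only) shows that the two regional indicator characters behind the United States flag add up to 8 bytes in UTF-8:
+
+```python
+# REGIONAL INDICATOR SYMBOL LETTER U + LETTER S, rendered as a flag where supported
+flag = "\U0001F1FA\U0001F1F8"
+
+print(len(flag), "code points,", len(flag.encode("utf-8")), "bytes")  # 2 code points, 8 bytes
+print(255 // len(flag.encode("utf-8")), "flag emojis fit in a 255-byte name")  # 31
+```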
+
+## SMB path limits
+
+By default, Windows servers and clients support path lengths up to 260 bytes, but the actual file path lengths are shorter due to metadata added to Windows paths such as [the `<NUL>` value](/windows/win32/fileio/maximum-file-path-limitation?tabs=registry) and domain information.
+
+When a path limit is exceeded in Windows, a dialog box appears:
+++
+SMB path lengths can be extended when using Windows 10/Windows Server 2016 version 1607 or later by changing a registry value as covered in [Maximum Path Length Limitation](/windows/win32/fileio/maximum-file-path-limitation?tabs=registry). When this value is changed, path lengths can extend up to 32,767 bytes (minus metadata values).
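+
+If you need to confirm whether long paths are already enabled on a Windows client, the registry value described in the linked article can be read programmatically. A minimal sketch, run on the Windows client (`winreg` is part of the Python standard library there):
+
+```python
+import winreg
+
+# LongPathsEnabled = 1 means the client accepts paths longer than the 260-byte default
+key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
+                     r"SYSTEM\CurrentControlSet\Control\FileSystem")
+value, _ = winreg.QueryValueEx(key, "LongPathsEnabled")
+print("Long paths enabled:", bool(value))
+```
+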
+++
+Once this feature is enabled, you must access the SMB share using `\\?\` in the path to allow longer path lengths. This method doesn't support UNC paths, so the SMB share needs to be mapped to a drive letter.
++
+Using `\\?\Z:` instead allows access and supports longer file paths.
++
+>[!NOTE]
+>The Windows CMD doesn't currently support the use of `\\?\`.
+
+### Workaround if the max path length cannot be increased
+
+If the max path length can't be enabled in the Windows environment or the Windows client versions are too low, there's a workaround: mounting the SMB share deeper into the directory structure reduces the queried path length.
+
+For example, rather than mapping `\\NAS-SHARE\AzureNetAppFiles` to `Z:`, map `\\NAS-SHARE\AzureNetAppFiles\folder1\folder2\folder3\folder4` to `Z:`.
+
+## NFS path limits
+
+NFS path limits with Azure NetApp Files volumes have the same 255-byte limit for individual path components. Each component, however, is evaluated one at a time, and a single request can process up to 4,096 bytes of path with a near limitless total path length. For instance, if each path component is 255 bytes, an NFS client can evaluate up to 15 components per request (including `/` characters). As such, a `cd` request to a path over the 4,096-byte limit yields a "File name too long" error message.
+
+In most cases, Unicode characters are 1 byte, so the 4,096-byte limit corresponds to 4,096 characters. If a character is larger than 1 byte, the path supports fewer than 4,096 characters, because characters larger than 1 byte count more against the total character count than 1-byte characters.
+
+The path length max can be queried using the `getconf PATH_MAX /NFSmountpoint` command.
+
+>[!NOTE]
+>The limit is defined in the `limits.h` file on the NFS client. You shouldn't adjust these limits.
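+
+The same limits can also be queried programmatically from an NFS client. A minimal sketch; the mount path `/NFSmountpoint` is illustrative, and the values returned depend on the client:
+
+```python
+import os
+
+mountpoint = "/NFSmountpoint"  # illustrative NFS mount path
+
+print(os.pathconf(mountpoint, "PC_PATH_MAX"))  # maximum path bytes per request (4,096 in the example above)
+print(os.pathconf(mountpoint, "PC_NAME_MAX"))  # maximum bytes per path component (255)
+```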
+
+## Dual-protocol volume considerations
+
+When using Azure NetApp Files for dual-protocol access, the difference in how path lengths are handled in NFS and SMB protocols can create incompatibilities across files and folders. For instance, Windows SMB supports up to 32,767 characters in a path (provided the long path feature is enabled on the SMB client), but NFS support can exceed that amount. As such, if a path length is created in NFS that exceeds the support of SMB, SMB clients are unable to access the data once the path length maximums have been reached. In those cases, either take care to consider the lower end limits of file path lengths across protocols when creating file and folder names (and folder path depth), or map SMB shares closer to the desired folder path to reduce the path length.
+
+Instead of mapping the SMB share to the top level of the volume and navigating down to a path of `\\share\folder1\folder2\folder3\folder4`, consider mapping the SMB share directly to the entire path of `\\share\folder1\folder2\folder3\folder4`. As a result, a drive letter mapped to `Z:` lands in the desired folder and reduces the path length from `Z:\folder1\folder2\folder3\folder4\file` to `Z:\file`.
+
+### Special character considerations
+
+Azure NetApp Files volumes use a language type of [C.UTF-8](/cpp/build/reference/utf-8-set-source-and-executable-character-sets-to-utf-8), which covers many countries and languages including German, Cyrillic, Hebrew, and most Chinese/Japanese/Korean (CJK). Most common text characters in Unicode are 3 bytes or less. Special characters--such as emojis, musical symbols, and mathematical symbols--are often larger than 3 bytes. Some use [UTF-16 surrogate pair logic](/windows/win32/intl/surrogates-and-supplementary-characters).
+
+If you use a character that Azure NetApp Files doesn't support, you might see a warning requesting a different file name.
++
+Rather than the name being too long, the error actually results from the character byte size being too large for the Azure NetApp Files volume to use over SMB. There's no workaround in Azure NetApp Files for this limitation. For more information on special character handling in Azure NetApp Files, see [Protocol behavior with special character sets](understand-volume-languages.md#protocol-behaviors-with-special-character-sets).
+
+## Next steps
+
+* [Understand volume languages](understand-volume-languages.md)
azure-netapp-files Understand Volume Languages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/understand-volume-languages.md
+
+ Title: Understand volume languages in Azure NetApp Files
+description: Learn about the supported languages and character sets with NFS, SMB, and dual-protocol configurations in Azure NetApp Files.
++++ Last updated : 02/08/2024++
+# Understand volume languages in Azure NetApp Files
+
+Volume language (akin to system locales on client operating systems) on an Azure NetApp Files volume controls the supported languages and character sets when using [NFS and SMB protocols](network-attached-storage-protocols.md). Azure NetApp Files uses a default volume language of C.UTF-8, which provides POSIX compliant UTF-8 encoding for character sets. The C.UTF-8 language natively supports characters with a size of 0-3 bytes, which includes a majority of the world's languages on the [Basic Multilingual Plane (BMP)](https://en.wikipedia.org/wiki/Plane_%28Unicode%29#Basic_Multilingual_Plane) (including Japanese, German, and most of Hebrew and Cyrillic). For more information about the BMP, see [Unicode](#unicode).
+
+Characters outside of the BMP sometimes exceed the 3-byte size supported by Azure NetApp Files. They thus need to use [surrogate pair logic](/globalization/encoding/surrogate-pairs), where multiple character byte sets are combined to form new characters. Emoji symbols, for example, fall into this category and are supported in Azure NetApp Files in scenarios where UTF-8 isn't enforced: such as Windows clients that use UTF-16 encoding or NFSv3 that doesn't enforce UTF-8. NFSv4.x does enforce UTF-8, meaning surrogate pair characters don't display properly when using NFSv4.x.
+
+Nonstandard encoding, such as [Shift-JIS](https://wikipedia.org/wiki/Shift_JIS) and less common [CJK characters](https://en.wikipedia.org/wiki/List_of_CJK_fonts), also don't display properly when UTF-8 is enforced in Azure NetApp Files.
+
+>[!TIP]
+> You should send and receive text using UTF-8 to avoid situations where characters can't be translated properly, which can cause file creation/rename or copy error scenarios.
+
+The volume language settings currently can't be modified in Azure NetApp Files. For more information, see [Protocol behaviors with special character sets](#protocol-behaviors-with-special-character-sets).
+
+For best practices, see [Character set best practices](#character-set-best-practices).
+
+## Character encoding in Azure NetApp Files NFS and SMB volumes
+
+In an Azure NetApp Files file sharing environment, file and folder names are represented by a series of characters that end users read and interpret. The way those characters are displayed depends on how the client sends and receives encoding of those characters. For instance, if a client is sending legacy [ASCII (American Standard Code for Information Interchange)](https://www.ascii-code.com/) encoding to the Azure NetApp Files volume when accessing it, then it's limited to displaying only characters that are supported in the ASCII format.
+
+For instance, the Japanese character for data is 資. Since this character can't be represented in ASCII, a client using ASCII encoding shows a "?" instead of 資.
+
+[ASCII supports only 95 printable characters](https://en.wikipedia.org/wiki/ASCII#Printable_characters), principally those found in the English language. Each of those characters uses 1 byte, which is factored into the [total file path length](understand-path-lengths.md) on an Azure NetApp Files volume. This limits the internationalization of datasets, since file names can have a variety of characters not recognized by ASCII, from Japanese to Cyrillic to emoji. An international standard ([ISO/IEC 8859](https://en.wikipedia.org/wiki/ISO/IEC_8859)) attempted to support more international characters, but also had its limitations. Most modern clients send and receive characters using some form of Unicode.
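+
+To illustrate the difference, here's a short sketch; the `errors="replace"` behavior substitutes "?" for characters ASCII can't represent:
+
+```python
+print("資".encode("ascii", errors="replace"))  # b'?' - ASCII can't represent the character
+print("資".encode("utf-8"))                    # b'\xe8\xb3\x87' - 3 bytes in UTF-8
+```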
+
+### Unicode
+
+As a result of the limitations of ASCII and ISO/IEC 8859 encodings, the [Unicode](https://home.unicode.org/) standard was established so anyone can view their home region's language from their devices.
+
+* Unicode supports over one million character sets by increasing both the number of bytes per character allowed (up to 4 bytes) and the total number of bytes allowed in a file path as opposed to older encodings, such as ASCII.
+* Unicode supports backwards compatibility by reserving the first 128 characters for ASCII, while also ensuring the first 256 code points are identical to ISO/IEC 8859 standards.
+* In the Unicode standard, character sets are broken down into planes. A plane is a continuous group of 65,536 code points. In total, there are 17 planes (0-16) in the Unicode standard. The limit is 17 due to the limitations of UTF-16.
+* Plane 0 is the [Basic Multilingual Plane (BMP)](https://en.wikipedia.org/wiki/Plane_%28Unicode%29#Basic_Multilingual_Plane). This plane contains the most commonly used characters across multiple languages.
+* Of the 17 planes, only five currently have assigned character sets as of [Unicode version 15.1](https://www.unicode.org/versions/Unicode15.1.0/).
+* Planes 1-17 are known as [Supplementary Multilingual Planes (SMP)](https://en.wikipedia.org/wiki/Plane_%28Unicode%29#Supplementary_Multilingual_Plane) and contain less-used character sets, for example ancient writing systems such as cuneiform and hieroglyphs, as well as special Chinese/Japanese/Korean (CJK) characters.
+* For methods to see character lengths and path sizes and to control the encoding sent to a system, see [Converting files to different encodings](#converting-files-to-different-encodings).
+
+Unicode uses [Unicode Transformation Format](https://unicode.org/faq/utf_bom.html) as its standard, with UTF-8 and UTF-16 being the two main formats.
+
+#### Unicode planes
+
+Unicode leverages 17 planes of 65,536 characters (256 code points multiplied by 256 boxes in the plane), with Plane 0 as the [Basic Multilingual Plane (BMP)](https://en.wikipedia.org/wiki/Plane_%28Unicode%29#Basic_Multilingual_Plane). This plane contains the most commonly used characters across multiple languages. Because the world's languages and character sets exceed 65536 characters, more planes are needed to support less commonly used character sets.
+
+For instance, Plane 1 (the [Supplementary Multilingual Planes (SMP)](https://unicodeplus.com/plane/1)) includes historic scripts like cuneiform and Egyptian hieroglyphs as well as some [Osage](https://en.wikipedia.org/wiki/Osage_script), [Warang Citi](https://en.wikipedia.org/wiki/Warang_Citi), [Adlam](https://en.wikipedia.org/wiki/Adlam_script), [Wancho](https://en.wikipedia.org/wiki/Wancho_language#Orthography) and [Toto](https://en.wikipedia.org/wiki/Toto_language#Writing_system). Plane 1 also includes some symbols and [emoticon](https://en.wikipedia.org/wiki/Emoticons_(Unicode_block)) characters.
+
+Plane 2, the [Supplementary Ideographic Plane (SIP)](https://unicodeplus.com/plane/2), contains Chinese/Japanese/Korean (CJK) Unified Ideographs. Characters in planes 1 and 2 generally are 4 bytes in size.
+
+For example:
+
+- The ["grinning face with big eyes" emoticon "😃"](https://www.unicode.org/emoji/charts/full-emoji-list.html#1f603) in plane 1 is 4 bytes in size.
+- The [Egyptian hieroglyph "𓀀](https://unicodeplus.com/U+13000)" in plane 1 is 4 bytes in size.
+- The [Osage character "𐒸](https://unicodeplus.com/U+104B8)" in plane 1 is 4 bytes in size.
+- The [CJK character "𫝁"](https://unicodeplus.com/U+2B741) in plane 2 is 4 bytes in size.
+
+Because these characters are all \>3 bytes in size, they require the use of surrogate pairs to work properly. Azure NetApp Files natively supports surrogate pairs, but the display of the characters varies depending on the protocol in use, the client's locale settings and the settings of the remote client access application.
+
+#### UTF-8
+
+UTF-8 uses 8-bit encoding and can have up to 1,112,064 code points (or characters). UTF-8 is the standard encoding across all languages in Linux-based operating systems. Because UTF-8 uses 8-bit encoding, the maximum unsigned integer possible is 255 (2^8 - 1), which is also the maximum file name length for that encoding. UTF-8 is used on over 98% of pages on the Internet, making it by far the most adopted encoding standard. The [Web Hypertext Application Technology Working Group (WHATWG)](https://en.wikipedia.org/wiki/WHATWG) considers UTF-8 "the mandatory encoding for all [text]" and recommends that, for security reasons, browser applications shouldn't use UTF-16.
+
+Characters in UTF-8 format each use 1 to 4 bytes, but nearly all characters in all languages use between 1 and 3 bytes, as the sketch after this list verifies. For instance:
+
+- The Latin alphabet letter "A" uses 1 byte. (One of the 128 reserved ASCII characters)
+- A copyright symbol "©" uses 2 bytes.
+- The character "ä" uses 2 bytes. (1 byte for "a" + 1 byte for the umlaut)
+- The Japanese Kanji symbol for data (資) uses 3 bytes.
+- A grinning face emoji (😃) uses 4 bytes.
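+
+A minimal sketch that verifies the byte counts in the preceding list (the characters are the same examples; any Python 3 interpreter can run it):
+
+```python
+for ch in ("A", "©", "ä", "資", "😃"):
+    print(ch, len(ch.encode("utf-8")), "byte(s) in UTF-8")
+# A 1, © 2, ä 2, 資 3, 😃 4
+```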
+
+Language locales can use either computer standard UTF-8 (C.UTF-8) or a more [region-specific format](https://docs.moodle.org/403/en/Table_of_locales), such as en\_US.UTF-8, ja.UTF-8, etc. You should use UTF-8 encoding for Linux clients when accessing Azure NetApp Files whenever possible. Since OS X, macOS clients have also used UTF-8 as their default encoding, which shouldn't be adjusted.
+
+Windows clients use UTF-16. In most cases, this setting should be left as the default for the OS locale, but newer clients offer beta support for UTF-8 characters via a checkbox. Terminal clients in Windows can also be adjusted to use UTF-8 in PowerShell or CMD as needed. For more information, see [Dual protocol behaviors with special character sets](#dual-protocol-behaviors).
+
+#### UTF-16
+
+UTF-16 uses 16-bit encoding and is capable of encoding all 1,112,064 code points of Unicode. The encoding for UTF-16 can use one or two 16-bit code units, each 2 bytes in size. All characters in UTF-16 use 2 or 4-byte sizes. Characters in UTF-16 that use 4 bytes leverage [surrogate pairs](/windows/win32/intl/surrogates-and-supplementary-characters), which combine two separate 2-byte characters to create a new character. These supplementary characters fall outside of the standard BMP plane and into one of the other multilingual planes.
+
+UTF-16 is used in Windows operating systems and APIs, Java, and JavaScript. Since it doesn't support backwards compatibility with ASCII formats, it never gained popularity on the web. UTF-16 only makes up around 0.002% of all pages on the internet. The [Web Hypertext Application Technology Working Group (WHATWG)](https://en.wikipedia.org/wiki/WHATWG) considers UTF-8 "the mandatory encoding for all text" and recommends applications not use UTF-16 for browser security.
+
+Azure NetApp Files supports most UTF-16 characters, including surrogate pairs. In cases where the character isn't supported, Windows clients report an error of "file name you specified isn't valid or too long."
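+
+The surrogate pair mechanics can be observed with a short sketch; the emoji is the same example used earlier, and the hex output shows the high and low surrogate code units:
+
+```python
+ch = "😃"  # U+1F603, outside the BMP
+
+utf16 = ch.encode("utf-16-be")
+print(utf16.hex())          # d83dde03 -> surrogate pair D83D + DE03
+print(len(utf16), "bytes")  # 4 bytes (two 16-bit code units)
+```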
+
+## Character set handling over remote clients
+
+Remote connections to clients that mount Azure NetApp Files volumes (such as SSH connections to Linux clients to access NFS mounts) can be configured to send and receive specific volume language encodings. The language encoding sent to the client via the remote connection utility controls how character sets are created and viewed. As a result, a remote connection that uses a different language encoding than another remote connection (such as two different PuTTY windows) may show different results for characters when listing file and folder names in the Azure NetApp Files volume. In most cases, this won't create discrepancies (such as for Latin/English characters), but in the cases of special characters, such as emojis, results can vary.
+
+For instance, using an encoding of UTF-8 for the remote connection shows predictable results for characters in Azure NetApp Files volumes since C.UTF-8 is the volume language. The Japanese character for "data" (資) displays differently depending on the encoding being sent by the terminal.
+
+### Character encoding in PuTTY
+
+When a [PuTTY](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html) window uses UTF-8 (set in PuTTY's **Window > Translation** settings), the character is represented properly for an NFSv3 mounted volume in Azure NetApp Files:
++
+If the PuTTY window uses a different encoding, such as _ISO-8859-1:1998 (Latin-1, West Europe)_, the same character displays differently even though the file name is the same.
++
+PuTTY, by default, doesn't contain CJK encodings. There are [patches available](https://hp.vector.co.jp/authors/VA024651/PuTTYkj.html) to add those language sets to PuTTY.
+
+### Character encodings in Bastion
+
+Microsoft Azure recommends using Bastion for remote connectivity to virtual machines (VMs) in Azure. When using Bastion, the language encoding sent and received isn't exposed in the configuration but leverages standard UTF-8 encoding. As a result, most character sets seen in PuTTY using UTF-8 should also be visible in Bastion, provided the character sets are supported in the protocol being used.
+++
+>[!TIP]
+>Other SSH terminals can be used such as [TeraTerm](https://sourceforge.net/projects/tera-term/). TeraTerm provides a wider range of supported character sets by default, including CJK encodings and nonstandard encodings such as Shift-JIS.
+
+## Protocol behaviors with special character sets
+
+Azure NetApp Files volumes use UTF-8 encoding and natively support characters that don't exceed 3 bytes. All characters in the ASCII and UTF-8 set display properly because they fall in the 1 to 3-byte range. For example:
+
+- The Latin alphabet character "A" uses 1 byte (one of the 128 reserved ASCII characters).
+- A copyright symbol © uses 2 bytes.
+- The character "ä" uses 2 bytes (1 byte for "a" and 1 byte for the umlaut).
+- The Japanese Kanji symbol for data (資) uses 3 bytes.
+
+Azure NetApp Files also supports some characters that exceed 3 bytes via surrogate pair logic (such as emoji), provided the client encoding and protocol version support them. For more information about protocol behaviors, see:
+
+- [SMB behaviors](#smb-behaviors)
+- [NFS behaviors](#nfs-behaviors)
+- [Dual-protocol behaviors](#dual-protocol-behaviors)
+
+## SMB behaviors
+
+In SMB volumes, Azure NetApp Files creates and maintains two names for files or directories in any directory that has access from an SMB client: the original long name and a name in [8.3 format](/openspecs/windows_protocols/ms-fscc/18e63b13-ba43-4f5f-a5b7-11e871b71f14).
+
+### File names in SMB with Azure NetApp Files
+
+When file or directory names exceed the allowed character bytes or use unsupported characters, Azure NetApp Files generates an 8.3-format name as follows:
+
+- It truncates the original file or directory name.
+- It appends a tilde (~) and a numeral (1-5) to file or directory names that are no longer unique after being truncated.
+ If there are more than five files with nonunique names, Azure NetApp Files creates a unique name with no relation to the original name. For files, Azure NetApp Files truncates the file name extension to three characters.
+
+For example, if an NFS client creates a file named `specifications.html`, Azure NetApp Files creates the file name `specif~1.htm` following the 8.3 format. If this name already exists, Azure NetApp Files uses a different number at the end of the file name. For example, if an NFS client then creates another file named `specifications_new.html`, the 8.3 format of `specifications_new.html` is `specif~2.htm`.
+
+### Special character in SMB with Azure NetApp Files
+
+When using SMB with Azure NetApp Files volumes, characters that exceed 3 bytes used in file and folder names (including emoticons) are allowed due to surrogate pair support. The following is what Windows Explorer sees for characters outside of the BMP on a folder created from a Windows client when using English with the default UTF-16 encoding.
+
+>[!NOTE]
+>The default font in Windows Explorer is Segoe UI. Font changes can affect how some characters display on clients.
++
+How the characters display on the client depends on the system font and the language and locale settings. In general, characters that fall into the BMP are supported across all protocols, regardless of whether the encoding is UTF-8 or UTF-16.
+
+When using either CMD or [PowerShell](/powershell/scripting/dev-cross-plat/vscode/understanding-file-encoding), the character set view may depend on the font settings. These utilities have limited font choices by default. CMD uses Consolas as the default font.
++
+File names might not display as expected depending on the font used, as some consoles don't natively support Segoe UI or other fonts that render special characters properly.
++
+This issue can be addressed on Windows clients by using [PowerShell ISE](/powershell/scripting/windows-powershell/ise/introducing-the-windows-powershell-ise), which provides more robust font support. For instance, setting the PowerShell ISE to Segoe UI displays the file names with supported characters properly.
++
+However, PowerShell ISE is designed for scripting, rather than managing shares. Newer Windows versions offer [Windows Terminal](https://www.microsoft.com/p/windows-terminal/9n0dx20hk701), which allows for control over the fonts and encoding values.
+
+>[!NOTE]
+> Use the [`chcp`](/windows-server/administration/windows-commands/chcp) command to view the encoding for the terminal. For a complete list of code pages, see [Code page identifiers](/windows/win32/intl/code-page-identifiers).
++
+If the volume is enabled for dual-protocol (both NFS and SMB), you might observe different behaviors. For more information, see [Dual-protocol behaviors with special character sets](#dual-protocol-behaviors).
++
+## NFS behaviors
+
+How NFS displays special characters depends on the version of NFS used, the client's locale settings, installed fonts, and the settings of the remote connection client in use. For instance, using Bastion to access an Ubuntu client may handle character displays differently than a PuTTY client set to a different locale on the same VM. The ensuing NFS examples rely on these locale settings for the Ubuntu VM:
+
+```
+~$ locale
+LANG=C.UTF-8
+LANGUAGE=
+LC_CTYPE="C.UTF-8"
+LC_NUMERIC="C.UTF-8"
+LC_TIME="C.UTF-8"
+LC_COLLATE="C.UTF-8"
+LC_MONETARY="C.UTF-8"
+LC_MESSAGES="C.UTF-8"
+LC_PAPER="C.UTF-8"
+LC_NAME="C.UTF-8"
+LC_ADDRESS="C.UTF-8"
+LC_TELEPHONE="C.UTF-8"
+LC_MEASUREMENT="C.UTF-8"
+LC_IDENTIFICATION="C.UTF-8"
+LC_ALL=
+```
+
+### NFSv3 behavior
+
+NFSv3 doesn't enforce UTF encoding on files and folders. In most cases, special character sets should have no issues. However, the connection client being used can affect how characters are sent and received. For instance, using Unicode characters outside of the BMP for a folder name in the Azure connection client Bastion can result in some unexpected behavior due to how the client encoding works.
+
+In the following screenshot, Bastion is unable to copy and paste the values to the CLI prompt from outside of the browser when naming a directory over NFSv3. When attempting to copy and paste the value of `NFSv3Bastion𓀀𫝁😃𐒸`, the special characters display as quotation marks in the input.
++
+The copy-paste command is permitted over NFSv3, but the characters are created as their numeric values, affecting their display:
+
+`NFSv3Bastion'$'\262\270\355\240\214\355\260\200\355\241\255\355\275\201\355\240\275\355\270\203\355\240\201\355`
+
+This display is due to the encoding used by Bastion for sending text values when copying and pasting.
+
+When using PuTTY to create a folder with the same characters over NFSv3, the folder name displays differently in Bastion than when Bastion was used to create it. The emoticon shows as expected (due to the installed fonts and locale setting), but the other characters (such as the Osage "𐒸") don't.
++
+From a PuTTY window, the characters display correctly:
++
+### NFSv4.x behavior
+
+NFSv4.x enforces UTF-8 encoding in file and folder names per the [RFC-8881 internationalization specs](https://www.rfc-editor.org/rfc/rfc8881.html#internationalization).
+
+As a result, if a special character is sent with non-UTF-8 encoding, NFSv4.x might not allow the value.
+
+In some cases, a command may be allowed using a character outside of the [Basic Multilingual Plane (BMP)](https://en.wikipedia.org/wiki/Plane_%28Unicode%29#Basic_Multilingual_Plane), but it might not display the value after it's created.
+
+For instance, issuing `mkdir` with a folder name including the characters "𓀀𫝁😃𐒸" (characters in the [Supplementary Multilingual Planes (SMP)](https://unicodeplus.com/plane/1) and the [Supplementary Ideographic Plane (SIP)](https://unicodeplus.com/plane/2)) seems to succeed in NFSv4.x, but the folder isn't visible when running the `ls` command.
+
+```
+root@ubuntu:/NFSv4/NFS$ mkdir "NFSv4 Putty 𓀀𫝁😃𐒸"
+
+root@ubuntu:/NFSv4/NFS$ ls -la
+
+total 8
+
+drwxrwxr-x 3 nobody 4294967294 4096 Jan 10 17:15 .
+
+drwxrwxrwx 4 root root 4096 Jan 10 17:15 ..
+
+root@ubuntu:/NFSv4/NFS$
+```
+
+The folder exists in the volume. Changing to that hidden directory name works from the PuTTY client, and a file can be created inside of that directory.
+
+```
+root@ubuntu:/NFSv4/NFS$ cd "NFSv4 Putty 𓀀𫝁😃𐒸"
+
+root@ubuntu:/NFSv4/NFS/NFSv4 Putty 𓀀𫝁😃𐒸$ sudo touch Unicode.txt
+
+root@ubuntu:/NFSv4/NFS/NFSv4 Putty 𓀀𫝁😃𐒸$ ls -la
+
+-rw-r--r-- 1 root root 0 Jan 10 17:31 Unicode.txt
+```
+
+A stat command from PuTTY also confirms the folder exists:
+
+```
+root@ubuntu:/NFSv4/NFS$ stat "NFSv4 Putty 𓀀𫝁😃𐒸"
+
+File: NFSv4 Putty 𓀀𫝁😃𐒸
+
+Size: 4096 Blocks: 8 IO Block: 262144 directory
+
+Device: 3ch/60d Inode: 101 Links: 2
+
+Access: (0775/drwxrwxr-x) Uid: ( 0/ root) Gid: ( 0/ root)
+
+Access: 2024-01-10 17:15:44.860775000 +0000
+
+Modify: 2024-01-10 17:31:35.049770000 +0000
+
+Change: 2024-01-10 17:31:35.049770000 +0000
+
+Birth: -
+```
+
+Even though the folder is confirmed to exist, wildcard commands don't work, as the client can't officially "see" the folder in the display.
+
+```
+root@ubuntu:/NFSv4/NFS$ cp * /NFSv3/
+
+cp: can't stat '*': No such file or directory
+```
+
+NFSv4.1 sends an error to the client when it encounters a character that doesn't use UTF-8 encoding.
+
+For example, when using Bastion to attempt access to the same directory we created using PuTTY over NFSv4.1, this is the result:
+
+```
+root@ubuntu:/NFSv4/NFS$ cd "NFSv4 Putty 𓀀𫝁😃�"
+
+-bash: cd: $'NFSv4 Putty \262\270\355\240\214\355\260\200\355\241\255\355\275\201\355\240\275\355\270\203\355\240\201\355': Invalid argument
+```
+
+The "invalid argument" error message doesn't help diagnose the root cause, but a packet capture shines a light on the problem:
+
+```
+78 1.704856 y.y.y.y x.x.x.x NFS 346 V4 Call (Reply In 79) LOOKUP DH: 0x44caa451/NFSv4 Putty ��������
+
+79 1.705058 x.x.x.x y.y.y.y NFS 166 V4 Reply (Call In 25) OPEN Status: NFS4ERR_INVAL
+```
+[NFS4ERR_INVAL](https://www.rfc-editor.org/rfc/rfc8881.html#name-utf-8-related-errors) is covered in RFC-8881.
+
+Since the folder can be accessed from PuTTY (due to the encoding being sent and received), it can be copied if the name is specified. After copying that folder from the NFSv4.1 Azure NetApp Files volume to the NFSv3 Azure NetApp Files volume, the folder name displays:
+
+```
+root@ubuntu:/NFSv4/NFS$ cp -r /NFSv4/NFS/"NFSv4 Putty 𓀀𫝁😃𐒸" /NFSv3/NFSv3/
+
+root@ubuntu:/NFSv4/NFS$ ls -la /NFSv3/NFSv3 | grep v4
+
+drwxrwxr-x 2 root root 4096 Jan 10 17:49 NFSv4 Putty 𓀀𫝁😃𐒸
+```
+
+The same `NFS4ERR_INVAL` error can be seen if a file conversion to a non-UTF-8 format (such as Shift-JIS) is attempted using [`iconv`](https://linux.die.net/man/1/iconv).
+
+```
+# echo "Test file with SJIS encoded filename" > "$(echo 'テストファイル.txt' | iconv -t SJIS)"
+-bash: $(echo 'テストファイル.txt' | iconv -t SJIS): Invalid argument
+```
+
+For more information, see [Converting files to different encodings](#converting-files-to-different-encodings).
+
+## Dual protocol behaviors
+
+Azure NetApp Files allows volumes to be accessed by both NFS and SMB via dual-protocol access. Because of the vast differences in the language encoding used by NFS (UTF-8) and SMB (UTF-16), character sets, file and folder names, and path lengths can have very different behaviors across protocols.
+
+### Viewing NFS-created files and folders from SMB
+
+When Azure NetApp Files is used for dual-protocol access (SMB and NFS), a character set unsupported by UTF-16 might be used in a file name created using UTF-8 via NFS. In those scenarios, when SMB accesses a file with unsupported characters, the name is truncated in SMB using the [8.3 short file name convention](/openspecs/windows_protocols/ms-fscc/18e63b13-ba43-4f5f-a5b7-11e871b71f14).
+
+#### NFSv3-created files and SMB behaviors with character sets
+
+NFSv3 doesn't enforce UTF-8 encoding. Characters using nonstandard language encodings (such as Shift-JIS) work with Azure NetApp Files when using NFSv3.
+
+In the following example, a series of folder names using different character sets from various planes in Unicode were created in an Azure NetApp Files volume using NFSv3. When viewed from NFSv3, these show up correctly.
+
+```
+root@ubuntu:/NFSv3/dual$ ls -la
+
+drwxrwxr-x 2 root root 4096 Jan 10 19:43 NFSv3-BMP-English
+
+drwxrwxr-x 2 root root 4096 Jan 10 19:43 NFSv3-BMP-Japanese-German-資ä
+
+drwxrwxr-x 2 root root 4096 Jan 10 19:43 NFSv3-BMP-copyright-©
+
+drwxrwxr-x 2 root root 4096 Jan 10 19:44 NFSv3-CJK-plane2-𫝁
+
+drwxrwxr-x 2 root root 4096 Jan 10 19:44 NFSv3-emoji-plane1-😃
+```
+
+From Windows SMB, the folders with characters found in the BMP display properly, but characters outside of that plane display with the 8.3 name format due to the UTF-8/UTF-16 conversion being incompatible for those characters.
+++
+#### NFSv4.1-created files and SMB behaviors with character sets
+
+In the previous examples, a folder named `NFSv4 Putty 𓀀𫝁😃𐒸` was created on an Azure NetApp Files volume over NFSv4.1, but wasn't viewable using NFSv4.1. However, it can be seen using SMB. The name is truncated in SMB to a supported 8.3 format due to the unsupported character sets created from the NFS client and the incompatible UTF-8/UTF-16 conversion for characters in different Unicode planes.
++
+When a folder name uses standard UTF-8 characters found in the BMP (English or otherwise), then SMB translates the names properly.
+
+```
+root@ubuntu:/NFSv4/NFS$ mkdir NFS-created-English
+
+root@ubuntu:/NFSv4/NFS$ mkdir NFS-created-資ä
+
+root@ubuntu:/NFSv4/NFS$ ls -la
+
+total 16
+
+drwxrwxr-x 5 nobody 4294967294 4096 Jan 10 18:26 .
+
+drwxrwxrwx 4 root root 4096 Jan 10 17:15 ..
+
+drwxrwxr-x 2 root root 4096 Jan 10 18:21 NFS-created-English
+
+drwxrwxr-x 2 root root 4096 Jan 10 18:26 NFS-created-資ä
+```
++
+#### SMB-created files and folders over NFS
+
+Windows clients are the primary clients used to access SMB shares. These clients default to UTF-16 encoding. It's possible to support some UTF-8 encoded characters in Windows by enabling it in region settings:
++
+When a file or folder is created over an SMB share in Azure NetApp Files, the character set in use is encoded as UTF-16. As a result, clients using UTF-8 encoding (such as Linux-based NFS clients) may not be able to translate some character sets properly, particularly characters that fall outside of the [Basic Multilingual Plane (BMP)](https://en.wikipedia.org/wiki/Plane_%28Unicode%29#Basic_Multilingual_Plane).
+
+##### Unsupported character behavior
+
+In those scenarios, when an NFS client accesses a file created using SMB with unsupported characters, the name displays as a series of numeric values representing the Unicode values for the character.
+
+For instance, this folder was created in Windows Explorer using characters outside of the BMP.
+
+```
+PS Z:\SMB\> dir
+
+Directory: Z:\SMB
+
+Mode LastWriteTime Length Name
+
+- - -
+
+d-- 1/9/2024 9:53 PM SMB𓀀𫝁😃𐒸
+```
+
+Over NFSv3, the SMB-created folder shows up:
+
+```
+$ ls -la
+
+drwxrwxrwx 2 root daemon 4096 Jan 9 21:53 'SMB'$'\355\240\214\355\260\200\355\241\255\355\275\201\355\240\275\355\270\203\355\240\201\355\262\270'
+```
+
+Over NFSv4.1, the SMB-created folder shows up as follows:
+
+```
+$ ls -la
+
+drwxrwxrwx 2 root daemon 4096 Jan 4 17:09 'SMB'$'\355\240\214\355\260\200\355\241\255\355\275\201\355\240\275\355\270\203\355\240\201\355\262\270'
+```
+
+##### Supported character behavior
+
+When the characters are in the BMP, there are no issues between the SMB and NFS protocols and their versions.
+
+For instance, a folder name created using SMB on an Azure NetApp Files volume with characters found in the BMP across multiple languages and scripts shows up fine across all protocols and versions:
+
+- [Basic Latin](https://unicodeplus.com/block/0000) "SMB"
+- [Greek](https://unicodeplus.com/block/0370) "ͶΘΩ"
+- [Cyrillic](https://unicodeplus.com/block/0400) "ЁЄЊ"
+- [Runic](https://unicodeplus.com/block/16A0) "ᚠᚱᛯ"
+- [CJK Compatibility Ideographs](https://unicodeplus.com/block/F900) "豈滑虜"
+
+This is how the name appears in SMB:
+
+```powershell
+
+PS Z:\SMB\> mkdir SMBͶΘΩЁЄЊᚠᚱᛯ豈滑虜
+
+Mode LastWriteTime Length Name
+
+- - -
+
+d-- 1/11/2024 8:00 PM SMBͶΘΩЁЄЊᚠᚱᛯ豈滑虜
+```
+
+This is how the name appears from NFSv3:
+
+```
+$ ls | grep SMBͶΘΩЁЄЊᚠᚱᛯ豈滑虜
+
+SMBͶΘΩЁЄЊᚠᚱᛯ豈滑虜
+```
+
+This is how the name appears from NFSv4.1:
+
+```
+$ ls /NFSv4/SMB | grep SMBͶΘΩЁЄЊᚠᚱᛯ豈滑虜
+
+SMBͶΘΩЁЄЊᚠᚱᛯ豈滑虜
+```
+
+## Converting files to different encodings
+
+File and folder names aren't the only portions of file system objects that utilize language encodings. File contents (such as special characters inside a text file) can also play a part. For instance, if you attempt to save a file with special characters in an incompatible format, you might see an error message. In this case, a file with Katakana characters can't be saved in ANSI, as those characters don't exist in that encoding.
++
+Once that file is saved in that format, the characters get converted to question marks:
++
+File encodings can be viewed from NAS clients. On Windows clients, you can use an application like Notepad or Notepad++ to view a file's encoding. If [Windows Subsystem for Linux (WSL)](https://apps.microsoft.com/detail/9P9TQF7MRM4R) or [Git](https://git-scm.com/download/win) is installed on the client, the `file` command can be used.
++
+These applications also allow you to change the file's encoding by saving as different encoding types. In addition, PowerShell can be used to convert encoding on files with the [`Get-Content`](/powershell/module/microsoft.powershell.management/get-content) and [`Set-Content`](/powershell/module/microsoft.powershell.management/set-content) cmdlets.
+
+For example, the file `utf8-text.txt` is encoded as UTF-8 and contains characters outside of the BMP. Because UTF-8 is used, the characters are displayed properly.
++
+If the encoding is converted to UTF-32, the characters don't display properly.
+
+```powershell
+PS Z:\SMB\> Get-Content .\utf8-text.txt | Set-Content -Encoding UTF32 -Path utf32-text.txt
+```
++
+`Get-Content` can also be used to display the file contents. By default, the PowerShell console uses a limited code page (code page 437) and the font selections for the console are limited, so the UTF-8 formatted file with special characters can't be displayed properly:
++
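+
+As a workaround, you can tell `Get-Content` which encoding to use when reading the file. This is a minimal sketch that reuses the `utf8-text.txt` file from the earlier example; whether the characters then render correctly still depends on the console font and code page:
+
+```powershell
+# Read the file as UTF-8 instead of relying on the console default
+PS Z:\SMB\> Get-Content .\utf8-text.txt -Encoding UTF8
+```
+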
+Linux clients can use the [`file`](https://www.man7.org/linux/man-pages/man1/file.1.html) command to view the encoding of the file. In dual-protocol environments, if a file is created using SMB, the Linux client using NFS can check the file encoding.
+
+```
+$ file -i utf8-text.txt
+
+utf8-text.txt: text/plain; charset=utf-8
+
+$ file -i utf32-text.txt
+
+utf32-text.txt: text/plain; charset=utf-32le
+```
+
+File encoding conversion can be performed on Linux clients using the [`iconv`](https://www.man7.org/linux/man-pages/man1/iconv.1.html) command. To see the list of supported encoding formats, use `iconv -l`.
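+
+For example, you can filter that list to find the exact encoding name to pass to `-t` or `-f` before converting; the matching entries (such as `SHIFT_JIS`) can vary slightly by distribution:
+
+```
+$ iconv -l | grep -i -E 'shift|sjis'
+```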
+
+For instance, the UTF-8 encoded file can be converted to UTF-16.
+
+```
+$ iconv -t UTF16 utf8-text.txt > utf16-text.txt
+
+$ file -i utf8-text.txt
+
+utf8-text.txt: text/plain; charset=utf-8
+
+$ file -i utf16-text.txt
+
+utf16-text.txt: text/plain; charset=utf-16le
+```
+
+If the character set in the file's name or in the file's contents isn't supported by the destination encoding, then conversion isn't allowed. For instance, Shift-JIS can't support the characters in the file's contents.
+
+```
+$ iconv -t SJIS utf8-text.txt > SJIS-text.txt
+
+iconv: illegal input sequence at position 0
+```
+
+If a file has characters that are supported by the encoding, then conversion will succeed. For instance, if the file contains the Katakana characters テストファイル, then Shift-JIS conversion will succeed over NFS. Since the NFS client being used here doesn't understand Shift-JIS due to locale settings, the encoding shows "unknown-8bit."
+
+```
+$ cat SJIS.txt
+
+テストファイル
+
+$ file -i SJIS.txt
+
+SJIS.txt: text/plain; charset=utf-8
+
+$ iconv -t SJIS SJIS.txt > SJIS2.txt
+
+$ file -i SJIS.txt
+
+SJIS.txt: text/plain; charset=utf-8
+
+$ file -i SJIS2.txt
+
+SJIS2.txt: text/plain; charset=unknown-8bit
+```
+
+Because Azure NetApp Files volumes only support UTF-8 compatible formatting, the Katakana characters are converted to an unreadable format.
+
+```
+$ cat SJIS2.txt
+
+ΓûÆeΓûÆXΓûÆgΓûÆtΓûÆ@ΓûÆCΓûÆΓûÆ
+```
+
+When using NFSv4.x, conversion is allowed when incompatible characters are present inside of the file's contents, even though NFSv4.x enforces UTF-8 encoding. In this example, a UTF-8 encoded file with Katakana characters located on an Azure NetApp Files volume shows the contents of the file properly.
+
+```
+$ file -i SJIS.txt
+
+SJIS.txt: text/plain; charset=utf-8
+
+$ cat SJIS.txt
+
+テストファイル
+```
+
+But once it's converted, the characters in the file display improperly due to the incompatible encoding.
+
+```
+$ cat SJIS2.txt
+
+ΓûÆeΓûÆXΓûÆgΓûÆtΓûÆ@ΓûÆCΓûÆΓûÆ
+```
+
+If the file's name contains unsupported characters for UTF-8, then conversion succeeds over NFSv3, but fails over NFSv4.x due to the protocol version's UTF-8 enforcement.
+
+```
+# echo "Test file with SJIS encoded filename" > "$(echo 'テストファイル.txt' | iconv -t SJIS)"
+
+-bash: $(echo 'テストファイル.txt' | iconv -t SJIS): Invalid argument
+```
+
+## Character set best practices
+
+When using special characters or characters outside of the standard [Basic Multilingual Plane (BMP)](https://en.wikipedia.org/wiki/Plane_%28Unicode%29#Basic_Multilingual_Plane) on Azure NetApp Files volumes, keep the following best practices in mind:
+
+- Since Azure NetApp Files volumes use UTF-8 volume language, the file encoding for NFS clients should also use UTF-8 encoding for consistent results (see the locale check example after this list).
+- Character sets in file names or contained in file contents should be UTF-8 compatible for proper display and functionality.
+- Because SMB uses UTF-16 character encoding, characters outside of the BMP might not display properly over NFS in dual-protocol volumes. When possible, minimize the use of special characters in file contents.
+- Avoid using special characters outside of the BMP in file names, especially when using NFSv4.1 or dual-protocol volumes.
+- For character sets not in the BMP, UTF-8 encoding should allow display of the characters in Azure NetApp Files when using a single file protocol (SMB only or NFS only). However, dual-protocol volumes aren't able to accommodate these character sets in most cases.
+- Nonstandard encoding (such as Shift-JIS) isn't supported on Azure NetApp Files volumes.
+- Surrogate pair characters (such as emoji) are supported on Azure NetApp Files volumes.
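+
+As a quick check for the first recommendation, confirm that the Linux NFS client uses a UTF-8 locale before creating files and folders. This is a minimal sketch; the exact locale name (for example, `C.UTF-8` or `en_US.UTF-8`) depends on the distribution:
+
+```
+$ locale | grep '^LANG='
+LANG=C.UTF-8
+
+$ export LANG=C.UTF-8    # set a UTF-8 locale for the current shell if needed
+```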
+
+## Next steps
+
+* [Understand path lengths in Azure NetApp Files](understand-path-lengths.md)
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/overview.md
Title: Azure Resource Manager overview description: Describes how to use Azure Resource Manager for deployment, management, and access control of resources on Azure. Previously updated : 01/03/2024 Last updated : 02/14/2024 # What is Azure Resource Manager?
Azure Resource Manager is the deployment and management service for Azure. It pr
To learn about Azure Resource Manager templates (ARM templates), see the [ARM template overview](../templates/overview.md). To learn about Bicep, see [Bicep overview](../bicep/overview.md).
+The following video covers basic concepts of Azure Resource Manager.
+
+> [!VIDEO https://learn-video.azurefd.net/vod/player?id=d257e6ec-abab-47f4-a209-22049e7a40b4]
+ ## Consistent management layer When you send a request through any of the Azure APIs, tools, or SDKs, Resource Manager receives the request. It authenticates and authorizes the request before forwarding it to the appropriate Azure service. Because all requests are handled through the same API, you see consistent results and capabilities in all the different tools.
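
For example, creating a resource group with the Azure CLI sends a control plane request through Resource Manager; the name and location below are placeholders, and the equivalent Azure PowerShell cmdlet (`New-AzResourceGroup`) goes through the same API:

```azurecli
az group create --name myResourceGroup --location eastus
```
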
If you're new to Azure Resource Manager, there are some terms you might not be f
* **resource provider** - A service that supplies Azure resources. For example, a common resource provider is `Microsoft.Compute`, which supplies the virtual machine resource. `Microsoft.Storage` is another common resource provider. See [Resource providers and types](resource-providers-and-types.md). * **declarative syntax** - Syntax that lets you state "Here's what I intend to create" without having to write the sequence of programming commands to create it. ARM templates and Bicep files are examples of declarative syntax. In those files, you define the properties for the infrastructure to deploy to Azure. * **ARM template** - A JavaScript Object Notation (JSON) file that defines one or more resources to deploy to a resource group, subscription, management group, or tenant. The template can be used to deploy the resources consistently and repeatedly. See [Template deployment overview](../templates/overview.md).
-* **Bicep file** - A file for declaratively deploying Azure resources. Bicep is a language that's been designed to provide the best authoring experience for infrastructure as code solutions in Azure. See [Bicep overview](../bicep/overview.md).
+* **Bicep file** - A file for declaratively deploying Azure resources. Bicep is a language that was designed to provide the best authoring experience for infrastructure as code solutions in Azure. See [Bicep overview](../bicep/overview.md).
* **extension resource** - A resource that adds to another resource's capabilities. For example, a role assignment is an extension resource. You apply a role assignment to any other resource to specify access. See [Extension resources](./extension-resource-types.md). For more definitions of Azure terminology, see [Azure fundamental concepts](/azure/cloud-adoption-framework/ready/considerations/fundamental-concepts).
There are some important factors to consider when defining your resource group:
To ensure state consistency for the resource group, all [control plane operations](./control-plane-and-data-plane.md) are routed through the resource group's location. When selecting a resource group location, we recommend that you select a location close to where your control operations originate. Typically, this location is the one closest to your current location. This routing requirement only applies to control plane operations for the resource group. It doesn't affect requests that are sent to your applications.
- If a resource group's region is temporarily unavailable, you may not be able to update resources in the resource group because the metadata is unavailable. The resources in other regions will still function as expected, but you may not be able to update them. This condition may also apply to global resources like Azure DNS, Azure DNS Private Zones, Azure Traffic Manager, and Azure Front Door. You can view which types have their metadata managed by Azure Resource Manager via the [list of types for the Azure Resource Graph resources table](../../governance/resource-graph/reference/supported-tables-resources.md#resources).
+ If a resource group's region is temporarily unavailable, you may not be able to update resources in the resource group because the metadata is unavailable. The resources in other regions still function as expected, but you may not be able to update them. This condition may also apply to global resources like Azure DNS, Azure DNS Private Zones, Azure Traffic Manager, and Azure Front Door. You can view which types have their metadata managed by Azure Resource Manager via the [list of types for the Azure Resource Graph resources table](../../governance/resource-graph/reference/supported-tables-resources.md#resources).
For more information about building reliable applications, see [Designing reliable Azure applications](/azure/architecture/checklist/resiliency-per-service).
azure-vmware Azure Vmware Solution Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-known-issues.md
description: This article provides details about the known issues of Azure VMwar
Previously updated : 1/29/2024 Last updated : 2/14/2024 # Known issues: Azure VMware Solution
Refer to the table to find details about resolution dates or possible workaround
| When adding a cluster to my private cloud, the **Cluster-n: vSAN physical disk alarm 'Operation'** and **Cluster-n: vSAN cluster alarm 'vSAN Cluster Configuration Consistency'** alerts are active in the vSphere Client | 2021 | This alert should be considered an informational message, since Microsoft manages the service. Select the **Reset to Green** link to clear it. | 2021 | | After my private cloud NSX-T Data Center upgrade to version [3.2.2](https://docs.vmware.com/en/VMware-NSX/3.2.2/rn/vmware-nsxt-data-center-322-release-notes/https://docsupdatetracker.net/index.html), the NSX-T Manager **Capacity - Maximum Capacity Threshold** alarm is raised | 2023 | Alarm raised because there are more than four clusters in the private cloud with the medium form factor for the NSX-T Data Center Unified Appliance. The form factor needs to be scaled up to large. This issue should get detected through Microsoft, however you can also open a support request. | 2023 | | When I build a VMware HCX Service Mesh with the Enterprise license, the Replication Assisted vMotion Migration option isn't available. | 2023 | The default VMware HCX Compute Profile doesn't have the Replication Assisted vMotion Migration option enabled. From the Azure VMware Solution vSphere Client, select the VMware HCX option and edit the default Compute Profile to enable Replication Assisted vMotion Migration. | 2023 |
-| [VMSA-2023-023](https://www.vmware.com/security/advisories/VMSA-2023-0023.html) VMware vCenter Server Out-of-Bounds Write Vulnerability (CVE-2023-34048) publicized in October 2023 | October 2023 | Microsoft is currently working with its security teams and partners to evaluate the risk to Azure VMware Solution and its customers. Initial investigations show that controls in place within Azure VMware Solution reduce the risk of CVE-2023-03048. However Microsoft is working on a plan to roll out security fixes soon to completely remediate the security vulnerability. | October 2023 |
+| [VMSA-2023-0023](https://www.vmware.com/security/advisories/VMSA-2023-0023.html) VMware vCenter Server Out-of-Bounds Write Vulnerability (CVE-2023-34048) publicized in October 2023 | October 2023 | A risk assessment of CVE-2023-34048 was conducted and it was determined that sufficient controls are in place within Azure VMware Solution to reduce the risk of CVE-2023-34048 from a CVSS Base Score of 9.8 to an adjusted Environmental Score of [6.8](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H/MAC:L/MPR:H/MUI:R) or lower. Adjustments from the base score were possible due to the network isolation of the Azure VMware Solution vCenter Server (ports 2012, 2014, and 2020 are not exposed via any interactive network path) and the multiple levels of authentication and authorization necessary to gain interactive access to the vCenter network segment. Microsoft is working on a plan to roll out security fixes soon to completely remediate the security vulnerability. | October 2023 |
| The AV64 SKU currently supports RAID-1 FTT1, RAID-5 FTT1, and RAID-1 FTT2 vSAN storage policies. For more information, see [AV64 supported RAID configuration](introduction.md#av64-supported-raid-configuration) | Nov 2023 | Use AV36, AV36P, or AV52 SKUs when RAID-6 FTT2 or RAID-1 FTT3 storage policies are needed. | N/A | | VMware HCX version 4.8.0 Network Extension (NE) Appliance VMs running in High Availability (HA) mode may experience intermittent Standby to Active failover. For more information, see [HCX - NE appliances in HA mode experience intermittent failover (96352)](https://kb.vmware.com/s/article/96352) | Jan 2024 | Avoid upgrading to VMware HCX 4.8.0 if you are using NE appliances in a HA configuration. | N/A |
azure-vmware Concepts Vulnerability Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-vulnerability-management.md
Azure VMware Solution takes a defense in depth approach to vulnerability and ris
- Details within the signal are adjudicated and assigned a CVSS score and risk rating according to compensating controls within the service. - The risk rating is used against internal bug bars, internal policies and regulations to establish a timeline for implementing a fix. - Internal engineering teams partner with appropriate parties to qualify and roll out any fixes, patches and other configuration updates necessary.-- Communications are drafted and published according to the risk rating assigned.
+- Communications are drafted when necessary and published according to the risk rating assigned.
>[!tip]
->Communications are surfaced through [Azure Service Health Portal](/azure/service-health/service-health-portal-update), [Known Issues](/azure/azure-vmware/azure-vmware-solution-known-issues) and Email.
+>Communications are surfaced through [Azure Service Health Portal](/azure/service-health/service-health-portal-update), [Known Issues](/azure/azure-vmware/azure-vmware-solution-known-issues) or Email.
### Subset of regulations governing vulnerability and risk management
azure-vmware Deploy Arc For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-arc-for-azure-vmware-solution.md
Running software in Azure VMware Solution, as a private cloud in Azure, offers s
To take advantage of these benefits if you are running in an Azure VMware Solution it is important to enable Arc through this document to fully integrate the experience with the AVS private cloud. Alternatively, Arc-enabling VMs through the following mechanisms will not create the necessary attributes to register the VM and software as part of Azure VMware Solution and therefore result in billing for SQL Server ESUs for: -- Arc-enabled servers,
+- Arc-enabled servers
- Arc-enabled VMware vSphere
When the script is run successfully, check the status to see if Azure Arc is now
- Choose **Azure Arc**. - Azure Arc state shows as **Configured**.
-Recover from failed deployments
+To recover from failed deployments:
If the Azure Arc resource bridge deployment fails, consult the [Azure Arc resource bridge troubleshooting](/azure/azure-arc/resource-bridge/troubleshoot-resource-bridge) guide. While there can be many reasons why the Azure Arc resource bridge deployment fails, one of them is KVA timeout error. Learn more about the [KVA timeout error](/azure/azure-arc/resource-bridge/troubleshoot-resource-bridge#kva-timeout-error) and how to troubleshoot.
Once you connected your Azure VMware Solution private cloud to Azure, you can br
The enable action starts a deployment and creates a resource in Azure, creating representative objects in Azure for your VMware vSphere resources. It allows you to manage who can access those resources through Role-based access control granularly.
-1. Repeat the previous steps for one or more virtual machine, network, resource pool, and VM template resources.
+Repeat the previous steps for one or more virtual machine, network, resource pool, and VM template resources.
Additionally, for virtual machines there is an additional section to configure **VM extensions**. This will enable guest management to facilitate additional Azure extensions to be installed on the VM. The steps to enable this would be:- 1. Select **Enable guest management**.-
-1. Choose a __Connectivity Method__ for the Arc agent.
-
-1. Provide an Administrator/Root access username and password for the VM.
+2. Choose a __Connectivity Method__ for the Arc agent.
+3. Provide an Administrator/Root access username and password for the VM.
If you choose to enable the guest management as a separate step or have issues with the VM extension install steps please review the prerequisites and steps discussed in the section below.
You need to enable guest management on the VMware VM before you can install an e
1. Select **Configuration** from the left navigation for a VMware VM. 1. Verify **Enable guest management** is now checked.
-From here additional extensions can be installed. See the [VM extensions Overview](/azure/azure-arc/servers/manage-vm-extensions) for a list of current extensions.
+From here additional extensions can be installed. See [VM extensions](/azure/azure-arc/servers/manage-vm-extensions) for a list of current extensions.
+
+### Install extensions
+To add extensions, follow these steps:
+1. Go to **vCenter Server Inventory >** **Virtual Machines** and select the virtual machine to which you need to add an extension.
+2. Locate **Settings >** **Extensions** from the left navigation and select **Add**. Alternatively, in the **Overview** page an **Extensions** click-through is listed under Properties.
+3. Select the extension you want to install. Some extensions require additional information.
+4. When you're done, select **Review + create**.
### Next Steps To manage Arc-enabled Azure VMware Solution go to: [Manage Arc-enabled Azure VMware private cloud - Azure VMware Solution](/azure/azure-vmware/manage-arc-enabled-azure-vmware-solution)+ To remove Arc-enabled  Azure VMWare Solution resources from Azure go to: [Remove Arc-enabled Azure VMware Solution vSphere resources from Azure - Azure VMware Solution](/azure/azure-vmware/remove-arc-enabled-azure-vmware-solution-vsphere-resources-from-azure)
backup Sql Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sql-support-matrix.md
Title: Azure Backup support matrix for SQL Server Backup in Azure VMs description: Provides a summary of support settings and limitations when backing up SQL Server in Azure VMs with the Azure Backup service. Previously updated : 02/07/2024 Last updated : 02/14/2024
You can use Azure Backup to back up SQL Server databases in Azure VMs hosted on
**Supported deployments** | SQL Marketplace Azure VMs and non-Marketplace (SQL Server manually installed) VMs are supported. **Supported regions** | Azure Backup for SQL Server databases is available in all regions, except France South (FRS), UK North (UKN), UK South (UKS), UG IOWA (UGI), and Germany (Black Forest). **Supported operating systems** | Windows Server 2022, Windows Server 2019, Windows Server 2016, Windows Server 2012 (all versions), Windows Server 2008 R2 SP1 <br/><br/> Linux isn't currently supported.
-**Supported SQL Server versions** | SQL Server 2022, SQL Server 2019, SQL Server 2017 as detailed on the [Search product lifecycle page](https://support.microsoft.com/lifecycle/search?alpha=SQL%20server%202017), SQL Server 2016 and SPs as detailed on the [Search product lifecycle page](https://support.microsoft.com/lifecycle/search?alpha=SQL%20server%202016%20service%20pack), SQL Server 2014, SQL Server 2012, SQL Server 2008 R2, SQL Server 2008 <br/><br/> Enterprise, Standard, Web, Developer, Express.<br><br>Express Local DB versions aren't supported.
+**Supported SQL Server versions** | SQL Server 2022 Express, SQL Server 2022, SQL Server 2019, SQL Server 2017 as detailed on the [Search product lifecycle page](https://support.microsoft.com/lifecycle/search?alpha=SQL%20server%202017), SQL Server 2016 and SPs as detailed on the [Search product lifecycle page](https://support.microsoft.com/lifecycle/search?alpha=SQL%20server%202016%20service%20pack), SQL Server 2014, SQL Server 2012, SQL Server 2008 R2, SQL Server 2008 <br/><br/> Enterprise, Standard, Web, Developer, Express.<br><br>Express Local DB versions aren't supported.
**Supported .NET versions** | .NET Framework 4.5.2 or later installed on the VM **Supported deployments** | SQL Marketplace Azure VMs and non-Marketplace (SQL Server that is manually installed) VMs are supported. Support for standalone instances is always on [availability groups](backup-sql-server-on-availability-groups.md). **Cross Region Restore** | Supported. [Learn more](restore-sql-database-azure-vm.md#cross-region-restore).
chaos-studio Chaos Studio Private Link Agent Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-private-link-agent-service.md
az rest --verbose --skip-authorization-header --header "Authorization=Bearer $ac
> [!NOTE] > The PrivateAccessID should exactly match the "resourceID" used to create the CSPA resource in Step 1.
-## Step 4: Update host VM to map the communications endpoint to the private endpoint
-
-During the Preview of this feature, customers need to update the Agent VM extensions settings to point to the communication endpoint that supports traffic over a private network. Customers need to update the host entry on the actual VM to map the communication endpoint to the private IP generated during the private endpoint creation. You can get the IP address from the "DNS Configuration" tab in the Private Endpoint resource seen in the following screenshot:
-
-[![Screenshot of Private Endpoint DNS Config tab.](images/dns-config.png)](images/dns-config.png#lightbox)
-
-After noting the IP address, you need to open the "hosts" file on your host VM and update it with the following entry:
-
-```
-<IP address> acs-frontdoor-prod-<azureRegion>.chaosagent.trafficmanager.net
-```
-
-> [!NOTE]
-> **Path of hosts file on Windows:** C:\Windows\System32\drivers\etc
->
->
-> **Path of hosts file on Linux:** /etc/hosts
-
-Example of what the "hosts" file should look like. The IP address and Azure region change for your scenario:
-
-[![Screenshot of hosts file.](images/cspa-hosts.png)](images/cspa-hosts.png#lightbox)
-
-Save and close the file.
-
-## Step 5: Update the communication endpoint in agentSettings and agentInstanceConfig JSON files
-
-In this step, you need to continue to edit files on the host VM machine. You need to update the "agentSettings.json" and "agentInstanceConfig.json" files to include the communication endpoint based on the region in which the VM targets were created in the previous steps.
-
-### Updating the agentSettings.json
-
-> [!NOTE]
-> **Path of agentSettings.json file on Windows:** C:\Packages\Plugins\Microsoft.Azure.Chaos.ChaosWindowsAgent-\<Version\>\win-x64\agentSettings.json
->
->
-> **Path of agentSettings.json file on Linux:** /var/lib/waagent/Microsoft.Azure.Chaos.ChaosLinuxAgent-\<Version\>\linux-x64
-
-<br/>
-
-**Communication endpoint format:** https://acs-frontdoor-prod-\<azureRegion\>.chaosagent.trafficmanager.net
-
-<br/>
-
-Example of updated agentSettings.json:
-
-[![Screenshot of agentSettings JSON.](images/agent-settings-json.png)](images/agent-settings-json.png#lightbox)
--
-### Updating the agentInstanceConfig.json
-
-> [!NOTE]
-> **Path of agentInstanceConfig.json file on Windows:** C:\Windows\System32\config\systemprofile\.azure-chaos- agent\data
->
->
-> **Path of agentInstanceConfig.json file on Linux:** /.azure-chaos-agent/data/agentInstanceConfig.json
-
-<br/>
-
-**Communication endpoint format:** https://acs-frontdoor-prod-\<azureRegion\>.chaosagent.trafficmanager.net
-
-<br/>
-
-Example of updated agentInstanceConfig.json:
-
-[![Screenshot of agentInstanceConfig JSON.](images/agent-instance-config-json.png)](images/agent-instance-config-json.png#lightbox)
-
-## Step 5.5: Disable CRL verification in agentSettings.JSON
-
-**IF** you blocked outbound access to Microsoft Certificate Revocation List (CRL) verification endpoints, then you need to update agentSettings.JSON to disable CRL verification check in the agent.
-
-By default this field is set to **true**, so you can either remove this field or set the value to false. See [here](chaos-studio-tutorial-agent-based-cli.md) for more details.
-
-```
-"communicationApi": {
- "checkCertRevocation": false
- }
-```
-
-The final agentSettings.JSON should appear as shown:
-
-[![Screenshot of agentSettings JSON with disabled CRL verification.](images/agent-settings-crl.png)](images/agent-settings-crl.png#lightbox)
-
-If outbound access to Microsoft CRL verification endpoints is not blocked, then you can ignore this step.
-
-## Step 6: Restart the Azure Chaos Agent service in the VM
+## Step 4: Restart the Azure Chaos Agent service in the VM
After making all the required changes to the host, restart the Azure Chaos Agent Service in the VM
systemctl restart azure-chaos-agent
[![Screenshot of restarting Linux VM.](images/restart-linux-vm.png)](images/restart-linux-vm.png#lightbox)
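
As a quick sanity check after the restart, you can confirm the service came back up. This is a minimal sketch that reuses the service name shown above:

```
# restart the agent, then verify it's active
sudo systemctl restart azure-chaos-agent
sudo systemctl status azure-chaos-agent --no-pager
```
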
-## Step 7: Run your Agent-based experiment using private endpoints
+## Step 5: Run your Agent-based experiment using private endpoints
After the restart, the Chaos agent should be able to communicate with the Agent Communication data plane service and the agent registration to the data plane should be successful. After successful registration, the agent will be able to heartbeat its status and you can go ahead and run the chaos agent-based experiments using private endpoints!
cloud-services-extended-support Enable Key Vault Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/enable-key-vault-virtual-machine.md
Previously updated : 05/12/2021 Last updated : 01/30/2024 # Apply the Key Vault VM extension to Azure Cloud Services (extended support)
communication-services Sub Eligibility Number Capability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/sub-eligibility-number-capability.md
Numbers can be purchased on eligible Azure subscriptions and in geographies wher
**Use the drop-down to select the country/region where you're getting numbers. You'll find information about availability, restrictions and other related info on the country specific page** > [!div class="op_single_selector"] >
+> - [Argentina](../numbers/phone-number-management-for-argentina.md)
> - [Australia](../numbers/phone-number-management-for-australia.md) > - [Austria](../numbers/phone-number-management-for-austria.md) > - [Belgium](../numbers/phone-number-management-for-belgium.md)
+> - [Brazil](../numbers/phone-number-management-for-brazil.md)
> - [Canada](../numbers/phone-number-management-for-canada.md)
+> - [Chile](../numbers/phone-number-management-for-chile.md)
> - [China](../numbers/phone-number-management-for-china.md)
+> - [Colombia](../numbers/phone-number-management-for-colombia.md)
> - [Denmark](../numbers/phone-number-management-for-denmark.md) > - [Estonia](../numbers/phone-number-management-for-estonia.md) > - [Finland](../numbers/phone-number-management-for-finland.md)
Numbers can be purchased on eligible Azure subscriptions and in geographies wher
> - [Lithuania](../numbers/phone-number-management-for-lithuania.md) > - [Luxembourg](../numbers/phone-number-management-for-luxembourg.md) > - [Malaysia](../numbers/phone-number-management-for-malaysia.md)
+> - [Mexico](../numbers/phone-number-management-for-mexico.md)
> - [Netherlands](../numbers/phone-number-management-for-netherlands.md) > - [New Zealand](../numbers/phone-number-management-for-new-zealand.md) > - [Norway](../numbers/phone-number-management-for-norway.md)
communication-services Pstn Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/pstn-pricing.md
All prices shown below are in USD.
|-||--| |Toll-free |N/A |USD 0.3022/min |
+## Argentina telephony offers
+### Phone number leasing charges
+|Number type |Monthly fee |
+|--|--|
+|Toll-Free |USD 25.00/mo |
+### Usage charges
+|Number type |To make calls |To receive calls |
+|-||--|
+|Toll-free |N/A |USD 0.2347/min |
+
+## Brazil telephony offers
+### Phone number leasing charges
+|Number type |Monthly fee |
+|--|--|
+|Toll-Free |USD 35.00/mo |
+### Usage charges
+|Number type |To make calls |To receive calls |
+|-||--|
+|Toll-free |N/A |Starting at USD 0.1888/min |
+
+## Chile telephony offers
+### Phone number leasing charges
+|Number type |Monthly fee |
+|--|--|
+|Toll-Free |USD 32.00/mo |
+### Usage charges
+|Number type |To make calls |To receive calls |
+|-||--|
+|Toll-free |N/A |USD 0.1621/min |
+
+## Colombia telephony offers
+### Phone number leasing charges
+|Number type |Monthly fee |
+|--|--|
+|Toll-Free |USD 25.00/mo |
+### Usage charges
+|Number type |To make calls |To receive calls |
+|-||--|
+|Toll-free |N/A |USD 0.1707/min |
+
+## Mexico telephony offers
+### Phone number leasing charges
+|Number type |Monthly fee |
+|--|--|
+|Toll-Free |USD 30.00/mo |
+### Usage charges
+|Number type |To make calls |To receive calls |
+|-||--|
+|Toll-free |N/A |USD 0.2161/min |
***
communication-services Transfer Calls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/transfer-calls.md
# Transfer calls - During an active call, you may want to transfer the call to another person or number. Let's learn how. ## Prerequisites
confidential-ledger Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/architecture.md
Previously updated : 11/14/2022 Last updated : 01/30/2024 # Architecture
-The Azure confidential ledger, a REST API service, allows users to interact with the ledger through administrative and functional API calls. When data is recorded to the ledger, it is sent to the permissioned blockchain nodes that are secure enclaved backed replicas. The replicas follow a consensus concept. A user can also retrieve receipts for the data that has been committed to the ledger.
+The Azure confidential ledger, a REST API service, allows users to interact with the ledger through administrative and functional API calls. When data is recorded to the ledger, it is sent to the permissioned blockchain nodes, which are secure enclave-backed replicas. The replicas follow a consensus concept. A user can also retrieve receipts for the data that was committed to the ledger.
There is also an optional consortium notion that will support multi-party collaboration in the future.
confidential-ledger Authenticate Ledger Nodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/authenticate-ledger-nodes.md
Previously updated : 11/14/2022 Last updated : 01/30/2024 # Authenticating Azure confidential ledger nodes
-Azure confidential ledger nodes can be authenticated by code samples and by users.
+Code samples and users can authenticate Azure confidential ledger nodes.
## Code samples
-When initializing, code samples get the node certificate by querying the Identity Service. After retrieving the node certificate, a code sample will query the ledger to get a quote, which is then validated using the Host Verify binaries. If the verification succeeds, the code sample proceeds to ledger operations.
+When initializing, code samples get the node certificate by querying the Identity Service. After retrieving the node certificate, the code sample queries the ledger to get a quote, which is then validated using the Host Verify binaries. If the verification succeeds, the code sample proceeds to ledger operations.
## Users
-Users can validate the authenticity of Azure confidential ledger nodes to confirm they are indeed interfacing with their ledgerΓÇÖs enclave. You can build trust in Azure confidential ledger nodes in a few ways, which can be stacked on one another to increase the overall level of confidence. As such, steps 1 and 2 are important confidence building mechanisms for users of Azure confidential ledger enclave as part of the initial TLS handshake and authentication within functional workflows. Beyond that, a persistent client connection is maintained between the user's client and the confidential ledger.
+Users can validate the authenticity of Azure confidential ledger nodes to confirm they are indeed interfacing with their ledger's enclave. You can build trust in Azure confidential ledger nodes in a few ways, which can be stacked on one another to increase the overall level of confidence. As such, steps 1 and 2 are important confidence building mechanisms for users of Azure confidential ledger enclave as part of the initial TLS handshake and authentication within functional workflows. Furthermore, a persistent client connection is maintained between the user's client and the confidential ledger.
-1. **Validating a confidential ledger node**: This is accomplished by querying the identity service hosted by Microsoft, which provides a service certificate and thus helps verify that the ledger node is presenting a certificate endorsed/signed by the service certificate for that specific instance. Using PKI-based HTTPS, a serverΓÇÖs certificate is signed by a well-known Certificate Authority (CA) or intermediate CA. In the case of Azure confidential ledger, the CA certificate is returned by the Identity Service in the form of the service certificate. If this node certificate isnΓÇÖt signed by the returned service certificate, the client connection should fail (as implemented in the sample code).
+1. **Validating a confidential ledger node**: A confidential ledger node is validated by querying the identity service hosted by Microsoft, which provides a service certificate and thus helps verify that the ledger node is presenting a certificate endorsed/signed by the service certificate for that specific instance. A well-known Certificate Authority (CA) or intermediate CA signs a server's certificate using PKI-based HTTPS. In the case of Azure confidential ledger, the CA certificate is returned by the Identity Service in the form of the service certificate. If this node certificate isn't signed by the returned service certificate, the client connection should fail (as implemented in the sample code).
-2. **Validating a confidential ledger enclave**: A confidential ledger runs in an Intel® SGX enclave that’s represented by a remote attestation report (or quote), a data blob generated inside that enclave. It can be used by any other entity to verify that the quote has been produced from an application running with Intel® SGX protections. The quote contains claims that help identify various properties of the enclave and the application that it’s running. In particular, it contains the SHA-256 hash of the public key contained in the confidential ledger node's certificate. The quote of a confidential ledger node can be retrieved by calling a functional workflow API. The retrieved quote can then be validated following the steps described [here](https://microsoft.github.io/CCF/main/use_apps/verify_quote.html).
+2. **Validating a confidential ledger enclave**: A confidential ledger runs in an Intel® SGX enclave represented by a remote attestation report (or quote), a data blob generated inside that enclave. It can be used by any other entity to verify that the quote has been produced from an application running with Intel® SGX protections. The quote contains claims that help identify various properties of the enclave and the application that it's running. In particular, it contains the SHA-256 hash of the public key contained in the confidential ledger node's certificate. The quote of a confidential ledger node can be retrieved by calling a functional workflow API. The retrieved quote can then be validated following the steps described [here](https://microsoft.github.io/CCF/main/use_apps/verify_quote.html).
## Next steps
confidential-ledger Authentication Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/authentication-azure-ad.md
Previously updated : 07/12/2022 Last updated : 01/30/2024
confidential-ledger Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/overview.md
Previously updated : 11/14/2022 Last updated : 01/30/2024 # Microsoft Azure confidential ledger
-Microsoft Azure confidential ledger (ACL) is a new and highly secure service for managing sensitive data records. It runs exclusively on hardware-backed secure enclaves, a heavily monitored and isolated runtime environment which keeps potential attacks at bay. Furthermore, Azure confidential ledger runs on a minimalistic Trusted Computing Base (TCB), which ensures that no oneΓüáΓÇönot even MicrosoftΓüáΓÇöis "above" the ledger.
+Microsoft Azure confidential ledger (ACL) is a new and highly secure service for managing sensitive data records. It runs exclusively on hardware-backed secure enclaves, a heavily monitored and isolated runtime environment, which keeps potential attacks at bay. Furthermore, Azure confidential ledger runs on a minimalistic Trusted Computing Base (TCB), which ensures that no one, not even Microsoft, is "above" the ledger.
As its name suggests, Azure confidential ledger utilizes the [Azure Confidential Computing platform](../confidential-computing/index.yml) and the [Confidential Consortium Framework](https://ccf.dev) to provide a high integrity solution that is tamper-protected and evident. One ledger spans across three or more identical instances, each of which run in a dedicated, fully attested hardware-backed enclave. The ledger's integrity is maintained through a consensus-based blockchain.
For more information, you can watch the [Azure confidential ledger demo](https:/
## Key Features
-The confidential ledger is exposed through REST APIs which can be integrated into new or existing applications. The confidential ledger can be managed by administrators utilizing Administrative APIs (Control Plane). It can also be called directly by application code through Functional APIs (Data Plane). The Administrative APIs support basic operations such as create, update, get and, delete. The Functional APIs allow direct interaction with your instantiated ledger and include operations such as put and get data.
+The confidential ledger is exposed through REST APIs, which can be integrated into new or existing applications. Administrators can manage the confidential ledger with Administrative APIs (Control Plane). The confidential ledger can also be called directly by application code through Functional APIs (Data Plane). The Administrative APIs support basic operations such as create, update, get and, delete. The Functional APIs allow direct interaction with your instantiated ledger and include operations such as put and get data.
## Ledger security The ledger APIs support certificate-based authentication process with owner roles as well as Microsoft Entra ID based authentication and also role-based access (for example, owner, reader, and contributor).
-The data to the ledger is sent through TLS 1.3 connection and the TLS 1.3 connection terminates inside the hardware backed security enclaves (Intel® SGX enclaves). This ensures that no one can intercept the connection between a customer's client and the confidential ledger server nodes.
+The data to the ledger is sent through TLS 1.3 connection and the TLS 1.3 connection terminates inside the hardware backed security enclaves (Intel® SGX enclaves), ensuring that no one can intercept the connection between a customer's client and the confidential ledger server nodes.
### Ledger storage Confidential ledgers are created as blocks in blob storage containers belonging to an Azure Storage account. Transaction data can either be stored encrypted or in plaintext depending on your needs.
-The confidential ledger can be managed by administrators utilizing Administrative APIs (Control Plane), and can be called directly by your application code through Functional APIs (Data Plane). The Administrative APIs support basic operations such as create, update, get and, delete.
+Administrators can manage the confidential ledger with Administrative APIs (Control Plane), and the confidential ledger can be called directly by your application code through Functional APIs (Data Plane). The Administrative APIs support basic operations such as create, update, get and, delete.
The Functional APIs allow direct interaction with your instantiated confidential ledger and include operations such as put and get data.
The Functional APIs allow direct interaction with your instantiated confidential
|--|--| | ACL | Azure confidential ledger | | Ledger | An immutable append-only record of transactions (also known as a Blockchain) |
-| Commit | A confirmation that a transaction has been appended to the ledger. |
-| Receipt | Proof that the transaction was processed by the ledger. |
+| Commit | A confirmation that a transaction was appended to the ledger. |
+| Receipt | Proof that the ledger processed a transaction. |
## Next steps
confidential-ledger Quickstart Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-cli.md
Title: Quickstart ΓÇô Microsoft Azure confidential ledger with the Azure CLI
description: Learn to use the Microsoft Azure confidential ledger through the Azure CLI Previously updated : 03/22/2022 Last updated : 01/30/2024
# Quickstart: Create a confidential ledger using the Azure CLI
-Azure confidential ledger is a cloud service that provides a high integrity store for sensitive data logs and records that must be kept intact. In this quickstart you will use the [Azure CLI](/cli/azure/) to create a confidential ledger, view and update its properties, and delete it.
+Azure confidential ledger is a cloud service that provides a high integrity store for sensitive data logs and records that must be kept intact. In this quickstart, you use the [Azure CLI](/cli/azure/) to create a confidential ledger, view and update its properties, and delete it.
-For more information on Azure confidential ledger, and for examples of what can be stored in a confidential ledger, see [About Microsoft Azure confidential ledger](overview.md).
+For more information on Azure confidential ledger and examples of what can be stored in a confidential ledger, see [About Microsoft Azure confidential ledger](overview.md).
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
For more information on Azure confidential ledger, and for examples of what can
## Get your principal ID
-To create a confidential ledger, you'll need your Microsoft Entra principal ID (also called your object ID). To obtain your principal ID, use the Azure CLI [az ad signed-in-user](/cli/azure/ad/signed-in-user) command, and filter the results by `objectId`:
+To create a confidential ledger, you need your Microsoft Entra principal ID (also called your object ID). To obtain your principal ID, use the Azure CLI [az ad signed-in-user](/cli/azure/ad/signed-in-user) command, and filter the results by `objectId`:
```azurecli az ad signed-in-user show --query objectId ```
-Your result will be in the format `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`.
+Your result is in the format `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`.
## Create a confidential ledger
Use the Azure CLI [az confidentialledger create](/cli/azure/confidentialledger#a
az confidentialledger create --name "myLedger" --resource-group "myResourceGroup" --location "EastUS" --ledger-type "Public" --aad-based-security-principals ledger-role-name="Administrator" principal-id="<your-principal-id>" ```
-A successful operation will return the properties of the newly created ledger. Take note of the **ledgerUri**. In the example above, this URI is "https://myledger.confidential-ledger.azure.com".
+A successful operation returns the properties of the newly created ledger. Take note of the **ledgerUri**. In our example, this URI is "https://myledger.confidential-ledger.azure.com".
-You'll need this URI to transact with the confidential ledger from the data plane.
+You need this URI to transact with the confidential ledger from the data plane.
## View and update your confidential ledger properties
To update the properties of a confidential ledger, use the Azure CLI
az confidentialledger update --name "myLedger" --resource-group "myResourceGroup" --ledger-type "Public" --aad-based-security-principals ledger-role-name="Reader" principal-id="<your-principal-id>" ```
-If you again run [az confidentialledger show](/cli/azure/confidentialledger#az-confidentialledger-show), you'll see that the role has been updated.
+If you again run [az confidentialledger show](/cli/azure/confidentialledger#az-confidentialledger-show), you see that the role is updated.
```json "ledgerRoleName": "Reader",
If you again run [az confidentialledger show](/cli/azure/confidentialledger#az-c
## Next steps
-In this quickstart, you created a confidential ledger by using the Azure CLI. To learn more about Azure confidential ledger and how to integrate it with your applications, continue on to the articles below.
+In this quickstart, you created a confidential ledger by using the Azure CLI. To learn more about Azure confidential ledger and how to integrate it with your applications, continue on to these articles.
- [Overview of Microsoft Azure confidential ledger](overview.md)
confidential-ledger Quickstart Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-net.md
Title: Quickstart - Azure confidential ledger client library for .NET
description: Learn how to use Azure Confidential Ledger with the client library for .NET Previously updated : 07/15/2022 Last updated : 01/30/2024 ms.devlang: csharp
Azure confidential ledger client library resources:
- [.NET Core 3.1 SDK or later](https://dotnet.microsoft.com/download/dotnet-core) - [Azure CLI](/cli/azure/install-azure-cli)
-You will also need a running confidential ledger, and a registered user with the `Administrator` privileges. You can create a confidential ledger (and an administrator) using the [Azure portal](quickstart-portal.md), the [Azure CLI](quickstart-cli.md), or [Azure PowerShell](quickstart-powershell.md).
+You also need a running confidential ledger and a registered user with `Administrator` privileges. You can create a confidential ledger (and an administrator) using the [Azure portal](quickstart-portal.md), the [Azure CLI](quickstart-cli.md), or [Azure PowerShell](quickstart-powershell.md).
## Setup ### Create new .NET console app 1. In a command shell, run the following command to create a project named `acl-app`:- ```dotnetcli dotnet new console --name acl-app ```- 1. Change to the newly created *acl-app* directory, and run the following command to build the project: ```dotnetcli dotnet build ```- The build output should contain no warnings or errors.
-
```console Build succeeded. 0 Warning(s)
Install the Confidential Ledger client library for .NET with [NuGet][client_nuge
dotnet add package Azure.Security.ConfidentialLedger --version 1.0.0 ```
-For this quickstart, you'll also need to install the Azure SDK client library for Azure Identity:
+For this quickstart, you also need to install the Azure SDK client library for Azure Identity:
```dotnetcli dotnet add package Azure.Identity
dotnet add package Azure.Identity
## Object model
-The Azure confidential ledger client library for .NET allows you to create an immutable ledger entry in the service. The [Code examples](#code-examples) section shows how to create a write to the ledger and retrieve the transaction ID.
+The Azure confidential ledger client library for .NET allows you to create an immutable ledger entry in the service. The [Code examples](#code-examples) section shows how to write to the ledger and retrieve the transaction ID.
## Code examples
using Azure.Security.ConfidentialLedger.Certificate;
### Authenticate and create a client
-In this quickstart, logged in user is used to authenticate to Azure confidential ledger, which is preferred method for local development. The name of your confidential ledger is expanded to the key vault URI, in the format "https://\<your-confidential-ledger-name\>.confidential-ledger.azure.com". This example is using ['DefaultAzureCredential()'](/dotnet/api/azure.identity.defaultazurecredential) class from [Azure Identity Library](/dotnet/api/overview/azure/identity-readme), which allows to use the same code across different environments with different options to provide identity.
+In this quickstart, the logged-in user is used to authenticate to Azure confidential ledger, which is the preferred method for local development. The name of your confidential ledger is expanded to the confidential ledger URI, in the format "https://\<your-confidential-ledger-name\>.confidential-ledger.azure.com". This example uses the ['DefaultAzureCredential()'](/dotnet/api/azure.identity.defaultazurecredential) class from the [Azure Identity Library](/dotnet/api/overview/azure/identity-readme), which allows you to use the same code across different environments with different options to provide identity.
```csharp var credential = new DefaultAzureCredential();
confidential-ledger Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-portal.md
Title: Quickstart – Microsoft Azure confidential ledger with the Azure portal
description: Learn to use the Microsoft Azure confidential ledger through the Azure portal Previously updated : 11/14/2022 Last updated : 01/30/2024
# Quickstart: Create a confidential ledger using the Azure portal
-Azure confidential ledger is a cloud service that provides a high integrity store for sensitive data logs and records that require data to be kept intact. For more information on Azure confidential ledger, and for examples of what can be stored in a confidential ledger, see [About Microsoft Azure confidential ledger](overview.md).
+Azure confidential ledger is a cloud service that provides a high integrity store for sensitive data logs and records that require data to be kept intact. For more information on Azure confidential ledger and examples of what can be stored in a confidential ledger, see [About Microsoft Azure confidential ledger](overview.md).
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
Sign in to the [Azure portal](https://portal.azure.com).
## Create a confidential ledger 1. From the Azure portal menu, or from the Home page, select **Create a resource**.- 1. In the Search box, enter "Confidential Ledger", select the application in the results, and then choose **Create**.- 1. On the Create confidential ledger section, provide the following information: - **Name**: Provide a unique name. - **Subscription**: Choose the desired subscription. - **Resource Group**: Select **Create new** and enter a resource group name. - **Location**: In the pull-down menu, choose a location. - Leave the other options at their defaults.
-
1. Select the **Security** tab.-
-1. You must now add a Microsoft Entra ID-based or certificate-based user to your confidential ledger with a role of "Administrator." In this quickstart, we'll add a Microsoft Entra ID-based user. Select **+ Add Microsoft Entra ID-Based User**.
-
+1. You must now add a Microsoft Entra ID-based or certificate-based user to your confidential ledger with a role of "Administrator." In this quickstart, you add a Microsoft Entra ID-based user. Select **+ Add Microsoft Entra ID-Based User**.
1. You must add a Microsoft Entra ID-based or Certificate-based user. Search the right-hand pane for your email address. Select your row, and then choose **Select** at the bottom of the pane. Your user profile may already be in the Microsoft Entra ID-based user section, in which case you cannot add yourself again.- 1. In the **Ledger Role** drop-down field, select **Administrator**.
+1. Select **Review + Create**. After validation, select **Create**.
-1. Select **Review + Create**. After validation has passed, select **Create**.
-
-When the deployment is complete. select **Go to resource**.
+When the deployment is complete, select **Go to resource**.
:::image type="content" source="./media/confidential-ledger-portal-quickstart.png" alt-text="ACL portal create screen":::
-Take note of the two properties listed below:
-- **confidential ledger name**: In the example, it is "test-create-ledger-demo." You will use this name for other steps.
+Take note of these two properties:
+- **confidential ledger name**: In the example, it is "test-create-ledger-demo." Use this name for other steps.
- **Ledger endpoint**: In the example, this endpoint is `https://test-create-ledger-demo.confidential-ledger.azure.net/`. You will need these property names to transact with the confidential ledger from the data plane.
confidential-ledger Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-powershell.md
Title: Quickstart – Microsoft Azure confidential ledger with Azure PowerShell
description: Learn to use the Microsoft Azure confidential ledger through Azure PowerShell Previously updated : 06/08/2022 Last updated : 01/30/2024
# Quickstart: Create a confidential ledger using Azure PowerShell
-Azure confidential ledger is a cloud service that provides a high integrity store for sensitive data logs and records that must be kept intact. In this quickstart you will use [Azure PowerShell](/powershell/azure/) to create a confidential ledger, view and update its properties, and delete it. For more information on Azure confidential ledger, and for examples of what can be stored in a confidential ledger, see [About Microsoft Azure confidential ledger](overview.md).
+Azure confidential ledger is a cloud service that provides a high integrity store for sensitive data logs and records that must be kept intact. In this quickstart, you use [Azure PowerShell](/powershell/azure/) to create a confidential ledger, view and update its properties, and delete it. For more information on Azure confidential ledger and examples of what can be stored in a confidential ledger, see [About Microsoft Azure confidential ledger](overview.md).
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. [!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
-In this quickstart, you create a confidential ledger with [Azure PowerShell](/powershell/azure/). If you choose to install and use PowerShell locally, this tutorial requires Azure PowerShell module version 1.0.0 or later. Type `$PSVersionTable.PSVersion` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
+In this quickstart, you create a confidential ledger with [Azure PowerShell](/powershell/azure/). If you choose to install and use PowerShell locally, this tutorial requires Azure PowerShell module version 1.0.0 or later. Type `$PSVersionTable.PSVersion` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
## Prerequisites
In this quickstart, you create a confidential ledger with [Azure PowerShell](/po
## Get your principal ID and tenant ID
-To create a confidential ledger, you'll need your Microsoft Entra principal ID (also called your object ID). To obtain your principal ID, use the Azure PowerShell [Get-AzADUser](/powershell/module/az.resources/get-azaduser) cmdlet, with the `-SignedIn` flag:
+To create a confidential ledger, use your Microsoft Entra principal ID (also called your object ID). To obtain your principal ID, use the Azure PowerShell [Get-AzADUser](/powershell/module/az.resources/get-azaduser) cmdlet, with the `-SignedIn` flag:
```azurepowershell Get-AzADUser -SignedIn ```
-Your result will be listed under "Id", in the format `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`.
+Your result is listed under "Id", in the format `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`.
## Create a confidential ledger
-Use the Azure Powershell [New-AzConfidentialLedger](/powershell/module/az.confidentialledger/new-azconfidentialledger) command to create a confidential ledger in your new resource group.
+Use the Azure PowerShell [New-AzConfidentialLedger](/powershell/module/az.confidentialledger/new-azconfidentialledger) command to create a confidential ledger in your new resource group.
```azurepowershell New-AzConfidentialLedger -Name "myLedger" -ResourceGroupName "myResourceGroup" -Location "EastUS" -LedgerType "Public" -AadBasedSecurityPrincipal @{ LedgerRoleName="Administrator"; PrincipalId="34621747-6fc8-4771-a2eb-72f31c461f2e"; } ```
-A successful operation will return the properties of the newly created ledger. Take note of the **ledgerUri**. In the example above, this URI is "https://myledger.confidential-ledger.azure.com".
+A successful operation returns the properties of the newly created ledger. Take note of the **ledgerUri**. In the example above, this URI is "https://myledger.confidential-ledger.azure.com".
-You'll need this URI to transact with the confidential ledger from the data plane.
+You need this URI to transact with the confidential ledger from the data plane.
## View and update your confidential ledger properties
To update the properties of a confidential ledger, use the Azure Powe
Update-AzConfidentialLedger -Name "myLedger" -ResourceGroupName "myResourceGroup" -Location "EastUS" -LedgerType "Public" -AadBasedSecurityPrincipal @{ LedgerRoleName="Reader"; PrincipalId="34621747-6fc8-4771-a2eb-72f31c461f2e"; } ```
-If you again run [Get-AzConfidentialLedger](/powershell/module/az.confidentialledger/get-azconfidentialledger), you'll see that the role has been updated.
+If you again run [Get-AzConfidentialLedger](/powershell/module/az.confidentialledger/get-azconfidentialledger), you see that the role is updated.
```json "ledgerRoleName": "Reader",
confidential-ledger Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-python.md
Title: Quickstart – Microsoft Azure confidential ledger Python client library
description: Learn to use the Microsoft Azure confidential ledger client library for Python Previously updated : 11/14/2022 Last updated : 01/30/2024
# Quickstart: Microsoft Azure confidential ledger client library for Python
-Get started with the Microsoft Azure confidential ledger client library for Python. Follow the steps below to install the package and try out example code for basic tasks.
+Get started with the Microsoft Azure confidential ledger client library for Python. Follow the steps in this article to install the package and try out example code for basic tasks.
-Microsoft Azure confidential ledger is a new and highly secure service for managing sensitive data records. Based on a permissioned blockchain model, Azure confidential ledger offers unique data integrity advantages, such as immutability (making the ledger append-only) and tamperproofing (to ensure all records are kept intact).
+Microsoft Azure confidential ledger is a new and highly secure service for managing sensitive data records. Based on a permissioned blockchain model, Azure confidential ledger offers unique data integrity advantages, such as immutability (making the ledger append-only) and tamper proofing (to ensure all records are kept intact).
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
Microsoft Azure confidential ledger is a new and highly secure service for manag
## Prerequisites - An Azure subscription - [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- Python versions that are [supported by the Azure SDK for Python](https://github.com/Azure/azure-sdk-for-python#prerequisites).
+- Python versions [supported by the Azure SDK for Python](https://github.com/Azure/azure-sdk-for-python#prerequisites).
- [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-azure-powershell). ## Set up
pip install azure.confidentialledger
### Initialization
-We can now start writing our Python application. First, we'll import the required packages.
+We can now start writing our Python application. First, import the required packages.
```python # Import the Azure authentication library
from azure.confidentialledger import ConfidentialLedgerClient
from azure.confidentialledger.certificate import ConfidentialLedgerCertificateClient ```
-Next, we'll use the [DefaultAzureCredential Class](/python/api/azure-identity/azure.identity.defaultazurecredential) to authenticate the app.
+Next, use the [DefaultAzureCredential Class](/python/api/azure-identity/azure.identity.defaultazurecredential) to authenticate the app.
```python credential = DefaultAzureCredential() ```
-We'll finish setup by setting some variables for use in your application: the resource group (myResourceGroup), the name of ledger you want to create, and two urls to be used by the data plane client library.
+Finish setup by setting some variables for use in your application: the resource group (myResourceGroup), the name of the ledger you want to create, and two URLs to be used by the data plane client library.
> [!Important] > Each ledger must have a globally unique name. Replace \<your-unique-ledger-name\> with the name of your ledger in the following example.
ledger_url = "https://" + ledger_name + ".confidential-ledger.azure.com"
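The digest truncates the variable block above; a plausible sketch of what it sets, assuming the public identity service endpoint and placeholder names, is:

```python
resource_group = "myResourceGroup"
ledger_name = "<your-unique-ledger-name>"  # must be globally unique
subscription_id = "<your-azure-subscription-id>"  # placeholder

# URLs used by the data plane client library (assumed public cloud values).
identity_url = "https://identity.confidential-ledger.core.azure.com"
ledger_url = "https://" + ledger_name + ".confidential-ledger.azure.com"
```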
### Use the control plane client library
-The control plane client library (azure.mgmt.confidentialledger) allows operations on ledgers, such as creation, modification, and deletion, listing the ledgers associated with a subscription, and getting the details of a specific ledger.
+The control plane client library (azure.mgmt.confidentialledger) allows operations on ledgers, such as creation, modification, deletion, listing the ledgers associated with a subscription, and getting the details of a specific ledger.
-In our code, we will first create a control plane client by passing the ConfidentialLedgerAPI the credential variable and your Azure subscription ID (both of which are set above).
+In the code, first create a control plane client by passing the ConfidentialLedgerAPI the credential variable and your Azure subscription ID (both of which are set above).
```python confidential_ledger_mgmt = ConfidentialLedgerAPI(
print (f"- ID: {myledger.id}")
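# The hunk above is truncated in the digest. A hedged sketch of the full control
# plane flow (create the ledger, then read it back) follows; the payload keys
# mirror the ARM request body, and the principal ID and location are placeholders.
# It reuses credential, subscription_id, resource_group, and ledger_name from the setup above.
confidential_ledger_mgmt = ConfidentialLedgerAPI(credential, subscription_id)

ledger_payload = {
    "location": "EastUS",
    "properties": {
        "ledgerType": "Public",
        "aadBasedSecurityPrincipals": [
            {"principalId": "<your-principal-id>", "ledgerRoleName": "Administrator"}
        ],
    },
}
# begin_create starts a long-running operation; .result() blocks until it completes.
confidential_ledger_mgmt.ledger.begin_create(resource_group, ledger_name, ledger_payload).result()

# Read the resource back and print a few of its properties.
myledger = confidential_ledger_mgmt.ledger.get(resource_group, ledger_name)
print(f"- ID: {myledger.id}")
```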
### Use the data plane client library
-Now that we have a ledger, we'll interact with it using the data plane client library (azure.confidentialledger).
+Now that we have a ledger, interact with it using the data plane client library (azure.confidentialledger).
-First, we will generate and save a confidential ledger certificate.
+First, we generate and save a confidential ledger certificate.
```python identity_client = ConfidentialLedgerCertificateClient(identity_url)
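# The rest of this snippet is truncated in the digest. A hedged continuation
# (reusing identity_client, ledger_name, ledger_url, and credential from the
# setup above; the file name and entry contents are illustrative) might look like:
network_identity = identity_client.get_ledger_identity(ledger_id=ledger_name)
ledger_tls_cert_file_name = "networkcert.pem"
with open(ledger_tls_cert_file_name, "w") as cert_file:
    cert_file.write(network_identity["ledgerTlsCertificate"])

# Create the data plane client, which validates TLS against the saved certificate.
ledger_client = ConfidentialLedgerClient(
    endpoint=ledger_url,
    credential=credential,
    ledger_certificate_path=ledger_tls_cert_file_name,
)

# Append an entry, note its transaction ID, then read the latest entry back.
sample_entry = {"contents": "Hello world!"}
append_result = ledger_client.create_ledger_entry(entry=sample_entry)
print(f"Transaction ID: {append_result['transactionId']}")

latest_entry = ledger_client.get_current_ledger_entry()
print(f"Latest entry contents: {latest_entry['contents']}")
```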
confidential-ledger Quickstart Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-template.md
Previously updated : 11/14/2022 Last updated : 01/30/2024 # Quickstart: Create a Microsoft Azure confidential ledger with an ARM template
confidential-ledger Register Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/register-application.md
Previously updated : 07/15/2022 Last updated : 01/30/2024 #Customer intent: As developer, I want to know how to register my Azure confidential ledger application with the Microsoft identity platform so that the security token service can issue ID and/or access tokens to client applications that request them.
confidential-ledger Register Ledger Resource Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/register-ledger-resource-provider.md
Previously updated : 11/14/2022 Last updated : 01/30/2024
container-apps Managed Identity Image Pull https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/managed-identity-image-pull.md
You can pull images from private repositories in Microsoft Azure Container Regis
With a system-assigned managed identity, the identity is created and managed by Azure Container Apps. The identity is tied to your container app and is deleted when your app is deleted. With a user-assigned managed identity, you create and manage the identity outside of Azure Container Apps. It can be assigned to multiple Azure resources, including Azure Container Apps.
+Container Apps checks for a new version of the image whenever a container is started. In Docker or Kubernetes terminology, Container Apps sets each container's image pull policy to `always`.
+ ::: zone pivot="azure-portal" This article describes how to use the Azure portal to configure your container app to use user-assigned and system-assigned managed identities to pull images from private Azure Container Registry repositories.
The following steps describe the process to configure your container app to use
1. Create a container app with a public image. 1. Add the user-assigned managed identity to the container app.
-1. Create a container app revision with a private image and the system-assigned managed identity.
+1. Create a container app revision with a private image and the user-assigned managed identity.
### Prerequisites
You can verify that the role was added by checking the identity from the **Ident
1. Select **Azure role assignments** from the menu on the managed identity resource page. 1. Verify that the `acrpull` role is assigned to the user-assigned managed identity.
+### Create a container app with a private image
+
+If you don't want to start by creating a container app with a public image, you can instead do the following:
+
+1. Create a user-assigned managed identity.
+1. Add the `acrpull` role to the user-assigned managed identity.
+1. Create a container app with a private image and the user-assigned managed identity.
+
+This method is typical in Infrastructure as Code (IaC) scenarios.
+ ### Clean up resources If you're not going to continue to use this application, you can delete the Azure Container Apps instance and all the associated services by removing the resource group.
cosmos-db Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/data-explorer.md
Title: Use Azure Cosmos DB Explorer to manage your data
-description: Learn about Azure Cosmos DB Explorer, a standalone web-based interface that allows you to view and manage the data stored in Azure Cosmos DB.
+ Title: Use the Explorer to manage your data
+
+description: Learn about the Azure Cosmos DB Explorer, a standalone web-based interface that allows you to view and manage the data stored in Azure Cosmos DB.
+++ Previously updated : 03/02/2023-- Last updated : 02/14/2024
+# CustomerIntent: As a database developer, I want to access the Explorer so that I can observe my data and make queries against my data.
-# Work with data using Azure Cosmos DB Explorer
+# Use the Azure Cosmos DB Explorer to manage your data
+ [!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin, Table](includes/appliesto-nosql-mongodb-cassandra-gremlin-table.md)]
-Azure Cosmos DB Explorer is a standalone web-based interface that allows you to view and manage the data stored in Azure Cosmos DB. Azure Cosmos DB Explorer is equivalent to the existing **Data Explorer** tab that is available in Azure portal when you create an Azure Cosmos DB account. The key advantages of Azure Cosmos DB Explorer over the existing Data explorer are:
+Azure Cosmos DB Explorer is a standalone web-based interface that allows you to view and manage the data stored in Azure Cosmos DB. Azure Cosmos DB Explorer is equivalent to the existing **Data Explorer** section that is available in Azure portal for Azure Cosmos DB accounts.
-- You have a full screen real-estate to view your data, run queries, stored procedures, triggers, and view their results. -- You can provide read or read-write access to your database account and its collections to other users who don't have access to Azure portal or subscription. -- You can share the query results with other users who don't have access to Azure portal or subscription.
+The Azure Cosmos DB Explorer has a few key advantages when compared to the Data Explorer, including:
-## Access Azure Cosmos DB Explorer
+- Full screen real-estate to browse data, run queries, and observe query results
+- Ability to provide users without access to the Azure portal or an Azure subscription read or read-write capabilities over data in containers
+- Ability to share query results with users who don't have an Azure subscription or Azure portal access
-1. Sign in to [Azure portal](https://portal.azure.com/).
+## Prerequisites
+
+- An existing Azure Cosmos DB account.
+ - If you don't have an Azure subscription, [Try Azure Cosmos DB free](https://cosmos.azure.com/try/).
+
+## Access the Explorer directly using your Azure subscription
-1. From **All resources**, find and navigate to your Azure Cosmos DB account, select **Keys**, and copy the **Primary Connection String**. You can select either:
+You can access the Explorer directly and use your existing credentials to quickly get started with the tool.
- - **Read-write Keys**. When you share the Read-write primary connection string other users, they can view and modify the databases, collections, queries, and other resources associated with that specific account.
- - **Read-only Keys**. When you share the read-only primary connection string with other users, they can view the databases, collections, queries, and other resources associated with that specific account. For example, if you want to share results of a query with your teammates who don't have access to Azure portal or your Azure Cosmos DB account, you can provide them with this value.
+1. Navigate to <https://cosmos.azure.com>.
-1. Go to [https://cosmos.azure.com/](https://cosmos.azure.com/).
+1. Select **Sign In**. Sign in using your existing credentials that have access to the Azure Cosmos DB account.
-1. Select **Connect to your account with connection string**, paste the connection string, and select **Connect**.
+1. Next, select your Azure subscription and target account from the **Select a Database Account** menu.
-To open Azure Cosmos DB Explorer from the Azure portal:
+ ![Screenshot of the 'Select a Database Account' menu in the Explorer.](media/data-explorer/select-database-account.png)
+
+## Access the Explorer from the Azure portal using your Azure subscription
+
+If you're already comfortable with the Azure portal, you can navigate directly from the in-portal Data Explorer to the standalone Explorer.
+
+1. Sign in to [Azure portal](https://portal.azure.com/).
-1. Select the **Data Explorer** in the left menu, then select **Open Full Screen**.
+1. Navigate to your existing Azure Cosmos DB account.
- :::image type="content" source="./media/data-explorer/open-data-explorer.png" alt-text="Screenshot shows Data Explorer page with Open Full Screen highlighted." lightbox="./media/data-explorer/open-data-explorer.png":::
+1. In the resource menu, select **Data Explorer**.
+
+1. Next, select the **Open Full Screen** menu option.
+
+ ![Screenshot of the Data Explorer page with the 'Open Full Screen' option highlighted.](media/data-explorer/open-full-screen.png)
1. In the **Open Full Screen** dialog, select **Open**.
+## Grant someone else access to the Explorer using a connection string
+
+Use either the **read-write** or **read-only** key to give another user access to your Azure Cosmos DB account. This method works even if the user doesn't have access to an Azure subscription or the Azure portal.
+
+1. Sign in to [Azure portal](https://portal.azure.com/).
+
+1. Navigate to your existing Azure Cosmos DB account.
+
+1. In the resource menu, select **Keys**.
+
+1. On the **Keys** page, select either the **Read-write Keys** or **Read-only Keys** option. Then, copy the value of the **Primary Connection String** field. You use this value in a later step.
+
+ | | Description |
+ | | |
+ | **Read-write key** | Provides access to view and modify the databases, containers, queries, and other resources associated with that specific account |
+ | **Read-only key** | Provides access to view databases, containers, queries, and other resources associated with that specific account |
+
+ > [!TIP]
+   > If you want to share results of a query with your teammates who don't have access to an Azure subscription or the Azure portal, you can provide them with the read-only connection string.
+
+1. Now, have the other user navigate to <https://cosmos.azure.com>.
+
+1. Select **Connect to your account with connection string**. Then, have the user enter the connection string copied earlier and select **Connect**.
+
+## Configure request unit threshold
+
+In the Explorer, you can configure a limit to the request units per second (RU/s) that queries use. Use this functionality to control the request unit (RU) cost and performance of your queries. This functionality can also cancel high-cost queries automatically.
+
+1. Start in the Explorer for the target Azure Cosmos DB account.
+
+1. Select the **Settings** menu option.
+
+   ![Screenshot of a Data Explorer page with the 'Open Settings' option highlighted.](media/data-explorer/open-settings.png)
+
+1. In the **Settings** dialog, configure whether you want to **Enable RU threshold** and the actual **RU threshold** value.
+
+   ![Screenshot of the individual settings to configure the request unit threshold.](media/data-explorer/configure-ru-threshold.png)
+
+ > [!TIP]
+ > The RU threshold is enabled automatically with a default value of **5,000** RUs.
+ ## Known issues
-Currently, viewing documents that contain a UUID isn't supported in Data Explorer. This limitation doesn't affect loading collections, only viewing individual documents or queries that include these documents. To view and manage these documents, users should continue to use the tool that was originally used to create these documents.
+Here are a few currently known issues:
-Customers receiving HTTP-401 errors may be due to insufficient Azure RBAC permissions for your Azure account, particularly if the account has a custom role. Any custom roles must have `Microsoft.DocumentDB/databaseAccounts/listKeys/*` action to use Data Explorer if signing in using their Microsoft Entra credentials.
+- Browsing items that contain a UUID isn't supported in Data Explorer. This limitation doesn't affect loading containers, only viewing individual items or queries that include these items. To view and manage these items, users should continue to use the same tooling/SDKs that were originally used to create these items.
-## Next steps
+- HTTP 401 errors could occur due to insufficient role-based access control permissions for your Microsoft Entra ID account, particularly if the account has a custom role. Any custom roles must have the `Microsoft.DocumentDB/databaseAccounts/listKeys/*` action included to use the Explorer.
-Now that you've learned how to get started with Azure Cosmos DB Explorer to manage your data, next you can:
+## Next step
-- [Getting started with queries](nosql/query/getting-started.md)-- [Stored procedures, triggers, and user-defined functions](stored-procedures-triggers-udfs.md)
+> [!div class="nextstepaction"]
+> [Getting started with queries](nosql/query/getting-started.md)
cosmos-db How To Configure Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-configure-firewall.md
You can enable requests to access the Azure portal by selecting the **Allow acce
:::image type="content" source="./media/how-to-configure-firewall/enable-azure-portal.png" alt-text="Screenshot showing how to enable Azure portal access" border="true":::
+> [!NOTE]
+> If you are experiencing challenges connecting to your Azure Cosmos DB account from the Data Explorer, review the [Data Explorer troubleshooting guide](/troubleshoot/azure/cosmos-db/data-explorer).
+ ### Allow requests from global Azure datacenters or other sources within Azure If you access your Azure Cosmos DB account from services that don't provide a static IP (for example, Azure Stream Analytics and Azure Functions), you can still use the IP firewall to limit access. You can enable access from other sources within Azure by selecting the **Accept connections from within Azure datacenters** option, as shown in the following screenshot:
cost-management-billing Enable Marketplace Purchases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/enable-marketplace-purchases.md
Previously updated : 02/08/2024 Last updated : 02/13/2024
For more information about setting up and configuring Marketplace product collec
## Next steps
+- To learn more about creating the private marketplace, see [Create private Azure Marketplace](/marketplace/create-manage-private-azure-marketplace-new#create-private-azure-marketplace).
- To learn more about setting up and configuring Marketplace product collections, see [Collections overview](/marketplace/create-manage-private-azure-marketplace-new#collections-overview). - To read more about the Marketplace in the [Microsoft commercial marketplace customer documentation](/marketplace/).
dedicated-hsm Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dedicated-hsm/monitoring.md
Previously updated : 11/14/2022 Last updated : 01/30/2024 # Azure Dedicated HSM monitoring
-The Azure Dedicated HSM Service provides a physical device for sole customer use with complete administrative control and management responsibility. The device made available is a [Thales Luna 7 HSM model A790](https://cpl.thalesgroup.com/encryption/hardware-security-modules/network-hsms). Microsoft will have no administrative access once provisioned by a customer, beyond physical serial port attachment as a monitoring role. As a result, customers are responsible for typical operational activities including comprehensive monitoring and log analysis.
-Customers are fully responsible for applications that use the HSMs and should work with Thales for support or consulting assistance. Due to the extent of customer ownership of operational hygiene, it is not possible for Microsoft to offer any kind of high availability guarantee for this service. It is the customerΓÇÖs responsibility to ensure their applications are correctly configured to achieve high availability. Microsoft will monitor and maintain device health and network connectivity.
+The Azure Dedicated HSM Service provides a physical device for sole customer use with complete administrative control and management responsibility. The device made available is a [Thales Luna 7 HSM model A790](https://cpl.thalesgroup.com/encryption/hardware-security-modules/network-hsms). Microsoft has no administrative access once provisioned by a customer, beyond physical serial port attachment as a monitoring role. As a result, customers are responsible for typical operational activities including comprehensive monitoring and log analysis.
+
+Customers are fully responsible for applications that use the HSMs and should work with Thales for support or consulting assistance. Due to the extent of customer ownership of operational hygiene, it is not possible for Microsoft to offer any kind of high availability guarantee for this service. It is the customer's responsibility to ensure their applications are correctly configured to achieve high availability. Microsoft monitors and maintains device health and network connectivity.
## Microsoft monitoring
-The Thales Luna 7 HSM device in use has by default SNMP and serial port as options for monitoring the device. Microsoft has used the serial port connection as a physical means to connect to the device to retrieve basic telemetry on device health. This includes items such as temperature and component status such as power supplies and fans.
-To achieve this, Microsoft uses a non-administrative ΓÇ£monitorΓÇ¥ role set up on the Thales device. This role gives the ability to retrieve the telemetry but does not give any access to the device in terms of administrative task or in any way viewing cryptographic information. Our customers can be assured their device is truly their own to manage, administer, and use for sensitive cryptographic key storage. In case any customer is not satisfied with this minimal access for basic health monitoring, they do have the option to disable the monitoring account. The obvious consequence of this is that Microsoft will have no information and hence no ability to provide any proactive notification of device health issues. In this situation, the customer is responsible for the health of the device.
+The Thales Luna 7 HSM device in use provides SNMP and serial port options for monitoring the device by default. Microsoft uses the serial port connection as a physical means to connect to the device and retrieve basic telemetry on device health, including temperature and component status (such as power supplies and fans).
+
+To do so, Microsoft uses a nonadministrative "monitor" role set up on the Thales device. This role can retrieve telemetry, but it doesn't give any access to the device for administrative tasks or for viewing cryptographic information. Our customers can be assured their device is truly their own to manage, administer, and use for sensitive cryptographic key storage. If a customer isn't satisfied with this minimal access for basic health monitoring, they have the option to disable the monitoring account. The obvious consequence is that Microsoft has no information and therefore no ability to provide proactive notification of device health issues. In this situation, the customer is responsible for the health of the device.
The monitor function itself is set up to poll the device every 10 minutes to get health data. Due to the error-prone nature of serial communications, an alert is raised only after multiple negative health indicators over a one-hour period. This alert ultimately leads to a proactive customer communication about the issue.+ Depending on the nature of the issue, the appropriate course of action would be taken to reduce impact and ensure low-risk remediation. For example, a power supply failure is a hot-swap procedure with no resultant tamper event, so it can be performed with low impact and minimal risk to operation. Other procedures may require a device to be zeroized and deprovisioned to minimize any security risk to the customer. In this situation, a customer would provision an alternate device and rejoin a high availability pairing, thus triggering device synchronization. Normal operation would resume in minimal time, with minimal disruption and lowest security risk. ## Customer monitoring
dedicated-hsm Quickstart Create Hsm Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dedicated-hsm/quickstart-create-hsm-powershell.md
description: Create an Azure Dedicated HSM with Azure PowerShell
Previously updated : 11/14/2022 Last updated : 01/30/2024 ms.devlang: azurepowershell
dedicated-hsm Quickstart Hsm Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dedicated-hsm/quickstart-hsm-azure-cli.md
ms.devlang: azurecli Previously updated : 11/14/2022 Last updated : 01/30/2024
defender-for-cloud Enable Defender For Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-defender-for-endpoint.md
description: Learn how to deploy the Microsoft Defender for Endpoint integration
Previously updated : 01/22/2024 Last updated : 02/14/2024 # Enable the Microsoft Defender for Endpoint integration
After you select **Enable** in the insight panel, Defender for Cloud:
- Automatically onboards your Linux machines to Defender for Endpoint in the selected subscriptions. - Detects any previous installations of Defender for Endpoint and reconfigure them to integrate with Defender for Cloud.
-Use the [Defender for Endpoint status workbook](https://aka.ms/MDEStatus) to verify installation and deployment status of Defender for Endpoint on a Linux machine.
+Use the [Defender for Endpoint status workbook](https://github.com/Azure/Microsoft-Defender-for-Cloud/tree/main/Workbooks/Defender%20for%20Servers%20Deployment%20Status) to verify installation and deployment status of Defender for Endpoint on a Linux machine.
#### Enable on multiple subscriptions with a PowerShell script
URI: `https://management.azure.com/subscriptions/<subscriptionId>/providers/Micr
## Track MDE deployment status
-You can use the [Defender for Endpoint deployment status workbook](https://github.com/Azure/Microsoft-Defender-for-Cloud/tree/main/Workbooks/Defender%20for%20Endpoint%20Deployment%20Status) to track the Defender for Endpoint deployment status on your Azure VMs and non-Azure machines that are connected via Azure Arc. The interactive workbook provides an overview of machines in your environment showing their Microsoft Defender for Endpoint extension deployment status.
+You can use the [Defender for Endpoint deployment status workbook](https://github.com/Azure/Microsoft-Defender-for-Cloud/tree/main/Workbooks/Defender%20for%20Servers%20Deployment%20Status) to track the Defender for Endpoint deployment status on your Azure VMs and non-Azure machines that are connected via Azure Arc. The interactive workbook provides an overview of machines in your environment showing their Microsoft Defender for Endpoint extension deployment status.
## Access the Microsoft Defender for Endpoint portal
defender-for-cloud Managing And Responding Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/managing-and-responding-alerts.md
When triaging security alerts, you should prioritize alerts based on their alert
Each alert contains information regarding the alert that assists you in your investigation.
-**To investigate a security alert**:
+**To investigate a security alert**:
-1. Select an alert. A side pane opens and shows a description of the alert and all the affected resources.
+1. Select an alert. A side pane opens and shows a description of the alert and all the affected resources.
:::image type="content" source="./media/managing-and-responding-alerts/alerts-details-pane.png" alt-text="Screenshot of the high-level details view of a security alert."::: 1. Review the high-level information about the security alert.
-
+ - Alert severity, status, and activity time - Description that explains the precise activity that was detected - Affected resources
Each alert contains information regarding the alert that assists you in your inv
1. Select **View full details**. The right pane includes the **Alert details** tab containing further details of the alert to help you investigate the issue: IP addresses, files, processes, and more.
-
+ :::image type="content" source="./media/managing-and-responding-alerts/security-center-alert-remediate.png" alt-text="Screenshot that shows the full details page for an alert."::: Also in the right pane is the **Take action** tab. Use this tab to take further actions regarding the security alert. Actions such as:
The alerts list includes checkboxes so you can handle multiple alerts at once. F
1. Filter according to the alerts you want to handle in bulk.
- In this example, the alerts with severity of `Informational` for the resource `ASC-AKS-CLOUD-TALK` are selected.
+ In this example, the alerts with severity of `Informational` for the resource `ASC-AKS-CLOUD-TALK` are selected.
:::image type="content" source="media/managing-and-responding-alerts/processing-alerts-bulk-filter.png" alt-text="Screenshot that shows how to filter alerts to show related alerts.":::
-1. Use the checkboxes to select the alerts to be processed.
+1. Use the checkboxes to select the alerts to be processed.
- In this example, all alerts are selected. The **Change status** button is now available.
+ In this example, all alerts are selected. The **Change status** button is now available.
:::image type="content" source="media/managing-and-responding-alerts/processing-alerts-bulk-select.png" alt-text="Screenshot of selecting all alerts to handle in bulk.":::
The alerts list includes checkboxes so you can handle multiple alerts at once. F
:::image type="content" source="media/managing-and-responding-alerts/processing-alerts-bulk-change-status.png" alt-text="Screenshot of the security alerts status tab.":::
-The alerts shown in the current page have their status changed to the selected value.
+The alerts shown in the current page have their status changed to the selected value.
## Respond to a security alert
After investigating a security alert, you can respond to the alert from within M
:::image type="content" source="./media/managing-and-responding-alerts/alert-details-take-action.png" alt-text="Screenshot of the security alerts take action tab." lightbox="./media/managing-and-responding-alerts/alert-details-take-action.png":::
-1. Review the **Mitigate the threat** section for the manual investigation steps necessary to mitigate the issue.
+1. Review the **Mitigate the threat** section for the manual investigation steps necessary to mitigate the issue.
1. To harden your resources and prevent future attacks of this kind, remediate the security recommendations in the **Prevent future attacks** section.
After investigating a security alert, you can respond to the alert from within M
The alert is removed from the main alerts list. You can use the filter from the alerts list page to view all alerts with **Dismissed** status.
-1. We encourage you to provide feedback about the alert to Microsoft:
+1. We encourage you to provide feedback about the alert to Microsoft:
1. Marking the alert as **Useful** or **Not useful**. 1. Select a reason and add a comment.
- :::image type="content" source="./media/managing-and-responding-alerts/alert-feedback.png" alt-text="Screenshot of the provide feedback to Microsoft window which allows you to select the usefulness of an alert.":::
+ :::image type="content" source="./media/managing-and-responding-alerts/alert-feedback.png" alt-text="Screenshot of the provide feedback to Microsoft window that allows you to select the usefulness of an alert.":::
> [!TIP] > We review your feedback to improve our algorithms and provide better security alerts.
defender-for-cloud Monitoring Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/monitoring-components.md
Defender for Cloud depends on the [Log Analytics agent](../azure-monitor/agents/
- [Log Analytics agent for Windows supported operating systems](../azure-monitor/agents/agents-overview.md#supported-operating-systems) - [Log Analytics agent for Linux supported operating systems](../azure-monitor/agents/agents-overview.md#supported-operating-systems)
-Also ensure your Log Analytics agent is [properly configured to send data to Defender for Cloud](working-with-log-analytics-agent.md#manual-agent)
+Also ensure your Log Analytics agent is [properly configured to send data to Defender for Cloud](working-with-log-analytics-agent.md#manual-agent).
<a name="preexisting"></a>
The following use cases explain how deployment of the Log Analytics agent works
- **System Center Operations Manager agent is installed on the machine** - Defender for Cloud will install the Log Analytics agent extension side by side with the existing Operations Manager agent. The existing Operations Manager agent will continue to report to the Operations Manager server normally. The Operations Manager agent and Log Analytics agent share common run-time libraries, which will be updated to the latest version during this process. - **A pre-existing VM extension is present**:
- - When the Monitoring Agent is installed as an extension, the extension configuration allows reporting to only a single workspace. Defender for Cloud doesn't override existing connections to user workspaces. Defender for Cloud will store security data from the VM in the workspace already connected, if the "Security" or "SecurityCenterFree" solution has been installed on it. Defender for Cloud might upgrade the extension version to the latest version in this process.
+ - When the Monitoring Agent is installed as an extension, the extension configuration allows reporting to only a single workspace. Defender for Cloud doesn't override existing connections to user workspaces. Defender for Cloud will store security data from the VM in the workspace already connected, if the "Security" or "SecurityCenterFree" solution was installed on it. Defender for Cloud might upgrade the extension version to the latest version in this process.
  - To see which workspace the existing extension is sending data to, run the *TestCloudConnection.exe* tool to validate connectivity with Microsoft Defender for Cloud, as described in [Verify Log Analytics Agent connectivity](/services-hub/unified/health/assessments-troubleshooting#verify-log-analytics-agent-connectivity). Alternatively, you can open Log Analytics workspaces, select a workspace, select the VM, and look at the Log Analytics agent connection. - If you have an environment where the Log Analytics agent is installed on client workstations and reporting to an existing Log Analytics workspace, review the list of [operating systems supported by Microsoft Defender for Cloud](security-center-os-coverage.md) to make sure your operating system is supported.
Learn more about Azure's [Guest Configuration extension](../governance/machine-c
### Defender for Containers extensions
-This table shows the availability details for the components that are required by the protections offered by [Microsoft Defender for Containers](defender-for-containers-introduction.md).
+This table shows the availability details for the components required by the protections offered by [Microsoft Defender for Containers](defender-for-containers-introduction.md).
By default, the required extensions are enabled when you enable Defender for Containers from the Azure portal.
By default, the required extensions are enabled when you enable Defender for Con
| Required roles and permissions (subscription-level): | [Owner](../role-based-access-control/built-in-roles.md#owner) or [User Access Administrator](../role-based-access-control/built-in-roles.md#user-access-administrator) | [Owner](../role-based-access-control/built-in-roles.md#owner) or [User Access Administrator](../role-based-access-control/built-in-roles.md#user-access-administrator) | | Supported destinations: | The AKS Defender agent only supports [AKS clusters that have RBAC enabled](../aks/concepts-identity.md#kubernetes-rbac). | [See Kubernetes distributions supported for Arc-enabled Kubernetes](supported-machines-endpoint-solutions-clouds-containers.md?tabs=azure-aks#kubernetes-distributions-and-configurations) | | Policy-based: | :::image type="icon" source="./media/icons/yes-icon.png"::: Yes | :::image type="icon" source="./media/icons/yes-icon.png"::: Yes |
-| Clouds: | **Defender agent**:<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Microsoft Azure operated by 21Vianet<br>**Azure Policy for Kubernetes **:<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government, Microsoft Azure operated by 21Vianet|**Defender agent**:<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Microsoft Azure operated by 21Vianet<br>**Azure Policy for Kubernetes**:<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Microsoft Azure operated by 21Vianet|
+| Clouds: | **Defender agent**:<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Microsoft Azure operated by 21Vianet<br>**Azure Policy for Kubernetes**:<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government, Microsoft Azure operated by 21Vianet|**Defender agent**:<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Microsoft Azure operated by 21Vianet<br>**Azure Policy for Kubernetes**:<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Microsoft Azure operated by 21Vianet|
Learn more about the [roles used to provision Defender for Containers extensions](permissions.md#roles-used-to-automatically-provision-agents-and-extensions).
defender-for-cloud Multi Factor Authentication Enforcement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/multi-factor-authentication-enforcement.md
Title: Security recommendations for multifactor authentication
-description: Learn how to enforce multifactor authentication for your Azure subscriptions using Microsoft Defender for Cloud
+description: Learn how to enforce multifactor authentication for your Azure subscriptions using Microsoft Defender for Cloud.
Last updated 08/22/2023
The following recommendations in the Enable MFA control ensure you're meeting th
- Accounts with write permissions on Azure resources should be MFA enabled - Accounts with read permissions on Azure resources should be MFA enabled - There are three ways to enable MFA and be compliant with the two recommendations in Defender for Cloud: security defaults, per-user assignment, and conditional access (CA) policy. ### Free option - security defaults
To see which accounts don't have MFA enabled, use the following Azure Resource G
1. Enter the following query and select **Run query**.
- ```
+ ```Kusto
securityresources | where type =~ "microsoft.security/assessments/subassessments" | where id has "assessments/dabc9bc4-b8a8-45bd-9a5a-43000df8aa1c" or id has "assessments/c0cb17b2-0607-48a7-b0e0-903ed22de39b" or id has "assessments/6240402e-f77c-46fa-9060-a7ce53997754"
To see which accounts don't have MFA enabled, use the following Azure Resource G
- Conditional Access policy applied to Microsoft Entra roles (such as all global admins, external users, external domain, etc.) isn't supported yet. - External MFA solutions such as Okta, Ping, Duo, and more aren't supported within the identity MFA recommendations. - ## Next steps To learn more about recommendations that apply to other Azure resource types, see the following articles:
defender-for-cloud Onboard Management Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/onboard-management-group.md
Last updated 02/21/2023
# Enable Defender for Cloud on all subscriptions in a management group
-You can use Azure Policy to enable Microsoft Defender for Cloud on all the Azure subscriptions within the same management group (MG). This is more convenient than accessing them individually from the portal, and works even if the subscriptions belong to different owners.
+You can use Azure Policy to enable Microsoft Defender for Cloud on all the Azure subscriptions within the same management group (MG). This is more convenient than accessing them individually from the portal, and works even if the subscriptions belong to different owners.
## Prerequisites
az provider register --namespace Microsoft.Security --management-group-id …
> [!TIP] > Other than the scope, there are no required parameters.
-1. Select **Remediation**, and select **Create a remediation task** to ensure all existing subscriptions that don't have Defender for Cloud enabled, will get onboarded.
+1. Select **Remediation**, and select **Create a remediation task** to ensure all existing subscriptions that don't have Defender for Cloud enabled will get onboarded.
:::image type="content" source="./media/get-started/remediation-task.png" alt-text="Screenshot that shows how to create a remediation task for the Azure Policy definition Enable Defender for Cloud on your subscription.":::
The remediation task will then enable Defender for Cloud's basic functionality o
## Optional modifications
-There are various ways you might choose to modify the Azure Policy definition:
+There are various ways you might choose to modify the Azure Policy definition:
- **Define compliance differently** - The supplied policy classifies all subscriptions in the MG that aren't yet registered with Defender for Cloud as "non-compliant". You might choose to set it to all subscriptions without Defender for Cloud's enhanced security features enabled. The supplied definition defines *either* of the 'pricing' settings below as compliant, meaning that a subscription set to 'standard' or 'free' is compliant. > [!TIP]
- > When any Microsoft Defender plan is enabled, it's described in a policy definition as being on the 'Standard' setting. When it's disabled, it's 'Free'. To learn about the differences between these plans, see [Microsoft Defender for Cloud's Defender plans](defender-for-cloud-introduction.md#protect-cloud-workloads).
+ > When any Microsoft Defender plan is enabled, it's described in a policy definition as being on the 'Standard' setting. When it's disabled, it's 'Free'. To learn about the differences between these plans, see [Microsoft Defender for Cloud's Defender plans](defender-for-cloud-introduction.md#protect-cloud-workloads).
- ```
+ ```json
"existenceCondition": { "anyof": [ {
There are various ways you might choose to modify the Azure Policy definition:
If you change it to the following, only subscriptions set to 'standard' would be classified as compliant:
- ```
+ ```json
"existenceCondition": { { "field": "microsoft.security/pricings/pricingTier",
There are various ways you might choose to modify the Azure Policy definition:
- **Define some Microsoft Defender plans to apply when enabling Defender for Cloud** - The supplied policy enables Defender for Cloud without any of the optional enhanced security features. You might choose to enable one or more of the Microsoft Defender plans.
- The supplied definition's `deployment` section has a parameter `pricingTier`. By default, this is set to `free`, but you can modify it.
-
+ The supplied definition's `deployment` section has a parameter `pricingTier`. By default, this is set to `free`, but you can modify it.
-## Next steps:
+## Next steps
-Now that you've onboarded an entire management group, enable the enhanced security features.
+Now that you onboarded an entire management group, enable the enhanced security features.
> [!div class="nextstepaction"] > [Enable enhanced protections](enable-enhanced-security.md)
defender-for-cloud Onboarding Guide 42Crunch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/onboarding-guide-42crunch.md
Because the quality of the API specification largely determines the scan coverag
> [!NOTE] > The following steps walk through the process of setting up the free version of 42Crunch. See the [FAQ section](#faq) to learn about the differences between the free and paid versions of 42Crunch and how to purchase 42Crunch on the Azure Marketplace.
-Through relying on the 42Crunch [Audit](https://42crunch.com/api-security-audit) and [Scan](https://42crunch.com/api-conformance-scan/) services, developers can proactively test and harden APIs within their CI/CD pipelines through static and dynamic testing of APIs against the top OWASP API risks and Open API specification best practices. The security scan results from 42Crunch are now available within Defender for Cloud, ensuring central security teams have visibility into the health of APIs within the Defender for Cloud recommendation experience, and can take governance steps natively available through Defender for Cloud recommendations.
+Through relying on the 42Crunch [Audit](https://42crunch.com/api-security-audit) and [Scan](https://42crunch.com/api-conformance-scan/) services, developers can proactively test and harden APIs within their CI/CD pipelines through static and dynamic testing of APIs against the top OWASP API risks and OpenAPI specification best practices. The security scan results from 42Crunch are now available within Defender for Cloud, ensuring central security teams have visibility into the health of APIs within the Defender for Cloud recommendation experience, and can take governance steps natively available through Defender for Cloud recommendations.
## Connect your GitHub repositories to Microsoft Defender for Cloud
-This feature requires a GitHub connector in Defender for Cloud. See [how to onboard your GitHub organizations](quickstart-onboard-github.md)
+This feature requires a GitHub connector in Defender for Cloud. See [how to onboard your GitHub organizations](quickstart-onboard-github.md).
## Configure 42Crunch Audit service
Install the 42Crunch API Security Audit plugin within your CI/CD pipeline throug
1. Sign in to GitHub. 1. Select a repository you want to configure the GitHub action to.
-1. Select **Actions**
-1. Select **New Workflow**
+1. Select **Actions**.
+1. Select **New Workflow**.
:::image type="content" source="media/onboarding-guide-42crunch/new-workflow.png" alt-text="Screenshot showing new workflow selection." lightbox="media/onboarding-guide-42crunch/new-workflow.png"::: To create a new default workflow:
-1. Choose "Setup a workflow yourself"
-1. Rename the workflow from main.yaml to 42crunch-audit.yml
+1. Choose **Setup a workflow yourself**.
+1. Rename the workflow from `main.yaml` to `42crunch-audit.yml`.
1. Go to [https://github.com/marketplace/actions/42crunch-rest-api-static-security-testing-freemium#full-workflow-example](https://github.com/marketplace/actions/42crunch-rest-api-static-security-testing-freemium#full-workflow-example). 1. Copy the full sample workflow and paste it in the workflow editor.
To create a new default workflow:
1. Select **Commit changes**. You can either directly commit to the main branch or create a pull request. We would recommend following GitHub best practices by creating a PR, as the default workflow launches when a PR is opened against the main branch. 1. Select **Actions** and verify the new action is running.
- :::image type="content" source="media/onboarding-guide-42crunch/new-action.png" alt-text="Screenshow showing new action running." lightbox="media/onboarding-guide-42crunch/new-action.png":::
+ :::image type="content" source="media/onboarding-guide-42crunch/new-action.png" alt-text="Screenshot showing new action running." lightbox="media/onboarding-guide-42crunch/new-action.png":::
1. After the workflow completes, select **Security**, then select **Code scanning** to view the results. 1. Select a Code Scanning alert detected by 42Crunch REST API Static Security Testing. You can also filter by tool in the Code scanning tab. Filter on **42Crunch REST API Static Security Testing**. :::image type="content" source="media/onboarding-guide-42crunch/code-scanning-alert.png" alt-text="Screenshot showing code scanning alert." lightbox="media/onboarding-guide-42crunch/code-scanning-alert.png":::
-You have now verified that the Audit results are showing in GitHub Code Scanning. Next, we verify that these Audit results are available within Defender for Cloud. It might take up to 30 minutes for results to show in Defender for Cloud.
+You now verified that the Audit results are showing in GitHub Code Scanning. Next, we verify that these Audit results are available within Defender for Cloud. It might take up to 30 minutes for results to show in Defender for Cloud.
## Navigate to Defender for Cloud
You have now verified that the Audit results are showing in GitHub Code Scanning
1. Filter by searching for **API security testing**. 1. Select the recommendation **GitHub repositories should have API security testing findings resolved**.
-The selected recommendation shows all 42Crunch Audit findings. You have completed the onboarding for the 42Crunch Audit step.
+The selected recommendation shows all 42Crunch Audit findings. You completed the onboarding for the 42Crunch Audit step.
:::image type="content" source="media/onboarding-guide-42crunch/api-recommendations.png" alt-text="Screenshot showing API summary." lightbox="media/onboarding-guide-42crunch/api-recommendations.png":::
The selected recommendation shows all 42Crunch Audit findings. You have complete
API Scan continually scans the API to ensure conformance to the OpenAPI contract and detect vulnerabilities at testing time. It detects OWASP API Security Top 10 issues early in the API lifecycle and validates that your APIs can handle unexpected requests.
-The scan requires a non-production live API endpoint, and the required credentials (API key/access token). [Follow these steps](https://github.com/42Crunch/apisecurity-tutorial) to configure the 42Crunch Scan.
+The scan requires a nonproduction live API endpoint, and the required credentials (API key/access token). [Follow these steps](https://github.com/42Crunch/apisecurity-tutorial) to configure the 42Crunch Scan.
## FAQ
The 42Crunch security Audit and Conformance scan identify potential vulnerabilit
### Can 42Crunch be used to enforce compliance with minimum quality and security standards for developers?
-Yes. 42Crunch includes the ability to enforce compliance using [Security Quality Gates (SQG)](https://docs.42crunch.com/latest/content/concepts/security_quality_gates.htm). SQGs are comprised of certain criteria that must be met to successfully pass an Audit or Scan. For example, an SQG can ensure that an Audit or Scan with one or more critical severity issues does not pass. In CI/CD, the 42Crunch Audit or Scan can be configured to fail a build if it fails to pass an SQG, thus requiring a developer to resolve the underlying issue before pushing their code.
+Yes. 42Crunch includes the ability to enforce compliance using [Security Quality Gates (SQG)](https://docs.42crunch.com/latest/content/concepts/security_quality_gates.htm). SQGs are composed of certain criteria that must be met to successfully pass an Audit or Scan. For example, an SQG can ensure that an Audit or Scan with one or more critical severity issues doesn't pass. In CI/CD, the 42Crunch Audit or Scan can be configured to fail a build if it fails to pass an SQG, thus requiring a developer to resolve the underlying issue before pushing their code.
The free version of 42Crunch uses default SQGs for both Audit and Scan whereas the paid enterprise version allows for customization of SQGs and tags, which allow SQGs to be applied selectively to groupings of APIs. ### What data is stored within 42Crunch's SaaS service?
-A limited free trial version of the 42Crunch security Audit and Conformance scan can be deployed in CI/CD, which generates reports locally without the need for a 42Crunch SaaS connection. In this version, there is no data shared with the 42Crunch platform.
+A limited free trial version of the 42Crunch security Audit and Conformance scan can be deployed in CI/CD, which generates reports locally without the need for a 42Crunch SaaS connection. In this version, there's no data shared with the 42Crunch platform.
For the full enterprise version of the 42Crunch platform, the following data is stored in the SaaS platform:
For the full enterprise version of the 42Crunch platform, the following data is
### How is 42Crunch licensed?
-42Crunch is licensed based on a combination of the number of APIs and the number of developers that are provisioned on the platform. For example pricing bundles, see [this marketplace listing](https://azuremarketplace.microsoft.com/marketplace/apps/42crunch1580391915541.42crunch_developer_first_api_security_platform?tab=overview). Custom pricing is available through private offers on the Azure commercial marketplace. For a custom quote, reach out to sales@42crunch.com.
+42Crunch is licensed based on a combination of the number of APIs and the number of developers that are provisioned on the platform. For example pricing bundles, see [this marketplace listing](https://azuremarketplace.microsoft.com/marketplace/apps/42crunch1580391915541.42crunch_developer_first_api_security_platform?tab=overview). Custom pricing is available through private offers on the Azure commercial marketplace. For a custom quote, reach out to <mailto:sales@42crunch.com>.
### What's the difference between the free and paid version of 42Crunch? 42Crunch offers both a free limited version and paid enterprise version of the security Audit and Conformance scan.
-For the free version of 42Crunch, the 42Crunch CI/CD plugins work standalone, with no requirement to sign in to the 42Crunch platform. Audit and scanning results are then made available in Microsoft Defender for Cloud, as well as within the CI/CD platform. Audits and scans are limited to up to 25 executions per month each, per repo, with a maximum of 3 repositories.
+For the free version of 42Crunch, the 42Crunch CI/CD plugins work standalone, with no requirement to sign in to the 42Crunch platform. Audit and scanning results are then made available in Microsoft Defender for Cloud, as well as within the CI/CD platform. Audits and scans are limited to up to 25 executions per month each, per repo, with a maximum of three repositories.
For the paid enterprise version of 42Crunch, Audits and scans are still executed locally in CI/CD but can sync with the 42Crunch platform service, where you can use several advanced features including customizable security quality gates, data dictionaries, and tagging. While the enterprise version is licensed for a certain number of APIs, there are no limits to the number of Audits and scans that can be run on a monthly basis.
defender-for-cloud Support Matrix Defender For Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-defender-for-containers.md
Following are the features for each of the domains in Defender for Containers:
| Aspect | Details | |--|--|
-| Registries and images | **Supported**<br> • ACR registries <br> • [ACR registries protected with Azure Private Link](/azure/container-registry/container-registry-private-link) (Private registries requires access to Trusted Services) <br> • Container images in Docker V2 format <br> **Unsupported**<br> • Super-minimalist images such as [Docker scratch](https://hub.docker.com/_/scratch/) images<br> is currently unsupported <br> • Images with [Open Container Initiative (OCI)](https://github.com/opencontainers/image-spec/blob/main/spec.md) image format specification <br> |
+| Registries and images | **Supported**<br> • ACR registries <br> • [ACR registries protected with Azure Private Link](/azure/container-registry/container-registry-private-link) (private registries require access to Trusted Services) <br> • Container images in Docker V2 format <br> • Images with [Open Container Initiative (OCI)](https://github.com/opencontainers/image-spec/blob/main/spec.md) image format specification <br> **Unsupported**<br> • Super-minimalist images such as [Docker scratch](https://hub.docker.com/_/scratch/) images <br> |
| Operating systems | **Supported** <br> • Alpine Linux 3.12-3.16 <br> • Red Hat Enterprise Linux 6-9 <br> • CentOS 6-9<br> • Oracle Linux 6-9 <br> • Amazon Linux 1, 2 <br> • openSUSE Leap, openSUSE Tumbleweed <br> • SUSE Enterprise Linux 11-15 <br> • Debian GNU/Linux 7-12 <br> • Google Distroless (based on Debian GNU/Linux 7-12) <br> • Ubuntu 12.04-22.04 <br> • Fedora 31-37<br> • Mariner 1-2<br> • Windows Server 2016, 2019, 2022| | Language specific packages <br><br> | **Supported** <br> • Python <br> • Node.js <br> • .NET <br> • JAVA <br> • Go |
Learn how to [use Azure Private Link to connect networks to Azure Monitor](../az
| Aspect | Details | |--|--|
-| Registries and images | **Supported**<br> • ECR registries <br> • Container images in Docker V2 format <br> **Unsupported**<br> • Super-minimalist images such as [Docker scratch](https://hub.docker.com/_/scratch/) images is currently unsupported <br> • Images with [Open Container Initiative (OCI)](https://github.com/opencontainers/image-spec/blob/main/spec.md) image format specification <br>• Public repositories <br> • Manifest lists <br>|
+| Registries and images | **Supported**<br> • ECR registries <br> • Container images in Docker V2 format <br> • Images with [Open Container Initiative (OCI)](https://github.com/opencontainers/image-spec/blob/main/spec.md) image format specification <br> **Unsupported**<br> • Super-minimalist images such as [Docker scratch](https://hub.docker.com/_/scratch/) images <br> • Public repositories <br> • Manifest lists <br>|
| Operating systems | **Supported** <br> • Alpine Linux 3.12-3.16 <br> • Red Hat Enterprise Linux 6-9 <br> • CentOS 6-9<br> • Oracle Linux 6-9 <br> • Amazon Linux 1, 2 <br> • openSUSE Leap, openSUSE Tumbleweed <br> • SUSE Enterprise Linux 11-15 <br> • Debian GNU/Linux 7-12 <br> • Google Distroless (based on Debian GNU/Linux 7-12)<br> • Ubuntu 12.04-22.04 <br> • Fedora 31-37<br> • Mariner 1-2<br> • Windows Server 2016, 2019, 2022| | Language specific packages <br><br> | **Supported** <br> • Python <br> • Node.js <br> • .NET <br> • JAVA <br> • Go |
Outbound proxy without authentication and outbound proxy with basic authenticati
| Aspect | Details | |--|--|
-| Registries and images | **Supported**<br> • Google Registries (GAR, GCR) <br> • Container images in Docker V2 format <br> **Unsupported**<br> • Super-minimalist images such as [Docker scratch](https://hub.docker.com/_/scratch/) images is currently unsupported <br> • Images with [Open Container Initiative (OCI)](https://github.com/opencontainers/image-spec/blob/main/spec.md) image format specification <br>• Public repositories <br> • Manifest lists <br>|
+| Registries and images | **Supported**<br> • Google Registries (GAR, GCR) <br> • Container images in Docker V2 format <br> • Images with [Open Container Initiative (OCI)](https://github.com/opencontainers/image-spec/blob/main/spec.md) image format specification <br> **Unsupported**<br> • Super-minimalist images such as [Docker scratch](https://hub.docker.com/_/scratch/) images <br> • Public repositories <br> • Manifest lists <br>|
| Operating systems | **Supported** <br> • Alpine Linux 3.12-3.16 <br> • Red Hat Enterprise Linux 6-9 <br> • CentOS 6-9<br> • Oracle Linux 6-9 <br> • Amazon Linux 1, 2 <br> • openSUSE Leap, openSUSE Tumbleweed <br> • SUSE Enterprise Linux 11-15 <br> • Debian GNU/Linux 7-12 <br> • Google Distroless (based on Debian GNU/Linux 7-12)<br> • Ubuntu 12.04-22.04 <br> • Fedora 31-37<br> • Mariner 1-2<br> • Windows Server 2016, 2019, 2022| | Language specific packages <br><br> | **Supported** <br> • Python <br> • Node.js <br> • .NET <br> • JAVA <br> • Go |
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Customers that are still using the API version **2022-09-01-preview** under `Mic
Customers currently using Defender for Cloud DevOps security from Azure portal won't be impacted.
-For details on the new API version, see [Microsoft Defender for Cloud REST APIs](/rest/api/defenderforcloud/).
+For details on the new API version, see [Microsoft Defender for Cloud REST APIs](/rest/api/defenderforcloud/operation-groups?view=rest-defenderforcloud-2023-09-01-preview).
## Changes in endpoint protection recommendations
defender-for-iot Concept Supported Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/concept-supported-protocols.md
OT network sensors can detect the following protocols when identifying assets an
|Brand / Vendor |Protocols | |||
-|**ABB** | ABB 800xA DCS (IEC61850 MMS including ABB extension)<br> CNCP<br> RNRP<br> ABB IAC<br> ABB Totalflow |
+|**ABB** | ABB 800xA DCS (IEC61850 MMS including ABB extension)<br> CNCP<br> RNRP<br> ABB IAC<br> ABB Totalflow <br> ABB NetConfig |
|**ASHRAE** | BACnet<br> BACnet BACapp<br> BACnet BVLC | |**Beckhoff** | AMS (ADS)<br> Twincat | |**Cisco** | CAPWAP Control<br> CAPWAP Data<br> CDP<br> LWAPP |
+|**DICOM** | Dicom |
|**DNP. org** | DNP3 | |**Emerson** | DeltaV<br> DeltaV - Discovery<br> Emerson OpenBSI/BSAP<br> Ovation DCS ADMD<br>Ovation DCS DPUSTAT<br> Ovation DCS SSRPC | |**Emerson Fischer** | ROC |
+|**FANUC** | FANUC FOCAS |
+|**FieldComm Group**| HART-IP |
|**GE** | ADL (MarkVIe) <br>Bentley Nevada (System 1 / BN3500)<br>ClassicSDI (MarkVle) <br> EGD<br> GSM (GE MarkVI and MarkVIe)<br> InterSite<br> SDI (MarkVle) <br> SRTP (GE)<br> GE_CMP | |**Generic Applications** | Active Directory<br> RDP<br> Teamviewer<br> VNC<br> | |**Honeywell** | ENAP<br> Experion DCS CDA<br> Experion DCS FDA<br> Honeywell EUCN <br> Honeywell Discovery |
OT network sensors can detect the following protocols when identifying assets an
|**Omron** | FINS <br>HTTP | |**OPC** | AE <br>Common <br> DA <br>HDA <br> UA | |**Oracle** | TDS<br> TNS |
-|**Rockwell Automation** | CSP2<br> ENIP<br> EtherNet/IP CIP (including Rockwell extension)<br> EtherNet/IP CIP FW version 27 and above |
+|**Rockwell Automation** | CSP2<br> ENIP<br> EtherNet/IP CIP (including Rockwell extension)<br> EtherNet/IP CIP FW version 27 and above <br>Rockwell AADvance Discover <br> Rockwell AADvance SNCP/IXL |
|**Samsung** | Samsung TV |
-|**Schneider Electric** | Modbus/TCP<br> Modbus TCP–Schneider Unity Extensions<br> OASYS (Schneider Electric Telvant)<br> Schneider TSAA |
+|**Schneider Electric** | Modbus/TCP<br> Modbus TCP–Schneider Unity Extensions<br> OASYS (Schneider Electric Telvant)<br> Schneider TSAA <br> Schneider NetManage |
|**Schneider Electric / Invensys** | Foxboro Evo<br> Foxboro I/A<br> Trident<br> TriGP<br> TriStation | |**Schneider Electric / Modicon** | Modbus RTU | |**Schneider Electric / Wonderware** | Wonderware Suitelink |
defender-for-iot How To Accelerate Alert Incident Response https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-accelerate-alert-incident-response.md
Title: Accelerate on-premises OT alert workflows - Microsoft Defender for IoT
+ Title: Accelerate OT alert workflows - Microsoft Defender for IoT
description: Learn how to improve Microsoft Defender for IoT OT alert workflows on an OT network sensor or the on-premises management console. Previously updated : 12/20/2023 Last updated : 01/31/2024
+# Accelerate OT alert workflows
-# Accelerate on-premises OT alert workflows
+> [!NOTE]
+> Noted features are in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
Microsoft Defender for IoT alerts enhance your network security and operations with real-time details about events logged in your network. OT alerts are triggered when OT network sensors detect changes or suspicious activity in network traffic that needs your attention. This article describes the following methods for reducing OT network alert fatigue in your team:
+- **Create suppression rules** from the Azure portal to reduce the alerts triggered by your sensors. If you're working in an air-gapped environment, do this by creating alert exclusion rules on the on-premises management console.
+ - **Create alert comments** for your teams to add to individual alerts, streamlining communication and record-keeping across your alerts. - **Create custom alert rules** to identify specific traffic in your network -- **Create alert exclusion rules** to reduce the alerts triggered by your sensors- ## Prerequisites -- To create alert comments or custom alert rules on an OT network sensor, you must have an OT network sensor installed and access to the sensor as an **Admin** user.
+Before you use the procedures on this page, note the following prerequisites:
-- To create a DNS allowlist on an OT sensor, you must have an OT network sensor installed and access to the sensor as a **Support** user.-- To create alert exclusion rules on an on-premises management console, you must have an on-premises management console installed and access to the on-premises management console as an **Admin** user.
+|To ... |You must have ... |
+|||
+|[Create alert suppression rules on the Azure portal](#create-alert-suppression-rules-on-the-azure-portal-public-preview) | A Defender for IoT subscription with at least one cloud-connected OT sensor and access as a [Security Admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner). |
+|[Create a DNS allowlist on an OT sensor](#allow-internet-connections-on-an-ot-network) | An OT network sensor installed and access to the sensor as the default *Admin* user. |
+|[Create alert comments on an OT sensor](#create-alert-comments-on-an-ot-sensor) | An OT network sensor installed and access to the sensor as any user with an **Admin** role. |
+|[Create custom alert rules on an OT sensor](#create-custom-alert-rules-on-an-ot-sensor) | An OT network sensor installed and access to the sensor as any user with an **Admin** role. |
+|[Create alert exclusion rules on an on-premises management console](#create-alert-exclusion-rules-on-an-on-premises-management-console) | An on-premises management console installed and access to the on-premises management console as any user with an **Admin** role. |
-For more information, see [Install OT agentless monitoring software](how-to-install-software.md) and [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md).
+For more information, see:
-## Create alert comments on an OT sensor
+- [Install OT monitoring software on OT sensors](ot-deploy/install-software-ot-sensor.md)
+- [Azure user roles and permissions for Defender for IoT](roles-azure.md)
+- [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md)
-1. Sign into your OT sensor and select **System Settings** > **Network Monitoring** > **Alert Comments**.
+## Suppress irrelevant alerts
-1. In the **Alert comments** pane, in the **Description** field, enter the new comment, and select **Add**. The new comment appears in the **Description** list below the field.
+Configure your OT sensors to suppress alerts for specific traffic on your network that would otherwise trigger an alert. For example, if all the OT devices monitored by a specific sensor are going through maintenance procedures for two days, you might want to define a rule to suppress all alerts generated by that sensor during the maintenance period.
- For example:
+- For cloud-connected OT sensors, create alert suppression rules on the Azure portal to ignore specified traffic on your network that would otherwise trigger an alert.
- :::image type="content" source="media/alerts/create-custom-comment.png" alt-text="Screenshot of the Alert comments pane on the OT sensor.":::
+- For locally managed sensors, create alert exclusion rules on the on-premises management console, either using the UI or the API.
-1. Select **Submit** to add your comment to the list of available comments in each alert on your sensor.
+> [!IMPORTANT]
+> Rules configured on the Azure portal override any rules configured for the same sensor on the on-premises management console. If you're currently using alert exclusion rules on your on-premises management console, we recommend that you [migrate them to the Azure portal](#migrate-suppression-rules-from-an-on-premises-management-console-public-preview) as suppression rules before you start.
+>
+### Create alert suppression rules on the Azure portal (Public Preview)
-Custom comments are available in each alert on your sensor for team members to add. For more information, see [Add alert comments](how-to-view-alerts.md#add-alert-comments).
+This section describes how to create an alert suppression rule on the Azure portal, and is supported for cloud-connected sensors only.
-## Create custom alert rules on an OT sensor
+**To create an alert suppression rule**:
-Add custom alert rules to trigger alerts for specific activity on your network that's not covered by out-of-the-box functionality.
+1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Alerts** > :::image type="icon" source="media/how-to-accelerate-alert-incident-response/suppression-rules.png" border="false"::: **Suppression rules**.
-For example, for an environment running MODBUS, you might add a rule to detect any written commands to a memory register on a specific IP address and ethernet destination.
+1. On the **Suppression rules (Preview)** page, select **+ Create**.
-**To create a custom alert rule**:
+1. In the **Create suppression rule** pane **Details** tab, enter the following details:
-1. Sign into your OT sensor and select **Custom alert rules** > **+ Create rule**.
+ 1. Select your Azure subscription from the drop-down list.
-1. In the **Create custom alert rule** pane, define the following fields:
+ 1. Enter a meaningful name for your rule and an optional description.
- |Name |Description |
- |||
- |**Alert name** | Enter a meaningful name for the alert. |
- |**Alert protocol** | Select the protocol you want to detect. <br> In specific cases, select one of the following protocols: <br> <br> - For a database data or structure manipulation event, select **TNS** or **TDS**. <br> - For a file event, select **HTTP**, **DELTAV**, **SMB**, or **FTP**, depending on the file type. <br> - For a package download event, select **HTTP**. <br> - For an open ports (dropped) event, select **TCP** or **UDP**, depending on the port type. <br> <br> To create rules that track specific changes in one of your OT protocols, such as S7 or CIP, use any parameters found on that protocol, such as `tag` or `sub-function`. |
- |**Message** | Define a message to display when the alert is triggered. Alert messages support alphanumeric characters and any traffic variables detected. <br> <br> For example, you might want to include the detected source and destination addresses. Use curly brackets (**{}**) to add variables to the alert message. |
- |**Direction** | Enter a source and/or destination IP address where you want to detect traffic. |
- |**Conditions** | Define one or more conditions that must be met to trigger the alert. <br><br>- Select the **+** sign to create a condition set with multiple conditions that use the **AND** operator. The **+** sign is enabled only after selecting an **Alert protocol** value.<br>- If you select a MAC address or IP address as a variable, you must convert the value from a dotted-decimal address to decimal format. <br><br> You must add at least one condition to create a custom alert rule. |
- |**Detected** | Define a date and/or time range for the traffic you want to detect. Customize the days and time range to fit with maintenance hours or set working hours. |
- |**Action** | Define an action you want Defender for IoT to take automatically when the alert is triggered. <br>Have Defender for IoT create either an alert or event, with the specified severity. |
- |**PCAP included** | If you've selected to create an event, clear the **PCAP included** option as needed. If you've selected to create an alert, the PCAP is always included, and can't be removed. |
+ 1. Toggle on **Enabled** to have the rule start running as configured. You can also leave this option toggled off to start using the rule only later on.
- For example:
+ 1. In the **Suppress by time range** area, toggle on **Expiration date** to define a specific start and end date and time for your rule. Select **Add range** to add multiple time ranges.
- :::image type="content" source="media/how-to-accelerate-alert-incident-response/create-custom-alert-rule.png" alt-text="Screenshot of the Create custom alert rule pane for creating custom alert rules." lightbox="media/how-to-accelerate-alert-incident-response/create-custom-alert-rule.png":::
+ 1. In the **Apply on** area, select whether you want to apply the rule to all sensors on your subscription, or only on specific sites or sensors. If you select **Apply on custom selection**, select the sites and/or sensors where you want the rule to run.
-1. Select **Save** when you're done to save the rule.
+ When you select a specific site, the rule applies to all existing and future sensors associated with the site.
-### Edit a custom alert rule
+ 1. Select **Next** and confirm the override message.
-To edit a custom alert rule, select the rule and then select the options (**...**) menu > **Edit**. Modify the alert rule as needed and save your changes.
+1. In the **Create suppression rule** pane **Conditions** tab:
-Edits made to custom alert rules, such as changing a severity level or protocol, are tracked in the **Event timeline** page on the OT sensor.
+ 1. In the **Alert name** dropdown list, select one or more alerts for your rule. Selecting the name of an alert engine instead of a specific rule name applies the rule to all existing and future alerts associated with that engine.
-For more information, see [Track sensor activity](how-to-track-sensor-activity.md).
+ 1. Optionally filter your rule further by defining additional conditions, such as for traffic coming from specific sources, to specific destinations, or on specific subnets.
-### Disable, enable, or delete custom alert rules
+ 1. When you're finished configuring your rule conditions, select **Next**.
-Disable custom alert rules to prevent them from running without deleting them altogether.
+1. In the **Create suppression rule** pane **Review and create** tab, review the details of the rule you're creating and then select **Create**.
+
+Your rule is added to the list of suppression rules on the **Suppression rules (Preview)** page. Select a rule to edit or delete it as needed.
+
+> [!TIP]
+> If you need to export suppression rules, select the **Export** button from the toolbar. All rules configured are exported to a single .CSV file, which you can save locally.
+
+### Migrate suppression rules from an on-premises management console (Public Preview)
+
+If you're currently using an on-premises management console with cloud-connected sensors, we recommend that you migrate any exclusion rules to the Azure portal as suppression rules before you start creating new suppression rules. Any suppression rules configured on the Azure portal override alert exclusion rules that exist for the same sensors on the on-premises management console.
+
+**To export alert exclusion rules and import them to the Azure portal**:
+
+1. Sign into your on-premises management console and select **Alert Exclusion**.
+
+1. On the **Alert Exclusion** page, select :::image type="icon" source="media/how-to-accelerate-alert-incident-response/export.png" border="false"::: **Export** to export your rules to a .CSV file.
+
+1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Alerts** > **Suppression rules**.
+
+1. On the **Suppression rules (Preview)** page, select **Migrate local manager rules**, and then browse to and select the .CSV file you'd downloaded from the on-premises management console.
+
+1. In the **Migrate suppression rules** pane, review the uploaded list of suppression rules you're about to migrate, then select **Approve migration**.
+
+1. Confirm the override message.
+
+Your rules are added to the list of suppression rules on the **Suppression rules (Preview)** page. Select a rule to edit or delete it as needed.
+
+### Create alert exclusion rules on an on-premises management console
+
+We recommend creating alert exclusion rules on an on-premises management console only for locally managed sensors. For cloud-connected sensors, any suppression rules created on the Azure portal will override exclusion rules created on the on-premises management console for that sensor.
+
+**To create an alert exclusion rule**:
+
+1. Sign into your on-premises management console and select **Alert Exclusion** on the left-hand menu.
+
+1. On the **Alert Exclusion** page, select the **+** button at the top-right to add a new rule.
+
+1. In the **Create Exclusion Rule** dialog, enter the following details:
+
+ |Name |Description |
+ |||
+ |**Name** | Enter a meaningful name for your rule. The name can't contain quotes (`"`). |
+ |**By Time Period** | Select a time zone and the specific time period you want the exclusion rule to be active, and then select **ADD**. <br><br>Use this option to create separate rules for different time zones. For example, you might need to apply an exclusion rule between 8:00 AM and 10:00 AM in three different time zones. In this case, create three separate exclusion rules that use the same time period and the relevant time zone. |
+ |**By Device Address** | Select and enter the following values, and then select **ADD**: <br><br>- Select whether the designated device is a source, destination, or both a source and destination device. <br>- Select whether the address is an IP address, MAC address, or subnet <br>- Enter the value of the IP address, MAC address, or subnet. |
+ |**By Alert Title** | Select one or more alerts to add to the exclusion rule and then select **ADD**. To find alert titles, enter all, or part of an alert title and select the one you want from the dropdown list. |
+ |**By Sensor Name** | Select one or more sensors to add to the exclusion rule and then select **ADD**. To find sensor names, enter all or part of the sensor name and select the one you want from the dropdown list. |
+
+ > [!IMPORTANT]
+ > Alert exclusion rules are `AND` based, which means that alerts are only excluded when all rule conditions are met.
+ > If a rule condition is not defined, all options are included. For example, if you don't include the name of a sensor in the rule, the rule is applied to all sensors.
+
+ A summary of the rule parameters is shown at the bottom of the dialog.
+
+1. Check the rule summary shown at the bottom of the **Create Exclusion Rule** dialog and then select **SAVE**.
+
+**To create alert exclusion rules via API**:
+
+Use the [Defender for IoT API](references-work-with-defender-for-iot-apis.md) to create on-premises management console alert exclusion rules from an external ticketing system or other systems that manage network maintenance processes.
+
+Use the [maintenanceWindow (Create alert exclusions)](api/management-alert-apis.md#maintenancewindow-create-alert-exclusions) API to define the sensors, analytics engines, start time, and end time to apply the rule. Exclusion rules created via API are shown in the on-premises management console as read-only.
+
+For more information, see [Defender for IoT API reference](references-work-with-defender-for-iot-apis.md).
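As a rough illustration of that flow, a ticketing or change-management system could call the on-premises management console when a maintenance window opens. The path and field names below reflect my reading of the maintenanceWindow API reference linked above and should be verified there; the console address, access token, and values are placeholders.

```bash
# Illustrative only: open a two-hour alert exclusion (maintenance window) via the on-premises
# management console API. Verify the endpoint path and body fields against the maintenanceWindow API reference.
curl -k -X POST "https://<management-console-address>/external/v1/maintenanceWindow" \
  -H "Authorization: <api-access-token>" \
  -H "Content-Type: application/json" \
  -d '{
        "ticketId": "CHG0012345",
        "ttl": 120,
        "engines": ["ANOMALY", "OPERATIONAL"],
        "sensorIds": ["sensor-1"]
      }'
```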
-In the **Custom alert rules** page, select one or more rules, and then select **Disable**, **Enable**, or **Delete** in the toolbar as needed.
## Allow internet connections on an OT network
The generated data mining report shows a list of the allowed domains and each IP
:::image type="content" source="media/how-to-accelerate-alert-incident-response/data-mining-report-allowlist.png" alt-text="Screenshot of data mining report of allowlist in the sensor console." lightbox="media/how-to-accelerate-alert-incident-response/data-mining-report-allowlist.png"::: +
+## Create alert comments on an OT sensor
+
+1. Sign into your OT sensor and select **System Settings** > **Network Monitoring** > **Alert Comments**.
+
+1. In the **Alert comments** pane, in the **Description** field, enter the new comment, and select **Add**. The new comment appears in the **Description** list below the field.
+
+ For example:
+
+ :::image type="content" source="media/alerts/create-custom-comment.png" alt-text="Screenshot of the Alert comments pane on the OT sensor.":::
+
+1. Select **Submit** to add your comment to the list of available comments in each alert on your sensor.
+
+Custom comments are available in each alert on your sensor for team members to add. For more information, see [Add alert comments](how-to-view-alerts.md#add-alert-comments).
+
+## Create custom alert rules on an OT sensor
+
+Add custom alert rules to trigger alerts for specific activity on your network that's not covered by out-of-the-box functionality.
+
+For example, for an environment running MODBUS, you might add a rule to detect any written commands to a memory register on a specific IP address and ethernet destination.
+
+**To create a custom alert rule**:
+
+1. Sign into your OT sensor and select **Custom alert rules** > **+ Create rule**.
+
+1. In the **Create custom alert rule** pane, define the following fields:
+
+ |Name |Description |
+ |||
+ |**Alert name** | Enter a meaningful name for the alert. |
+ |**Alert protocol** | Select the protocol you want to detect. <br> In specific cases, select one of the following protocols: <br> <br> - For a database data or structure manipulation event, select **TNS** or **TDS**. <br> - For a file event, select **HTTP**, **DELTAV**, **SMB**, or **FTP**, depending on the file type. <br> - For a package download event, select **HTTP**. <br> - For an open ports (dropped) event, select **TCP** or **UDP**, depending on the port type. <br> <br> To create rules that track specific changes in one of your OT protocols, such as S7 or CIP, use any parameters found on that protocol, such as `tag` or `sub-function`. |
+ |**Message** | Define a message to display when the alert is triggered. Alert messages support alphanumeric characters and any traffic variables detected. <br> <br> For example, you might want to include the detected source and destination addresses. Use curly brackets (**{}**) to add variables to the alert message. |
+ |**Direction** | Enter a source and/or destination IP address where you want to detect traffic. |
+ |**Conditions** | Define one or more conditions that must be met to trigger the alert. <br><br>- Select the **+** sign to create a condition set with multiple conditions that use the **AND** operator. The **+** sign is enabled only after selecting an **Alert protocol** value.<br>- If you select a MAC address or IP address as a variable, you must convert the value from a dotted-decimal address to decimal format. <br><br> You must add at least one condition to create a custom alert rule. |
+ |**Detected** | Define a date and/or time range for the traffic you want to detect. Customize the days and time range to fit with maintenance hours or set working hours. |
+ |**Action** | Define an action you want Defender for IoT to take automatically when the alert is triggered. <br>Have Defender for IoT create either an alert or event, with the specified severity. |
+ |**PCAP included** | If you've selected to create an event, clear the **PCAP included** option as needed. If you've selected to create an alert, the PCAP is always included, and can't be removed. |
+
+ For example:
+
+ :::image type="content" source="media/how-to-accelerate-alert-incident-response/create-custom-alert-rule.png" alt-text="Screenshot of the Create custom alert rule pane for creating custom alert rules." lightbox="media/how-to-accelerate-alert-incident-response/create-custom-alert-rule.png":::
+
+1. Select **Save** when you're done to save the rule.
+
+### Edit a custom alert rule
+
+To edit a custom alert rule, select the rule and then select the options (**...**) menu > **Edit**. Modify the alert rule as needed and save your changes.
+
+Edits made to custom alert rules, such as changing a severity level or protocol, are tracked in the **Event timeline** page on the OT sensor.
+
+For more information, see [Track sensor activity](how-to-track-sensor-activity.md).
+
+### Disable, enable, or delete custom alert rules
+
+Disable custom alert rules to prevent them from running without deleting them altogether.
+
+In the **Custom alert rules** page, select one or more rules, and then select **Disable**, **Enable**, or **Delete** in the toolbar as needed.
+ ## Create alert exclusion rules on an on-premises management console Create alert exclusion rules to instruct your sensors to ignore specific traffic on your network that would otherwise trigger an alert.
defender-for-iot How To Manage Cloud Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-cloud-alerts.md
Title: View and manage alerts on the Azure portal - Microsoft Defender for IoT description: Learn about viewing and managing alerts triggered by cloud-connected Microsoft Defender for IoT network sensors on the Azure portal. Previously updated : 12/12/2022 Last updated : 12/19/2023
Microsoft Defender for IoT alerts enhance your network security and operations w
- **To view alerts on the Azure portal**, you must have access as a [Security Reader](../../role-based-access-control/built-in-roles.md#security-reader), [Security Admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) -- **To manage alerts on the Azure portal**, you must have access as a [Security Admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner). Alert management activities include modifying their statuses or severities, *Learning* an alert, or accessing PCAP data.
+- **To manage alerts on the Azure portal**, you must have access as a [Security Admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner). Alert management activities include modifying their statuses or severities, *Learning* an alert, accessing PCAP data, or using alert suppression rules.
For more information, see [Azure user roles and permissions for Defender for IoT](roles-azure.md).
For more information, see [Azure user roles and permissions for Defender for IoT
| **Category**| The [category](alert-engine-messages.md#supported-alert-categories) associated with the alert, such as *operational issues*, *custom alerts*, or *illegal commands*. | | **Type**| The internal name of the alert. |
+> [!TIP]
+> If you're seeing more alerts than expected, you might want to create suppression rules to prevent alerts from being triggered for legitimate network activity. For more information, see [Suppress irrelevant alerts](how-to-accelerate-alert-incident-response.md#suppress-irrelevant-alerts).
+ ### Filter alerts displayed Use the **Search** box, **Time range**, and **Add filter** options to filter the alerts displayed by specific parameters or to help locate a specific alert.
defender-for-iot Back Up Sensors From Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/legacy-central-management/back-up-sensors-from-management.md
For more information, see [Set up backup and restore files](../back-up-restore-s
1. Enter the number of GB you want to allocate for backup storage. When the configured limit is exceeded, the oldest backup file is deleted. **If you're storing backup files on the on-premises management console**, supported values are defined based on your [hardware profiles](../ot-appliance-sizing.md). For example:
-
+ |Hardware profile |Backup storage availability | ||| |**E1800** |Default storage is 40 GB; limit is 100 GB. | |**L500** | Default storage is 20 GB; limit is 50 GB. | |**L100** | Default storage is 10 GB; limit is 25 GB. |
- |**L60** [*](../ot-appliance-sizing.md#l60) | Default storage is 10 GB; limit is 25 GB. |
+ |**L60** | Default storage is 10 GB; limit is 25 GB. |
**If you're storing backup files on an external server**, there's no maximum storage. However, keep in mind:
defender-for-iot Ot Appliance Sizing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-appliance-sizing.md
Use the following hardware profiles for production line monitoring, typically in
||||| |L500 | Up to 200 Mbps | 1,000 |Physical / Virtual | |L100 | Up to 60 Mbps | 800 | Physical / Virtual |
-|L60 | Up to 10 Mbps | 100 |Physical / Virtual|
> [!IMPORTANT]
-> <a name="l60"></a>Upcoming Defender for IoT software versions are planned to require a minimum disk size of 100 GB. When that happens, the L60 hardware profile, which supports 60 GB of hard disk, will be deprecated.
+> Defender for IoT software versions require a minimum disk size of 100 GB. The L60 hardware profile, which only supports 60 GB of hard disk, has been deprecated.
>
-> We recommend that you plan any new deployments accordingly, using hardware profiles that support at least 100 GB. Migration steps from the L60 hardware profile will be provided together with the L60 deprecation.
+> If you have a legacy sensor, such as one with the L60 hardware profile, you can migrate it to a supported profile by following the [back up and restore a sensor](back-up-restore-sensor.md) process.
## On-premises management console systems
defender-for-iot Ot Virtual Appliances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-virtual-appliances.md
For all deployments, bandwidth results for virtual machines may vary, depending
|**E500** | **Max bandwidth**: 800 Mb/sec <br>**Max monitored assets**: 10,000 | **vCPU**: 8 <br>**Memory**: 32 GB <br>**Storage**: 500 GB (300 IOPS) | |**L500** | **Max bandwidth**: 160 Mb/sec <br>**Max monitored assets**: 1,000 | **vCPU**: 4 <br>**Memory**: 8 GB <br>**Storage**: 500 GB (150 IOPS) | |**L100** | **Max bandwidth**: 100 Mb/sec <br>**Max monitored assets**: 800 | **vCPU**: 4 <br>**Memory**: 8 GB <br>**Storage**: 100 GB (150 IOPS) |
-|**L60** [*](ot-appliance-sizing.md#l60) | **Max bandwidth**: 10 Mb/sec <br>**Max monitored assets**: 100 | **vCPU**: 4 <br>**Memory**: 8 GB <br>**Storage**: 60 GB (150 IOPS) |
> [!NOTE] > There is no need to pre-install an operating system on the VM, the sensor installation includes the operating system image.
defender-for-iot References Data Retention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/references-data-retention.md
The following table lists how long PCAP data is stored in each Defender for IoT
| Storage type | Details | ||| | **Azure portal** | PCAP files are available for download from the Azure portal for as long as the OT network sensor stores them. <br><br> Once downloaded, the files are cached on the Azure portal for 48 hours. <br><br> For more information, see [Access alert PCAP data](how-to-manage-cloud-alerts.md#access-alert-pcap-data). |
-| **OT network sensor** | Dependent on the sensor's storage capacity allocated for PCAP files, which is determined by its [hardware profile](ot-appliance-sizing.md): <br><br>- **C5600**: 130 GB <br>- **E1800**: 130 GB <br>- **E1000** : 78 GB<br>- **E500**: 78 GB <br>- **L500**: 7 GB <br>- **L100**: 2.5 GB<br>- **L60** [*](ot-appliance-sizing.md#l60): 2.5 GB <br><br> If a sensor exceeds its maximum storage capacity, the oldest PCAP file is deleted to accommodate the new one. <br><br> For more information, see [Access alert PCAP data](how-to-view-alerts.md#access-alert-pcap-data) and [Pre-configured physical appliances for OT monitoring](ot-pre-configured-appliances.md). |
+| **OT network sensor** | Dependent on the sensor's storage capacity allocated for PCAP files, which is determined by its [hardware profile](ot-appliance-sizing.md): <br><br>- **C5600**: 130 GB <br>- **E1800**: 130 GB <br>- **E1000** : 78 GB<br>- **E500**: 78 GB <br>- **L500**: 7 GB <br>- **L100**: 2.5 GB<br><br> If a sensor exceeds its maximum storage capacity, the oldest PCAP file is deleted to accommodate the new one. <br><br> For more information, see [Access alert PCAP data](how-to-view-alerts.md#access-alert-pcap-data) and [Pre-configured physical appliances for OT monitoring](ot-pre-configured-appliances.md). |
| **On-premises management console** | PCAP files aren't stored on the on-premises management console and are only accessed from the on-premises management console via a direct link to the OT sensor. | The usage of available PCAP storage space depends on factors such as the number of alerts, the type of the alert, and the network bandwidth, all of which affect the size of the PCAP file.
The following table lists the maximum number of events that can be stored for ea
| **E500** | 6M events | | **L500** | 3M events | | **L100** | 500-K events |
-| **L60** [*](ot-appliance-sizing.md#l60) | 500-K events |
For more information, see [Track sensor activity](how-to-track-sensor-activity.md) and [Pre-configured physical appliances for OT monitoring](ot-pre-configured-appliances.md).
The retention of backup files depends on the sensor's architecture, as each hard
| Hardware profile | Allocated hard disk space | |||
-| **L60** [*](ot-appliance-sizing.md#l60) | Backups are not supported |
| **L100** | Backups are not supported | | **L500** | 20 GB | | **E1000** | 60 GB |
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
Cloud features may be dependent on a specific sensor version. Such features are
| Version / Patch | Release date | Scope | Supported until | | - | | -- | - |
-| **23.2** | | | |
-| 23.2.0 | 12/2023 | Major | 11/2024 |
+| **24.1** | | | |
+| 24.1.0 |02/2024 | Major |01/2025 |
| **23.1** | | | | | 23.1.3 | 09/2023 | Patch | 08/2024 | | 23.1.2 | 07/2023 | Major | 06/2024 |
Version numbers are listed only in this article and in the [What's new in Micros
To understand whether a feature is supported in your sensor version, check the relevant version section below and its listed features.
+## Versions 24.1.x
+
+### Version 24.1.0
+
+**Release date**: 02/2024
+
+**Supported until**: 03/2025
+
+This version includes the following updates and enhancements:
+
+- [Alert suppression rules from the Azure portal](how-to-accelerate-alert-incident-response.md#suppress-irrelevant-alerts)
+- [Focused alerts in OT/IT environments](alerts.md#focused-alerts-in-otit-environments)
+- [Alert ID (ID field) is now aligned on the Azure portal and sensor console](how-to-manage-cloud-alerts.md#view-alerts-on-the-azure-portal)
+- [Newly supported protocols](concept-supported-protocols.md)
+- [L60 hardware profile is no longer supported](ot-appliance-sizing.md#production-line-monitoring-medium-and-small-deployments)
+ ## Versions 23.2.x ### Version 23.2.0
+**Release date**: 12/2023
+
+**Supported until**: 11/2024
+ This version includes the following updates and enhancements: - [Sensor software runs on a Debian 11 operating system](ot-deploy/install-software-ot-sensor.md) and [updates to this version may be heavier and longer than usual](whats-new.md#ot-network-sensors-now-run-on-debian-11)
This version includes the following new updates and fixes:
## Next steps
-For more information about the features listed in this article, see [What's new in Microsoft Defender for IoT?](whats-new.md) and [What's new archive for in Microsoft Defender for IoT for organizations](release-notes-archive.md).
+For more information about the features listed in this article, see [What's new in Microsoft Defender for IoT](whats-new.md) and the [What's new archive for Microsoft Defender for IoT for organizations](release-notes-archive.md).
defender-for-iot Roles Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/roles-azure.md
Permissions are applied to user roles across an entire Azure subscription, or in
| **[Download OT threat intelligence packages](how-to-work-with-threat-intelligence-packages.md#manually-update-locally-managed-sensors)** <br>Apply per subscription only | ✔ | ✔ | ✔ | ✔ | | **[Push OT threat intelligence updates](how-to-work-with-threat-intelligence-packages.md#manually-push-updates-to-cloud-connected-sensors)** <br>Apply per subscription only | - | ✔ | ✔ | ✔ | | **[View Azure alerts](how-to-manage-cloud-alerts.md)** <br>Apply per subscription or site | ✔ | ✔ |✔ | ✔|
-| **[Modify Azure alerts](how-to-manage-cloud-alerts.md) (write access - change status, learn, download PCAP)** <br>Apply per subscription or site| - | ✔ |✔ | ✔ |
+| **[Modify Azure alerts](how-to-manage-cloud-alerts.md) (write access - change status, learn, download PCAP, suppression rules)** <br>Apply per subscription or site| - | ✔ |✔ | ✔ |
| **[View Azure device inventory](how-to-manage-device-inventory-for-organizations.md)** <br>Apply per subscription or site | ✔ | ✔ |✔ | ✔| | **[Manage Azure device inventory](how-to-manage-device-inventory-for-organizations.md) (write access)** <br>Apply per subscription or site | - | ✔ |✔ | ✔ | | **[View Azure workbooks](workbooks.md)**<br>Apply per subscription or site | ✔ | ✔ |✔ | ✔ |
defender-for-iot Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md
Features released earlier than nine months ago are described in the [What's new
|Service area |Updates |
|--|--|
-| **OT networks** | - [Focused alerts in OT/IT environments](#focused-alerts-in-otit-environments)<br>- [Alert ID now aligned on the Azure portal and sensor console](#alert-id-now-aligned-on-the-azure-portal-and-sensor-console)<br>- [New setting to focus local networks in the device inventory](#new-setting-to-focus-local-networks-in-the-device-inventory) |
+| **OT networks** | - [Alert suppression rules from the Azure portal (Public preview)](#alert-suppression-rules-from-the-azure-portal-public-preview)<br>- [Focused alerts in OT/IT environments](#focused-alerts-in-otit-environments)<br>- [Alert ID now aligned on the Azure portal and sensor console](#alert-id-now-aligned-on-the-azure-portal-and-sensor-console)<br>- [Newly supported protocols](#newly-supported-protocols)|
+
+### Alert suppression rules from the Azure portal (Public preview)
+
+Now you can configure alert suppression rules from the Azure portal to instruct your OT sensors to ignore specified traffic on your network that would otherwise trigger an alert.
+
+- Configure which alerts to suppress by specifying an alert title, IP/MAC address, hostname, subnet, sensor, or site.
+- Set each suppression rule to be active always, or only during a predefined period, such as for a specific maintenance window.
+
+> [!TIP]
+> If you're currently using exclusion rules on the on-premises management console, we recommend that you migrate them to suppression rules on the Azure portal.
+
+For more information, see [Suppress irrelevant alerts](how-to-accelerate-alert-incident-response.md#suppress-irrelevant-alerts).
### Focused alerts in OT/IT environments
The alert ID in the **Id** column on the Azure portal **Alerts** page now displa
> [!NOTE]
> If the [alert was merged with other alerts](alerts.md#alert-management-options) from sensors that detected the same alert, the Azure portal displays the alert ID of the first sensor that generated the alerts.
-### New setting to focus local networks in the device inventory
+### Newly supported protocols
+
+We now support these protocols:
-To better focus the Azure device inventory on devices that are in your OT scope, we've added the **ICS** toggle in the **Subnets** sensor setting. This toggle marks the subnet as a subnet with OT networks. [Learn more](configure-sensor-settings-portal.md#configure-subnets-in-the-azure-portal).
+- HART-IP
+- FANUC FOCAS
+- Dicom
+- ABB NetConfig
+- Rockwell AADvance Discover
+- Rockwell AADvance SNCP/IXL
+- Schneider NetManage
+
+[See the updated protocol list](concept-supported-protocols.md).
+
+### L60 hardware profile is no longer supported
+
+The L60 hardware profile is no longer supported and is removed from support documentation. Hardware profiles now require a minimum of 100 GB (the minimum hardware profile is now [L100](ot-virtual-appliances.md)).
+
+To migrate from the L60 profile to a supported profile, follow the [Back up and restore OT network sensor](back-up-restore-sensor.md) procedure.
## January 2024
You might want to update your sensor to a specific version for various reasons,
:::image type="content" source="media/whats-new/send-package-multiple-versions-400.png" alt-text="Screenshot of sensor update pane with option to choose sensor update version." border="false" lightbox="media/whats-new/send-package-multiple-versions.png" ::: For more information, see [Update Defender for IoT OT monitoring software](update-ot-software.md#send-the-software-update-to-your-ot-sensor).
+| **OT networks** |**Version 24.1.0**: <br>- [Alert suppression rules from the Azure portal (Public preview)](#alert-suppression-rules-from-the-azure-portal-public-preview)|
## December 2023
Sensor versions 23.2.0 run on a Debian 11 operating system instead of Ubuntu. De
Using Debian as the base for our sensor software helps reduce the number of packages installed on the sensors, increasing efficiency and security of your systems.
-Due to the operating system switch, the software update from your legacy version to version 23.2.0 may be longer and heavier than usual.
+Due to the operating system switch, the software update from your legacy version to version 23.2.0 might be longer and heavier than usual.
For more information, see [Back up and restore OT network sensors from the sensor console](back-up-restore-sensor.md) and [Update Defender for IoT OT monitoring software](update-ot-software.md).
For example, use the privileged *admin* user in the following scenarios:
For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md).

### New architecture for hybrid and air-gapped support

Hybrid and air-gapped networks are common in many industries, such as government, financial services, or industrial manufacturing. Air-gapped networks are physically separated from other, unsecured external networks like enterprise networks or the internet, and are less vulnerable to cyber-attacks. However, air-gapped networks are still not completely secure, can still be breached, and must be secured and monitored carefully.
For more information, see:
### Live statuses for cloud-based sensor updates
-When running a sensor update from the Azure portal, a new progress bar appears in the **Sensor version** column during the update process. As the update progresses the bar shows the percentage of the update completed, showing you that the process is ongoing, is not stuck or has failed. For example:
+When running a sensor update from the Azure portal, a new progress bar appears in the **Sensor version** column during the update process. As the update progresses, the bar shows the percentage of the update completed, so you can see that the process is ongoing and isn't stuck or failed. For example:
:::image type="content" source="media/whats-new/sensor-version-update-bar.png" alt-text="Screenshot of the update bar in the Sensor version column." lightbox="media/whats-new/sensor-version-update-bar.png":::
From your sensor, do one of the following to open the **Cloud connectivity troub
- On the **Overview** page, select the **Troubleshoot** link at the top of the page
- Select **System settings > Sensor management > Health and troubleshooting > Cloud connectivity troubleshooting**

For more information, see [Check sensor - cloud connectivity issues](how-to-troubleshoot-sensor.md#check-sensorcloud-connectivity-issues).

### Event timeline access for OT sensor Read Only users
hdinsight-aks Create Cluster Using Arm Template Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/create-cluster-using-arm-template-script.md
Title: Export ARM template in Azure HDInsight on AKS
-description: How to create an ARM template to cluster using script in Azure HDInsight on AKS
+description: How to create an ARM template of a cluster in Azure HDInsight on AKS
Previously updated : 08/29/2023 Last updated : 02/12/2024
-# Export cluster ARM template using script
+# Export cluster ARM template - Azure portal
[!INCLUDE [feature-in-preview](includes/feature-in-preview.md)]
-This article describes how to generate an ARM template for your cluster automatically using a script. You can use the ARM template to modify, clone, or recreate a cluster starting from the existing cluster's configurations.
+This article describes how to generate an ARM template for your cluster automatically. You can use the ARM template to modify, clone, or recreate a cluster starting from the existing cluster's configurations.
## Prerequisites

* An operational HDInsight on AKS cluster.
-* Familiarity with [ARM template authoring and deployment](/azure/azure-resource-manager/templates/overview).
## Steps to generate ARM template for the cluster

1. Sign in to [Azure portal](https://portal.azure.com).
-2. In the Azure portal search bar, type "HDInsight on AKS cluster" and select "Azure HDInsight on AKS clusters" from the drop-down list.
+1. In the Azure portal search bar, type "HDInsight on AKS cluster" and select "Azure HDInsight on AKS clusters" from the drop-down list.
:::image type="content" source="./media/create-cluster-using-arm-template-script/cloud-portal-search.png" alt-text="Screenshot showing search option for getting started with HDInsight on AKS Cluster." border="true" lightbox="./media/create-cluster-using-arm-template-script/cloud-portal-search.png":::
-6. Select your cluster name from the list page.
+1. Select your cluster name from the list page.
:::image type="content" source="./media/create-cluster-using-arm-template-script/cloud-portal-list-view.png" alt-text="Screenshot showing selecting the HDInsight on AKS Cluster you require from the list." border="true" lightbox="./media/create-cluster-using-arm-template-script/cloud-portal-list-view.png":::
-2. Navigate to the overview blade of your cluster and click on *JSON View* at the top right.
+1. Navigate to the "Export template" blade of your cluster and click "Download" to export the template.
- :::image type="content" source="./media/create-cluster-using-arm-template-script/view-cost-json-view.png" alt-text="Screenshot showing how to view cost and JSON View buttons from the Azure portal." border="true" lightbox="./media/create-cluster-using-arm-template-script/view-cost-json-view.png":::
+ :::image type="content" source="./media/create-cluster-using-arm-template-script/export-template-download-view.png" alt-text="Screenshot showing export template option from the Azure portal." border="true" lightbox="./media/create-cluster-using-arm-template-script/export-template-download-view.png":::
-2. Copy the "Resource JSON" and save it to a local JSON file. For example, `template.json`.
-
-3. Click the following button at the top right in the Azure portal to launch Azure Cloud Shell.
-
- :::image type="content" source="./media/create-cluster-using-arm-template-script/cloud-shell.png" alt-text="Screenshot screenshot showing Cloud Shell icon.":::
-
-5. Make sure Cloud Shell is set to "Bash" on the top left and upload your `template.json` file.
-
- :::image type="content" source="./media/create-cluster-using-arm-template-script/azure-cloud-shell-template-upload.png" alt-text="Screenshot showing how to upload your template.json file." border="true" lightbox="./media/create-cluster-using-arm-template-script/azure-cloud-shell-template-upload.png":::
-
-2. Execute the following command to generate the ARM template.
-
- ```azurecli
- wget https://hdionaksresources.blob.core.windows.net/common/arm_transform.py
-
- python arm_transform.py template.json
- ```
-
- :::image type="content" source="./media/create-cluster-using-arm-template-script/azure-cloud-shell-script-output.png" alt-text="Screenshot showing results after running the script." border="true" lightbox="./media/create-cluster-using-arm-template-script/azure-cloud-shell-script-output.png":::
-
-This script creates an ARM template with name `template-modified.json` for your cluster and generates a command to deploy the ARM template.
-
-Now, your cluster ARM template is ready. You can update the properties of the cluster and finally deploy the ARM template to refresh the resources. To redeploy, you can either use the Azure CLI command output by the script or [deploy an ARM template using Azure portal](/azure/azure-resource-manager/templates/deploy-portal#deploy-resources-from-custom-template).
+Now, your cluster ARM template is ready. You can update the properties of the cluster and then deploy the ARM template to refresh the resources. To redeploy, you can either use the "Deploy" option on the "Export template" blade of your cluster (replacing the existing template with your modified template), or see [deploy an ARM template using Azure portal](/azure/azure-resource-manager/templates/deploy-portal#deploy-resources-from-custom-template).
> [!IMPORTANT] > If you're cloning the cluster or creating a new cluster, you'll need to modify the `name`, `location`, and `fqdn` (the fqdn must match the cluster name).
hdinsight-aks Create Cluster Using Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/create-cluster-using-arm-template.md
Title: Export cluster ARM template
-description: Learn how to Create cluster ARM template
+description: Learn how to create a cluster ARM template using Azure CLI
Previously updated : 08/29/2023 Last updated : 02/12/2024
-# Export cluster ARM template
+# Export cluster ARM template - Azure CLI
[!INCLUDE [feature-in-preview](includes/feature-in-preview.md)]
-This article describes how to generate an ARM template from resource JSON of your cluster.
-
-## Prerequisites
+This article describes how to generate an ARM template for your cluster using the Azure CLI.
* An operational HDInsight on AKS cluster.
-* Familiarity with [ARM template authoring and deployment](/azure/azure-resource-manager/templates/overview).
-
-## Steps to generate ARM template for the cluster
-
-1. Sign in to [Azure portal](https://portal.azure.com).
-
-1. In the Azure portal search bar, type "HDInsight on AKS cluster" and select "Azure HDInsight on AKS clusters" from the drop-down list.
- :::image type="content" source="./media/create-cluster-using-arm-template/portal-search.png" alt-text="Screenshot showing search option for getting started with HDInsight on AKS Cluster." border="true" lightbox="./media/create-cluster-using-arm-template/portal-search.png":::
-
-1. Select your cluster name from the list page.
-
- :::image type="content" source="./media/create-cluster-using-arm-template/portal-search-result.png" alt-text="Screenshot showing selecting the HDInsight on AKS Cluster you require from the list." border="true" lightbox="./media/create-cluster-using-arm-template/portal-search-result.png":::
-
-1. Go to the overview blade of your cluster and click on *JSON View* at the top right.
-
- :::image type="content" source="./media/create-cluster-using-arm-template/view-cost-json-view.png" alt-text="Screenshot showing how to view cost and JSON View buttons from the Azure portal." border="true" lightbox="./media/create-cluster-using-arm-template/view-cost-json-view.png":::
-
-1. Copy the response to an editor. For example: Visual Studio Code.
-1. Modify the response with the following changes to turn it into a valid ARM template.
-
- * Remove the following objects-
- * `id`, `systemData`
- * `deploymentId`, `provisioningState`, and `status` under properties object.
- * Change "name" value to `<your clusterpool name>/<your cluster name>`.
+## Steps to generate ARM template for the cluster
+
+1. Run the following command.
- :::image type="content" source="./media/create-cluster-using-arm-template/change-cluster-name.png" alt-text="Screenshot showing how to change cluster name.":::
-
- * Add "apiversion": "2023-06-01-preview" in the same section with name, location etc.
+ ```azurecli-interactive
- :::image type="content" source="./media/create-cluster-using-arm-template/api-version.png" alt-text="Screenshot showing how to modify the API version.":::
+ az group export --resource-group "{cluster-rg}" --resource-ids "{resource_id}" --include-parameter-default-value --include-comments
- 1. Open [custom template](/azure/azure-resource-manager/templates/deploy-portal#deploy-resources-from-custom-template) from the Azure portal and select "Build your own template in the editor" option.
-
- 1. Copy the modified response to the "resources" object in the ARM template format. For example:
+ # cluster-rg = Resource group of your cluster
+ # resource_id = Cluster resource id. You can get it from "JSON view" in the overview blade of your cluster in the Azure portal.
+ ```
- :::image type="content" source="./media/create-cluster-using-arm-template/modify-get-response.png" alt-text="Screenshot showing how to modify the get response." border="true" lightbox="./media/create-cluster-using-arm-template/modify-get-response.png":::
+ :::image type="content" source="./media/create-cluster-using-arm-template/command-execution-output.png" alt-text="Screenshot showing output of the command executed to get the ARM template of the HDInsight on AKS Cluster." border="true" lightbox="./media/create-cluster-using-arm-template/command-execution-output.png":::
+
Now, your cluster ARM template is ready. You can update the properties of the cluster and finally deploy the ARM template to refresh the resources. Learn how to [deploy an ARM template](/azure/azure-resource-manager/templates/deploy-portal).
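If you prefer to redeploy from the command line after editing the exported template, a minimal sketch looks like the following; the resource group placeholder and the `exported-template.json` file name are assumptions for illustration.

```azurecli-interactive
# Hypothetical example: redeploy the edited template into the cluster's resource group.
az deployment group create \
    --resource-group "{cluster-rg}" \
    --template-file ./exported-template.json
```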
iot-edge Production Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/production-checklist.md
The following table describes image garbage collection parameters. All parameter
* Review outbound/inbound configuration
* Allow connections from IoT Edge devices
* Configure communication through a proxy
+ * Set DNS server in container engine settings
### Review outbound/inbound configuration
The [IoT Identity Service](https://azure.github.io/iot-identity-service/) provid
If your devices are going to be deployed on a network that uses a proxy server, they need to be able to communicate through the proxy to reach IoT Hub and container registries. For more information, see [Configure an IoT Edge device to communicate through a proxy server](how-to-configure-proxy-support.md).
+### Set DNS server in container engine settings
+
+Specify the DNS server for your environment in the container engine settings. The DNS server setting applies to all container modules started by the engine.
+
+1. In the `/etc/docker` directory on your device, edit the `daemon.json` file. Create the file if it doesn't exist.
+1. Add the **dns** key and set the DNS server address to a publicly accessible DNS service. If your edge device can't access a public DNS server, use an accessible DNS server address in your network. For example:
+
+ ```json
+ {
+ "dns": ["1.1.1.1"]
+ }
+ ```
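Changes to `daemon.json` typically take effect only after the container engine is restarted. A minimal sketch for a Linux device running the engine under systemd is shown below; the service name `docker` is an assumption and may differ in your environment.

```bash
# Restart the container engine so it picks up the new DNS setting.
# Assumes the engine runs as the 'docker' systemd service; adjust the name if needed.
sudo systemctl restart docker
```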
+ ## Solution management * **Helpful**
key-vault About Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/about-certificates.md
Title: About Azure Key Vault certificates
description: Get an overview of the Azure Key Vault REST interface and certificates.
-tags: azure-resource-manager
Previously updated : 01/04/2023 Last updated : 01/30/2024
key-vault Certificate Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/certificate-access-control.md
Title: About Azure Key Vault Certificates access control
description: Overview of Azure Key Vault certificates access control
-tags: azure-resource-manager
Previously updated : 01/20/2023 Last updated : 01/30/2024
key-vault Certificate Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/certificate-scenarios.md
Title: Get started with Key Vault certificates
description: The following scenarios outline several of the primary usages of Key Vault's certificate management service including the additional steps required for creating your first certificate in your key vault.
-tags: azure-resource-manager
Previously updated : 11/14/2022 Last updated : 01/30/2024
key-vault Create Certificate Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/create-certificate-scenarios.md
Title: Monitor and manage certificate creation
description: Scenarios demonstrating a range of options for creating, monitoring, and interacting with the certificate creation process with Key Vault.
-tags: azure-resource-manager
Previously updated : 11/14/2022 Last updated : 01/30/2024
key-vault Create Certificate Signing Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/create-certificate-signing-request.md
Title: Creating and merging a certificate signing request in Azure Key Vault
description: Learn how to create and merge a CSR in Azure Key Vault.
-tags: azure-resource-manager
Previously updated : 01/20/2023 Last updated : 01/30/2024
key-vault Create Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/create-certificate.md
Title: Certificate creation methods
description: Learn about different options to create or import a Key Vault certificate in Azure Key Vault. There are several ways to create a Key Vault certificate.
-tags: azure-resource-manager
Previously updated : 11/14/2022 Last updated : 01/30/2024
key-vault How To Integrate Certificate Authority https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/how-to-integrate-certificate-authority.md
Title: Integrating Key Vault with DigiCert certificate authority
description: This article describes how to integrate Key Vault with DigiCert certificate authority so you can provision, manage, and deploy certificates for your network.
-tags: azure-resource-manager
Previously updated : 01/24/2022 Last updated : 01/30/2024
key-vault Overview Renew Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/overview-renew-certificate.md
Title: About Azure Key Vault certificate renewal
description: This article discusses how to renew Azure Key Vault certificates.
-tags: azure-resource-manager
Previously updated : 01/20/2023 Last updated : 01/30/2024
key-vault Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/quick-create-cli.md
Title: Quickstart - Set & view Azure Key Vault certificates with Azure CLI description: Quickstart showing how to set and retrieve a certificate from Azure Key Vault using Azure CLI
-tags: azure-resource-manager
Previously updated : 11/14/2022 Last updated : 01/30/2024 #Customer intent: As a security admin who is new to Azure, I want to use Key Vault to securely store keys and passwords in Azure
key-vault Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/quick-create-portal.md
Title: Azure Quickstart - Set and retrieve a certificate from Key Vault using Az
description: Quickstart showing how to set and retrieve a certificate from Azure Key Vault using the Azure portal
-tags: azure-resource-manager
Previously updated : 11/14/2022 Last updated : 01/30/2024 #Customer intent: As a security admin who is new to Azure, I want to use Key Vault to securely store certificates in Azure
key-vault Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/quick-create-powershell.md
Title: Quickstart - Set & view Azure Key Vault certificates with Azure PowerShel
description: Quickstart showing how to set and retrieve a certificate from Azure Key Vault using Azure PowerShell
-tags: azure-resource-manager
Previously updated : 11/14/2022 Last updated : 01/30/2024 #Customer intent: As a security admin who is new to Azure, I want to use Key Vault to securely store keys and passwords in Azure
key-vault Tutorial Import Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/tutorial-import-certificate.md
Title: Tutorial - Import a certificate in Key Vault using Azure portal | Microso
description: Tutorial showing how to import a certificate in Azure Key Vault
-tags: azure-resource-manager
Previously updated : 03/16/2022 Last updated : 01/30/2024 ms.devlang: azurecli
key-vault Tutorial Rotate Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/tutorial-rotate-certificates.md
Title: Tutorial - Updating certificate auto-rotation frequency in Key Vault | Mi
description: Tutorial showing how to update a certificate's auto-rotation frequency in Azure Key Vault using the Azure portal
-tags: azure-resource-manager
Previously updated : 01/20/2023 Last updated : 01/30/2024 #Customer intent: As a security admin who is new to Azure, I want to use Key Vault to securely store certificates in Azure.
key-vault About Keys Secrets Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/about-keys-secrets-certificates.md
Title: Azure Key Vault Keys, Secrets, and Certificates Overview
description: Overview of Azure Key Vault REST interface and developer details for keys, secrets and certificates.
-tags: azure-resource-manager
Previously updated : 04/18/2023 Last updated : 01/30/2024
key-vault Access Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/access-behind-firewall.md
Title: Access Key Vault behind a firewall - Azure Key Vault | Microsoft Docs
description: Learn about the ports, hosts, or IP addresses to open to enable a key vault client application behind a firewall to access a key vault.
-tags: azure-resource-manager
Previously updated : 04/15/2021 Last updated : 01/30/2024
key-vault Alert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/alert.md
Title: Configure Azure Key Vault alerts
description: Learn how to create alerts to monitor the health of your key vault.
-tags: azure-resource-manager
Previously updated : 03/31/2021 Last updated : 01/30/2024 # Customer intent: As a key vault administrator, I want to learn the options available to monitor the health of my vaults.
key-vault Assign Access Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/assign-access-policy.md
Title: Assign an Azure Key Vault access policy (CLI)
description: How to use the Azure CLI to assign a Key Vault access policy to a security principal or application identity.
-tags: azure-resource-manager
Previously updated : 12/12/2022 Last updated : 01/30/2024 #Customer intent: As someone new to Key Vault, I'm trying to learn basic concepts that can help me understand Key Vault documentation.
key-vault Authentication Requests And Responses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/authentication-requests-and-responses.md
description: Learn how Azure Key Vault uses JSON-formatted requests and response
-tags: azure-resource-manager
Previously updated : 01/20/2023 Last updated : 01/30/2024
key-vault Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/backup.md
- Title: Back up a secret, key, or certificate stored in Azure Key Vault | Microsoft Docs
-description: Use this document to help back up a secret, key, or certificate stored in Azure Key Vault.
--
-tags: azure-resource-manager
---- Previously updated : 01/17/2023-
-#Customer intent: As an Azure Key Vault administrator, I want to back up a secret, key, or certificate in my key vault.
-
+ Title: Back up a secret, key, or certificate stored in Azure Key Vault | Microsoft Docs
+description: Use this document to help back up a secret, key, or certificate stored in Azure Key Vault.
++++++ Last updated : 01/30/2024+
+#Customer intent: As an Azure Key Vault administrator, I want to back up a secret, key, or certificate in my key vault.
+
+# Azure Key Vault backup and restore
+
+This document shows you how to back up secrets, keys, and certificates stored in your key vault. A backup is intended to provide you with an offline copy of all your secrets in the unlikely event that you lose access to your key vault.
+
+## Overview
+
+Azure Key Vault automatically provides features to help you maintain availability and prevent data loss. Back up secrets only if you have a critical business justification. Backing up secrets in your key vault may introduce operational challenges such as maintaining multiple sets of logs, permissions, and backups when secrets expire or rotate.
+
+Key Vault maintains availability in disaster scenarios and will automatically fail over requests to a paired region without any intervention from a user. For more information, see [Azure Key Vault availability and redundancy](./disaster-recovery-guidance.md).
+
+If you want protection against accidental or malicious deletion of your secrets, configure soft-delete and purge protection features on your key vault. For more information, see [Azure Key Vault soft-delete overview](./soft-delete-overview.md).
+
+## Limitations
+
+> [!IMPORTANT]
+> Key Vault does not support the ability to back up more than 500 past versions of a key, secret, or certificate object. Attempting to back up a key, secret, or certificate object may result in an error. It is not possible to delete previous versions of a key, secret, or certificate.
+
+Key Vault doesn't currently provide a way to back up an entire key vault in a single operation; keys, secrets, and certificates must be backed up individually.
+
+Also consider the following issues:
+
+* Backing up secrets that have multiple versions might cause time-out errors.
+* A backup creates a point-in-time snapshot. Secrets might renew during a backup, causing a mismatch of encryption keys.
+* If you exceed key vault service limits for requests per second, your key vault will be throttled, and the backup will fail.
+
+## Design considerations
+
+When you back up a key vault object, such as a secret, key, or certificate, the backup operation will download the object as an encrypted blob. This blob can't be decrypted outside of Azure. To get usable data from this blob, you must restore the blob into a key vault within the same Azure subscription and [Azure geography](https://azure.microsoft.com/global-infrastructure/geographies/).
+
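As an illustration of this flow, the following sketch uses the Azure CLI commands covered later in this article to back up a secret from one vault and restore it into another vault in the same subscription and geography; the vault and secret names are hypothetical.

```azurecli
## Back up a secret from the primary vault to a local encrypted blob (hypothetical names)
az keyvault secret backup --vault-name ContosoPrimaryVault --name ExampleSecret --file ExampleSecret.blob

## Restore the encrypted blob into the secondary vault in the same subscription and geography
az keyvault secret restore --vault-name ContosoSecondaryVault --file ExampleSecret.blob
```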
+## Prerequisites
+
+To back up a key vault object, you must have:
+
+* Contributor-level or higher permissions on an Azure subscription.
+* A primary key vault that contains the secrets you want to back up.
+* A secondary key vault where secrets will be restored.
+
+## Back up and restore from the Azure portal
+
+Follow the steps in this section to back up and restore objects by using the Azure portal.
+
+### Back up
+
+1. Go to the Azure portal.
+2. Select your key vault.
+3. Go to the object (secret, key, or certificate) you want to back up.
+
+ ![Screenshot showing where to select the Keys setting and an object in a key vault.](../media/backup-1.png)
+
+4. Select the object.
+5. Select **Download Backup**.
+
+ ![Screenshot showing where to select the Download Backup button in a key vault.](../media/backup-2.png)
+
+6. Select **Download**.
+
+ ![Screenshot showing where to select the Download button in a key vault.](../media/backup-3.png)
+
+7. Store the encrypted blob in a secure location.
+
+### Restore
+
+1. Go to the Azure portal.
+2. Select your key vault.
+3. Go to the type of object (secret, key, or certificate) you want to restore.
+4. Select **Restore Backup**.
+
+ ![Screenshot showing where to select Restore Backup in a key vault.](../media/backup-4.png)
+
+5. Go to the location where you stored the encrypted blob.
+6. Select **OK**.
+
+## Back up and restore from the Azure CLI or Azure PowerShell
+
+# [Azure CLI](#tab/azure-cli)
+```azurecli
+## Log in to Azure
+az login
+
+## Set your subscription
+az account set --subscription {AZURE SUBSCRIPTION ID}
+
+## Register Key Vault as a provider
+az provider register -n Microsoft.KeyVault
+
+## Back up a certificate in Key Vault
+az keyvault certificate backup --file {File Path} --name {Certificate Name} --vault-name {Key Vault Name} --subscription {SUBSCRIPTION ID}
+
+## Back up a key in Key Vault
+az keyvault key backup --file {File Path} --name {Key Name} --vault-name {Key Vault Name} --subscription {SUBSCRIPTION ID}
+
+## Back up a secret in Key Vault
+az keyvault secret backup --file {File Path} --name {Secret Name} --vault-name {Key Vault Name} --subscription {SUBSCRIPTION ID}
+
+## Restore a certificate in Key Vault
+az keyvault certificate restore --file {File Path} --vault-name {Key Vault Name} --subscription {SUBSCRIPTION ID}
+
+## Restore a key in Key Vault
+az keyvault key restore --file {File Path} --vault-name {Key Vault Name} --subscription {SUBSCRIPTION ID}
+
+## Restore a secret in Key Vault
+az keyvault secret restore --file {File Path} --vault-name {Key Vault Name} --subscription {SUBSCRIPTION ID}
+```
+# [Azure PowerShell](#tab/powershell)
+
+```azurepowershell
+## Log in to Azure
+Connect-AzAccount
+
+## Set your subscription
+Set-AzContext -Subscription '{AZURE SUBSCRIPTION ID}'
+
+## Back up a certificate in Key Vault
+Backup-AzKeyVaultCertificate -VaultName '{Key Vault Name}' -Name '{Certificate Name}'
+
+## Back up a key in Key Vault
+Backup-AzKeyVaultKey -VaultName '{Key Vault Name}' -Name '{Key Name}'
+
+## Back up a secret in Key Vault
+Backup-AzKeyVaultSecret -VaultName '{Key Vault Name}' -Name '{Secret Name}'
+
+## Restore a certificate in Key Vault
+Restore-AzKeyVaultCertificate -VaultName '{Key Vault Name}' -InputFile '{File Path}'
+
+## Restore a key in Key Vault
+Restore-AzKeyVaultKey -VaultName '{Key Vault Name}' -InputFile '{File Path}'
+
+## Restore a secret in Key Vault
+Restore-AzKeyVaultSecret -VaultName '{Key Vault Name}' -InputFile '{File Path}'
+```
++
+## Next steps
++
+- [Move an Azure key vault across regions](move-region.md)
+- [Enable Key Vault logging](howto-logging.md) for Key Vault
key-vault Basic Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/basic-concepts.md
Title: What is Azure Key Vault? | Microsoft Docs
description: Learn how Azure Key Vault safeguards cryptographic keys and secrets that cloud applications and services use.
-tags: azure-resource-manager
Previously updated : 12/12/2022 Last updated : 01/30/2024 #Customer intent: As someone new to Key Vault, I'm trying to learn basic concepts that can help me understand Key Vault documentation.
key-vault Client Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/client-libraries.md
Title: Client Libraries for Azure Key Vault | Microsoft Docs
description: Client Libraries for Azure Key Vault
-tags: azure-resource-manager
Previously updated : 01/20/2023 Last updated : 01/30/2024
key-vault Common Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/common-error-codes.md
Title: Common error codes for Azure Key Vault | Microsoft Docs
description: Common error codes for Azure Key Vault
-tags: azure-resource-manager
Previously updated : 01/12/2023 Last updated : 01/30/2024 #Customer intent: As an Azure Key Vault administrator, I want to react to soft-delete being turned on for all key vaults.
key-vault Common Parameters And Headers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/common-parameters-and-headers.md
Title: Common parameters and headers
description: The parameters and headers common to all operations that you might perform on Key Vault resources.
-tags: azure-resource-manager
Previously updated : 01/11/2023 Last updated : 01/30/2024
key-vault Customer Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/customer-data.md
Title: Azure Key Vault customer data features - Azure Key Vault | Microsoft Docs
description: Learn about customer data, which Azure Key Vault receives during creation or update of vaults, keys, secrets, certificates, and managed storage accounts.
-tags: azure-resource-manager
Previously updated : 01/11/2023 Last updated : 01/30/2024
key-vault Developers Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/developers-guide.md
Previously updated : 01/17/2023 Last updated : 01/30/2024 # Azure Key Vault developer's guide
key-vault Event Grid Logicapps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/event-grid-logicapps.md
Title: Email when Key Vault status of the secret changes
description: Guide to use Logic Apps to respond to Key Vault secrets changes
-tags: azure-resource-manager
Previously updated : 01/11/2023 Last updated : 01/30/2024
key-vault Event Grid Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/event-grid-tutorial.md
Title: Receive and respond to key vault notifications with Azure Event Grid
description: Learn how to integrate Key Vault with Azure Event Grid.
-tags: azure-resource-manager
Previously updated : 01/11/2023 Last updated : 01/30/2024
key-vault Howto Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/howto-logging.md
Title: Enable Azure Key Vault logging
description: How to enable logging for Azure Key Vault, which saves information in an Azure storage account that you provide.
-tags: azure-resource-manager
Previously updated : 01/20/2023 Last updated : 01/30/2024 ms.devlang: azurecli
key-vault Integrate Databricks Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/integrate-databricks-blob-storage.md
Previously updated : 01/20/2023 Last updated : 01/30/2024 # Tutorial: Access Azure Blob Storage using Azure Databricks and Azure Key Vault
key-vault Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/logging.md
Title: Azure Key Vault logging | Microsoft Docs
description: Learn how to monitor access to your key vaults by enabling logging for Azure Key Vault, which saves information in an Azure storage account that you provide.
-tags: azure-resource-manager
Previously updated : 01/20/2023 Last updated : 01/30/2024 #Customer intent: As an Azure Key Vault administrator, I want to enable logging so I can monitor how my key vaults are accessed.
key-vault Monitor Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/monitor-key-vault.md
Title: Monitoring Azure Key Vault
description: Start here to learn how to monitor Azure Key Vault
-tags: azure-resource-manager
Previously updated : 09/21/2021 Last updated : 01/30/2024 # Customer intent: As a key vault administrator, I want to learn the options available to monitor the health of my vaults
key-vault Move Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/move-region.md
Title: Move a key vault to a different region - Azure Key Vault
description: This article offers guidance on moving a key vault to a different region.
-tags: azure-resource-manager
Previously updated : 11/11/2021 Last updated : 01/30/2024
key-vault Move Resourcegroup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/move-resourcegroup.md
Title: Azure Key Vault moving a vault to a different resource group | Microsoft
description: Guidance on moving a key vault to a different resource group.
-tags: azure-resource-manager
Previously updated : 03/31/2021 Last updated : 01/30/2024 # Customer intent: As a key vault administrator, I want to move my vault to another resource group.
key-vault Move Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/move-subscription.md
Title: Azure Key Vault moving a vault to a different subscription | Microsoft Do
description: Guidance on moving a key vault to a different subscription.
-tags: azure-resource-manager
Previously updated : 01/20/2023 Last updated : 01/30/2024 # Customer intent: As a key vault administrator, I want to move my vault to another subscription.
key-vault Overview Vnet Service Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/overview-vnet-service-endpoints.md
description: Learn how virtual network service endpoints for Azure Key Vault all
Previously updated : 11/20/2022 Last updated : 01/30/2024
key-vault Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/overview.md
Title: Azure Key Vault Overview - Azure Key Vault
description: Azure Key Vault is a secure secrets store, providing management for secrets, keys, and certificates, all backed by Hardware Security Modules.
-tags: azure-resource-manager
Previously updated : 02/28/2023 Last updated : 01/30/2024 - zerotrust-extra
key-vault Private Link Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/private-link-service.md
Title: Integrate Key Vault with Azure Private Link
description: Learn how to integrate Azure Key Vault with Azure Private Link Service Previously updated : 01/17/2023 Last updated : 01/30/2024
key-vault Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/quick-create-cli.md
Title: Quickstart - Create an Azure Key Vault with the Azure CLI
description: Quickstart showing how to create an Azure Key Vault using the Azure CLI
-tags: azure-resource-manager
Previously updated : 01/27/2021 Last updated : 01/30/2024 ms.devlang: azurecli
key-vault Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/quick-create-portal.md
Title: Quickstart - Create an Azure Key Vault with the Azure portal
description: Quickstart showing how to create an Azure Key Vault using the Azure portal
-tags: azure-resource-manager
Previously updated : 01/20/2023 Last updated : 01/30/2024 #Customer intent: As a security admin who is new to Azure, I want to use Key Vault to securely store keys and passwords in Azure
key-vault Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/quick-create-powershell.md
Title: Quickstart - Create an Azure Key Vault with Azure PowerShell
description: Quickstart showing how to create an Azure Key Vault using Azure PowerShell
-tags: azure-resource-manager
Previously updated : 01/27/2021 Last updated : 01/30/2024 #Customer intent: As a security admin who is new to Azure, I want to use Key Vault to securely store keys and passwords in Azure
key-vault Rbac Access Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/rbac-access-policy.md
Previously updated : 05/08/2023 Last updated : 01/30/2024
key-vault Rbac Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/rbac-guide.md
Previously updated : 12/12/2022 Last updated : 01/30/2024
key-vault Security Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/security-features.md
Title: Azure Key Vault security overview
description: An overview of security features and best practices for Azure Key Vault.
-tags: azure-resource-manager
Previously updated : 09/25/2022 Last updated : 01/30/2024 #Customer intent: As a key vault administrator, I want to learn the options available to secure my vaults
key-vault Soft Delete Change https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/soft-delete-change.md
Title: Enable soft-delete on all key vault objects - Azure Key Vault | Microsoft
description: Use this document to adopt soft-delete for all key vaults and to make application and administration changes to avoid conflict errors.
-tags: azure-resource-manager
Previously updated : 03/31/2021 Last updated : 01/30/2024 #Customer intent: As an Azure Key Vault administrator, I want to react to soft-delete being turned on for all key vaults.
key-vault Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/whats-new.md
Title: What's new for Azure Key Vault
description: Recent updates for Azure Key Vault
-tags: azure-resource-manager
Previously updated : 01/17/2023 Last updated : 01/30/2024 #Customer intent: As an Azure Key Vault administrator, I want to react to soft-delete being turned on for all key vaults.
key-vault About Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/about-keys.md
description: Overview of Azure Key Vault REST interface and developer details fo
-tags: azure-resource-manager
Previously updated : 02/09/2024 Last updated : 01/30/2024
key-vault Byok Specification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/byok-specification.md
Title: Bring your own key specification - Azure Key Vault | Microsoft Docs
description: This document described bring your own key specification.
-tags: azure-resource-manager
Previously updated : 01/24/2023 Last updated : 01/30/2024
key-vault Hsm Protected Keys Byok https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/hsm-protected-keys-byok.md
description: Use this article to help you plan for, generate, and transfer your
-tags: azure-resource-manager
Previously updated : 10/23/2023 Last updated : 01/30/2024
key-vault Hsm Protected Keys Ncipher https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/hsm-protected-keys-ncipher.md
Title: How to generate and transfer HSM-protected keys for Azure Key Vault - Azu
description: Use this article to help you plan for, generate, and then transfer your own HSM-protected keys to use with Azure Key Vault. Also known as BYOK or bring your own key.
-tags: azure-resource-manager
Previously updated : 01/24/2023 Last updated : 01/30/2024
key-vault Hsm Protected Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/hsm-protected-keys.md
description: Learn how to plan for, generate, and then transfer your own HSM-pro
-tags: azure-resource-manager
Previously updated : 01/24/2023 Last updated : 01/30/2024
key-vault Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/quick-create-bicep.md
Title: Azure Quickstart - Create an Azure key vault and a key by using Bicep | Microsoft Docs description: Quickstart showing how to create Azure key vaults, and add key to the vaults by using Bicep.
-tags: azure-resource-manager
Previously updated : 06/29/2022 Last updated : 01/30/2024 #Customer intent: As a security admin who is new to Azure, I want to use Key Vault to securely store keys and passwords in Azure.
key-vault Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/quick-create-cli.md
Title: Create and retrieve attributes of a key in Azure Key Vault - Azure CLI description: Quickstart showing how to set and retrieve a key from Azure Key Vault using Azure CLI
-tags: azure-resource-manager
Previously updated : 01/04/2023 Last updated : 01/30/2024 #Customer intent: As a security admin who is new to Azure, I want to use Key Vault to securely store keys and passwords in Azure
key-vault Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/quick-create-portal.md
Title: Azure Quickstart - Set and retrieve a key from Key Vault using Azure port
description: Quickstart showing how to set and retrieve a key from Azure Key Vault using the Azure portal
-tags: azure-resource-manager
Previously updated : 01/04/2023 Last updated : 01/30/2024 #Customer intent: As a security admin who is new to Azure, I want to use Key Vault to securely store keys in Azure
key-vault Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/quick-create-powershell.md
Title: Create and retrieve attributes of a key in Azure Key Vault – Azure Powe
description: Quickstart showing how to set and retrieve a key from Azure Key Vault using Azure PowerShell
-tags: azure-resource-manager
Previously updated : 01/04/2023 Last updated : 01/30/2024 #Customer intent: As a security admin who is new to Azure, I want to use Key Vault to securely store keys and passwords in Azure
key-vault Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/access-control.md
Title: Azure Key Vault Managed HSM access control
description: Learn how to manage access permissions for Azure Key Vault Managed HSM and keys. Understand the authentication and authorization models for Managed HSM and how to secure your HSMs.
-tags: azure-resource-manager
Previously updated : 01/26/2023 Last updated : 01/30/2024 # Customer intent: As the admin for managed HSMs, I want to set access policies and configure the Managed HSM, so that I can ensure it's secure and auditors can properly monitor all activities for these managed HSMs.
key-vault Authorize Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/authorize-azure-resource-manager.md
Title: Allow key management operations through Azure Resource Manager
description: Learn how to allow key management operations through ARM
-tags: azure-resource-manager
Previously updated : 05/25/2023 Last updated : 01/30/2024 # Customer intent: As a managed HSM administrator, I want to authorize Azure Resource Manager to perform key management operations via Azure Managed HSM
key-vault Hsm Protected Keys Byok https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/hsm-protected-keys-byok.md
Title: How to generate and transfer HSM-protected keys for Azure Key Vault Manag
description: Use this article to help you plan for, generate, and transfer your own HSM-protected keys to use with Managed HSM. Also known as bring your own key (BYOK).
-tags: azure-resource-manager
Previously updated : 01/04/2023 Last updated : 01/30/2024
key-vault Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/logging.md
Title: Azure Managed HSM logging
description: Use this tutorial to help you get started with Managed HSM logging.
-tags: azure-resource-manager
Previously updated : 02/06/2024 Last updated : 01/30/2024 #Customer intent: As a Managed HSM administrator, I want to enable logging so I can monitor how my HSM is accessed.
key-vault Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/overview.md
Title: Azure Managed HSM Overview - Azure Managed HSM | Microsoft Docs description: Azure Managed HSM is a cloud service that safeguards your cryptographic keys for cloud applications.
-tags: azure-resource-manager
Previously updated : 02/28/2023 Last updated : 01/30/2024
key-vault Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/quick-create-cli.md
Title: Quickstart - Provision and activate an Azure Managed HSM
description: Quickstart showing how to provision and activate a managed HSM using Azure CLI
-tags: azure-resource-manager
Previously updated : 05/23/2023 Last updated : 01/30/2024 ms.devlang: azurecli
key-vault Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/quick-create-powershell.md
Title: Create and retrieve attributes of a managed key in Azure Key Vault – Azure PowerShell
description: Quickstart showing how to set and retrieve a managed key from Azure Key Vault using Azure PowerShell Previously updated : 05/05/2023 Last updated : 01/30/2024
-tags: azure-resource-manager
#Customer intent: As a security admin who is new to Azure, I want to use Key Vault to securely store keys and passwords in Azure
key-vault Secure Your Managed Hsm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/secure-your-managed-hsm.md
Title: Secure access to a managed HSM - Azure Key Vault Managed HSM
description: Learn how to secure access to Managed HSM using Azure RBAC and Managed HSM local RBAC
-tags: azure-resource-manager
Previously updated : 11/06/2023 Last updated : 01/30/2024 # Customer intent: As a managed HSM administrator, I want to set access control and configure the Managed HSM, so that I can ensure it's secure and auditors can properly monitor all activities for this Managed HSM.
key-vault About Managed Storage Account Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/about-managed-storage-account-keys.md
Title: About Azure Key Vault managed storage account keys - Azure Key Vault
description: Overview of Azure Key Vault managed storage account keys.
-tags: azure-resource-manager
Previously updated : 10/01/2021 Last updated : 01/30/2024
key-vault About Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/about-secrets.md
Title: About Azure Key Vault secrets - Azure Key Vault
description: Overview of Azure Key Vault secrets.
-tags: azure-resource-manager
Previously updated : 01/17/2023 Last updated : 01/30/2024
key-vault Multiline Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/multiline-secrets.md
Title: Store a multiline secret in Azure Key Vault
description: Tutorial showing how to set multiline secrets from Azure Key Vault using Azure CLI and PowerShell
-tags: azure-resource-manager
Previously updated : 03/17/2021 Last updated : 01/30/2024 #Customer intent: As a security admin who is new to Azure, I want to use Key Vault to securely store keys and passwords in Azure
key-vault Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-bicep.md
Title: Azure Quickstart - Create an Azure key vault and a secret using Bicep | Microsoft Docs
description: Quickstart showing how to create Azure key vaults, and add secrets to the vaults using Bicep.
-tags: azure-resource-manager
Previously updated : 04/21/2023 Last updated : 01/30/2024 #Customer intent: As a security admin who is new to Azure, I want to use Key Vault to securely store keys and passwords in Azure.
key-vault Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-cli.md
Title: Quickstart - Set and retrieve a secret from Azure Key Vault description: Quickstart showing how to set and retrieve a secret from Azure Key Vault using Azure CLI
-tags: azure-resource-manager
Previously updated : 01/27/2021 Last updated : 01/30/2024 #Customer intent: As a security admin who is new to Azure, I want to use Key Vault to securely store keys and passwords in Azure
key-vault Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-portal.md
Title: Azure Quickstart - Set and retrieve a secret from Key Vault using Azure portal
description: Quickstart showing how to set and retrieve a secret from Azure Key Vault using the Azure portal
-tags: azure-resource-manager
Previously updated : 01/11/2023 Last updated : 01/30/2024 #Customer intent: As a security admin who is new to Azure, I want to use Key Vault to securely store keys and passwords in Azure
key-vault Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-powershell.md
Title: Quickstart - Set & retrieve a secret from Key Vault using PowerShell
description: In this quickstart, learn how to create, retrieve, and delete secrets from an Azure Key Vault using Azure PowerShell.
-tags: azure-resource-manager
Previously updated : 01/27/2021 Last updated : 01/30/2024 #Customer intent: As a security admin who is new to Azure, I want to use Key Vault to securely store keys and passwords in Azure
key-vault Tutorial Rotation Dual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/tutorial-rotation-dual.md
tags: 'rotation'
Previously updated : 01/20/2023 Last updated : 01/30/2024
load-balancer Configure Inbound NAT Rules Vm Scale Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/configure-inbound-NAT-rules-vm-scale-set.md
Previously updated : 12/06/2022 Last updated : 02/14/2024
load-balancer Load Balancer Multiple Virtual Machine Scale Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-multiple-virtual-machine-scale-set.md
Previously updated : 12/15/2022 Last updated : 02/14/2024
load-balancer Load Balancer Tcp Idle Timeout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-tcp-idle-timeout.md
Previously updated : 02/06/2024 Last updated : 02/12/2024 # Configure TCP reset and idle timeout for Azure Load Balancer
-Azure Load Balancer rules have a default timeout range of 4 minutes to 100 minutes for Load Balancer rules, Outbound Rules, and Inbound NAT rules.The default setting is 4 minutes. If a period of inactivity is longer than the timeout value, there's no guarantee that the TCP or HTTP session is maintained between the client and your service.
+Azure Load Balancer rules have a default timeout range of 4 minutes to 100 minutes for Load Balancer rules, Outbound Rules, and Inbound NAT rules. The default setting is 4 minutes. If a period of inactivity is longer than the timeout value, there's no guarantee that the TCP or HTTP session is maintained between the client and your service.
The following sections describe how to change the idle timeout and TCP reset settings for load balancer resources.
To set the idle timeout and tcp reset for a load balancer, edit the load-balance
:::image type="content" source="./media/load-balancer-tcp-idle-timeout/portal-lb-rules.png" alt-text="Edit load balancer rules." border="true" lightbox="./media/load-balancer-tcp-idle-timeout/portal-lb-rules.png"::: 1. Select your load-balancing rule. In this example, the load-balancing rule is named **myLBrule**.
-1. In the load-balancing rule, move the slider in **Idle timeout (minutes)** to your timeout value.
+1. In the load-balancing rule, enter your timeout value in **Idle timeout (minutes)**.
1. Under **TCP reset**, select **Enabled**. :::image type="content" source="./media/load-balancer-tcp-idle-timeout/portal-lb-rules-tcp-reset.png" alt-text="Set idle timeout and tcp reset." border="true" lightbox="./media/load-balancer-tcp-idle-timeout/portal-lb-rules-tcp-reset.png":::
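If you prefer to script this change rather than use the portal steps above, the same settings can be applied with the Azure CLI. This is a minimal sketch only; the resource group, load balancer, and rule names are assumptions taken from this example, and you should confirm the parameter names with `az network lb rule update --help`:

```bash
# Update an existing load-balancing rule: raise the idle timeout to 10 minutes
# and enable TCP reset on idle timeout (all names below are placeholders).
az network lb rule update \
  --resource-group myResourceGroup \
  --lb-name myLoadBalancer \
  --name myLBrule \
  --idle-timeout 10 \
  --enable-tcp-reset true
```

With TCP reset enabled, the load balancer sends a TCP RST to both endpoints when the idle timeout is reached, so applications can detect the dropped flow instead of waiting for their own keepalives to fail.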
load-balancer Tutorial Load Balancer Ip Backend Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-load-balancer-ip-backend-portal.md
Previously updated : 12/16/2022 Last updated : 02/14/2024
In this tutorial, you:
Advance to the next article to learn how to create a cross-region load balancer: > [!div class="nextstepaction"]
-> [Create a cross-region Azure Load Balancer using the Azure portal](tutorial-cross-region-portal.md)
+> [Create a cross-region Azure Load Balancer using the Azure portal](tutorial-cross-region-portal.md)
machine-learning How To Manage Kubernetes Instance Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-kubernetes-instance-types.md
Previously updated : 11/09/2022 Last updated : 01/09/2024 # Create and manage instance types for efficient utilization of compute resources
-Instance types are an Azure Machine Learning concept that allows targeting certain types of compute nodes for training and inference workloads. For an Azure virtual machine, an example of an instance type is `STANDARD_D2_V3`.
+Instance types are an Azure Machine Learning concept that allows targeting certain types of compute nodes for training and inference workloads. For example, in an Azure virtual machine, an instance type is `STANDARD_D2_V3`. This article teaches you how to create and manage instance types for your computation requirements.
In Kubernetes clusters, instance types are represented in a custom resource definition (CRD) that's installed with the Azure Machine Learning extension. Two elements in the Azure Machine Learning extension represent the instance types:
The preceding code creates an instance type with the labeled behavior:
- Pods are assigned resource requests of `700m` for CPU and `1500Mi` for memory. - Pods are assigned resource limits of `1` for CPU, `2Gi` for memory, and `1` for NVIDIA GPU.
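The instance type definition that this passage refers to isn't shown in this digest. As a rough sketch only, a custom resource that produces the behavior described above could look like the following; the `apiVersion`, `kind`, and node selector label are assumptions and should be checked against the CRD installed by the Azure Machine Learning extension:

```bash
# Apply a custom instance type (sketch; apiVersion, kind, and labels are assumed).
kubectl apply -f - <<EOF
apiVersion: amlarc.azureml.com/v1alpha1
kind: InstanceType
metadata:
  name: myinstancetype
spec:
  nodeSelector:
    mylabel: mylabelvalue
  resources:
    requests:
      cpu: "700m"
      memory: "1500Mi"
    limits:
      cpu: "1"
      memory: "2Gi"
      nvidia.com/gpu: 1
EOF
```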
-Creation of custom instance types must meet the following parameters and definition rules, or it will fail:
+Custom instance type definitions must meet the following parameter and definition rules, or creation fails:
| Parameter | Required or optional | Description | | | | |
blue_deployment = KubernetesOnlineDeployment(
-If you use the `resources` section, a valid resource definition needs to meet the following rules. An invalid resource definition will cause the model deployment to fail.
+If you use the `resources` section, a valid resource definition needs to meet the following rules. An invalid resource definition causes the model deployment to fail.
| Parameter | Required or optional | Description | | | | | | `requests:`<br>`cpu:`| Required | String values, which can't be zero or empty. <br>You can specify the CPU in millicores; for example, `100m`. You can also specify it in full numbers. For example, `"1"` is equivalent to `1000m`.| | `requests:`<br>`memory:` | Required | String values, which can't be zero or empty. <br>You can specify the memory as a full number + suffix; for example, `1024Mi` for 1024 MiB. <br>Memory can't be less than 1 MB.|
-| `limits:`<br>`cpu:` | Optional <br>(required only when you need GPU) | String values, which can't be zero or empty. <br>You can specify the CPU in millicores; for example `100m`. You can also specify it in full numbers. For example, `"1"` is equivalent to `1000m`. |
+| `limits:`<br>`cpu:` | Optional <br>(required only when you need GPU) | String values, which can't be zero or empty. <br>You can specify the CPU in millicores; for example, `100m`. You can also specify it in full numbers. For example, `"1"` is equivalent to `1000m`. |
| `limits:`<br>`memory:` | Optional <br>(required only when you need GPU) | String values, which can't be zero or empty. <br>You can specify the memory as a full number + suffix; for example, `1024Mi` for 1,024 MiB.| | `limits:`<br>`nvidia.com/gpu:` | Optional <br>(required only when you need GPU) | Integer values, which can't be empty and can be specified only in the `limits` section. <br>For more information, see the [Kubernetes documentation](https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/#using-device-plugins). <br>If you require CPU only, you can omit the entire `limits` section.|
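As an illustration of the rules in the preceding table, a CLI v2 Kubernetes online deployment file with a valid `resources` section might look like the following sketch. The file name, endpoint, and model references are hypothetical, and other fields your deployment needs may differ; only the shape of the `resources` block is the point here:

```bash
# Write a minimal deployment spec with a rule-compliant resources section
# (all names and versions below are placeholders), then create the deployment.
cat > blue-deployment.yml <<'EOF'
name: blue
endpoint_name: my-endpoint
model: azureml:my-model:1
resources:
  requests:
    cpu: "100m"      # required, non-empty string (millicores or full numbers)
    memory: "512Mi"  # required, full number + suffix
  limits:
    cpu: "200m"      # optional unless GPU is needed
    memory: "1Gi"
EOF

az ml online-deployment create --file blue-deployment.yml \
  --resource-group myResourceGroup --workspace-name myWorkspace
```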
machine-learning How To Train With Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-with-ui.md
Previously updated : 11/04/2022 Last updated : 02/04/2024 # Submit a training job in Studio
-There are many ways to create a training job with Azure Machine Learning. You can use the CLI (see [Train models (create jobs)](how-to-train-model.md)), the REST API (see [Train models with REST (preview)](how-to-train-with-rest.md)), or you can use the UI to directly create a training job. In this article, you'll learn how to use your own data and code to train a machine learning model with a guided experience for submitting training jobs in Azure Machine Learning studio.
+There are many ways to create a training job with Azure Machine Learning. You can use the CLI (see [Train models (create jobs)](how-to-train-model.md)), the REST API (see [Train models with REST (preview)](how-to-train-with-rest.md)), or you can use the UI to directly create a training job. In this article, you learn how to use your own data and code to train a machine learning model with a guided experience for submitting training jobs in Azure Machine Learning studio.
[!INCLUDE [machine-learning-preview-generic-disclaimer](includes/machine-learning-preview-generic-disclaimer.md)]
There are many ways to create a training job with Azure Machine Learning. You ca
1. Select your subscription and workspace.
-* You may enter the job creation UI from the homepage. Click **Create new** and select **Job**.
+* You may enter the job creation UI from the homepage. Select **Create new** and select **Job**.
[![Azure Machine Learning studio homepage](media/how-to-train-with-ui/unified-job-submission-home.png)](media/how-to-train-with-ui/unified-job-submission-home.png)
-In this wizard, you can select your method of training, complete the rest of the submission wizard based on your selection, and submit the training job. Below we will walk through the wizard for running a custom script (command job).
+In this step, you can select your method of training, complete the rest of the submission form based on your selection, and submit the training job. Below we walk through the form with the steps for running a custom script (command job).
-[![Azure Machine Learning studio wizard landing page for users to choose method of training.](media/how-to-train-with-ui/training-method.png)](media/how-to-train-with-ui/training-method.png)
+[![Azure Machine Learning studio training form landing page for users to choose method of training.](media/how-to-train-with-ui/training-method.png)](media/how-to-train-with-ui/training-method.png)
## Configure basic settings
-The first step is configuring basic information about your training job. You can proceed next if you're satisfied with the defaults we have chosen for you or make changes to your desired preference.
+The first step is configuring basic information about your training job. You can proceed next if you're satisfied with the defaults we chose for you, or make changes to your desired preference.
-[![Azure Machine Learning studio job submission wizard for users to configure their basic settings.](media/how-to-train-with-ui/basic-settings.png)](media/how-to-train-with-ui/basic-settings.png)
+[![Azure Machine Learning studio job submission form for users to configure their basic settings.](media/how-to-train-with-ui/basic-settings.png)](media/how-to-train-with-ui/basic-settings.png)
These are the fields available: |Field| Description| || | |Job name| The job name field is used to uniquely identify your job. It's also used as the display name for your job.|
-|Experiment name| This helps organize the job in Azure Machine Learning studio. Each job's run record will be organized under the corresponding experiment in the studio's "Experiment" tab. By default, Azure will put the job in the **Default** experiment.|
+|Experiment name| This helps organize the job in Azure Machine Learning studio. Each job's run record is organized under the corresponding experiment in the studio's "Experiment" tab. By default, Azure puts the job in the **Default** experiment.|
|Description| Add some text describing your job, if desired.|
-|Timeout| Specify number of hours the entire training job is allowed to run. Once this limit is reached the system will cancel the job including any child jobs.|
+|Timeout| Specify number of hours the entire training job is allowed to run. Once this limit is reached the system cancels the job including any child jobs.|
|Tags| Add tags to your job to help with organization.| ## Training script
If the code isn't in the root directory, you should use the relative path. For e
``` Here, the source code is in the `src` subdirectory. The command would be `python ./src/main.py` (plus other command-line arguments).
-[![Image of referencing your code in the command in the training job submission wizard.](media/how-to-train-with-ui/training-script-code.png)](media/how-to-train-with-ui/training-script-code.png)
+[![Image of referencing your code in the command in the training job submission form.](media/how-to-train-with-ui/training-script-code.png)](media/how-to-train-with-ui/training-script-code.png)
### Inputs When you use an input in the command, you need to specify the input name. To indicate an input variable, use the form `${{inputs.input_name}}`. For instance, `${{inputs.wiki}}`. You can then refer to it in the command, for instance, `--data ${{inputs.wiki}}`.
-[![Image of referencing your inputs in the command in the training job submission wizard.](media/how-to-train-with-ui/training-script-inputs.png)](media/how-to-train-with-ui/training-script-inputs.png)
+[![Image of referencing your inputs in the command in the training job submission form.](media/how-to-train-with-ui/training-script-inputs.png)](media/how-to-train-with-ui/training-script-inputs.png)
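Putting the code and input references together, the command you enter in the form might look like the following sketch; the script path and the input names (`wiki`, `epochs`) are hypothetical:

```bash
# Example command for the job form: Azure Machine Learning replaces the
# ${{inputs.*}} placeholders at runtime (names are placeholders).
python ./src/main.py --data ${{inputs.wiki}} --epochs ${{inputs.epochs}}
```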
## Select compute resources
Next step is to select the compute target on which you'd like your job to run. T
1. When you're satisfied with your choices, choose **Next**. [![Select a compute cluster dropdown selector image.](media/how-to-train-with-ui/compute.png)](media/how-to-train-with-ui/compute.png)
-If you're using Azure Machine Learning for the first time, you'll see an empty list and a link to create a new compute. For more information on creating the various types, see:
+If you're using Azure Machine Learning for the first time, you see an empty list and a link to create a new compute. For more information on creating the various types, see:
| Compute Type | How to | | | |
Curated environments are Azure-defined collections of Python packages used in co
### Custom environments
-Custom environments are environments you've specified yourself. You can specify an environment or reuse an environment that you've already created. To learn more, see [Manage software environments in Azure Machine Learning studio (preview)](how-to-manage-environments-in-studio.md#create-an-environment).
+Custom environments are environments you specified yourself. You can specify an environment or reuse an environment that you already created. To learn more, see [Manage software environments in Azure Machine Learning studio (preview)](how-to-manage-environments-in-studio.md#create-an-environment).
### Container registry image
If you don't want to use the Azure Machine Learning curated environments or spec
## Review and Create
-Once you've configured your job, choose **Next** to go to the **Review** page. To modify a setting, choose the pencil icon and make the change.
+Once you configured the job, choose **Next** to go to the **Review** page. To modify a setting, choose the pencil icon and make the change.
[![Azure Machine Learning studio job submission review pane image to validate selections before submission.](media/how-to-train-with-ui/review.png)](media/how-to-train-with-ui/review.png)
-To launch the job, choose **Submit training job**. Once the job is created, Azure will show you the job details page, where you can monitor and manage your training job.
+To launch the job, choose **Submit training job**. Once the job is created, Azure shows you the job details page, where you can monitor and manage your training job.
[!INCLUDE [Email Notification Include](includes/machine-learning-email-notifications.md)]
machine-learning How To Troubleshoot Kubernetes Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-kubernetes-compute.md
Previously updated : 11/11/2022 Last updated : 02/11/2024 # Troubleshoot Kubernetes Compute
-In this article, you learn how to troubleshoot common workload (including training jobs and endpoints) errors on the [Kubernetes compute](./how-to-attach-kubernetes-to-workspace.md).
+In this article, you learn how to troubleshoot common workload errors on the [Kubernetes compute](./how-to-attach-kubernetes-to-workspace.md). Common errors include training job and endpoint errors.
## Inference guide
-The common Kubernetes endpoint errors on Kubernetes compute are categorized into two scopes: **compute scope** and **cluster scope**. The compute scope errors are related to the compute target, such as the compute target is not found, or the compute target is not accessible. The cluster scope errors are related to the underlying Kubernetes cluster, such as the cluster itself is not reachable, or the cluster is not found.
+The common Kubernetes endpoint errors on Kubernetes compute are categorized into two scopes: **compute scope** and **cluster scope**. The compute scope errors are related to the compute target, such as the compute target isn't found, or the compute target isn't accessible. The cluster scope errors are related to the underlying Kubernetes cluster, such as the cluster itself isn't reachable, or the cluster isn't found.
### Kubernetes compute errors
- The common error types in **compute scope** that you might encounter when using Kubernetes compute to create online endpoints and online deployments for real-time model inference, which you can trouble shoot by following the guidelines:
+ The following are common error types in **compute scope** that you might encounter when using Kubernetes compute to create online endpoints and online deployments for real-time model inference. You can troubleshoot them by following the guidelines in the linked sections:
* [ERROR: GenericComputeError](#error-genericcomputeerror)
We could use the method to check private link setup by logging into one pod in t
* Find workspace ID in Azure portal or get this ID by running `az ml workspace show` in the command line. * Show all azureml-fe pods run by `kubectl get po -n azureml -l azuremlappname=azureml-fe`.
-* Login into any of them run `kubectl exec -it -n azureml {scorin_fe_pod_name} bash`.
+* Sign in to any of them by running `kubectl exec -it -n azureml {scorin_fe_pod_name} bash`.
* If the cluster doesn't use a proxy, run `nslookup {workspace_id}.workspace.{region}.api.azureml.ms`. If you set up the private link from the VNet to the workspace correctly, the *DNSLookup* tool should return the internal IP in the VNet.
If you set up private link from VNet to workspace correctly, then the internal I
curl https://{workspace_id}.workspace.westcentralus.api.azureml.ms/metric/v2.0/subscriptions/{subscription}/resourceGroups/{resource_group}/providers/Microsoft.MachineLearningServices/workspaces/{workspace_name}/api/2.0/prometheus/post -X POST -x {proxy_address} -d {} -v -k ```
-When the proxy and workspace are correctly set up with a private link, you should observe an attempt to connect to an internal IP. A response with an HTTP 401 status code is expected in this scenario if a token is not provided.
+When the proxy and workspace are correctly set up with a private link, you should observe an attempt to connect to an internal IP. A response with an HTTP 401 status code is expected in this scenario if a token isn't provided.
## Other known issues
-### Kubernetes compute update does not take effect
+### Kubernetes compute update doesn't take effect
-At this time, the CLI v2 and SDK v2 do not allow updating any configuration of an existing Kubernetes compute. For example, changing the namespace does not take effect.
+At this time, the CLI v2 and SDK v2 don't allow updating any configuration of an existing Kubernetes compute. For example, changing the namespace doesn't take effect.
### Workspace or resource group name ends with '-'
machine-learning Reference Yaml Mltable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-mltable.md
Title: 'CLI (v2) mltable YAML schema'
+ Title: 'CLI (v2) MLtable YAML schema'
description: Reference documentation for the CLI (v2) MLTable YAML schema.
Previously updated : 01/23/2023 Last updated : 02/14/2024
-# CLI (v2) mltable YAML schema
+# CLI (v2) MLtable YAML schema
[!INCLUDE [cli v2](includes/machine-learning-cli-v2.md)]
-Find the source JSON schema at https://azuremlschemas.azureedge.net/latest/MLTable.schema.json.
+You can find the source JSON schema at https://azuremlschemas.azureedge.net/latest/MLTable.schema.json.
[!INCLUDE [schema note](includes/machine-learning-preview-old-json-schema-note.md)] ## How to author `MLTable` files
-This article contains information relating to the `MLTable` YAML schema only. For more information on MLTable, including `MLTable` file authoring, MLTable *artifacts* creation, consuming in Pandas and Spark, and end-to-end examples, read [Working with tables in Azure Machine Learning](how-to-mltable.md).
+This article presents information about the `MLTable` YAML schema only. For more information about MLTable, including
+- `MLTable` file authoring
+- MLTable *artifacts* creation
+- consumption in Pandas and Spark
+- end-to-end examples
+
+visit [Working with tables in Azure Machine Learning](how-to-mltable.md).
## YAML syntax | Key | Type | Description | Allowed values | Default value | | | - | -- | -- | - |
-| `$schema` | string | The YAML schema. If you use the Azure Machine Learning VS Code extension to author the YAML file, you can invoke schema and resource completions if you include `$schema` at the top of your file. | | |
-| `type` | const | `mltable` abstracts the schema definition for tabular data, to make it easier for data consumers to materialize the table into a Pandas/Dask/Spark dataframe | `mltable` | `mltable`|
-| `paths` | array | Paths can be a `file` path, `folder` path, or `pattern` for paths. `pattern` supports *globbing* patterns that specify sets of filenames with wildcard characters (`*`, `?`, `[abc]`, `[a-z]`). Supported URI types: `azureml`, `https`, `wasbs`, `abfss`, and `adl`. See [Core yaml syntax](reference-yaml-core-syntax.md) for more information that explains how to use the `azureml://` URI format. |`file`<br>`folder`<br>`pattern` | |
-| `transformations`| array | A defined transformation sequence, applied to data loaded from defined paths. Read [Transformations](#transformations) for more information. |`read_delimited`<br>`read_parquet`<br>`read_json_lines`<br>`read_delta_lake`<br>`take`<br>`take_random_sample`<br>`drop_columns`<br>`keep_columns`<br>`convert_column_types`<br>`skip`<br>`filter`<br>`extract_columns_from_partition_format` ||
+| `$schema` | string | The YAML schema. If you use the Azure Machine Learning Visual Studio Code extension to author the YAML file, you can invoke schema and resource completions if you include `$schema` at the top of your file | | |
+| `type` | const | `mltable` abstracts the schema definition for tabular data. Data consumers can more easily materialize the table into a Pandas/Dask/Spark dataframe | `mltable` | `mltable`|
+| `paths` | array | Paths can be a `file` path, `folder` path, or `pattern` for paths. `pattern` supports *globbing* patterns that specify sets of filenames with wildcard characters (`*`, `?`, `[abc]`, `[a-z]`). Supported URI types: `azureml`, `https`, `wasbs`, `abfss`, and `adl`. Visit [Core yaml syntax](reference-yaml-core-syntax.md) for more information about use of the `azureml://` URI format |`file`<br>`folder`<br>`pattern` | |
+| `transformations`| array | A defined transformation sequence, applied to data loaded from defined paths. Visit [Transformations](#transformations) for more information |`read_delimited`<br>`read_parquet`<br>`read_json_lines`<br>`read_delta_lake`<br>`take`<br>`take_random_sample`<br>`drop_columns`<br>`keep_columns`<br>`convert_column_types`<br>`skip`<br>`filter`<br>`extract_columns_from_partition_format` ||
### Transformations
This article contains information relating to the `MLTable` YAML schema only. Fo
|Read Transformation | Description | Parameters | ||||
-|`read_delimited` | Adds a transformation step to read delimited text file(s) provided in `paths`. | `infer_column_types`: Boolean to infer column data types. Defaults to True. Type inference requires that the current compute can access the data source. Currently, type inference will only pull the first 200 rows.<br><br>`encoding`: Specify the file encoding. Supported encodings: `utf8`, `iso88591`, `latin1`, `ascii`, `utf16`, `utf32`, `utf8bom` and `windows1252`. Default encoding: `utf8`.<br><br>`header`: user can choose one of the following options: `no_header`, `from_first_file`, `all_files_different_headers`, `all_files_same_headers`. Defaults to `all_files_same_headers`.<br><br>`delimiter`: The separator used to split columns.<br><br>`empty_as_string`: Specify if empty field values should load as empty strings. The default (False) will read empty field values as nulls. Passing this setting as *True* will read empty field values as empty strings. If the values are converted to numeric or datetime, then this setting has no effect, as empty values will be converted to nulls.<br><Br>`include_path_column`: Boolean to keep path information as column in the table. Defaults to False. This setting is useful when reading multiple files, and you want to know from which file a specific record originated. Additionally, you can keep useful information in the file path.<br><br>`support_multi_line`: By default (`support_multi_line=False`), all line breaks, including line breaks in quoted field values, will be interpreted as a record break. This approach to data reading increases speed, and it offers optimization for parallel execution on multiple CPU cores. However, it may result in silent production of more records with misaligned field values. Set this value to True when the delimited files are known to contain quoted line breaks. |
-| `read_parquet` | Adds a transformation step to read Parquet formatted file(s) provided in `paths`. | `include_path_column`: Boolean to keep path information as a table column. Defaults to False. This setting helps when you read multiple files, and you want to know from which file a specific record originated. Additionally, you can keep useful information in the file path.<br><br>**NOTE:** MLTable only supports reading parquet files that have columns consisting of primitive types. Columns containing arrays are **not** supported. |
-| `read_delta_lake` | Adds a transformation step to read a Delta Lake folder provided in `paths`. You can read the data at a particular timestamp or version. | `timestamp_as_of`: String. Timestamp to be specified for time-travel on the specific Delta Lake data. To read data at a specific point in time, the datetime string should have a [RFC-3339/ISO-8601 format](https://wikipedia.org/wiki/ISO_8601). (for example: "2022-10-01T00:00:00Z", "2022-10-01T00:00:00+08:00", "2022-10-01T01:30:00-08:00")<br><br>`version_as_of`: Integer. Version to be specified for time-travel on the specific Delta Lake data.<br><br>**One value of `timestamp_as_of` or `version_as_of` must be provided.**
-| `read_json_lines` | Adds a transformation step to read the json file(s) provided in `paths`. | `include_path_column`: Boolean to keep path information as column in the MLTable. Defaults to False. This setting becomes useful to read multiple files, and you want to know from which file a particular record originated. Additionally, you can keep useful information in file path.<br><br>`invalid_lines`: How to handle lines that have invalid JSON. Supported values: `error` and `drop`. Defaults to `error`.<br><br>`encoding`: Specify the file encoding. Supported encodings are `utf8`, `iso88591`, `latin1`, `ascii`, `utf16`, `utf32`, `utf8bom` and `windows1252`. Default is `utf8`.
+|`read_delimited` | Adds a transformation step to read the delimited text file(s) provided in `paths` | `infer_column_types`: Boolean to infer column data types. Defaults to True. Type inference requires that the current compute can access the data source. Currently, type inference only pulls the first 200 rows.<br><br>`encoding`: Specify the file encoding. Supported encodings: `utf8`, `iso88591`, `latin1`, `ascii`, `utf16`, `utf32`, `utf8bom`, and `windows1252`. Default encoding: `utf8`.<br><br>`header`: the user can choose one of these options: `no_header`, `from_first_file`, `all_files_different_headers`, `all_files_same_headers`. Defaults to `all_files_same_headers`.<br><br>`delimiter`: The separator that splits the columns.<br><br>`empty_as_string`: Specifies if empty field values should load as empty strings. The default value (False) reads empty field values as nulls. Passing this setting as *True* reads empty field values as empty strings. For values converted to numeric or datetime data types, this setting has no effect, because empty values are converted to nulls.<br><Br>`include_path_column`: Boolean to keep path information as column in the table. Defaults to False. This setting helps when reading multiple files, and you want to know the originating file for a specific record. Additionally, you can keep useful information in the file path.<br><br>`support_multi_line`: By default (`support_multi_line=False`), all line breaks, including line breaks in quoted field values, are interpreted as a record break. This approach to data reading increases speed, and it offers optimization for parallel execution on multiple CPU cores. However, it might result in silent production of more records with misaligned field values. Set this value to `True` when the delimited files are known to contain quoted line breaks |
+| `read_parquet` | Adds a transformation step to read the Parquet formatted file(s) provided in `paths` | `include_path_column`: Boolean to keep the path information as a table column. Defaults to False. This setting helps when you read multiple files, and you want to know the originating file for a specific record. Additionally, you can keep useful information in the file path.<br><br>**NOTE:** MLTable only supports reads of parquet files that have columns consisting of primitive types. Columns containing arrays are **not** supported |
+| `read_delta_lake` | Adds a transformation step to read a Delta Lake folder provided in `paths`. You can read the data at a particular timestamp or version | `timestamp_as_of`: String. Timestamp to be specified for time-travel on the specific Delta Lake data. To read data at a specific point in time, the datetime string should have an [RFC-3339/ISO-8601 format](https://wikipedia.org/wiki/ISO_8601) (for example: "2022-10-01T00:00:00Z", "2022-10-01T00:00:00+08:00", "2022-10-01T01:30:00-08:00").<br><br>`version_as_of`: Integer. Version to be specified for time-travel on the specific Delta Lake data.<br><br>**You must provide one value of `timestamp_as_of` or `version_as_of`**
+| `read_json_lines` | Adds a transformation step to read the json file(s) provided in `paths` | `include_path_column`: Boolean to keep path information as an MLTable column. Defaults to False. This setting helps when you read multiple files, and you want to know the originating file for a specific record. Additionally, you can keep useful information in the file path<br><br>`invalid_lines`: Determines how to handle lines that have invalid JSON. Supported values: `error` and `drop`. Defaults to `error`<br><br>`encoding`: Specify the file encoding. Supported encodings: `utf8`, `iso88591`, `latin1`, `ascii`, `utf16`, `utf32`, `utf8bom`, and `windows1252`. Defaults to `utf8`
#### Other transformations |Transformation | Description | Parameters | Example(s) |||||
-|`convert_column_types` | Adds a transformation step to convert the specified columns into their respective specified new types. | `columns`<br>An array of column names to convert.<br><br>`column_type`<br>The type to which you want to convert (`int`, `float`, `string`, `boolean`, `datetime`) | <code>- convert_column_types:<br>&emsp; &emsp;- columns: [Age]<br>&emsp; &emsp;&emsp; column_type: int</code><br> Convert the Age column to integer.<br><br><code>- convert_column_types:<br>&emsp; &emsp;- columns: date<br>&emsp; &emsp; &emsp;column_type:<br>&emsp; &emsp;&emsp; &emsp;&emsp; &emsp;datetime:<br>&emsp; &emsp;&emsp; &emsp;&emsp; &emsp;&emsp; &emsp;formats:<br>&emsp; &emsp;&emsp; &emsp;&emsp; &emsp;&emsp; &emsp;- "%d/%m/%Y"</code><br>Convert the date column to the format `dd/mm/yyyy`. Read [`to_datetime`](/python/api/mltable/mltable.datatype#mltable-datatype-to-datetime) for more information about datetime conversion.<br><br><code>- convert_column_types:<br>&emsp; &emsp;- columns: [is_weekday]<br>&emsp; &emsp; &emsp;column_type:<br>&emsp; &emsp;&emsp; &emsp;&emsp; &emsp;boolean:<br>&emsp; &emsp;&emsp; &emsp;&emsp; &emsp;&emsp; &emsp;true_values:['yes', 'true', '1']<br>&emsp; &emsp;&emsp; &emsp;&emsp; &emsp;&emsp; &emsp;false_values:['no', 'false', '0']</code><br> Convert the is_weekday column to a boolean; yes/true/1 values in the column will map to `True`, and no/false/0 values in the column will map to `False`. Read [`to_bool`](/python/api/mltable/mltable.datatype#mltable-datatype-to-bool) for more information about boolean conversion.
-|`drop_columns` | Adds a transformation step to remove desired columns from the dataset. | An array of column names to drop | `- drop_columns: ["col1", "col2"]`
-| `keep_columns` | Adds a transformation step to keep the specified columns, and remove all others from the dataset. | An array of column names to keep | `- keep_columns: ["col1", "col2"]` |
-|`extract_columns_from_partition_format` | Adds a transformation step to use the partition information of each path, and then extract them into columns based on the specified partition format.| partition format to use |`- extract_columns_from_partition_format: {column_name:yyyy/MM/dd/HH/mm/ss}` creates a datetime column, where 'yyyy', 'MM', 'dd', 'HH', 'mm' and 'ss' are used to extract year, month, day, hour, minute and second values for the datetime type |
-|`filter` | Filter the data, leaving only the records that match the specified expression. | An expression as a string. | `- filter: 'col("temperature") > 32 and col("location") == "UK"'` <br>Only leave rows where the temperature exceeds 32, and the location is the UK. |
+|`convert_column_types` | Adds a transformation step to convert the specified columns into their respective specified new types | `columns`<br>An array of column names to convert<br><br>`column_type`<br>The type into which you want to convert (`int`, `float`, `string`, `boolean`, `datetime`) | <code>- convert_column_types:<br>&emsp; &emsp;- columns: [Age]<br>&emsp; &emsp;&emsp; column_type: int</code><br> Convert the Age column to integer.<br><br><code>- convert_column_types:<br>&emsp; &emsp;- columns: date<br>&emsp; &emsp; &emsp;column_type:<br>&emsp; &emsp;&emsp; &emsp;&emsp; &emsp;datetime:<br>&emsp; &emsp;&emsp; &emsp;&emsp; &emsp;&emsp; &emsp;formats:<br>&emsp; &emsp;&emsp; &emsp;&emsp; &emsp;&emsp; &emsp;- "%d/%m/%Y"</code><br>Convert the date column to the format `dd/mm/yyyy`. Read [`to_datetime`](/python/api/mltable/mltable.datatype#mltable-datatype-to-datetime) for more information about datetime conversion.<br><br><code>- convert_column_types:<br>&emsp; &emsp;- columns: [is_weekday]<br>&emsp; &emsp; &emsp;column_type:<br>&emsp; &emsp;&emsp; &emsp;&emsp; &emsp;boolean:<br>&emsp; &emsp;&emsp; &emsp;&emsp; &emsp;&emsp; &emsp;true_values:['yes', 'true', '1']<br>&emsp; &emsp;&emsp; &emsp;&emsp; &emsp;&emsp; &emsp;false_values:['no', 'false', '0']</code><br> Convert the is_weekday column to a boolean; yes/true/1 values in the column map to `True`, and no/false/0 values in the column map to `False`. Read [`to_bool`](/python/api/mltable/mltable.datatype#mltable-datatype-to-bool) for more information about boolean conversion
+|`drop_columns` | Adds a transformation step to remove specific columns from the dataset | An array of column names to drop | `- drop_columns: ["col1", "col2"]`
+| `keep_columns` | Adds a transformation step to keep the specified columns, and remove all others from the dataset | An array of column names to preserve | `- keep_columns: ["col1", "col2"]` |
+|`extract_columns_from_partition_format` | Adds a transformation step to use the partition information of each path, and then extract them into columns based on the specified partition format.| partition format to use |`- extract_columns_from_partition_format: {column_name:yyyy/MM/dd/HH/mm/ss}` creates a datetime column, where 'yyyy', 'MM', 'dd', 'HH', 'mm' and 'ss' are used to extract year, month, day, hour, minute, and second values for the datetime type |
+|`filter` | Filter the data, leaving only the records that match the specified expression. | An expression as a string | `- filter: 'col("temperature") > 32 and col("location") == "UK"'` <br>Only leave rows where the temperature exceeds 32, and UK is the location |
|`skip` | Adds a transformation step to skip the first count rows of this MLTable. | A count of the number of rows to skip | `- skip: 10`<br> Skip first 10 rows |`take` | Adds a transformation step to select the first count rows of this MLTable. | A count of the number of rows from the top of the table to take | `- take: 5`<br> Take the first five rows.
-|`take_random_sample` | Adds a transformation step to randomly select each row of this MLTable, with probability chance. | `probability`<br>The probability of selecting an individual row. Must be in the range [0,1].<br><br>`seed`<br>Optional random seed. | <code>- take_random_sample:<br>&emsp; &emsp;probability: 0.10<br>&emsp; &emsp;seed:123</code><br> Take a 10 percent random sample of rows using a random seed of 123.
+|`take_random_sample` | Adds a transformation step to randomly select each row of this MLTable, with probability chance. | `probability`<br>The probability of selecting an individual row. Must be in the range [0,1].<br><br>`seed`<br>Optional random seed | <code>- take_random_sample:<br>&emsp; &emsp;probability: 0.10<br>&emsp; &emsp;seed:123</code><br> Take a 10 percent random sample of rows using a random seed of 123
## Examples
-This section provides examples of MLTable use. More examples are available:
+Examples of MLTable use. Find more examples at:
- [Working with tables in Azure Machine Learning](how-to-mltable.md)-- in the [examples GitHub repository](https://github.com/Azure/azureml-examples/tree/main/sdk/python/assets/data).
+- the [examples GitHub repository](https://github.com/Azure/azureml-examples/tree/main/sdk/python/assets/data)
### Quickstart
-In this quickstart, you'll read the famous iris dataset from a public https server. The `MLTable` files should be located in a folder, so create the folder and `MLTable` file using:
+This quickstart reads the famous iris dataset from a public https server. To proceed, you must place the `MLTable` files in a folder. First, create the folder and `MLTable` file with:
```bash mkdir ./iris
cd ./iris
touch ./MLTable ```
-Next, add the following contents to the `MLTable` file:
+Next, place this content in the `MLTable` file:
```yml $schema: https://azuremlschemas.azureedge.net/latest/MLTable.schema.json
transformations:
include_path_column: true ```
-You can then materialize into Pandas using:
+You can then materialize into Pandas with:
> [!IMPORTANT]
-> You must have the `mltable` Python SDK installed. Install it with:<br>
+> You must have the `mltable` Python SDK installed. Install this SDK with:
+>
> `pip install mltable`. ```python
tbl = mltable.load("./iris")
df = tbl.to_pandas_dataframe() ```
-You should see that the data includes a new column named `Path`. This column contains the data path, which is `https://azuremlexamples.blob.core.windows.net/datasets/iris.csv`.
+Ensure that the data includes a new column named `Path`. This column contains the `https://azuremlexamples.blob.core.windows.net/datasets/iris.csv` data path.
-You can create a data asset using the CLI:
+The CLI can create a data asset:
```azurecli az ml data create --name iris-from-https --version 1 --type mltable --path ./iris ```
-The folder containing the `MLTable` will automatically upload to cloud storage (the default Azure Machine Learning datastore).
+The folder containing the `MLTable` automatically uploads to cloud storage (the default Azure Machine Learning datastore).
> [!TIP]
-> An Azure Machine Learning data asset is similar to web browser bookmarks (favorites). Instead of remembering long URIs (storage paths) that point to your most frequently used data, you can create a data asset, and then access that asset with a friendly name.
+> An Azure Machine Learning data asset is similar to web browser bookmarks (favorites). Instead of remembering long URIs (storage paths) that point to your most frequently-used data, you can create a data asset, and then access that asset with a friendly name.
### Delimited text files
type: mltable
# ADLS gen2: abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>/ # Datastore: azureml://subscriptions/<subid>/resourcegroups/<rg>/workspaces/<ws>/datastores/<datastore_name>/paths/<path> - paths: - pattern: azureml://subscriptions/<subid>/resourcegroups/<rg>/workspaces/<ws>/datastores/<datastore_name>/paths/<path>/*.parquet
transformations:
- extract_columns_from_partition_format: {timestamp:yyyy/MM/dd} ``` - ### Delta Lake ```yaml
transformations:
include_path_column: false ``` -- ## Next steps - [Install and use the CLI (v2)](how-to-configure-cli.md)-- [Working with tables in Azure Machine Learning](how-to-mltable.md)
+- [Working with tables in Azure Machine Learning](how-to-mltable.md)
managed-instance-apache-cassandra Use Vpn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/use-vpn.md
+
+ Title: Use VPN with Azure Managed Instance for Apache Cassandra
+description: Discover how to secure your cluster with a VPN when you use Azure Managed Instance for Apache Cassandra.
++++ Last updated : 02/08/2024
+ms.devlang: azurecli
++
+# Use VPN with Azure Managed Instance for Apache Cassandra
+
+Azure Managed Instance for Apache Cassandra nodes require access to many other Azure services when they're injected into your VNet. Normally, this access is enabled by ensuring that your VNet has outbound access to the internet. If your security policy prohibits outbound access, you can instead configure firewall rules or user-defined routes (UDRs) for the appropriate access. For more information, see [the required network rules](network-rules.md).
+
+However, if you have internal security concerns around data exfiltration, your security policy might even prohibit direct access to these services from your VNet. By using a VPN with Azure Managed Instance for Apache Cassandra, you can ensure that data nodes in the VNet communicate only with a single secure VPN endpoint, and have no direct access to any other services.
+
+> [!IMPORTANT]
+> Using VPN with Azure Managed Instance for Apache Cassandra is in public preview.
+> This feature is provided without a service level agreement, and it's not recommended for production workloads.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## How to use VPN with Azure Managed Instance for Apache Cassandra
+
+1. Create an Azure Managed Instance for Apache Cassandra cluster, using "VPN" as the value for the `--azure-connection-method` option:
+
+ ```bash
+ az managed-cassandra cluster create \
+ --cluster-name "vpn-test-cluster" \
+ --resource-group "vpn-test-rg" \
+ --location "eastus2" \
+ --azure-connection-method "VPN" \
+ --initial-cassandra-admin-password "password"
+ ```
+
+1. Use the following command to see the cluster properties. From the output, make a copy of the `privateLinkResourceId` ID:
+
+ ```bash
+ az managed-cassandra cluster show \
+ --resource-group "vpn-test-rg" \
+ --cluster-name "vpn-test-cluster"
+ ```
+
+1. On the portal, [create a private endpoint](../cosmos-db/how-to-configure-private-endpoints.md)
+ 1. On the Resource tab, select "Connect to an Azure resource by resource ID or alias." as the connection method and `Microsoft.Network/privateLinkServices` as the resource type. Enter the `privateLinkResourceId` from step (2).
+ 1. On the Virtual Network tab, select your virtual network's subnet and make sure to select the option for "Statically allocate IP address."
+ 1. Validate and create.
+
+ > [!NOTE]
+ > At the moment, the connection between our management service and your private endpoint requires the Azure Managed Instance for Apache Cassandra team cassandra-preview@microsoft.com to approve it.
+
+1. Get the IP address of your private endpoint NIC.
+
+1. Create a new data center, using the IP address from the previous step as the `--private-endpoint-ip-address` parameter, as sketched below.
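The full command isn't shown in this digest. As a sketch only, the data center creation might look like the following; the data center name, subnet ID, and node count are placeholders, and you should confirm the complete parameter list with `az managed-cassandra datacenter create --help`:

```bash
# Create a data center that routes through the private endpoint
# (the data center name, subnet ID, and node count are placeholders).
az managed-cassandra datacenter create \
  --cluster-name "vpn-test-cluster" \
  --resource-group "vpn-test-rg" \
  --data-center-name "dc1" \
  --data-center-location "eastus2" \
  --node-count 3 \
  --delegated-subnet-id "<subnet-resource-id>" \
  --private-endpoint-ip-address "<private-endpoint-nic-ip>"
```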
++
+## Next steps
+- Learn about [hybrid cluster configuration](configure-hybrid-cluster.md) in Azure Managed Instance for Apache Cassandra.
migrate Migrate Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix.md
ms. Previously updated : 07/27/2023 Last updated : 02/14/2024
Migration and modernization | N/A | Migrate [VMware VMs](tutorial-migrate-vmware
[Device 42](https://go.microsoft.com/fwlink/?linkid=2097158) | Assess VMware VMs, Hyper-V VMs, physical servers, and other cloud workloads.| N/A [DMA](/sql/dma/dma-overview) | Assess SQL Server databases. | N/A [DMS](../dms/dms-overview.md) | N/A | Migrate SQL Server, Oracle, MySQL, PostgreSQL, MongoDB.
-[Dr Migrate](https://www.altra.cloud/products/dr-migrate) | Assess VMware VMs, Hyper-V VMs, physical servers, SQL Server databases, and other cloud workloads. | N/A
[Lakeside](https://go.microsoft.com/fwlink/?linkid=2104908) | Assess virtual desktop infrastructure (VDI) | N/A [Movere](https://www.movere.io/) | Assess VMware VMs, Hyper-V VMs, Xen VMs, physical servers, workstations (including VDI) and other cloud workloads. | N/A [RackWare](https://go.microsoft.com/fwlink/?linkid=2102735) | N/A | Migrate VMware VMs, Hyper-V VMs, Xen VMs, KVM VMs, physical servers, and other cloud workloads
migrate Tutorial App Containerization Java App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-app-containerization-java-app-service.md
ms.
Previously updated : 5/2/2022 Last updated : 02/14/2024 # Java web app containerization and migration to Azure App Service
Before you begin this tutorial, you should:
| **Identify a machine to install the tool** | A Windows machine to install and run the Azure Migrate: App Containerization tool. The Windows machine could be a server (Windows Server 2016 or later) or client (Windows 10) operating system, meaning that the tool can run on your desktop as well. <br/><br/> The Windows machine running the tool should have network connectivity to the servers/virtual machines hosting the Java web applications to be containerized.<br/><br/> Ensure that 6-GB space is available on the Windows machine running the Azure Migrate: App Containerization tool for storing application artifacts. <br/><br/> The Windows machine should have internet access, directly or via a proxy. **Application servers** | Enable Secure Shell (SSH) connection on port 22 on the server(s) running the Java application(s) to be containerized. <br/>
-**Java web application** | The tool currently supports: <br/><br/> - Applications running on Tomcat 8 or later.<br/> - Application servers on Ubuntu Linux 16.04/18.04/20.04, Debian 7/8, CentOS 6/7, Red Hat Enterprise Linux 5/6/7. <br/> - Applications using Java version 7 or later. <br/><br/> The tool currently doesn't support: <br/><br/> - Application servers running multiple Tomcat instances. <br/>
+**Java web application** | The tool currently supports: <br/><br/> - Applications running on Tomcat 8 or Tomcat 9.<br/> - Application servers on Ubuntu Linux 16.04/18.04/20.04, Debian 7/8, CentOS 6/7, Red Hat Enterprise Linux 5/6/7. <br/> - Applications using Java 7 or Java 8. <br/> If you have a version outside of this range, find an image that supports your required versions and modify the dockerfile to replace the image. <br/><br/> The tool currently doesn't support: <br/><br/> - Application servers running multiple Tomcat instances <br/>
## Prepare an Azure user account
nat-gateway Tutorial Migrate Outbound Nat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/tutorial-migrate-outbound-nat.md
Title: 'Tutorial: Migrate outbound access to NAT gateway'
-description: Learn how to migrate outbound access in your virtual network to a NAT gateway.
+description: Use this tutorial to learn how to migrate outbound access in your virtual network to an Azure NAT gateway.
Previously updated : 5/25/2022 Last updated : 02/13/2024
+# Customer intent: As a network engineer, I want to learn how to migrate my outbound access to a NAT gateway.
# Tutorial: Migrate outbound access to Azure NAT Gateway
-In this article, you'll learn how to migrate your outbound connectivity from [default outbound access](../virtual-network/ip-services/default-outbound-access.md) to a NAT gateway. You'll learn how to change your outbound connectivity from load balancer outbound rules to a NAT gateway. You'll reuse the IP address from the outbound rule configuration for the NAT gateway.
+In this tutorial, you learn how to migrate your outbound connectivity from [default outbound access](../virtual-network/ip-services/default-outbound-access.md) to a NAT gateway. You learn how to change your outbound connectivity from load balancer outbound rules to a NAT gateway. You reuse the IP address from the outbound rule configuration for the NAT gateway.
-Azure NAT Gateway is the recommended method for outbound connectivity. A NAT gateway is a fully managed and highly resilient Network Address Translation (NAT) service. A NAT gateway doesn't have the same limitations of SNAT port exhaustion as default outbound access. A NAT gateway replaces the need for outbound rules in a load balancer for outbound connectivity.
+Azure NAT Gateway is the recommended method for outbound connectivity. A NAT gateway is a fully managed and highly resilient Network Address Translation (NAT) service. A NAT gateway doesn't have the same limitations of Source Network Address Translation (SNAT) port exhaustion as default outbound access. A NAT gateway replaces the need for outbound rules in a load balancer for outbound connectivity.
-For more information about Azure NAT Gateway, see [What is Azure NAT Gateway](nat-overview.md)
+For more information about Azure NAT Gateway, see [What is Azure NAT Gateway?](nat-overview.md)
In this tutorial, you learn how to:
In this tutorial, you learn how to:
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* A standard public load balancer in your subscription. The load balancer must have a separate frontend IP address and outbound rules configured. For more information on creating an Azure Load Balancer, see [Quickstart: Create a public load balancer to load balance VMs using the Azure portal](../load-balancer/quickstart-load-balancer-standard-public-portal.md)
+* A standard public load balancer in your subscription. The load balancer must have a separate frontend IP address and outbound rules configured. For more information on creating an Azure Load Balancer, see [Quickstart: Create a public load balancer to load balance VMs using the Azure portal](../load-balancer/quickstart-load-balancer-standard-public-portal.md).
+
* The load balancer name used in the examples is **myLoadBalancer**. > [!NOTE]
In this tutorial, you learn how to:
## Migrate default outbound access
-In this section, you'll learn how to change your outbound connectivity method from default outbound access to a NAT gateway.
+In this section, you learn how to change your outbound connectivity method from default outbound access to a NAT gateway.
1. Sign in to the [Azure portal](https://portal.azure.com).
-2. In the search box at the top of the portal, enter **NAT gateway**. Select **NAT gateways**.
+1. In the search box at the top of the portal, enter **NAT gateway**. Select **NAT gateways**.
-3. In **NAT gateways**, select **+ Create**.
+1. In **NAT gateways**, select **+ Create**.
-4. In **Create network address translation (NAT) gateway**, enter or select the following information in the **Basics** tab.
+1. In **Create network address translation (NAT) gateway**, enter or select the following information in the **Basics** tab.
| Setting | Value | | - | -- |
In this section, you'll learn how to change your outbound connectivity method
| Availability zone | Leave the default of **None**. | | Idle timeout (minutes) | Enter **10**. |
-5. Select the **Outbound IP** tab, or select **Next: Outbound IP** at the bottom of the page.
+1. Select the **Outbound IP** tab, or select **Next: Outbound IP** at the bottom of the page.
-6. In **Public IP addresses** in the **Outbound IP** tab, select **Create a new public IP address**.
+1. In **Public IP addresses** in the **Outbound IP** tab, select **Create a new public IP address**.
-7. In **Add a public IP address**, enter **myNATgatewayIP** in **Name**. Select **OK**.
+1. In **Add a public IP address**, enter **myNATgatewayIP** in **Name**. Select **OK**.
-8. Select the **Subnet** tab, or select **Next: Subnet** at the bottom of the page.
+1. Select the **Subnet** tab, or select **Next: Subnet** at the bottom of the page.
-9. In the pull-down box for **Virtual network**, select your virtual network.
+1. In the pull-down box for **Virtual network**, select your virtual network.
-10. In **Subnet name**, select the checkbox next to your subnet.
+1. In **Subnet name**, select the checkbox next to your subnet.
-11. Select the **Review + create** tab, or select **Review + create** at the bottom of the page.
+1. Select the **Review + create** tab, or select **Review + create** at the bottom of the page.
-12. Select **Create**.
+1. Select **Create**.
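If you prefer to script this migration, the portal steps above correspond roughly to the following Azure CLI sketch. Only **myNATgatewayIP** comes from the steps above; the resource group, NAT gateway, virtual network, and subnet names are placeholder assumptions that you should replace with your own values.

```azurecli-interactive
# Create a Standard SKU public IP address for the NAT gateway.
az network public-ip create \
  --resource-group myResourceGroup \
  --name myNATgatewayIP \
  --sku Standard \
  --allocation-method Static

# Create the NAT gateway with a 10-minute idle timeout.
az network nat gateway create \
  --resource-group myResourceGroup \
  --name myNATgateway \
  --public-ip-addresses myNATgatewayIP \
  --idle-timeout 10

# Associate the NAT gateway with the subnet that needs outbound connectivity.
az network vnet subnet update \
  --resource-group myResourceGroup \
  --vnet-name myVNet \
  --name myBackendSubnet \
  --nat-gateway myNATgateway
```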
## Migrate load balancer outbound connectivity
-In this section, you'll learn how to change your outbound connectivity method from outbound rules to a NAT gateway. You'll keep the same frontend IP address used for the outbound rules. You'll remove the outbound rule's frontend IP configuration then create a NAT gateway with the same frontend IP address. A public load balancer is used throughout this section.
+In this section, you learn how to change your outbound connectivity method from outbound rules to a NAT gateway. You keep the same frontend IP address used for the outbound rules. You remove the outbound rule's frontend IP configuration, and then create a NAT gateway with the same frontend IP address. A public load balancer is used throughout this section.
### Remove outbound rule frontend IP configuration
You remove the outbound rule and the associated frontend IP configuration from y
1. Sign in to the [Azure portal](https://portal.azure.com).
-2. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
+1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
-3. Select **myLoadBalancer** or your load balancer.
+1. Select **myLoadBalancer** or your load balancer.
-4. In **myLoadBalancer**, select **Frontend IP configuration** in **Settings**.
+1. In **myLoadBalancer**, select **Frontend IP configuration** in **Settings**.
-5. Note the **IP address** in **Frontend IP configuration** that you wish to migrate to a **NAT gateway**. You'll need this information in the next section. In this example, it's **myFrontendIP-outbound**.
+1. Note the **IP address** in **Frontend IP configuration** that you wish to migrate to a **NAT gateway**. You'll need this information in the next section. In this example, it's **myFrontendIP-outbound**.
-6. Select **Delete** next to the IP configuration you wish to remove. In this example, it's **myFrontendIP-outbound**.
+1. Select **Delete** next to the IP configuration you wish to remove. In this example, it's **myFrontendIP-outbound**.
:::image type="content" source="./media/tutorial-migrate-outbound-nat/frontend-ip.png" alt-text="Screenshot of frontend IP address removal for NAT gateway.":::
+1. Select **Delete**.
-7. Select **Delete**.
-
-8. In **Delete myFrontendIP-outbound**, select the check box next to **I have read and understood that this frontend IP configuration as well as the associated resources listed above will be deleted**.
+1. In **Delete myFrontendIP-outbound**, select the check box next to **I have read and understood that this frontend IP configuration as well as the associated resources listed above will be deleted**.
-9. Select **Delete**. This procedure will delete the frontend IP configuration and the outbound rule associated with the frontend.
+1. Select **Delete**. This procedure deletes the frontend IP configuration and the outbound rule associated with the frontend.
:::image type="content" source="./media/tutorial-migrate-outbound-nat/delete-frontend-ip.png" alt-text="Screenshot of confirmation of frontend IP address removal for NAT gateway."::: ### Create NAT gateway
-In this section, you'll create a NAT gateway with the IP address previously used for outbound rule and assign it to your pre-created subnet within your virtual network. The subnet name for this example is **myBackendSubnet**.
+In this section, you create a NAT gateway with the IP address previously used for the outbound rule and assign it to your precreated subnet within your virtual network. The subnet name for this example is **myBackendSubnet**.
1. In the search box at the top of the portal, enter **NAT gateway**. Select **NAT gateways**.
-2. In **NAT gateways**, select **+ Create**.
+1. In **NAT gateways**, select **+ Create**.
-3. In **Create network address translation (NAT) gateway**, enter or select the following information in the **Basics** tab.
+1. In **Create network address translation (NAT) gateway**, enter or select the following information in the **Basics** tab.
| Setting | Value | | - | -- |
In this section, you'll create a NAT gateway with the IP address previously us
| Availability zone | Leave the default of **None**. | | Idle timeout (minutes) | Enter **10**. |
-4. Select the **Outbound IP** tab, or select **Next: Outbound IP** at the bottom of the page.
+1. Select the **Outbound IP** tab, or select **Next: Outbound IP** at the bottom of the page.
-5. In **Public IP addresses** in the **Outbound IP** tab, select the IP address you noted from the previous section. In this example, it's **myPublicIP-outbound**.
+1. In **Public IP addresses** in the **Outbound IP** tab, select the IP address you noted from the previous section. In this example, it's **myPublicIP-outbound**.
-6. Select the **Subnet** tab, or select **Next: Subnet** at the bottom of the page.
+1. Select the **Subnet** tab, or select **Next: Subnet** at the bottom of the page.
-7. In the pull-down box for **Virtual network**, select your virtual network.
+1. In the pull-down box for **Virtual network**, select your virtual network.
-8. In **Subnet name**, select the checkbox for your subnet. In this example, it's **myBackendSubnet**.
+1. In **Subnet name**, select the checkbox for your subnet. In this example, it's **myBackendSubnet**.
-9. Select the **Review + create** tab, or select **Review + create** at the bottom of the page.
+1. Select the **Review + create** tab, or select **Review + create** at the bottom of the page.
-10. Select **Create**.
+1. Select **Create**.
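As a rough Azure CLI equivalent of this section, the following sketch removes the outbound rule and its frontend IP configuration, and then creates a NAT gateway that reuses the existing public IP address. The outbound rule and NAT gateway names (**myOutboundRule**, **myNATgateway**) are hypothetical; the other names come from the examples above.

```azurecli-interactive
# Delete the outbound rule first; the frontend IP configuration can't be
# removed while an outbound rule still references it.
az network lb outbound-rule delete \
  --resource-group myResourceGroup \
  --lb-name myLoadBalancer \
  --name myOutboundRule

az network lb frontend-ip delete \
  --resource-group myResourceGroup \
  --lb-name myLoadBalancer \
  --name myFrontendIP-outbound

# Create a NAT gateway that reuses the public IP address freed from the load balancer.
az network nat gateway create \
  --resource-group myResourceGroup \
  --name myNATgateway \
  --public-ip-addresses myPublicIP-outbound \
  --idle-timeout 10

# Associate the NAT gateway with the subnet.
az network vnet subnet update \
  --resource-group myResourceGroup \
  --vnet-name myVNet \
  --name myBackendSubnet \
  --nat-gateway myNATgateway
```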
## Clean up resources
the NAT gateway with the following steps:
1. From the left-hand menu, select **Resource groups**.
-2. Select the **myResourceGroup** resource group.
+1. Select the **myResourceGroup** resource group.
-3. Select **Delete resource group**.
+1. Select **Delete resource group**.
-4. Enter **myResourceGroup** and select **Delete**.
+1. Enter **myResourceGroup** and select **Delete**.
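From the command line, the same cleanup can be done with one call. Deleting the resource group permanently removes everything in it:

```azurecli-interactive
# Delete the resource group and all of the resources it contains.
az group delete --name myResourceGroup --yes --no-wait
```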
-## Next steps
+## Next step
In this article, you learned how to:
network-watcher Network Watcher Alert Triggered Packet Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-alert-triggered-packet-capture.md
Previously updated : 01/31/2024 Last updated : 02/14/2024 + # Monitor networks proactively with alerts and Azure Functions using Packet Capture Network Watcher packet capture creates capture sessions to track traffic in and out of virtual machines. The capture file can have a filter that is defined to track only the traffic that you want to monitor. This data is stored in a storage blob or locally on the guest machine.
By using Network Watcher alerting and functions from within the Azure ecosystem,
## Prerequisites
-* The latest version of [Azure PowerShell](/powershell/azure/install-azure-powershell).
-* An existing instance of Network Watcher. If you don't already have one, [create an instance of Network Watcher](network-watcher-create.md).
-* An existing virtual machine in the same region as Network Watcher with the [Windows extension](../virtual-machines/extensions/network-watcher-windows.md) or [Linux virtual machine extension](../virtual-machines/extensions/network-watcher-linux.md).
+- The latest version of [Azure PowerShell](/powershell/azure/install-azure-powershell).
+- An existing instance of Network Watcher. If you don't already have one, [create an instance of Network Watcher](network-watcher-create.md).
+- An existing virtual machine in the same region as Network Watcher with the [Windows extension](../virtual-machines/extensions/network-watcher-windows.md) or [Linux virtual machine extension](../virtual-machines/extensions/network-watcher-linux.md).
## Scenario
-In this example, your VM has more outgoing traffic than usual and you want to be alerted. Similarly, you can create alerts for any condition.
+In this example, a virtual machine (VM) has more outgoing traffic than usual and you want to be alerted. Similarly, you can create alerts for any condition.
When an alert is triggered, the packet-level data helps to analyze why the outgoing traffic has increased. You can take steps to return the virtual machine to its original state.
This scenario does the following:
To create an Azure function to process the alert and create a packet capture, follow these steps:
-1. In the [Azure portal](https://portal.azure.com), search for *function app* in **All services** and select it.
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. In the search box at the top of the portal, enter *function app*. Select **Function App** from the search results.
- :::image type="content" source="./media/network-watcher-alert-triggered-packet-capture/search-result.png" alt-text="Screenshot of finding the function app in Azure portal.":::
+ :::image type="content" source="./media/network-watcher-alert-triggered-packet-capture/function-app-portal-search.png" alt-text="Screenshot shows how to search for the function app in Azure portal." lightbox="./media/network-watcher-alert-triggered-packet-capture/function-app-portal-search.png":::
-2. Select **Create** to open the **Create Function App** screen.
+1. Select **+ Create**.
- :::image type="content" source="./media/network-watcher-alert-triggered-packet-capture/create-function-app.png" alt-text="Screenshot of the Create function app screen.":::
+1. In the **Basics** tab of **Create Function App**, enter or select values for the following settings:
-2. In the **Basics** tab, enter the following values:
- 1. Under **Project Details**, select the **Subscription** for which you want to create the Function app and the **Resource Group** to contain the app.
- 2. Under **Instance details**, do the following:
- 1. Enter the name of the Function app. This name will be appended by *.azurewebsites.net*.
- 2. In **Publish**, select the mode of publishing, either *Code* or *Docker Container*.
- 3. Select a **Runtime stack**.
- 4. Select the version of the Runtime stack in **Version**.
- 5. Select the **Region** in which you want to create the function app.
- 3. Select **OK** to create the app.
- 3. Under **Operating System**, select the type of Operating system that you're currently using. Azure recommends the type of Operating system based on your runtime stack selection.
- 4. Under **Plan**, select the type of plan that you want to use for the function app. Choose from the following options:
+ - Under **Project Details**, select the **Subscription** for which you want to create the Function app and the **Resource Group** to contain the app.
+ - Under **Instance details**, do the following:
+ - Enter the name of the Function app. *.azurewebsites.net* is appended to this name.
+ - In **Publish**, select the mode of publishing, either *Code* or *Docker Container*.
+ - Select a **Runtime stack**.
+ - Select the version of the Runtime stack in **Version**.
+ - Select the **Region** in which you want to create the function app.
+ - Select **OK** to create the app.
+ - Under **Operating System**, select the operating system that you're currently using. Azure recommends an operating system based on your runtime stack selection.
+ - Under **Plan**, select the type of plan that you want to use for the function app. Choose from the following options:
- Consumption (Serverless) - For event-driven scaling at minimal cost. - Functions Premium - For enterprise-level, serverless applications with event-based scaling and network isolation. - App Service Plan - For reusing compute from an existing app service plan.
-3. Select **Review + create** to create the app.
+
+ :::image type="content" source="./media/network-watcher-alert-triggered-packet-capture/create-function-app-basics.png" alt-text="Screenshot of the Create function app page in the Azure portal." lightbox="./media/network-watcher-alert-triggered-packet-capture/create-function-app-basics.png":::
+
+1. Select **Review + create** to create the app.
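As an alternative to the portal wizard, a function app on a Consumption plan can also be created with the Azure CLI. The resource names, region, and PowerShell runtime below are illustrative assumptions, not values taken from this article:

```azurecli-interactive
# A general-purpose storage account is required by every function app.
az storage account create \
  --resource-group myResourceGroup \
  --name mystorageacct123 \
  --location eastus \
  --sku Standard_LRS

# Create the function app on a Consumption (serverless) plan.
az functionapp create \
  --resource-group myResourceGroup \
  --name myAlertFunctionApp \
  --storage-account mystorageacct123 \
  --consumption-plan-location eastus \
  --runtime powershell \
  --functions-version 4
```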
### Create an Azure function
To create an Azure function to process the alert and create a packet capture, fo
2. Select **Develop in portal** from the **Development environment** drop-down. 3. Under **Select a template**, select **HTTP Trigger**.
-4. In the **Template details** section, do the following:
- 1. Enter the name of the function in the **New function** field.
- 2. Select **Function** as the **Authorization level** and select **Create**.
+4. In the **Template details** section:
+ - Enter the name of the function in the **New function** field.
+ - Select **Function** as the **Authorization level** and select **Create**.
5. After the function is created, go to the function and select **Code + Test**. :::image type="content" source="./media/network-watcher-alert-triggered-packet-capture/code-test.png" alt-text="Screenshot of the Code + Test screen.":::
network-watcher Vnet Flow Logs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/vnet-flow-logs-cli.md
Previously updated : 08/16/2023 Last updated : 02/14/2024 +
+#CustomerIntent: As an Azure administrator, I want to log my virtual network IP traffic using Network Watcher VNet flow logs so that I can analyze it later.
# Create, change, enable, disable, or delete VNet flow logs using the Azure CLI
az network watcher flow-log show --name myVNetFlowLog --resource-group NetworkWa
## Download a flow log
-To access and download VNet flow logs from your storage account, you can use Azure Storage Explorer. Fore more information, see [Get started with Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md).
+To download VNet flow logs from your storage account, use the [az storage blob download](/cli/azure/storage/blob#az-storage-blob-download) command.
-VNet flow log files saved to a storage account follow the logging path shown in the following example:
+VNet flow log files are saved to the storage account at the following path:
``` https://{storageAccountName}.blob.core.windows.net/insights-logs-flowlogflowevent/flowLogResourceID=/SUBSCRIPTIONS/{subscriptionID}/RESOURCEGROUPS/NETWORKWATCHERRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKWATCHERS/NETWORKWATCHER_{Region}/FLOWLOGS/{FlowlogResourceName}/y={year}/m={month}/d={day}/h={hour}/m=00/macAddress={macAddress}/PT1H.json ```
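For example, assuming placeholder values for the storage account name and the path segments (subscription ID, region, flow log name, date, and MAC address), a single hourly log blob can be downloaded like this:

```azurecli-interactive
# Download one hourly flow log blob (PT1H.json) to the current directory.
# All bracketed values are placeholders; replace them with your own.
az storage blob download \
  --account-name <storageAccountName> \
  --container-name insights-logs-flowlogflowevent \
  --name "flowLogResourceID=/SUBSCRIPTIONS/<subscriptionID>/RESOURCEGROUPS/NETWORKWATCHERRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKWATCHERS/NETWORKWATCHER_<region>/FLOWLOGS/<flowLogName>/y=<year>/m=<month>/d=<day>/h=<hour>/m=00/macAddress=<macAddress>/PT1H.json" \
  --file PT1H.json \
  --auth-mode login
```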
+> [!NOTE]
+> You can also access and download VNet flow logs files from the storage account container using the Azure Storage Explorer. Storage Explorer is a standalone app that you can conveniently use to access and work with Azure Storage data. For more information, see [Get started with Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md).
+ ## Disable traffic analytics on flow log resource To disable traffic analytics on the flow log resource and continue to generate and save VNet flow logs to a storage account, use [az network watcher flow-log update](/cli/azure/network/watcher/flow-log#az-network-watcher-flow-log-update).
network-watcher Vnet Flow Logs Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/vnet-flow-logs-powershell.md
Previously updated : 08/16/2023 Last updated : 02/14/2024 +
+#CustomerIntent: As an Azure administrator, I want to log my virtual network IP traffic using Network Watcher VNet flow logs so that I can analyze it later.
# Create, change, enable, disable, or delete VNet flow logs using Azure PowerShell
Get-AzNetworkWatcherFlowLog -NetworkWatcherName NetworkWatcher_eastus -ResourceG
## Download a flow log
-To access and download VNet flow logs from your storage account, you can use Azure Storage Explorer. Fore more information, see [Get started with Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md).
+To download VNet flow logs from your storage account, use [Get-AzStorageBlobContent](/powershell/module/az.storage/get-azstorageblobcontent) cmdlet.
-VNet flow log files saved to a storage account follow the logging path shown in the following example:
+VNet flow log files are saved to the storage account at the following path:
``` https://{storageAccountName}.blob.core.windows.net/insights-logs-flowlogflowevent/flowLogResourceID=/SUBSCRIPTIONS/{subscriptionID}/RESOURCEGROUPS/NETWORKWATCHERRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKWATCHERS/NETWORKWATCHER_{Region}/FLOWLOGS/{FlowlogResourceName}/y={year}/m={month}/d={day}/h={hour}/m=00/macAddress={macAddress}/PT1H.json ```
+> [!NOTE]
+> You can also access and download VNet flow logs files from the storage account container using the Azure Storage Explorer. Storage Explorer is a standalone app that you can conveniently use to access and work with Azure Storage data. For more information, see [Get started with Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md).
+ ## Disable traffic analytics on flow log resource To disable traffic analytics on the flow log resource and continue to generate and save VNet flow logs to storage account, use [Set-AzNetworkWatcherFlowLog](/powershell/module/az.network/set-aznetworkwatcherflowlog).
To delete a VNet flow log resource, use [Remove-AzNetworkWatcherFlowLog](/powers
Remove-AzNetworkWatcherFlowLog -Name myVNetFlowLog -NetworkWatcherName NetworkWatcher_eastus -ResourceGroupName NetworkWatcherRG ```
-## Next steps
+## Related content
- To learn about traffic analytics, see [Traffic analytics](traffic-analytics.md). - To learn how to use Azure built-in policies to audit or enable traffic analytics, see [Manage traffic analytics using Azure Policy](traffic-analytics-policy-portal.md).
openshift Responsibility Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/responsibility-matrix.md
Microsoft and Red Hat are responsible for enabling changes to the cluster infras
<td> <ul>
-<li>Use the provided OpenShift Cluster Manager controls to add or remove additional worker nodes as required.
+<li>Add or remove additional worker nodes as required.
<li>Respond to Microsoft and Red Hat notifications regarding cluster resource requirements. </li>
partner-solutions New Relic How To Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/new-relic/new-relic-how-to-manage.md
For each virtual machine, the following info appears:
> [!NOTE] > If a virtual machine shows that an agent is installed, but the option **Uninstall extension** is disabled, the agent was configured through a different New Relic resource in the same Azure subscription. To make any changes, go to the other New Relic resource in the Azure subscription.
-## Monitor virtual machine scale sets by using the New Relic agent
+## Monitor Azure Virtual Machine Scale Sets by using the New Relic agent
-You can install New Relic agent on virtual machine scale sets as an extension. Select **Virtual Machine Scale Sets** under **New Relic account config** in the Resource menu. In the working pane, you see a list of all virtual machine scale sets in the subscription.
-Virtual Machine Scale Sets (VMSS) is an Azure Compute resource which can be used to deploy and manage a set of identical VMs. Please familiarize yourself with the Azure resource [here](../../virtual-machine-scale-sets/overview.md) and the orchestration modes available [here](../../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md).
+You can install the New Relic agent on Azure Virtual Machine Scale Sets as an extension.
-The native integration can be used to install agent on both the uniform and flexible scale-sets. The new instances (VMs) of a scale set, in any mode, will receive the agent extension in the event of a scale-up scenario. VMSS resources in a uniform orchestration mode supports Automatic, Rolling, and Manual upgrade policy while resources in Flexible orchestration mode only supports manual upgrade today. In case, a manual upgrade policy is set for a resource, please upgrade the instances manually by installing the agent extension for the already scaled up instances. The auto-scaling and instance orchestration guide can be found [here](../../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#autoscaling-and-instance-orchestration)
+1. Select **Virtual Machine Scale Sets** under **New Relic account config** in the Resource menu.
+1. In the working pane, you see a list of all virtual machine scale sets in the subscription.
+
+Virtual Machine Scale Sets is an Azure Compute resource that can be used to deploy and manage a set of identical VMs. For more information, see [Virtual Machine Scale Sets](../../virtual-machine-scale-sets/overview.md).
+
+For more information on the available orchestration modes, see [Orchestration modes for Virtual Machine Scale Sets](../../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md).
+
+Use the native integration to install the agent on both uniform and flexible scale sets. New instances (VMs) of a scale set, in either mode, receive the agent extension during scale-up. Virtual Machine Scale Sets resources in uniform orchestration mode support the _Automatic_, _Rolling_, and _Manual_ upgrade policies. Resources in Flexible orchestration mode currently support only manual upgrade.
+
+If a manual upgrade policy is set for a resource, upgrade the existing instances manually by installing the agent extension on them. For more information on autoscaling and instance orchestration, see [Autoscaling and instance orchestration](../../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#autoscaling-and-instance-orchestration).
> [!NOTE]
-> In manual upgrade policy, pre-existing VM instances will not receive the extension automatically. This will show the agent status as **Partially Installed**. Please upgrade the VM instances by manually installing extension on them from the VM extensions blade or by going to 'VMSS resource/Instances' view.
+> In manual upgrade policy, pre-existing VM instances don't receive the extension automatically. The agent status shows as **Partially Installed**. Upgrade the VM instances by manually installing the extension on them from the VM extensions Resource menu, or go to specific Virtual Machine Scale Sets and select **Instances** from the Resource menu.
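For scale sets with a manual upgrade policy, one general way to push the latest scale set model (including a newly added extension) to existing instances is the Azure CLI call below. This is a generic Virtual Machine Scale Sets pattern, not a step defined by the New Relic integration, and the resource names are placeholders:

```azurecli-interactive
# Apply the latest scale set model, including newly added extensions,
# to all existing instances of a scale set that uses a Manual upgrade policy.
az vmss update-instances \
  --resource-group myResourceGroup \
  --name myScaleSet \
  --instance-ids "*"
```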
> [!NOTE]
-> The agent installation dashboard will support the automatic and rolling upgrade policy for Flex orchestration mode in the next release when similar support is available from VMSS Flex resources.
+> The agent installation dashboard will support the automatic and rolling upgrade policies for Flexible orchestration mode in a future release, when similar support becomes available from Virtual Machine Scale Sets Flex resources.
## Monitor app services by using the New Relic agent
For each app service, the following information appears:
|--|-| | **Resource name** | App service name.| | **Resource status** | Indicates whether the App service is running or stopped. The New Relic agent can be installed only on app services that are running.|
- | **App Service plan** | The plan that's configured for the app service.|
+ | **App Service plan** | The plan configured for the app service.|
| **Agent status** | Status of the agent. | To install the New Relic agent, select the app service and then select **Install Extension**. The application settings for the selected app service are updated, and the app service is restarted to complete the configuration of the New Relic agent.
payment-hsm Access Payshield Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/access-payshield-manager.md
ms.devlang: azurecli Previously updated : 01/31/2024 Last updated : 01/30/2024 # Tutorial: Use a VPN to access the payShield manager for your payment HSM
payment-hsm Change Performance Level https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/change-performance-level.md
Previously updated : 09/12/2022 Last updated : 01/30/2024 # How to change the performance level of a payment HSM
-Azure Payment HSM supports several SKUs; for a list, see [Azure Payment HSM overview: supported SKUs](support-guide.md#supported-skus). The performance license level of your payment HSM is initially determined by the SKU you specify during the creation process.
+Azure Payment HSM supports several SKUs; for a list, see [Azure Payment HSM overview: supported SKUs](support-guide.md#supported-skus). The SKU you specify during the creation process initially determines the performance license level of your payment HSM.
-You can change performance level of an existing payment HSM by changing its SKU. There will be no interruption in your production payment HSMs while performance level is being updated.
+You can change the performance level of an existing payment HSM by changing its SKU. There is no interruption to your production payment HSMs while the performance level is being updated.
The SKU of a payment HSM can be updated through ARMClient and PowerShell. ## Updating the SKU via ARMClient
-You can update the SKU of your payment HSM using the [Azure Resource Manager client tool](https://github.com/projectkudu/ARMClient), which is a simple command line tool that calls the Azure Resource Manager API. Installation instructions are at <https://github.com/projectkudu/ARMClient>.
+You can update the SKU of your payment HSM using the [Azure Resource Manager client tool](https://github.com/projectkudu/ARMClient), which is a simple command line tool that calls the Azure Resource Manager API. Installation instructions are at <https://github.com/projectkudu/ARMClient>.
Once installed, you can use the following command:
payment-hsm Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/getting-started.md
tags: azure-resource-manager Previously updated : 01/25/2024 Last updated : 01/30/2024
This article provides steps and information necessary to get started with Azure
3. Provide your contact information to the Microsoft account team and the Azure Payment HSM Product Manager [via email](mailto:paymentHSMRequest@microsoft.com), so they can set up your Thales support account.
- A Thales Customer ID will be created, so you can submit payShield 10K support issues as well as download documentation, software and firmware from Thales portal. The Thales Customer ID can be used by customer team to create individual account access to Thales support portal.
+ A Thales Customer ID is created, so you can submit payShield 10K support issues as well as download documentation, software, and firmware from the Thales portal. The customer team can use the Thales Customer ID to create individual account access to the Thales support portal.
| Email Form | |--|
This article provides steps and information necessary to get started with Azure
| Telephone No. (with Country Code): | | Is it state owned/governmental: Y / N |Located in a Free trade zone: Y / N|
-
4. You must next engage with the Microsoft CSAs to plan your deployment, and to understand the networking requirements and constraints/workarounds before onboarding the service. For details, see: - [Azure Payment HSM deployment scenarios](deployment-scenarios.md) - [Solution design for Azure Payment HSM](solution-design.md) - [Azure Payment HSM "fastpathenabled" feature flag and tag](fastpathenabled.md) - [Azure Payment HSM traffic inspection](inspect-traffic.md)
-
-5. Contact Microsoft support to get your subscription approved and receive feature registration, to access the Azure payment HSM service. See [Register the Azure Payment HSM resource providers](register-payment-hsm-resource-providers.md?tabs=azure-cli). You will not be charged at this step.
-6. Follow the [Tutorials](create-payment-hsm.md) and [How-To Guides](register-payment-hsm-resource-providers.md) to create payment HSMs. Customer billing will start when the HSM resource is created.
+5. Contact Microsoft support to get your subscription approved and receive feature registration, to access the Azure payment HSM service. See [Register the Azure Payment HSM resource providers](register-payment-hsm-resource-providers.md?tabs=azure-cli). There is no charge at this step.
+6. To create payment HSMs, follow the [Tutorials](create-payment-hsm.md) and [How-To Guides](register-payment-hsm-resource-providers.md). Customer billing starts when the HSM resource is created.
7. Upgrade the payShield 10K firmware to their desired version. 8. Review the support process and scope here for Microsoft support and Thales's support: [Azure Payment HSM Service support guide ](support-guide.md).
-9. Monitor your payShield 10K using standard SNMP V3 tools. payShield Monitor is an additional product available to provide continuous monitoring of HSMs. Contact Thales Sales rep for licensing information.
+9. Monitor your payShield 10K using standard SNMP V3 tools. payShield Monitor is another product available to provide continuous monitoring of HSMs. Contact a Thales sales representative for licensing information.
## Next steps
payment-hsm Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/overview.md
tags: azure-resource-manager
Previously updated : 01/31/2024 Last updated : 01/30/2024 # What is Azure Payment HSM?
payment-hsm Peer Vnets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/peer-vnets.md
Previously updated : 01/31/2024 Last updated : 01/30/2024
payment-hsm Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/quickstart-powershell.md
Previously updated : 01/31/2024 Last updated : 01/30/2024 ms.devlang: azurepowershell
payment-hsm Quickstart Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/quickstart-template.md
description: Quickstart showing how to create Azure Payment HSM using Resource M
Previously updated : 01/31/2024 Last updated : 01/30/2024 tags: azure-resource-manager
payment-hsm Remove Payment Hsm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/remove-payment-hsm.md
Previously updated : 01/31/2024 Last updated : 01/30/2024
payment-hsm Reuse Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/reuse-vnet.md
Previously updated : 01/31/2024 Last updated : 01/30/2024 # How to reuse an existing virtual network
You can create a payment HSM on an existing virtual network by skipping the "Cre
# [Azure CLI](#tab/azure-cli)
-To create a subnet, you must know the name, resource group, and address space of the existing virtual network. To find them, use the Azure CLI [az network vnet list](/cli/azure/network/vnet#az-network-vnet-list) command. The output is easier to read if you format it as a table using the -o flag:
+To create a subnet, you must know the name, resource group, and address space of the existing virtual network. To find them, use the Azure CLI [az network vnet list](/cli/azure/network/vnet#az-network-vnet-list) command. The output is easier to read if you format it as a table using the -o flag:
```azurecli-interactive az network vnet list -o table
To verify that the VNet and subnet were created correctly, use the Azure CLI [az
az network vnet subnet show -g "myResourceGroup" --vnet-name "myVNet" -n myPHSMSubnet ```
-Make note of the subnet's ID, as it is needed for the next step. The ID of the subnet ends with the name of the subnet:
+Make note of the subnet's ID, as it is needed for the next step. The ID of the subnet ends with the name of the subnet:
```json "id": "/subscriptions/<subscriptionID>/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVNet/subnets/myPHSMSubnet",
Make note of the subnet's ID, as it is needed for the next step. The ID of the
# [Azure PowerShell](#tab/azure-powershell)
-To create a subnet, you must know the name, resource group, and address space of the existing virtual network. To find them, use the Azure PowerShell [Get-AzVirtualNetwork](/powershell/module/az.network/get-azvirtualnetwork) cmdlet
+To create a subnet, you must know the name, resource group, and address space of the existing virtual network. To find them, use the Azure PowerShell [Get-AzVirtualNetwork](/powershell/module/az.network/get-azvirtualnetwork) cmdlet.
```azurepowershell-interactive Get-AzVirtualNetwork
To verify that the subnet was added correctly, use the Azure PowerShell [Get-AzV
Get-AzVirtualNetwork -Name "myVNet" -ResourceGroupName "myResourceGroup" ```
-Make note of the subnet's ID, as it is needed for the next step. The ID of the subnet ends with the name of the subnet:
+Make note of the subnet's ID, as it is needed for the next step. The ID of the subnet ends with the name of the subnet:
```json "Id": "/subscriptions/<subscriptionID>/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVNet/subnets/myPHSMSubnet",
Make note of the subnet's ID, as it is needed for the next step. The ID of the
## Create a payment HSM
-Now that you've added a subnet to your existing virtual network, you can create a payment HSM by following the steps in [Create a payment HSM](create-payment-hsm.md#create-a-payment-hsm). You need the resource group; name and address space of the virtual network; and name, address space, and ID of the subnet.
+Now that the subnet is added to your existing virtual network, you can create a payment HSM by following the steps in [Create a payment HSM](create-payment-hsm.md#create-a-payment-hsm). You need the resource group; name and address space of the virtual network; and name, address space, and ID of the subnet.
## Next steps
payment-hsm Solution Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/solution-design.md
tags: azure-resource-manager Previously updated : 01/31/2024 Last updated : 01/30/2024
payment-hsm Support Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/support-guide.md
tags: azure-resource-manager Previously updated : 01/31/2024 Last updated : 01/30/2024
Microsoft works with Thales to ensure that customers meet the prerequisites befo
The only smart cards compatible with the ciphers used to enable over-network use are smart cards that have a blue band and are labeled "payShield Manager Card". - If a customer needs to purchase a payShield Trusted Management Device (TMD), they should contact their Thales representatives or find their contacts through the [Thales contact page](https://cpl.thalesgroup.com/contact-us).
+- Customers must download and review the "Hosted HSM End User Guide," which is available through the Thales CPL Customer Support Portal. The Hosted HSM End User Guide provides more details on the changes to payShield to this service.
- Customers must review the "Azure Payment HSM - Get Ready for payShield 10K" guide that they received from Microsoft. (Customers who do not have the guide may request it from [Microsoft Support](#microsoft-support).) - If a customer is new to payShield or the remote management option, they should take the formal training courses available from Thales and its approved partners.-- If a customer is using payShield on premises today with custom firmware, they must conduct a porting exercise to update the firmware to a version compatible with the Azure deployment. Contact a Thales account manager to request a quote.
+- If a customer is using payShield on premises today with custom firmware, they must conduct a porting exercise to update the firmware to a version compatible with the Azure deployment. To request a quote, contact a Thales account manager.
## Firmware and license support
-The HSM base firmware installed is Thales payShield10K base software version 1.4a 1.8.3. Versions below 1.4a 1.8.3. are not supported. Customers must ensure that they only upgrade to a firmware version that meets their compliance requirements.
+The HSM base firmware installed is Thales payShield10K base software version 1.4a 1.8.3. Versions earlier than 1.4a 1.8.3 are not supported. Customers must ensure that they only upgrade to a firmware version that meets their compliance requirements.
The licenses included in Azure payment HSM:
Microsoft provides support for hardware issues, networking issues, and provision
Microsoft support can be contacted by creating a support ticket through the Azure portal: -- From the Azure portal homepage, select the "Support + troubleshooting" icon (a question mark in a circle) in the upper-right.
+- From the Azure portal homepage, select the "Support + troubleshooting" icon (a question mark in a circle).
- Select the "Help + Support" button. - Select "Create a support request."-- On the "New support request" screen, select "Technical" as your issue type, and then "Payment HSM" as the service type.
+- On the "New support request" screen, select "Technical" as your issue type, and then "Payment HSM" as the service type.
## Thales support Thales will provide payment application-level support including client software, HSM configuration and backup, and HSM operation support.
-All Azure Payment HSM customers have Enhanced Support Plan with Thales. The [Thales Welcome Pack for Authentication and Encryption Products](https://supportportal.thalesgroup.com/csm?sys_kb_id=1d2bac074f13f340102400818110c7d9&id=kb_article_view&sysparm_rank=1&sysparm_tsqueryId=e7f1843d87f3c9107b0664e80cbb352e&sysparm_article=KB0019882) is an important reference for customers, as it explains the Thales support plan, scope, and responsiveness. Download the [Thales Welcome Pack PDF](https://supportportal.thalesgroup.com/sys_attachment.do?sys_id=52681fca1b1e0110e2af520f6e4bcb96).
+All Azure Payment HSM customers have an Enhanced Support Plan with Thales. The [Thales Welcome Pack for Authentication and Encryption Products](https://supportportal.thalesgroup.com/csm?sys_kb_id=1d2bac074f13f340102400818110c7d9&id=kb_article_view&sysparm_rank=1&sysparm_tsqueryId=e7f1843d87f3c9107b0664e80cbb352e&sysparm_article=KB0019882) is an important reference for customers, as it explains the Thales support plan, scope, and responsiveness. Download the [Thales Welcome Pack PDF](https://supportportal.thalesgroup.com/sys_attachment.do?sys_id=52681fca1b1e0110e2af520f6e4bcb96).
Thales support can be contacted through the [Thales CPL Customer Support Portal](https://supportportal.thalesgroup.com/csm).
postgresql Concepts Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-extensions.md
The following extensions are available in Azure Database for PostgreSQL flexible
|[unaccent](https://www.postgresql.org/docs/13/unaccent.html) |Text search dictionary that removes accents |1.1 |1.1 |1.1 |1.1 |1.1 |1.1 | |[uuid-ossp](https://www.postgresql.org/docs/13/uuid-ossp.html) |Generate universally unique identifiers (UUIDs) |N/A |1.1 |1.1 |1.1 |1.1 |1.1 |
+> [!NOTE]
+> Several extensions listed as not applicable (N/A) for Postgres 16 are expected to be accessible in all regions by March 31, 2024.
+ ## dblink and postgres_fdw
private-link Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/availability.md
Title: Azure Private Link availability description: In this article, learn about which Azure services support Private Link.--++ Last updated 10/28/2022
private-link Configure Asg Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/configure-asg-private-endpoint.md
Title: Configure an application security group with a private endpoint description: Learn how to create a private endpoint with an application security group (ASG) or apply an ASG to an existing private endpoint.--++ Last updated 06/14/2022
private-link Create Private Endpoint Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-endpoint-bicep.md
Title: 'Quickstart: Create a private endpoint - Bicep' description: In this quickstart, you'll learn how to create a private endpoint using Bicep. -+ Last updated 05/02/2022-+ #Customer intent: As someone who has a basic network background but is new to Azure, I want to create a private endpoint using Bicep.
private-link Create Private Endpoint Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-endpoint-cli.md
Title: 'Quickstart: Create a private endpoint - Azure CLI' description: In this quickstart, you learn how to create a private endpoint using the Azure CLI. -+ Last updated 06/14/2023-+ #Customer intent: As someone who has a basic network background but is new to Azure, I want to create a private endpoint by using the Azure CLI.
private-link Create Private Endpoint Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-endpoint-portal.md
Title: 'Quickstart: Create a private endpoint - Azure portal' description: In this quickstart, learn how to create a private endpoint using the Azure portal.-+ Last updated 06/13/2023-+ #Customer intent: As someone who has a basic network background but is new to Azure, I want to create a private endpoint on a SQL server so that I can securely connect to it.
private-link Create Private Endpoint Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-endpoint-powershell.md
Title: 'Quickstart: Create a private endpoint - Azure PowerShell' description: In this quickstart, you learn how to create a private endpoint using Azure PowerShell. -+ Last updated 06/14/2023-+ #Customer intent: As someone who has a basic network background but is new to Azure, I want to create a private endpoint by using Azure PowerShell.
private-link Create Private Endpoint Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-endpoint-template.md
Title: 'Quickstart: Create a private endpoint - ARM template' description: In this quickstart, you'll learn how to create a private endpoint using an Azure Resource Manager template (ARM template). -+ Last updated 07/18/2022-+ #Customer intent: As someone who has a basic network background but is new to Azure, I want to create a private endpoint by using an ARM template.
private-link Create Private Link Service Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-link-service-bicep.md
Title: 'Quickstart: Create a private link service - Bicep'
description: In this quickstart, you use Bicep to create a private link service. -+ Last updated 04/29/2022-+
private-link Create Private Link Service Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-link-service-cli.md
Title: 'Quickstart - Create an Azure Private Link service - Azure CLI' description: In this quickstart, learn how to create an Azure Private Link service using Azure CLI. -+ Last updated 02/03/2023-+ ms.devlang: azurecli #Customer intent: As someone with a basic network background, but is new to Azure, I want to create an Azure private link service using Azure CLI
private-link Create Private Link Service Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-link-service-portal.md
Title: 'Quickstart - Create a Private Link service - Azure portal'
description: Learn how to create a Private Link service using the Azure portal in this quickstart. -+ Last updated 08/29/2023-+ #Customer intent: As someone with a basic network background who's new to Azure, I want to create an Azure Private Link service by using the Azure portal
private-link Create Private Link Service Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-link-service-powershell.md
Title: 'Quickstart: Create an Azure private link service - Azure PowerShell' description: In this quickstart, learn how to create an Azure private link service using Azure PowerShell. -+ Last updated 06/23/2023-+ #Customer intent: As someone with a basic network background, but is new to Azure, I want to create an Azure private link service
private-link Create Private Link Service Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-link-service-template.md
Title: 'Quickstart: Create a private link service - ARM template' description: In this quickstart, you use an Azure Resource Manager template (ARM template) to create a private link service.-+ Last updated 03/30/2023-+
private-link Disable Private Endpoint Network Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/disable-private-endpoint-network-policy.md
Title: Manage network policies for private endpoints
description: Learn how to manage network policies for private endpoints. -+ Last updated 07/26/2023-+ ms.devlang: azurecli
private-link Disable Private Link Service Network Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/disable-private-link-service-network-policy.md
Title: 'Disable network policies for Azure Private Link service source IP address' description: Learn how to disable network policies for Azure Private Link. -+ Last updated 02/02/2023-+ ms.devlang: azurecli
private-link How To Approve Private Link Cross Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/how-to-approve-private-link-cross-subscription.md
Title: Approve private endpoint connections across subscriptions description: Get started learning how to approve and manage private endpoint connections across subscriptions by using Azure Private Link.--++ Last updated 01/11/2024
private-link Inspect Traffic With Azure Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/inspect-traffic-with-azure-firewall.md
Title: 'Azure Firewall scenarios to inspect traffic destined to a private endpoint' description: Learn about different scenarios to inspect traffic destined to a private endpoint using Azure Firewall.-+ Last updated 08/14/2023-+
private-link Manage Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/manage-private-endpoint.md
Title: Manage Azure private endpoints
description: Learn how to manage private endpoints in Azure. -+ Last updated 05/17/2022-+
private-link Private Endpoint Dns Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-dns-integration.md
Title: Azure Private Endpoint DNS integration description: Learn about Azure Private Endpoint DNS configuration scenarios. -+ Last updated 11/15/2023-+
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-dns.md
Title: Azure Private Endpoint private DNS zone values description: Learn about the private DNS zone values for Azure services that support private endpoints.--++ Last updated 11/15/2023
private-link Private Endpoint Export Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-export-dns.md
Title: Export DNS records for a private endpoint - Azure portal description: In this tutorial, learn how to export DNS records for a private endpoint in the Azure portal. --++ Last updated 07/25/2021
private-link Private Endpoint Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-overview.md
Title: What is a private endpoint?
description: In this article, you learn how to use the Private Endpoint feature of Azure Private Link. -+ Last updated 10/13/2023-+ #Customer intent: As someone who has a basic network background but is new to Azure, I want to understand the capabilities of private endpoints so that I can securely connect to my Azure PaaS services within the virtual network.
private-link Private Link Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-link-overview.md
Title: What is Azure Private Link? description: Overview of Azure Private Link features, architecture, and implementation. Learn how Azure Private Endpoints and Azure Private Link service works and how to use them. -+ Last updated 01/17/2023-+
private-link Private Link Service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-link-service-overview.md
Title: What is Azure Private Link service? description: Learn about Azure Private Link service. -+ Last updated 10/27/2022-+
private-link Rbac Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/rbac-permissions.md
Title: Azure RBAC permissions for Azure Private Link description: Get started learning about the Azure RBAC permissions needed to deploy a private endpoint and private link service.--++ Last updated 5/25/2021
private-link Troubleshoot Private Endpoint Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/troubleshoot-private-endpoint-connectivity.md
Title: Troubleshoot Azure Private Endpoint connectivity problems description: Step-by-step guidance to diagnose private endpoint connectivity-+ Last updated 03/28/2023-+ # Troubleshoot Azure Private Endpoint connectivity problems
private-link Troubleshoot Private Link Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/troubleshoot-private-link-connectivity.md
Title: Troubleshoot Azure Private Link Service connectivity problems description: Step-by-step guidance to diagnose private link connectivity-+ Last updated 03/29/2020-+ # Troubleshoot Azure Private Link Service connectivity problems
private-link Tutorial Dns On Premises Private Resolver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/tutorial-dns-on-premises-private-resolver.md
Title: 'Tutorial: Create a private endpoint DNS infrastructure with Azure Private Resolver for an on-premises workload' description: Learn how to deploy a private endpoint with an Azure Private resolver for an on-premises workload.--++ Last updated 08/29/2023
private-link Tutorial Inspect Traffic Azure Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/tutorial-inspect-traffic-azure-firewall.md
Title: 'Tutorial: Inspect private endpoint traffic with Azure Firewall' description: Learn how to inspect private endpoint traffic with Azure Firewall.--++
private-link Tutorial Private Endpoint Sql Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/tutorial-private-endpoint-sql-cli.md
Title: 'Tutorial: Connect to an Azure SQL server using an Azure Private Endpoint - Azure CLI' description: Use this tutorial to learn how to create an Azure SQL server with a private endpoint using Azure CLI -+ # Customer intent: As someone with a basic network background, but is new to Azure, I want to create a private endpoint on a SQL server so that I can securely connect to it. Last updated 11/03/2020-+
private-link Tutorial Private Endpoint Sql Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/tutorial-private-endpoint-sql-portal.md
Title: 'Tutorial: Connect to an Azure SQL server using an Azure Private Endpoint - Azure portal' description: Get started with this tutorial to learn how to connect to a storage account privately via Azure Private Endpoint using the Azure portal. -+ Last updated 08/30/2023-+ # Customer intent: As someone with a basic network background, but is new to Azure, I want to create a private endpoint on a SQL server so that I can securely connect to it.
private-link Tutorial Private Endpoint Sql Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/tutorial-private-endpoint-sql-powershell.md
Title: 'Tutorial: Connect to an Azure SQL server using an Azure Private Endpoint - PowerShell' description: Use this tutorial to learn how to create an Azure SQL server with a private endpoint using Azure PowerShell -+ # Customer intent: As someone with a basic network background, but is new to Azure, I want to create a private endpoint on a SQL server so that I can securely connect to it. Last updated 10/31/2020-+
private-link Tutorial Private Endpoint Storage Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/tutorial-private-endpoint-storage-portal.md
Title: 'Tutorial: Connect to a storage account using an Azure Private Endpoint' description: Get started with this tutorial using Azure Private endpoint to connect to a storage account privately.--++ Last updated 07/18/2023
reliability Reliability Guidance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-guidance-overview.md
For a more detailed overview of reliability principles in Azure, see [Reliabilit
|Azure DevOps|| [Azure DevOps Data protection - data availability](/azure/devops/organizations/security/data-protection?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json&preserve-view=true&#data-availability)| |Azure Elastic SAN|[Availability zone support](reliability-elastic-san.md#availability-zone-support)|[Disaster recovery and business continuity](reliability-elastic-san.md#disaster-recovery-and-business-continuity)| |Azure Health Data Services - Azure API for FHIR|| [Disaster recovery for Azure API for FHIR](../healthcare-apis/azure-api-for-fhir/disaster-recovery.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) |
+|Azure Health Insights|[Reliability in Azure Health Insights](reliability-health-insights.md)|[Reliability in Azure Health Insights](reliability-health-insights.md)|
|Azure IoT Hub| [IoT Hub high availability and disaster recovery](../iot-hub/iot-hub-ha-dr.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [IoT Hub high availability and disaster recovery](../iot-hub/iot-hub-ha-dr.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) | |Azure Machine Learning Service|| [Failover for business continuity and disaster recovery](../machine-learning/v1/how-to-high-availability-machine-learning.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) | |Azure NetApp Files|| [Manage disaster recovery using cross-region replication](../azure-netapp-files/cross-region-replication-manage-disaster-recovery.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) |
reliability Reliability Health Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-health-insights.md
+
+ Title: Reliability in Azure AI Health Insights
+
+description: This article describes reliability in the Azure AI Health Insights service.
+++++ Last updated : 02/06/2024++++
+# Reliability in Azure AI Health Insights
+
+This article describes reliability support in Azure AI Health Insights, and covers both regional reliability with availability zones and cross-region resiliency with disaster recovery. For a more detailed overview of reliability in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview).
+
+When you create a Health Insights resource in the Azure portal, you specify a region. From then on, your resource and all of its operations stay associated with that particular Azure region. It's rare, but not impossible, to encounter a network issue that hits an entire region.
+If your solution must always be available, design it to either fail over to another region or split the workload between two or more regions.
++
+## Availability zone support
+Azure availability zones are at least three physically separate groups of datacenters within each Azure region. Datacenters within each zone are equipped with independent power, cooling, and networking infrastructure. Availability zones are designed so that if one zone is affected by a local failure, regional services, capacity, and high availability are supported by the remaining two zones.
+
+Failures can range from software and hardware failures to events such as earthquakes, floods, and fires. Tolerance to failures is achieved with redundancy and logical isolation of Azure services. For more detailed information on availability zones in Azure, see Regions and availability zones.
+
+Azure availability zone-enabled services are designed to provide the right level of reliability and flexibility. Azure AI Health Insights supports a 'zonal' configuration, which means instances are pinned to a specific zone.
+++
+## Zone down experience
+During a zone-wide outage, expect a brief degradation of performance until the service's self-healing rebalances underlying capacity to adjust to healthy zones. Recovery isn't dependent on zone restoration; Microsoft-managed self-healing is expected to compensate for the lost zone by using capacity from other zones.
+++
+## Cross-region disaster recovery in multi-region geography
+Disaster recovery (DR) is about recovering from high-impact events, such as natural disasters or failed deployments that result in downtime. Regardless of the cause, the best remedy for a disaster is a well-defined and tested DR plan and an application design that actively supports DR.
+
+When it comes to DR, Microsoft uses the [shared responsibility model](/azure/reliability/business-continuity-management-program#shared-responsibility-model). In a shared responsibility model, Microsoft ensures that the baseline infrastructure and platform services are available. You're responsible for setting up a disaster recovery plan that works for your workload.
+
+Azure AI Health Insights doesn't store data long term; data is retained only while a request is being processed. If a region failure occurs, all data associated with in-progress requests is lost.
+If your solution must always be available, design it to either fail over to another region or split the workload between two or more regions. When you plan to deploy your application for DR, it's helpful to understand Azure regions and geographies. For more information, see [Azure cross-region replication](/azure/reliability/cross-region-replication-azure).
+++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Reliability in Azure](/azure/availability-zones/overview)
role-based-access-control Scope Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/scope-overview.md
The scope consists of a series of identifiers separated by the slash (/) charact
``` - `{subscriptionId}` is the ID of the subscription to use (a GUID).-- `{resourcesGroupName}` is the name of the containing resource group.
+- `{resourceGroupName}` is the name of the containing resource group.
- `{providerName}` is the name of the [resource provider](../azure-resource-manager/management/azure-services-resource-providers.md) that handles the resource, then `{resourceType}` and `{resourceSubType*}` identify further levels within that resource provider. - `{resourceName}` is the last part of the string that identifies a specific resource.
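For illustration, here are hypothetical scope strings at the subscription, resource group, and resource levels (the GUID and resource names are placeholders), shown as a JSON array:

```json
[
  "/subscriptions/00000000-0000-0000-0000-000000000000",
  "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/example-rg",
  "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/example-rg/providers/Microsoft.Storage/storageAccounts/examplestorage"
]
```

Each scope contains the scopes below it, so a role assigned at the resource group scope also applies to the resources inside that group.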
role-based-access-control Tutorial Role Assignments User Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/tutorial-role-assignments-user-powershell.md
-+ Last updated 02/02/2019
To complete this tutorial, you will need:
- Permissions to create users in Microsoft Entra ID (or have an existing user) - [Azure Cloud Shell](../cloud-shell/quickstart-powershell.md)
+- [Microsoft Graph PowerShell SDK](/powershell/microsoftgraph/installation)
## Role assignments
To assign a role, you need a user, group, or service principal. If you don't alr
1. In Azure Cloud Shell, create a password that complies with your password complexity requirements. ```azurepowershell
- $PasswordProfile = New-Object -TypeName Microsoft.Open.AzureAD.Model.PasswordProfile
- $PasswordProfile.Password = "Password"
+ $PasswordProfile = @{ Password = "<Password>" }
```
-1. Create a new user for your domain using the [New-AzureADUser](/powershell/module/azuread/new-azureaduser) command.
+1. Create a new user for your domain using the [New-MgUser](/powershell/module/microsoft.graph.users/new-mguser) command.
```azurepowershell
- New-AzureADUser -DisplayName "RBAC Tutorial User" -PasswordProfile $PasswordProfile `
- -UserPrincipalName "rbacuser@example.com" -AccountEnabled $true -MailNickName "rbacuser"
+ New-MgUser -DisplayName "RBAC Tutorial User" -PasswordProfile $PasswordProfile `
+ -UserPrincipalName "rbacuser@example.com" -AccountEnabled:$true -MailNickName "rbacuser"
```
-
- ```Example
- ObjectId DisplayName UserPrincipalName UserType
- -- -- -- --
- 11111111-1111-1111-1111-111111111111 RBAC Tutorial User rbacuser@example.com Member
+
+ ```output
+ DisplayName Id Mail UserPrincipalName
+ -- -- - --
+ RBAC Tutorial User 11111111-1111-1111-1111-111111111111 rbacuser@example.com
``` ## Create a resource group
To clean up the resources created by this tutorial, delete the resource group an
1. When asked to confirm, type **Y**. It will take a few seconds to delete.
-1. Delete the user using the [Remove-AzureADUser](/powershell/module/azuread/remove-azureaduser) command.
+1. Delete the user using the [Remove-MgUser](/powershell/module/microsoft.graph.users/remove-mguser) command.
```azurepowershell
- Remove-AzureADUser -ObjectId "rbacuser@example.com"
+ $User = Get-MgUser -Filter "DisplayName eq 'RBAC Tutorial User'"
+ Remove-MgUser -UserId $User.Id
``` ## Next steps
sap High Availability Guide Suse Pacemaker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse-pacemaker.md
If you want to deploy resources by using the Azure CLI or the Azure portal, you
### Set up an Azure shared disk SBD device
-1. **[A]** Install iSCSI package.
-
- ```bash
- sudo zypper install open-iscsi
- ```
-
-2. **[A]** Enable the iSCSI and SBD services.
+1. **[A]** Enable the SBD services.
```bash
- sudo systemctl enable iscsid
- sudo systemctl enable iscsi
sudo systemctl enable sbd ```
-3. **[A]** Make sure that the attached disk is available.
+2. **[A]** Make sure that the attached disk is available.
```bash # lsblk
If you want to deploy resources by using the Azure CLI or the Azure portal, you
[5:0:0:0] disk Msft Virtual Disk 1.0 /dev/sdc ```
-4. **[A]** Retrieve the IDs of the attached disks.
+3. **[A]** Retrieve the IDs of the attached disks.
```bash # ls -l /dev/disk/by-id/scsi-* | grep sdc
If you want to deploy resources by using the Azure CLI or the Azure portal, you
The commands list device IDs for the SBD device. We recommend using the ID that starts with scsi-3. In the preceding example, the ID is **/dev/disk/by-id/scsi-3600224804208a67da8073b2a9728af19**.
-5. **[1]** Create the SBD device.
+4. **[1]** Create the SBD device.
Use the device ID from step 3 to create the new SBD devices on the first cluster node.
If you want to deploy resources by using the Azure CLI or the Azure portal, you
# sudo sbd -d /dev/disk/by-id/scsi-3600224804208a67da8073b2a9728af19 -1 60 -4 120 create ```
-6. **[A]** Adapt the SBD configuration.
+5. **[A]** Adapt the SBD configuration.
a. Open the SBD config file.
If you want to deploy resources by using the Azure CLI or the Azure portal, you
> [!NOTE] > If the SBD_DELAY_START property value is set to "no", change the value to "yes". You must also check the SBD service file to ensure that the value of TimeoutStartSec is greater than the value of SBD_DELAY_START. For more information, see [SBD file configuration](https://documentation.suse.com/sle-ha/15-SP5/html/SLE-HA-all/cha-ha-storage-protect.html#pro-ha-storage-protect-sbd-config)
-7. Create the `softdog` configuration file.
+6. Create the `softdog` configuration file.
```bash echo softdog | sudo tee /etc/modules-load.d/softdog.conf ```
-8. Load the module.
+7. Load the module.
```bash sudo modprobe -v softdog
search Cognitive Search Concept Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-concept-intro.md
Last updated 01/30/2024 + # AI enrichment in Azure AI Search In Azure AI Search, *AI enrichment* refers to integration with [Azure AI services](/azure/ai-services/what-are-ai-services) to process content that isn't searchable in its raw form. Through enrichment, analysis and inference are used to create searchable content and structure where none previously existed.
-Because Azure AI Search is a text and vector search solution, the purpose of AI enrichment is to improve the utility of your content in search-related scenarios. Source content must be textual (you can't enrich vectors), but the content created by an enrichment pipeline can be vectorized and indexed in a vector store using skills like [Text Split skill](cognitive-search-skill-textsplit.md) for chunking and [AzureOpenAiEmbedding skill](cognitive-search-skill-azure-openai-embedding.md) for encoding.
+Because Azure AI Search is a text and vector search solution, the purpose of AI enrichment is to improve the utility of your content in search-related scenarios. Source content must be textual (you can't enrich vectors), but the content created by an enrichment pipeline can be vectorized and indexed in a vector store using skills like [Text Split skill](cognitive-search-skill-textsplit.md) for chunking and [AzureOpenAiEmbedding skill](cognitive-search-skill-azure-openai-embedding.md) for encoding.
+
+AI enrichment is based on [*skills*](cognitive-search-working-with-skillsets.md).
-Built-in skills apply the following transformation and processing to raw content:
+Built-in skills tap Azure AI services. They apply the following transformations and processing to raw content:
+ Translation and language detection for multi-lingual search + Entity recognition to extract people names, places, and other entities from large chunks of text
Built-in skills apply the following transformation and processing to raw content
+ Optical Character Recognition (OCR) to recognize printed and handwritten text in binary files + Image analysis to describe image content, and output the descriptions as searchable text fields
+Custom skills run your external code. Custom skills can be used for any custom processing that you want to include in the pipeline.
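To make the distinction concrete, here's a minimal skillset sketch with one built-in skill and one custom skill. The skillset name, context paths, output names, and the custom skill URI are hypothetical; see the skillset documentation for the complete syntax.

```json
{
  "name": "example-skillset",
  "description": "One built-in skill (language detection) and one custom skill (your external code)",
  "skills": [
    {
      "@odata.type": "#Microsoft.Skills.Text.LanguageDetectionSkill",
      "context": "/document",
      "inputs": [
        { "name": "text", "source": "/document/content" }
      ],
      "outputs": [
        { "name": "languageCode", "targetName": "language" }
      ]
    },
    {
      "@odata.type": "#Microsoft.Skills.Custom.WebApiSkill",
      "description": "Calls an external endpoint that you host for custom processing",
      "uri": "https://example.com/api/custom-enrichment",
      "context": "/document",
      "inputs": [
        { "name": "text", "source": "/document/content" }
      ],
      "outputs": [
        { "name": "customText", "targetName": "customText" }
      ]
    }
  ]
}
```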
+ AI enrichment is an extension of an [**indexer pipeline**](search-indexer-overview.md) that connects to Azure data sources. An enrichment pipeline has all of the components of an indexer pipeline (indexer, data source, index), plus a [**skillset**](cognitive-search-working-with-skillsets.md) that specifies atomic enrichment steps. The following diagram shows the progression of AI enrichment:
search Index Add Custom Analyzers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/index-add-custom-analyzers.md
In the table below, the token filters that are implemented using Apache Lucene a
|[shingle](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/shingle/ShingleFilter.html)|ShingleTokenFilter|Creates combinations of tokens as a single token.<br><br> **Options**<br><br> maxShingleSize (type: int) - Defaults to 2.<br><br> minShingleSize (type: int) - Defaults to 2.<br><br> outputUnigrams (type: bool) - if true, the output stream contains the input tokens (unigrams) as well as shingles. The default is true.<br><br> outputUnigramsIfNoShingles (type: bool) - If true, override the behavior of outputUnigrams==false for those times when no shingles are available. The default is false.<br><br> tokenSeparator (type: string) - The string to use when joining adjacent tokens to form a shingle. The default is a single empty space ` `. <br><br> filterToken (type: string) - The string to insert for each position for which there is no token. The default is `_`.| |[snowball](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/snowball/SnowballFilter.html)|SnowballTokenFilter|Snowball Token Filter.<br><br> **Options**<br><br> language (type: string) - Allowed values include: `armenian`, `basque`, `catalan`, `danish`, `dutch`, `english`, `finnish`, `french`, `german`, `german2`, `hungarian`, `italian`, `kp`, `lovins`, `norwegian`, `porter`, `portuguese`, `romanian`, `russian`, `spanish`, `swedish`, `turkish`| |[sorani_normalization](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/ckb/SoraniNormalizationFilter.html)|SoraniNormalizationTokenFilter|Normalizes the Unicode representation of `Sorani` text.<br><br> **Options**<br><br> None.|
-|stemmer|StemmerTokenFilter|Language-specific stemming filter.<br><br> **Options**<br><br> language (type: string) - Allowed values include: <br> - [`arabic`](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/ar/ArabicStemmer.html)<br>- [`armenian`](https://snowballstem.org/algorithms/armenian/stemmer.html)<br>- [`basque`](https://snowballstem.org/algorithms/basque/stemmer.html)<br>- [`brazilian`](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/br/BrazilianStemmer.html)<br>- `bulgarian`<br>- [`catalan`](https://snowballstem.org/algorithms/catalan/stemmer.html)<br>- [`czech`](https://portal.acm.org/citation.cfm?id=1598600)<br>- [`danish`](https://snowballstem.org/algorithms/danish/stemmer.html)<br>- [`dutch`](https://snowballstem.org/algorithms/dutch/stemmer.html)<br>- [`dutchKp`](https://snowballstem.org/algorithms/kraaij_pohlmann/stemmer.html)<br>- [`english`](https://snowballstem.org/algorithms/porter/stemmer.html)<br>- [`lightEnglish`](https://ciir.cs.umass.edu/pubfiles/ir-35.pdf)<br>- [`minimalEnglish`](https://www.researchgate.net/publication/220433848_How_effective_is_suffixing)<br>- [`possessiveEnglish`](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/en/EnglishPossessiveFilter.html)<br>- [`porter2`](https://snowballstem.org/algorithms/english/stemmer.html)<br>- [`lovins`](https://snowballstem.org/algorithms/lovins/stemmer.html)<br>- [`finnish`](https://snowballstem.org/algorithms/finnish/stemmer.html)<br>- `lightFinnish`<br>- [`french`](https://snowballstem.org/algorithms/french/stemmer.html)<br>- [`lightFrench`](https://dl.acm.org/citation.cfm?id=1141523)<br>- [`minimalFrench`](https://dl.acm.org/citation.cfm?id=318984)<br>- `galician`<br>- `minimalGalician`<br>- [`german`](https://snowballstem.org/algorithms/german/stemmer.html)<br>- [`german2`](https://snowballstem.org/algorithms/german2/stemmer.html)<br>- [`lightGerman`](https://dl.acm.org/citation.cfm?id=1141523)<br>- `minimalGerman`<br>- [`greek`](https://sais.se/mthprize/2007/ntais2007.pdf)<br>- `hindi`<br>- [`hungarian`](https://snowballstem.org/algorithms/hungarian/stemmer.html)<br>- [`lightHungarian`](https://dl.acm.org/citation.cfm?id=1141523&dl=ACM&coll=DL&CFID=179095584&CFTOKEN=80067181)<br>- [`indonesian`](https://eprints.illc.uva.nl/741/2/MoL-2003-03.text.pdf)<br>- [`irish`](https://snowballstem.org/algorithms/irish/stemmer.html)<br>- [`italian`](https://snowballstem.org/algorithms/italian/stemmer.html)<br>- [`lightItalian`](https://www.ercim.eu/publication/ws-proceedings/CLEF2/savoy.pdf)<br>- [`sorani`](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/ckb/SoraniStemmer.html)<br>- [`latvian`](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/lv/LatvianStemmer.html)<br>- [`norwegian`](https://snowballstem.org/algorithms/norwegian/stemmer.html)<br>- [`lightNorwegian`](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/no/NorwegianLightStemmer.html)<br>- [`minimalNorwegian`](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/no/NorwegianMinimalStemmer.html)<br>- [`lightNynorsk`](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/no/NorwegianLightStemmer.html)<br>- [`minimalNynorsk`](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/no/NorwegianMinimalStemmer.html)<br>- 
[`portuguese`](https://snowballstem.org/algorithms/portuguese/stemmer.html)<br>- [`lightPortuguese`](https://dl.acm.org/citation.cfm?id=1141523&dl=ACM&coll=DL&CFID=179095584&CFTOKEN=80067181)<br>- [`minimalPortuguese`](https://www.inf.ufrgs.br/~buriol/papers/Orengo_CLEF07.pdf)<br>- [`portugueseRslp`](https://www.inf.ufrgs.br/~viviane/rslp/index.htm)<br>- [`romanian`](https://snowballstem.org/otherapps/romanian/)<br>- [`russian`](https://snowballstem.org/algorithms/russian/stemmer.html)<br>- [`lightRussian`](https://doc.rero.ch/lm.php?url=1000%2C43%2C4%2C20091209094227-CA%2FDolamic_Ljiljana_-_Indexing_and_Searching_Strategies_for_the_Russian_20091209.pdf)<br>- [`spanish`](https://snowballstem.org/algorithms/spanish/stemmer.html)<br>- [`lightSpanish`](https://www.ercim.eu/publication/ws-proceedings/CLEF2/savoy.pdf)<br>- [`swedish`](https://snowballstem.org/algorithms/swedish/stemmer.html)<br>- `lightSwedish`<br>- [`turkish`](https://snowballstem.org/algorithms/turkish/stemmer.html)|
+|stemmer|StemmerTokenFilter|Language-specific stemming filter.<br><br> **Options**<br><br> language (type: string) - Allowed values include: <br> - [`arabic`](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/ar/ArabicStemmer.html)<br>- [`armenian`](https://snowballstem.org/algorithms/armenian/stemmer.html)<br>- [`basque`](https://snowballstem.org/algorithms/basque/stemmer.html)<br>- [`brazilian`](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/br/BrazilianStemmer.html)<br>- `bulgarian`<br>- [`catalan`](https://snowballstem.org/algorithms/catalan/stemmer.html)<br>- [`czech`](https://portal.acm.org/citation.cfm?id=1598600)<br>- [`danish`](https://snowballstem.org/algorithms/danish/stemmer.html)<br>- [`dutch`](https://snowballstem.org/algorithms/dutch/stemmer.html)<br>- [`dutchKp`](https://snowballstem.org/algorithms/kraaij_pohlmann/stemmer.html)<br>- [`english`](https://snowballstem.org/algorithms/porter/stemmer.html)<br>- [`lightEnglish`](https://ciir.cs.umass.edu/pubfiles/ir-35.pdf)<br>- [`minimalEnglish`](https://www.researchgate.net/publication/220433848_How_effective_is_suffixing)<br>- [`possessiveEnglish`](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/en/EnglishPossessiveFilter.html)<br>- [`porter2`](https://snowballstem.org/algorithms/english/stemmer.html)<br>- [`lovins`](https://snowballstem.org/algorithms/lovins/stemmer.html)<br>- [`finnish`](https://snowballstem.org/algorithms/finnish/stemmer.html)<br>- `lightFinnish`<br>- [`french`](https://snowballstem.org/algorithms/french/stemmer.html)<br>- [`lightFrench`](https://dl.acm.org/citation.cfm?id=1141523)<br>- [`minimalFrench`](https://dl.acm.org/citation.cfm?id=318984)<br>- `galician`<br>- `minimalGalician`<br>- [`german`](https://snowballstem.org/algorithms/german/stemmer.html)<br>- [`german2`](https://snowballstem.org/algorithms/german2/stemmer.html)<br>- [`lightGerman`](https://dl.acm.org/citation.cfm?id=1141523)<br>- `minimalGerman`<br>- [`greek`](https://sais.se/mthprize/2007/ntais2007.pdf)<br>- `hindi`<br>- [`hungarian`](https://snowballstem.org/algorithms/hungarian/stemmer.html)<br>- [`lightHungarian`](https://dl.acm.org/citation.cfm?id=1141523&dl=ACM&coll=DL&CFID=179095584&CFTOKEN=80067181)<br>- [`indonesian`](https://eprints.illc.uva.nl/741/2/MoL-2003-03.text.pdf)<br>- [`irish`](https://snowballstem.org/algorithms/irish/stemmer.html)<br>- [`italian`](https://snowballstem.org/algorithms/italian/stemmer.html)<br>- [`lightItalian`](https://www.ercim.eu/publication/ws-proceedings/CLEF2/savoy.pdf)<br>- [`sorani`](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/ckb/SoraniStemmer.html)<br>- [`latvian`](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/lv/LatvianStemmer.html)<br>- [`norwegian`](https://snowballstem.org/algorithms/norwegian/stemmer.html)<br>- [`lightNorwegian`](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/no/NorwegianLightStemmer.html)<br>- [`minimalNorwegian`](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/no/NorwegianMinimalStemmer.html)<br>- [`lightNynorsk`](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/no/NorwegianLightStemmer.html)<br>- [`minimalNynorsk`](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/no/NorwegianMinimalStemmer.html)<br>- 
[`portuguese`](https://snowballstem.org/algorithms/portuguese/stemmer.html)<br>- [`lightPortuguese`](https://dl.acm.org/citation.cfm?id=1141523&dl=ACM&coll=DL&CFID=179095584&CFTOKEN=80067181)<br>- [`minimalPortuguese`](https://web.archive.org/web/20230425141918/https://www.inf.ufrgs.br/~buriol/papers/Orengo_CLEF07.pdf)<br>- [`portugueseRslp`](https://web.archive.org/web/20230422082818/https://www.inf.ufrgs.br/~viviane/rslp/index.htm)<br>- [`romanian`](https://snowballstem.org/otherapps/romanian/)<br>- [`russian`](https://snowballstem.org/algorithms/russian/stemmer.html)<br>- [`lightRussian`](https://doc.rero.ch/lm.php?url=1000%2C43%2C4%2C20091209094227-CA%2FDolamic_Ljiljana_-_Indexing_and_Searching_Strategies_for_the_Russian_20091209.pdf)<br>- [`spanish`](https://snowballstem.org/algorithms/spanish/stemmer.html)<br>- [`lightSpanish`](https://www.ercim.eu/publication/ws-proceedings/CLEF2/savoy.pdf)<br>- [`swedish`](https://snowballstem.org/algorithms/swedish/stemmer.html)<br>- `lightSwedish`<br>- [`turkish`](https://snowballstem.org/algorithms/turkish/stemmer.html)|
|[stemmer_override](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/miscellaneous/StemmerOverrideFilter.html)|StemmerOverrideTokenFilter|Any dictionary-Stemmed terms are marked as keywords, which prevents stemming down the chain. Must be placed before any stemming filters.<br><br> **Options**<br><br> rules (type: string array) - Stemming rules in the following format `word => stem` for example `ran => run`. The default is an empty list. Required.| |[stopwords](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/core/StopFilter.html)|StopwordsTokenFilter|Removes stop words from a token stream. By default, the filter uses a predefined stop word list for English.<br><br> **Options**<br><br> stopwords (type: string array) - A list of stopwords. Can't be specified if a stopwordsList is specified.<br><br> stopwordsList (type: string) - A predefined list of stopwords. Can't be specified if `stopwords` is specified. Allowed values include:`arabic`, `armenian`, `basque`, `brazilian`, `bulgarian`, `catalan`, `czech`, `danish`, `dutch`, `english`, `finnish`, `french`, `galician`, `german`, `greek`, `hindi`, `hungarian`, `indonesian`, `irish`, `italian`, `latvian`, `norwegian`, `persian`, `portuguese`, `romanian`, `russian`, `sorani`, `spanish`, `swedish`, `thai`, `turkish`, default: `english`. Can't be specified if `stopwords` is specified. <br><br> ignoreCase (type: bool) - If true, all words are lower cased first. The default is false.<br><br> removeTrailing (type: bool) - If true, ignore the last search term if it's a stop word. The default is true. |[synonym](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/synonym/SynonymFilter.html)|SynonymTokenFilter|Matches single or multi word synonyms in a token stream.<br><br> **Options**<br><br> synonyms (type: string array) - Required. List of synonyms in one of the following two formats:<br><br> -incredible, unbelievable, fabulous => amazing - all terms on the left side of => symbol are replaced with all terms on its right side.<br><br> -incredible, unbelievable, fabulous, amazing - A comma-separated list of equivalent words. Set the expand option to change how this list is interpreted.<br><br> ignoreCase (type: bool) - Case-folds input for matching. The default is false.<br><br> expand (type: bool) - If true, all words in the list of synonyms (if => notation is not used) map to one another. <br>The following list: incredible, unbelievable, fabulous, amazing is equivalent to: incredible, unbelievable, fabulous, amazing => incredible, unbelievable, fabulous, amazing<br><br>- If false, the following list: incredible, unbelievable, fabulous, amazing are equivalent to: incredible, unbelievable, fabulous, amazing => incredible.|
search Performance Benchmarks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/performance-benchmarks.md
- ignite-2023 Previously updated : 01/19/2024 Last updated : 02/09/2024 # Azure AI Search performance benchmarks
-Azure AI Search's performance depends on a [variety of factors](search-performance-tips.md) including the size of your search service and the types of queries you're sending. To help estimate the size of search service needed for your workload, we've run several benchmarks to document the performance for different search services and configurations. *These benchmarks in no way guarantee a certain level of performance from your service but can give you an idea of the performance you can expect*.
+> [!IMPORTANT]
+> These benchmarks in no way guarantee a certain level of performance from your service. However, they can serve as a useful guide for estimating potential performance under similar configurations.
+>
+Azure AI Search's performance depends on a [variety of factors](search-performance-tips.md) including the size of your search service and the types of queries you're sending. To help estimate the size of search service needed for your workload, we've run several benchmarks to document the performance for different search services and configurations.
To cover a range of different use cases, we ran benchmarks for two main scenarios:
search Search What Is An Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-what-is-an-index.md
Title: Index overview
+ Title: Search index overview
description: Explains what is a search index in Azure AI Search and describes content, construction, physical expression, and the index schema.
Last updated 01/19/2024
-# Indexes in Azure AI Search
+# Search indexes in Azure AI Search
In Azure AI Search, a *search index* is your searchable content, available to the search engine for indexing, full text search, vector search, hybrid search, and filtered queries. An index is defined by a schema and saved to the search service, with data import following as a second step. This content exists within your search service, apart from your primary data stores, which is necessary for the millisecond response times expected in modern search applications. Except for indexer-driven indexing scenarios, the search service never connects to or queries your source data.
All indexing and query requests target an index. Endpoints are usually one of th
| `<your-service>.search.windows.net/indexes` | Targets the indexes collection. Used when creating, listing, or deleting an index. Admin rights are required for these operations, available through admin [API keys](search-security-api-keys.md) or a [Search Contributor role](search-security-rbac.md#built-in-roles-used-in-search). | | `<your-service>.search.windows.net/indexes/<your-index>/docs` | Targets the documents collection of a single index. Used when querying an index or data refresh. For queries, read rights are sufficient, and available through query API keys or a data reader role. For data refresh, admin rights are required. |
+Search subscribers, or the person who created the search service, can manage the search service in the Azure portal. An Azure subscription requires Contributor or above permissions to create or delete services. You can [sign in to the Azure portal](https://portal.azure.com) for a direct connection to your search service.
+
+For other clients, we recommend reviewing the quickstarts for connection steps:
+++ [Quickstart: REST](search-get-started-rest.md)++ [Quickstart: Azure SDKs](search-get-started-text.md)+ ## Next steps You can get hands-on experience creating an index using almost any sample or walkthrough for Azure AI Search. For starters, you could choose any of the quickstarts from the table of contents.
search Vector Search Filters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-filters.md
- ignite-2023 Previously updated : 11/01/2023 Last updated : 02/14/2024 # Filters in vector queries
-You can set a [vector filter modes on a vector query](vector-search-how-to-query.md) to specify whether you want filtering before or after query execution. Filters are set on and iterate over string and numeric fields attributed as `filterable` in the index, but the effects of filtering determine *what* the vector query executes over: the searchable space, or the documents in the search results.
+You can set [**vector filter modes on a vector query**](vector-search-how-to-query.md) to specify whether you want filtering before or after query execution.
+
+Filters determine the scope of a vector query. Filters are set on and iterate over nonvector string and numeric fields attributed as `filterable` in the index, but the filter mode determines *what* the vector query executes over: the entire searchable space, or the contents of a search result.
This article describes each filter mode and provides guidance on when to use each one.
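As a concrete sketch, the following query body applies a prefilter before vector query execution. It assumes the 2023-11-01 REST API and a hypothetical index with a filterable `category` field and a `contentVector` vector field; the vector is truncated for readability. Changing `vectorFilterMode` to `postFilter` applies the same filter after the vector query runs.

```json
{
  "select": "title, category",
  "filter": "category eq 'databases'",
  "vectorFilterMode": "preFilter",
  "vectorQueries": [
    {
      "kind": "vector",
      "vector": [0.012, -0.093, 0.047],
      "fields": "contentVector",
      "k": 5
    }
  ]
}
```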
To understand the conditions under which one filter mode performs better than th
For the small and medium workloads, we used a Standard 2 (S2) service with one partition and one replica. For the large workload, we used a Standard 3 (S3) service with 12 partitions and one replica.
-Indexes had an identical construction: one key field, one vector field, one text field, and one numeric filterable field.
+Indexes had an identical construction: one key field, one vector field, one text field, and one numeric filterable field. The following index is defined using the 2023-07-01-preview syntax.
```python def get_index_schema(self, index_name, dimensions):
def get_index_schema(self, index_name, dimensions):
"name": index_name, "fields": [ {"name": "id", "type": "Edm.String", "key": True, "searchable": True},
- {"name": "myvector", "type": "Collection(Edm.Single)", "dimensions": dimensions,
+ {"name": "content_vector", "type": "Collection(Edm.Single)", "dimensions": dimensions,
"searchable": True, "retrievable": True, "filterable": False, "facetable": False, "sortable": False, "vectorSearchConfiguration": "defaulthnsw"}, {"name": "text", "type": "Edm.String", "searchable": True, "filterable": False, "retrievable": True,
- "sortable": False, "facetable": False, "key": False},
+ "sortable": False, "facetable": False},
{"name": "score", "type": "Edm.Double", "searchable": False, "filterable": True,
- "retrievable": True, "sortable": True, "facetable": True, "key": False}
+ "retrievable": True, "sortable": True, "facetable": True}
], "vectorSearch": {
search Vector Search How To Create Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-how-to-create-index.md
api-key: {{admin-api-key}}
+## Update a vector store
+
+To update a vector store, modify the schema and if necessary, reload documents to populate new fields. APIs for schema updates include [Create or Update Index (REST)](/rest/api/searchservice/indexes/create-or-update), [CreateOrUpdateIndex](/dotnet/api/azure.search.documents.indexes.searchindexclient.createorupdateindexasync) in the Azure SDK for .NET, [create_or_update_index](/python/api/azure-search-documents/azure.search.documents.indexes.searchindexclient?view=azure-python#azure-search-documents-indexes-searchindexclient-create-or-update-index&preserve-view=true) in the Azure SDK for Python, and similar methods in other Azure SDKs.
+
+The standard guidance for updating an index is covered in [Drop and rebuild an index](search-howto-reindex.md).
+
+Key points include:
+++ Drop and rebuild is often required for updates to and deletion of existing fields.+++ However, you can update an existing schema with the following modifications, with no rebuild required (see the sketch after this list):+
+ + Add new fields to a fields collection.
+ + Add new vector configurations, assigned to new fields but not existing fields that have already been vectorized.
+ + Change "retrievable" (values are true or false) on an existing field. Vector fields must be searchable and retrievable, but if you want to disable access to a vector field in situations where drop and rebuild isn't feasible, you can set retrievable to false.
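As a sketch of the first kind of change (adding a new field without a rebuild), a Create or Update Index request resends the complete index definition with the new field appended. The index and field names here are hypothetical, existing fields keep their original attribute settings, and the new `category` field is the only addition:

```json
{
  "name": "example-basic-vector-idx",
  "fields": [
    { "name": "id", "type": "Edm.String", "key": true },
    { "name": "content", "type": "Edm.String", "searchable": true },
    { "name": "content_vector", "type": "Collection(Edm.Single)", "searchable": true, "retrievable": true, "dimensions": 1536, "vectorSearchProfile": "my-vector-profile" },
    { "name": "category", "type": "Edm.String", "filterable": true }
  ],
  "vectorSearch": {
    "algorithms": [ { "name": "hnsw-config", "kind": "hnsw" } ],
    "profiles": [ { "name": "my-vector-profile", "algorithm": "hnsw-config" } ]
  }
}
```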
+ ## Next steps As a next step, we recommend [Query vector data in a search index](vector-search-how-to-query.md).
search Vector Search Index Size https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-index-size.md
- ignite-2023 Previously updated : 01/30/2024 Last updated : 02/14/2024 # Vector index size limits
The following table shows vector quotas by partition, and by service if all part
+ Vector quotas apply to the vector indexes created for each vector field, and they're enforced at the partition level. On Basic, the sum total of all vector fields can't be more than 1 GB because Basic only has one partition. On S1, which can have up to 12 partitions, the quota for vector data is 3 GB if you allocate just one partition, or up to 36 GB if you allocate all 12 partitions. For more information about partitions and replicas, see [Estimate and manage capacity](search-capacity-planning.md).
+## How to determine service creation date
+
+Find out whether your search service was created before July 1, 2023. If it's an older service, consider creating a new search service to benefit from the higher limits. Newer services at the same tier offer at least twice as much vector storage.
+
+1. In Azure portal, open the resource group.
+
+1. On the left nav pane, under **Settings**, select **Deployments**.
+
+1. Locate your search service deployment. If there are many deployments, use the filter to look for "search".
+
+1. Select the deployment. If you have more than one, click through to see if it resolves to your search service.
+
+ :::image type="content" source="media/vector-search-index-size/resource-group-deployments.png" alt-text="Screenshot of a filtered deployments list.":::
+
+1. Expand deployment details. You should see *Created* and the creation date.
+
+ :::image type="content" source="media/vector-search-index-size/deployment-details.png" alt-text="Screenshot of the deployment details showing creation date.":::
+
+1. Now that you know the age of your search service, review the vector quota limits based on service creation:
+
+ + [Before July 1, 2023](search-limits-quotas-capacity.md#services-created-before-july-1-2023)
+ + [After July 1, 2023](search-limits-quotas-capacity.md#services-created-after-july-1-2023-in-supported-regions)
+ ## How to get vector index size
-Use the REST APIs to return vector index size:
+A request for vector metrics is a data plane operation. You can use the Azure portal, REST APIs, or Azure SDKs to get vector usage at the service level through service statistics and for individual indexes.
-+ [GET Index Statistics](/rest/api/searchservice/indexes/get-statistics) returns usage for a given index.
+### [**Portal**](#tab/portal-vector-quota)
+
+Usage information can be found on the **Overview** page's **Usage** tab. Portal pages refresh every few minutes so if you recently updated an index, wait a bit before checking results.
+
+The following screenshot is for a newer Standard 1 (S1) tier, configured for one partition and one replica. Vector index quota, measured in megabytes, refers to the internal vector indexes created for each vector field. Overall, indexes consume almost 460 megabytes of available storage, but the vector index component takes up just 93 megabytes of the 460 used on this search service.
++
+The tile on the Usage tab tracks vector index consumption at the search service level. If you increase or decrease search service capacity, the tile reflects the changes accordingly.
+### [**REST**](#tab/rest-vector-quota)
+
+Use the following data plane REST APIs (version 2023-11-01 or later) for vector usage statistics:
+++ [GET Index Statistics](/rest/api/searchservice/indexes/get-statistics) returns usage for a given index. + [GET Service Statistics](/rest/api/searchservice/get-service-statistics/get-service-statistics) returns quota and usage for the search service all-up. For a visual, here's the sample response for a Basic search service that has the quickstart vector search index. `storageSize` and `vectorIndexSize` are reported in bytes.
Return service statistics to compare usage against available quota at the servic
} ``` ++ ## Factors affecting vector index size There are three major components that affect the size of your internal vector index:
search Vector Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-store.md
- ignite-2023 Previously updated : 02/12/2024 Last updated : 02/14/2024 # Vector storage in Azure AI Search
-Azure AI Search provides vector storage and configurations for [vector search](vector-search-overview.md) and [hybrid queries](hybrid-search-overview.md). Support is implemented at the field level, which means you can combine vector and nonvector fields in the same search corpus.
+Azure AI Search provides vector storage and configurations for [vector search](vector-search-overview.md) and [hybrid search](hybrid-search-overview.md). Support is implemented at the field level, which means you can combine vector and nonvector fields in the same search corpus.
Vectors are stored in a search index. Use the [Create Index REST API](/rest/api/searchservice/indexes/create-or-update) or an equivalent Azure SDK method to [create the vector store](vector-search-how-to-create-index.md). Considerations for vector storage include the following points:
-+ Schema design to fit your use case.
-+ Index sizing and search service capacity.
-+ Vector data ingestion: loading, chunking, and embedding.
-+ Vector data retrieval from an index is always through the query APIs. Your intended user experience determines whether query results are passed directly to a client app for rendering, or goes through an orchestration layer for generative AI.
++ Design a schema to fit your use case based on the intended vector retrieval pattern.++ Estimate index size and check search service capacity.++ Manage a vector store.++ Secure a vector store. ## Vector retrieval patterns
In Azure AI Search, there are two patterns for working with search results.
+ Generative search. Language models formulate a response to the user's query using data from Azure AI Search. This pattern usually includes an orchestration layer to coordinate prompts and maintain context. In this pattern, results are fed into prompt flows and chat models like GPT and Text-Davinci. This approach is based on [**Retrieval augmented generation (RAG)**](retrieval-augmented-generation-overview.md) architecture, where the search index provides the grounding data.
-+ Classic search. The search engine formulates a response based on content in your index, and you render those results in a client app. In a direct response from the search engine, results are returned in a flattened row set, and you can choose which fields are passed to the client app. It's expected that you would populate the vector store (search index) with nonvector content that's human readable in your response. The search engine matches on vectors, but can return nonvector values from the same search document.
++ Classic search. The search engine formulates a response based on content in your index, and you render those results in a client app. In a direct response from the search engine, results are returned in a flattened row set, and you can choose which fields are passed to the client app. It's expected that you would populate the vector store (search index) with nonvector content that's human readable in your response. The search engine matches on vectors, but can return nonvector values from the same search document. [**Vector queries**](vector-search-how-to-query.md) and [**hybrid queries**](hybrid-search-how-to-query.md) cover the types of requests. Your index schema should reflect your primary use case.
-## Schema designs for each retrieval pattern
+## Schema of a vector store
-The following examples highlight the differences in field composition for solutions build for generative AI versus classic search.
+The following examples highlight the differences in field composition for solutions built for generative AI or classic search.
-An index schema for a vector store requires a name, a key field, one or more vector fields, and a vector configuration. Nonvector fields are recommended for hybrid queries, or for returning verbatim human readable content that doesn't have to go through a language model. For instructions about vector configuration, see [Create a vector store](vector-search-how-to-create-index.md).
+An index schema for a vector store requires a name, a key field (string), one or more vector fields, and a vector configuration. Nonvector fields are recommended for hybrid queries, or for returning verbatim human readable content that doesn't have to go through a language model. For instructions about vector configuration, see [Create a vector store](vector-search-how-to-create-index.md).
### Basic vector field configuration
-A vector field, such as "content_vector" in the following example, is of type `Collection(Edm.Single)`. It must be searchable and retrievable. It can't be filterable, facetable, or sortable, and it can't have analyzers, normalizers, or synonym map assignments. It must have dimensions set to the number of embeddings generated by the embedding model. For instance, if you're using text-embedding-ada-002, it generates 1,536 embeddings. A vector search profile is specified in a vector search configuration and assigned to a vector field using the profile name.
+A vector field, such as `"content_vector"` in the following example, is of type `Collection(Edm.Single)`. It must be searchable and retrievable. It can't be filterable, facetable, or sortable, and it can't have analyzers, normalizers, or synonym map assignments. It must have dimensions set to the number of dimensions in the embeddings generated by the embedding model. For instance, text-embedding-ada-002 generates embeddings of 1,536 dimensions. A vector search profile is specified in a separate vector search configuration and assigned to a vector field using a profile name.
-Content (nonvector) fields are useful for human readable text returned directly from the search engine. If you're using language models exclusively for response formulation, you can skip nonvector content fields. The following example assumes that "content" is the human readable equivalent of the "content_vector" field.
+```json
+{
+ "name": "content_vector",
+ "type": "Collection(Edm.Single)",
+ "searchable": true,
+ "retrievable": true,
+ "dimensions": 1536,
+ "vectorSearchProfile": "my-vector-profile"
+}
+```
+
+### Fields collection for basic vector workloads
+
+Here's an example showing a vector field in context, with other fields in a collection.
-Metadata fields are useful for filters, especially if metadata includes origin information about the source document.
+The key field (required) is `"id"` in this example. The `"content"` field is the human readable equivalent of the `"content_vector"` field. If you're using language models exclusively for response formulation, you can skip nonvector content fields. Metadata fields are useful for filters, especially if metadata includes origin information about the source document. You can't filter on a vector field directly, but you can set prefilter or postfilter modes to filter before or after vector query execution.
```json
-"name": "example-index-basic-vector-field",
+"name": "example-basic-vector-idx",
"fields": [ { "name": "id", "type": "Edm.String", "searchable": false, "filterable": true, "retrievable": true, "key": true },
- { "name": "content_vector", "type": "Collection(Edm.Single)", "searchable": true, "filterable": false, "retrievable": true,
- "dimensions": 1536, "vectorSearchProfile": null },
+ { "name": "content_vector", "type": "Collection(Edm.Single)", "searchable": true, "retrievable": true, "dimensions": 1536, "vectorSearchProfile": null },
{ "name": "content", "type": "Edm.String", "searchable": true, "retrievable": true, "analyzer": null }, { "name": "metadata", "type": "Edm.String", "searchable": true, "filterable": true, "retrievable": true, "sortable": true, "facetable": true } ]
Metadata fields are useful for filters, especially if metadata includes origin i
We recommend the [Import and vectorize data wizard](search-get-started-portal-import-vectors.md) for evaluation and proof-of-concept testing. The wizard generates the example schema in this section.
-The bias of this schema is that search documents are built around data chunks. If a language model formulates the response, you want a schema designed around data chunks.
+The bias of this schema is that search documents are built around data chunks. If a language model formulates the response, as is typical for RAG apps, you want a schema designed around data chunks.
Data chunking is necessary for staying within the input limits of language models, but it also improves precision in similarity search when queries can be matched against smaller chunks of content pulled from multiple parent documents. Finally, if you're using semantic ranking, the semantic ranker also has token limits, which are more easily met if data chunking is part of your approach.
In the following example, for each search document, there's one chunk ID, chunk,
] ```
-## Vector data retrieval
+### Schema for RAG and chat-style apps
+
+If you're designing storage for generative search, you can create one index for the static content that you indexed and vectorized, and a second index for conversations that can be used in prompt flows. The following indexes are created from the [**chat-with-your-data-solution-accelerator**](https://github.com/Azure-Samples/azure-search-openai-solution-accelerator) accelerator.
++
+Fields from the chat index that support the generative search experience:
+
+```json
+"name": "example-index-from-accelerator",
+"fields": [
+ { "name": "id", "type": "Edm.String", "searchable": false, "filterable": true, "retrievable": true },
+ { "name": "content", "type": "Edm.String", "searchable": true, "filterable": false, "retrievable": true },
+ { "name": "content_vector", "type": "Collection(Edm.Single)", "searchable": true, "retrievable": true, "dimensions": 1536, "vectorSearchProfile": "my-vector-profile"},
+ { "name": "metadata", "type": "Edm.String", "searchable": true, "filterable": false, "retrievable": true },
+ { "name": "title", "type": "Edm.String", "searchable": true, "filterable": true, "retrievable": true, "facetable": true },
+ { "name": "source", "type": "Edm.String", "searchable": true, "filterable": true, "retrievable": true },
+ { "name": "chunk", "type": "Edm.Int32", "searchable": false, "filterable": true, "retrievable": true },
+ { "name": "offset", "type": "Edm.Int32", "searchable": false, "filterable": true, "retrievable": true }
+]
+```
+
+Here's a screenshot showing [Search explorer](search-explorer.md) search results for the conversations index. The search score is 1.00 because the search was unqualified. Notice the fields that exist to support orchestration and prompt flows. A conversation ID identifies a specific chat. `"type"` indicates whether the content is from the user or the assistant. Dates are used to age out chats from the history.
++
+## Physical structure and size
+
+In Azure AI Search, the physical structure of an index is largely an internal implementation. You can access its schema, load and query its content, monitor its size, and manage capacity, but the clusters themselves (indexes, [shards](search-capacity-planning.md#concepts-search-units-replicas-partitions-shards), and other files and folders) are managed internally by Microsoft.
+
+The size and substance of an index is determined by:
+++ Quantity and composition of your documents++ Attributes on individual fields++ Index configuration, including the vector configuration that specifies how the internal navigation structures are created based on whether you choose HNSW or exhaustive KNN for similarity search (a configuration sketch follows this list).+
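As a rough sketch of that vector configuration (2023-11-01 syntax; the names are hypothetical), the algorithm choice is declared once in the index and referenced from vector fields through a profile:

```json
"vectorSearch": {
  "algorithms": [
    {
      "name": "hnsw-config",
      "kind": "hnsw",
      "hnswParameters": { "m": 4, "efConstruction": 400, "efSearch": 500, "metric": "cosine" }
    },
    {
      "name": "eknn-config",
      "kind": "exhaustiveKnn",
      "exhaustiveKnnParameters": { "metric": "cosine" }
    }
  ],
  "profiles": [
    { "name": "my-vector-profile", "algorithm": "hnsw-config" }
  ]
}
```

HNSW builds graph-based navigation structures at indexing time, while exhaustive KNN scans all vectors at query time, which is one reason the configuration choice affects index size.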
+Vector store index limits and estimations are covered in [another article](vector-search-index-size.md), but it's highlighted here to emphasize that maximum storage varies by service tier, and also by when the search service was created. Newer same-tier services have significantly more capacity for vector indexes.
+++ [Check the deployment date of your search service](vector-search-index-size.md#how-to-determine-service-creation-date). If it was created before July 1, 2023, consider creating a new search service for greater capacity.+++ [Choose a scalable tier](search-sku-tier.md) if you anticipate fluctuations in vector storage requirements. The Basic tier is fixed at one partition. Consider Standard 1 (S1) and above for more flexibility and faster performance.+
+In terms of usage metrics, a vector index is an internal data structure created for each vector field. As such, vector storage is always a fraction of the overall index size. Other nonvector fields and data structures consume the remainder of the quota for index size and consumed storage at the service level.
+
+## Basic operations and interaction
+
+This section introduces vector runtime operations, including connecting to and securing a single index.
+
+> [!NOTE]
+> When managing an index, be aware that there is no portal or API support for moving or copying an index. Instead, customers typically point their application deployment solution at a different search service (if using the same index name), or revise the name to create a copy on the current search service, and then build it.
+
+### Continuously available
+
+An index is immediately available for queries as soon as the first document is indexed, but won't be fully operational until all documents are indexed. Internally, an index is [distributed across partitions and executes on replicas](search-capacity-planning.md#concepts-search-units-replicas-partitions-shards). The physical index is managed internally. The logical index is managed by you.
+
+An index is continuously available, with no ability to pause or take it offline. Because it's designed for continuous operation, any updates to its content, or additions to the index itself, happen in real time. As a result, queries might temporarily return incomplete results if a request coincides with a document update.
+
+Notice that query continuity exists for document operations (refreshing or deleting) and for modifications that don't affect the existing structure and integrity of the current index (such as adding new fields). If you need to make structural updates (changing existing fields), those are typically managed using a drop-and-rebuild workflow in a development environment, or by creating a new version of the index on the production service.
+
+To avoid an [index rebuild](search-howto-reindex.md), some customers who are making small changes choose to "version" a field by creating a new one that coexists alongside a previous version. Over time, this leads to orphaned content in the form of obsolete fields or obsolete custom analyzer definitions, especially in a production index that is expensive to replicate. You can address these issues during planned updates to the index as part of index lifecycle management.
+
+### Secure access to vector data
+
+<!-- Azure AI Search supports comprehensive security. Authentication and authorization -->
-The vector search algorithms specify the navigation structures used at query time. The structures are created during indexing, but used during queries.
+### Manage vector stores
-The content of your vector fields is determined by the [embedding step](vector-search-how-to-generate-embeddings.md) that vectorizes or encodes your content. If you use the same embedding model for all of your fields, you can [build vector queries](vector-search-how-to-query.md) that cover all of them.
+Azure provides a monitoring platform that includes diagnostic logging and alerting.
-If you use search results as grounding data, where a chat model generates the answer to a query, design a schema that stores chunks of text. Data chunking is a requirement if source files are too large for the embedding model. It's also efficient for chat if the original source files contain a varied information.
++ Enable logging++ Set up alerts++ Back up and restore isn't natively supported, but samples are available.++ Scale ## See also
-+ [Quickstart: Vector search using REST APIs](search-get-started-vector.md)
-+ [Vector store creation](vector-search-how-to-create-index.md)
-+ [Vector query creation](vector-search-how-to-query.md)
-+ [Azure Cognitive Search and LangChain: A Seamless Integration for Enhanced Vector Search Capabilities](https://techcommunity.microsoft.com/t5/azure-ai-services-blog/azure-cognitive-search-and-langchain-a-seamless-integration-for/ba-p/3901448)
++ [Create a vector store using REST APIs (Quickstart)](search-get-started-vector.md)++ [Create a vector store](vector-search-how-to-create-index.md)++ [Query a vector store](vector-search-how-to-query.md)
sentinel Create Codeless Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/create-codeless-connector.md
To learn from an example, see the [Data connector connection rules reference exa
Use Postman to call the data connector API to create the data connector which combines the connection rules and previous components. Verify the connector is now connected in the UI.
+## Secure confidential input
+
+Whatever authentication is used by your CCP data connector, take these steps to ensure confidential information is kept secure. The goal is to pass credentials from the ARM template to the CCP without leaving readable confidential objects in your deployment history.
+
+### Create label
+
+The data connector definition creates a UI element to prompt for security credentials. For example, if your data connector authenticates to a log source with OAuth, your data connector definition section includes the `OAuthForm` type in the instructions. This sets up the ARM template to prompt for the credentials.
+
+```json
+"instructions": [
+ {
+ "type": "OAuthForm",
+ "parameters": {
+ "UsernameLabel": "Username",
+ "PasswordLabel": "Password",
+ "connectButtonLabel": "Connect",
+ "disconnectButtonLabel": "Disconnect"
+ }
+ }
+],
+```
+
+### Store confidential input
+
+A section of the ARM deployment template provides a place for the administrator deploying the data connector to enter the password. Use `securestring` to keep the confidential information secured in an object that isn't readable after deployment. For more information, see [Security recommendations for parameters](../azure-resource-manager/templates/best-practices.md#security-recommendations-for-parameters).
+
+```json
+"mainTemplate": {
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "[variables('dataConnectorCCPVersion')]",
+ "parameters": {
+ "Username": {
+ "type": "securestring",
+ "minLength": 1,
+ "metadata": {
+ "description": "Enter the username to connect to your data source."
+ }
+ },
+ "Password": {
+ "type": "securestring",
+ "minLength": 1,
+ "metadata": {
+ "description": "Enter the API key, client secret or password required to connect."
+ }
+ },
+ // more deployment template information
+ }
+}
+```
+
+### Use the securestring objects
+
+Finally, the CCP uses the credential objects in the data connector section.
+
+```json
+"auth": {
+ "type": "OAuth2",
+ "ClientSecret": "[[parameters('Password')]",
+ "ClientId": "[[parameters('Username')]",
+ "GrantType": "client_credentials",
+ "TokenEndpoint": "https://api.contoso.com/oauth/token",
+ "TokenEndpointHeaders": {
+ "Content-Type": "application/x-www-form-urlencoded"
+ },
+ "TokenEndpointQueryParameters": {
+ "grant_type": "client_credentials"
+ }
+},
+```
+
+>[!Note]
+> The unusual syntax for the credential object, `"ClientSecret": "[[parameters('Password')]",`, isn't a typo!
+> To create the deployment template, which also uses parameters, you need to escape the parameters in that section with an extra starting `[`. This allows the parameters to assign a value based on the user interaction with the connector.
+>
+> For more information, see [Template expressions escape characters](../azure-resource-manager/templates/template-expressions.md#escape-characters).
+
## Create the deployment template

Manually package an Azure Resource Manager (ARM) template using the [example template](#example-arm-template) as your guide.
+In addition to the example template, published solutions available in the Microsoft Sentinel content hub use the CCP for their data connector. Review the following solutions as more examples of how to stitch the components together into an ARM template.
+
+- [Ermes Browser Security](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Ermes%20Browser%20Security/Package/mainTemplate.json)
+- [Palo Alto Prisma Cloud CWPP](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Ermes%20Browser%20Security/Package/mainTemplate.json)
## Deploy the connector

Deploy your codeless connector as a custom template.
Consider using the ARM template test toolkit (arm-ttk) to validate the template
#### Example ARM template - parameters
-For more information, see [Parameters in ARM templates](../azure-resource-manager/templates/parameters.md) and [Security recommendations for parameters](../azure-resource-manager/templates/best-practices.md#security-recommendations-for-parameters).
+For more information, see [Parameters in ARM templates](../azure-resource-manager/templates/parameters.md).
+
+>[!Warning]
+> Use `securestring` for all passwords and secrets so that they aren't stored in objects readable after resource deployment.
+> For more information, see [Secure confidential input](#secure-confidential-input) and [Security recommendations for parameters](../azure-resource-manager/templates/best-practices.md#security-recommendations-for-parameters).
+ ```json {
There are 5 ARM deployment resources in this template guide which house the 4 CC
// "minLength": 1 //}, //"apikey": {
- // "defaultValue": "API Key",
+ // "defaultValue": "",
// "type": "securestring", // "minLength": 1 //}
sentinel Data Connector Connection Rules Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connector-connection-rules-reference.md
Reference the [Create or Update](/rest/api/securityinsights/data-connectors/crea
**PUT** method ```http
-https://management.azure.com/subscriptions/{{subscriptionId}}/resourceGroups/{{resourceGroupName}}/providers/Microsoft.OperationalInsights/workspaces/{{workspaceName}}/providers/Microsoft.SecurityInsights/dataConnectors/{{dataConnectorId}}?api-version=
+https://management.azure.com/subscriptions/{{subscriptionId}}/resourceGroups/{{resourceGroupName}}/providers/Microsoft.OperationalInsights/workspaces/{{workspaceName}}/providers/Microsoft.SecurityInsights/dataConnectors/{{dataConnectorId}}?api-version={{apiVersion}}
``` ## URI parameters
-For more information, see [Data Connectors - Create or Update URI Parameters](/rest/api/securityinsights/data-connectors/create-or-update#uri-parameters)
+For more information about the latest API version, see [Data Connectors - Create or Update URI Parameters](/rest/api/securityinsights/data-connectors/create-or-update#uri-parameters).
|Name | Description | |||
The request body for the CCP data connector has the following structure:
"name": "{{dataConnectorId}}", "kind": "RestApiPoller", "etag": "",
- "DataType": ""
"properties": { "connectorDefinitionName": "", "auth": {},
The CCP supports the following authentication types:
> [!NOTE] > CCP OAuth2 implementation does not support certificate credentials.
-As a best practice, use parameters in the auth section instead of hard-coding credentials.
-- For more information, see [Best practice recommendations for parameters](../azure-resource-manager/templates/best-practices.md#security-recommendations-for-parameters).
+As a best practice, use parameters in the auth section instead of hard-coding credentials. For more information, see [Secure confidential input](create-codeless-connector.md#secure-confidential-input).
-In order to create the deployment template which also uses parameters, you need to escape the parameters in this section with an extra starting `[`. This allows the parameters to assign a value based on the user interaction with the connector.
-- For more information, see [Template expressions escape characters](../azure-resource-manager/templates/template-expressions.md#escape-characters).
+In order to create the deployment template which also uses parameters, you need to escape the parameters in this section with an extra starting `[`. This allows the parameters to assign a value based on the user interaction with the connector. For more information, see [Template expressions escape characters](../azure-resource-manager/templates/template-expressions.md#escape-characters).
-To enable the credentials to be entered from the UI, the `connectorUIConfig` section requires `instructions` with the desired parameters.
-- For more information, see [Data connector definitions reference for the Codeless Connector Platform](data-connector-ui-definitions-reference.md#instructions).
+To enable the credentials to be entered from the UI, the `connectorUIConfig` section requires `instructions` with the desired parameters. For more information, see [Data connector definitions reference for the Codeless Connector Platform](data-connector-ui-definitions-reference.md#instructions).
#### Basic auth
Example Basic auth using parameters defined in `connectorUIconfig`:
| Field | Required | Type | Description | Default value | | - | - | - | - | - |
-| **ApiKey** | Mandatory | string | user secret key | |
+| **ApiKey** | True | string | user secret key | |
| **ApiKeyName** | | string | name of the Uri header containing the ApiKey value | `Authorization` |
| **ApiKeyIdentifier** | | string | string value to prepend the token | `token` |
| **IsApiKeyInPostPayload** | | boolean | send secret in POST body instead of header | `false` |
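As a rough sketch of how these fields fit together, the following `auth` section sends the secret in an `Authorization` header prefixed with `token`. The `APIKey` type value is an assumption here, and the escaped `[[parameters('apikey')]` reference follows the pattern described in [Secure confidential input](create-codeless-connector.md#secure-confidential-input).

```json
"auth": {
    "type": "APIKey",
    "ApiKey": "[[parameters('apikey')]",
    "ApiKeyName": "Authorization",
    "ApiKeyIdentifier": "token",
    "IsApiKeyInPostPayload": false
}
```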
After the user returns to the client via the redirect URL, the application will
| - | - | - | - |
| **ClientId** | True | String | The client id |
| **ClientSecret** | True | String | The client secret |
-| **AuthorizationCode** | Mandatory when grantType = `authorization_code` | String | If grant type is `authorization_code` this field value will be the authorization code returned from the auth serve. |
+| **AuthorizationCode** | True when grantType = `authorization_code` | String | If the grant type is `authorization_code`, this field value will be the authorization code returned from the auth server. |
| **Scope** | True for `authorization_code` grant type<br> optional for `client_credentials` grant type| String | A space-separated list of scopes for user consent. For more information, see [OAuth2 scopes and permissions](/entra/identity-platform/scopes-oidc). |
-| **RedirectUri** | Mandatory when grantType = `authorization_code` | String | URL for redirect, must be `https://portal.azure.com/TokenAuthorize` |
+| **RedirectUri** | True when grantType = `authorization_code` | String | URL for redirect, must be `https://portal.azure.com/TokenAuthorize` |
| **GrantType** | True | String | `authorization_code` or `client_credentials` |
| **TokenEndpoint** | True | String | URL to exchange code with valid token in `authorization_code` grant or client id and secret with valid token in `client_credentials` grant. |
| **TokenEndpointHeaders** | | Object | An optional key value object to send custom headers to token server |
OAuth2 auth code grant
"authorizationEndpointQueryParameters": { "prompt": "consent" },
- "redirectionUri": "https://portal.azure.com/TokenAuthorize",
+ "redirectUri": "https://portal.azure.com/TokenAuthorize",
"tokenEndpointHeaders": { "Accept": "application/json", "Content-Type": "application/x-www-form-urlencoded"
Paging: {
```json Paging: {
- "pagingType" = "PersistentLinkHeader",
+ "pagingType" : "PersistentLinkHeader",
"pageSizeParameterName" : "limit", "pageSize" : 500 }
sentinel Data Connector Ui Definitions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connector-ui-definitions-reference.md
Reference the Create Or Update operation in the REST API docs to find the latest
**PUT** method ```http
-https://management.azure.com/subscriptions/{{subscriptionId}}/resourceGroups/{{resourceGroupName}}/providers/Microsoft.OperationalInsights/workspaces/{{workspaceName}}/providers/Microsoft.SecurityInsights/dataConnectorDefinitions/{{dataConnectorDefinitionName}}?api-version=
+https://management.azure.com/subscriptions/{{subscriptionId}}/resourceGroups/{{resourceGroupName}}/providers/Microsoft.OperationalInsights/workspaces/{{workspaceName}}/providers/Microsoft.SecurityInsights/dataConnectorDefinitions/{{dataConnectorDefinitionName}}?api-version={{apiVersion}}
``` ## URI parameters
+For more information about the latest API version, see [Data Connector Definitions - Create or Update URI Parameters](/rest/api/securityinsights/data-connector-definitions/create-or-update#uri-parameters).
+ |Name | Description | |||
-| **dataConnectorDefinition** | The data connector definition must be a unique name and is the same as the `name` parameter in the [request body](#request-body).|
+| **dataConnectorDefinitionName** | The data connector definition must be a unique name and is the same as the `name` parameter in the [request body](#request-body).|
| **resourceGroupName** | The name of the resource group, not case sensitive. | | **subscriptionId** | The ID of the target subscription. | | **workspaceName** | The *name* of the workspace, not the ID.<br>Regex pattern: `^[A-Za-z0-9][A-Za-z0-9-]+[A-Za-z0-9]$` |
This section provides parameters that define the set of instructions that appear
] ```
-|Array Property |Type |Description |
-||||
-| **title** | String | Optional. Defines a title for your instructions. |
-| **description** | String | Optional. Defines a meaningful description for your instructions. |
-| **innerSteps** | Array | Optional. Defines an array of inner instruction steps. |
-| **instructions** | Array of [instructions](#instructions) | Required. Defines an array of instructions of a specific parameter type. |
+|Array Property | Required | Type |Description |
+||||--|
+| **title** | | String | Defines a title for your instructions. |
+| **description** | | String | Defines a meaningful description for your instructions. |
+| **innerSteps** | | Array | Defines an array of inner instruction steps. |
+| **instructions** | True | Array of [instructions](#instructions) | Defines an array of instructions of a specific parameter type. |
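As a minimal sketch of how these properties combine into a single instruction step, the following example nests an `OAuthForm` instruction inside a step. The title and description text are illustrative only.

```json
{
    "title": "Connect your data source",
    "description": "Provide credentials so the connector can poll the source API.",
    "instructions": [
        {
            "type": "OAuthForm",
            "parameters": {
                "UsernameLabel": "Username",
                "PasswordLabel": "Password",
                "connectButtonLabel": "Connect",
                "disconnectButtonLabel": "Disconnect"
            }
        }
    ]
}
```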
#### instructions
Here are some examples of the `Textbox` type. These examples correspond to the p
} ```
-| Array Value |Type |Description |
+| Array Value | Required | Type | Description |
||||-|
-|**fillWith** | ENUM | Optional. Array of environment variables used to populate a placeholder. Separate multiple placeholders with commas. For example: `{0},{1}` <br><br>Supported values: `workspaceId`, `workspaceName`, `primaryKey`, `MicrosoftAwsAccount`, `subscriptionId` |
-|**label** | String | Defines the text for the label above a text box. |
-|**value** | String | Defines the value to present in the text box, supports placeholders. |
-|**rows** | Rows | Optional. Defines the rows in the user interface area. By default, set to **1**. |
-|**wideLabel** |Boolean | Optional. Determines a wide label for long strings. By default, set to `false`. |
+|**fillWith** | | ENUM | Array of environment variables used to populate a placeholder. Separate multiple placeholders with commas. For example: `{0},{1}` <br><br>Supported values: `workspaceId`, `workspaceName`, `primaryKey`, `MicrosoftAwsAccount`, `subscriptionId` |
+|**label** | True | String | Defines the text for the label above a text box. |
+|**value** | True | String | Defines the value to present in the text box, supports placeholders. |
+|**rows** | | Rows | Defines the rows in the user interface area. By default, set to **1**. |
+|**wideLabel** | | Boolean | Determines a wide label for long strings. By default, set to `false`. |
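For instance, a `Textbox` that surfaces the workspace ID might be sketched like this. The nesting under `parameters` mirrors the other instruction types in this reference and is an assumption here.

```json
{
    "type": "Textbox",
    "parameters": {
        "fillWith": [ "workspaceId" ],
        "label": "Workspace ID",
        "value": "{0}"
    }
}
```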
#### InfoMessage
Here's an example of an expandable instruction group:
:::image type="content" source="media/create-codeless-connector/accordion-instruction-area.png" alt-text="Screenshot of an expandable, extra instruction group.":::
-|Array Value |Type |Description |
-||||
-|**title** | String | Defines the title for the instruction step. |
-|**description** | String | Optional descriptive text. |
-|**canCollapseAllSections** | Boolean | Optional. Determines whether the section is a collapsible accordion or not. |
-|**noFxPadding** | Boolean | Optional. If `true`, reduces the height padding to save space. |
-|**expanded** | Boolean | Optional. If `true`, shows as expanded by default. |
+|Array Value | Required | Type |Description |
+|||||
+|**title** | True | String | Defines the title for the instruction step. |
+|**description** | | String | Optional descriptive text. |
+|**canCollapseAllSections** | | Boolean | Determines whether the section is a collapsible accordion or not. |
+|**noFxPadding** | | Boolean | If `true`, reduces the height padding to save space. |
+|**expanded** | | Boolean | If `true`, shows as expanded by default. |
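Here's a sketch of the values in this table applied to a group. The `InstructionStepsGroup` type name and the `parameters` nesting are assumptions, so check the detailed example that follows for the exact shape.

```json
{
    "type": "InstructionStepsGroup",
    "parameters": {
        "title": "Optional: additional configuration",
        "canCollapseAllSections": true,
        "noFxPadding": true,
        "expanded": false
        // inner instruction steps go here
    }
}
```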
For a detailed example, see the configuration JSON for the [Windows DNS connector](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Windows%20Server%20DNS/Data%20Connectors/template_DNS.JSON).
Some **InstallAgent** types appear as a button, others appear as a link. Here ar
:::image type="content" source="media/create-codeless-connector/link-by-text.png" alt-text="Screenshot of a link added as inline text.":::
-|Array Values |Type |Description |
-||||
-|**linkType** | ENUM | Determines the link type, as one of the following values: <br><br>`InstallAgentOnWindowsVirtualMachine`<br>`InstallAgentOnWindowsNonAzure`<br> `InstallAgentOnLinuxVirtualMachine`<br> `InstallAgentOnLinuxNonAzure`<br>`OpenSyslogSettings`<br>`OpenCustomLogsSettings`<br>`OpenWaf`<br> `OpenAzureFirewall` `OpenMicrosoftAzureMonitoring` <br> `OpenFrontDoors` <br>`OpenCdnProfile` <br>`AutomaticDeploymentCEF` <br> `OpenAzureInformationProtection` <br> `OpenAzureActivityLog` <br> `OpenIotPricingModel` <br> `OpenPolicyAssignment` <br> `OpenAllAssignmentsBlade` <br> `OpenCreateDataCollectionRule` |
-|**policyDefinitionGuid** | String | Required when using the **OpenPolicyAssignment** linkType. For policy-based connectors, defines the GUID of the built-in policy definition. |
-|**assignMode** | ENUM | Optional. For policy-based connectors, defines the assign mode, as one of the following values: `Initiative`, `Policy` |
-|**dataCollectionRuleType** | ENUM | Optional. For DCR-based connectors, defines the type of data collection rule type as either `SecurityEvent`, or `ForwardEvent`. |
+|Array Values | Required |Type |Description |
+|||||
+|**linkType** | True | ENUM | Determines the link type, as one of the following values: <br><br>`InstallAgentOnWindowsVirtualMachine`<br>`InstallAgentOnWindowsNonAzure`<br> `InstallAgentOnLinuxVirtualMachine`<br> `InstallAgentOnLinuxNonAzure`<br>`OpenSyslogSettings`<br>`OpenCustomLogsSettings`<br>`OpenWaf`<br> `OpenAzureFirewall` <br> `OpenMicrosoftAzureMonitoring` <br> `OpenFrontDoors` <br>`OpenCdnProfile` <br>`AutomaticDeploymentCEF` <br> `OpenAzureInformationProtection` <br> `OpenAzureActivityLog` <br> `OpenIotPricingModel` <br> `OpenPolicyAssignment` <br> `OpenAllAssignmentsBlade` <br> `OpenCreateDataCollectionRule` |
+|**policyDefinitionGuid** | True when using `OpenPolicyAssignment` linkType. | String | For policy-based connectors, defines the GUID of the built-in policy definition. |
+|**assignMode** | | ENUM | For policy-based connectors, defines the assign mode, as one of the following values: `Initiative`, `Policy` |
+|**dataCollectionRuleType** | | ENUM | For DCR-based connectors, defines the type of data collection rule as either `SecurityEvent` or `ForwardEvent`. |
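Here's a rough sketch of an `InstallAgent` instruction built from these values; the `parameters` nesting is assumed to match the other instruction types in this reference.

```json
{
    "type": "InstallAgent",
    "parameters": {
        "linkType": "OpenCreateDataCollectionRule",
        "dataCollectionRuleType": "SecurityEvent"
    }
}
```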
## Example data connector definition
sentinel Deploy Dynamics 365 Finance Operations Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/dynamics-365/deploy-dynamics-365-finance-operations-solution.md
Title: Deploy Microsoft Sentinel solution for Dynamics 365 Finance and Operations description: This article introduces you to the process of deploying the Microsoft Sentinel Solution for Dynamics 365 Finance and Operations--++ Previously updated : 05/14/2023 Last updated : 02/12/2024 # Deploy Microsoft Sentinel solution for Dynamics 365 Finance and Operations
Before you begin, verify that:
- You have a defined Microsoft Sentinel workspace and have read and write permissions to the workspace. - [Microsoft Dynamics 365 Finance version 10.0.33 or above](/dynamics365/finance/get-started/whats-new-changed-changed-10-0-33) is enabled and you have administrative access to the monitored environments. - You can create an [Azure Function App](../../azure-functions/functions-overview.md) with the `Microsoft.Web/Sites`, `Microsoft.Web/ServerFarms`, `Microsoft.Insights/Components`, and `Microsoft.Storage/StorageAccounts` permissions.-- You can create [Data Collection Rules/Endpoints](../../azure-monitor/essentials/data-collection-rule-overview.md) with the permissions:
+- You can create [Data Collection Rules/Endpoints](../../azure-monitor/essentials/data-collection-rule-overview.md) with the permissions:
- `Microsoft.Insights/DataCollectionEndpoints`, and `Microsoft.Insights/DataCollectionRules`. - Assign the Monitoring Metrics Publisher role to the Azure Function.
In the connector page, make sure that you meet the required prerequisites and co
To enable data collection, you create a new role in Finance and Operations with permissions to view the Database Log entity. The role is then assigned to a dedicated Finance and Operations user, mapped to the Microsoft Entra client ID of the Function App's system assigned managed identity.
-To collect the managed identity application ID from Microsoft Entra ID:
+To collect the managed identity application ID from Microsoft Entra ID:
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Browse to **Microsoft Entra ID** > **Enterprise applications**.
To collect the managed identity application ID from Microsoft Entra ID:
1. In the Finance and Operations portal, navigate to **System administration > Setup > Microsoft Entra ID** applications.
-1. Create a new entry in the table:
+1. Create a new entry in the table:
- For the **Client Id**, type the application ID of the managed identity.
- For the **Name**, type a name for the application.
- For the **User ID**, type the user ID created in the [previous step](#create-a-user-for-data-collection-in-finance-and-operations).
-### Enable auditing on the relevant Dynamics 365 Finance and Operations data tables
+### Enable auditing on the relevant Dynamics 365 Finance and Operations data tables
> [!NOTE] > Before you enable auditing on Dynamics 365 F&O, review the [database logging recommended practices](/dynamics365/fin-ops-core/dev-itpro/sysadmin/configure-manage-database-log#database-logging-and-performance).
-The analytics rules currently provided with this solution monitor and detect threats based on logs sourced from these tables:
+The analytics rules provided with this solution monitor and detect threats based on logs generated in the System Database Log.
-- All tables under **System**-- The **Bank accounts** table under **Bank**
+If you're planning to use the analytics rules provided in this solution, enable auditing for the following tables:
-If you're planning to use the analytics rules provided in this solution, enable auditing for the **System** and **Bank accounts** tables.
+|Category |Table |
+|||
+|System | `UserInfo` |
+|Bank | `BankAccountTable` |
+|Not specified | `SysAADClientTable` |
-This screenshot shows the **System** and **Bank accounts** tables under **logging database changes**.
+Enable auditing on tables using the **Database log setup** wizard in the Finance and Operations portal.
+- In the **Tables and fields** page, you might want to select the **Show table names** checkbox to make it easier to find your tables.
+- To enable auditing of all fields in the selected tables, in the **Types of change** page, select all four check boxes for any relevant table names with empty field labels. Sort the table list by the **Field label** column in ascending order (A-Z).
+- Select **Yes** for all warning messages.
-To enable auditing on Finance and Operations tables you want to monitor:
+For more information, see [Set up database logging](/dynamics365/fin-ops-core/dev-itpro/sysadmin/configure-manage-database-log#set-up-database-logging).
-1. In the Finance and Operations portal, Select **Modules > System Administration > Database log > Database log setup**.
-1. SelectΓÇ»**New** >ΓÇ»**Next**, and select the tables you want to monitor.
-1. SelectΓÇ»**Next**.
-1. To enable auditing on all fields of the selected tables, mark all four check marks to the right of the table names with empty field labels. To see the tables with empty field labels at the top, sort the table list by the field table in ascending order (A to Z):
-
- :::image type="content" source="media/deploy-dynamics-365-finance-operations-solution/finance-and-operations-logging-database-changes-new.png" alt-text="Screenshot of configuring the selected Finance and Operations database tables.":::
-
-1. Select **Next** and then **Finish**.
-1. Select **Yes** in all warning messages.
### Verify that the data connector is ingesting logs to Microsoft Sentinel
-To verify that log ingestion is working:
+To verify that log ingestion is working:
1. Run activities (create, update, delete) on any of the tables you enabled for monitoring in the [previous step](#enable-auditing-on-the-relevant-dynamics-365-finance-and-operations-data-tables). 1. Wait up to 15 minutes for Microsoft Sentinel to ingest the logs to the logs table in the workspace.
sentinel Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/roles.md
Users with particular job requirements might need to be assigned other roles or
- **Install and manage out-of-the-box content**
- Find packaged solutions for end-to-end products or standalone content from the content hub in Microsoft Sentinel. To install and manage content from the content hub, assign the **Microsoft Sentinel Contributor** role at the resource group level. For some solutions, the [**Template Spec Contributor**](../role-based-access-control/built-in-roles.md#template-spec-contributor) role is still required.
+ Find packaged solutions for end-to-end products or standalone content from the content hub in Microsoft Sentinel. To install and manage content from the content hub, assign the **Microsoft Sentinel Contributor** role at the resource group level.
- **Automate responses to threats with playbooks**
This table summarizes the Microsoft Sentinel roles and their allowed actions in
| Microsoft Sentinel Contributor | -- | -- | &#10003; | &#10003; | &#10003; | &#10003;| | Microsoft Sentinel Playbook Operator | &#10003; | -- | -- | -- | -- | --| | Logic App Contributor | &#10003; | &#10003; | -- | -- | -- |-- |
-| Template Spec Contributor | -- | -- | -- | -- | -- |&#10003;[**](#content-hub) |
<a name=workbooks></a>* Users with these roles can create and delete workbooks with the [Workbook Contributor](../role-based-access-control/built-in-roles.md#workbook-contributor) role. Learn about [Other roles and permissions](#other-roles-and-permissions).
-<a name=content-hub></a>** The requirement for the Template Spec Contributor role to install and manage content from content hub is still required for some edge cases in addition to Microsoft Sentinel Contributor.
- Review the [role recommendations](#role-and-permissions-recommendations) for which roles to assign to which users in your SOC. ## Custom roles and advanced Azure RBAC
After understanding how roles and permissions work in Microsoft Sentinel, you ca
| | [Microsoft Sentinel Playbook Operator](../role-based-access-control/built-in-roles.md#microsoft-sentinel-playbook-operator) | Microsoft Sentinel's resource group, or the resource group where your playbooks are stored | Attach playbooks to analytics and automation rules. <br>Run playbooks. | |**Security engineers** | [Microsoft Sentinel Contributor](../role-based-access-control/built-in-roles.md#microsoft-sentinel-contributor) |Microsoft Sentinel's resource group | View data, incidents, workbooks, and other Microsoft Sentinel resources. <br><br>Manage incidents, such as assigning or dismissing incidents. <br><br>Create and edit workbooks, analytics rules, and other Microsoft Sentinel resources.<br><br>Install and update solutions from content hub. | | | [Logic Apps Contributor](../role-based-access-control/built-in-roles.md#logic-app-contributor) | Microsoft Sentinel's resource group, or the resource group where your playbooks are stored | Attach playbooks to analytics and automation rules. <br>Run and modify playbooks. |
-||[Template Spec Contributor](../role-based-access-control/built-in-roles.md#template-spec-contributor)|Microsoft Sentinel's resource group |Install and manage content from the content hub.|
| **Service Principal** | [Microsoft Sentinel Contributor](../role-based-access-control/built-in-roles.md#microsoft-sentinel-contributor) | Microsoft Sentinel's resource group | Automated configuration for management tasks |
site-recovery Site Recovery Monitor And Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-monitor-and-troubleshoot.md
Azure Site Recovery also provides default alerts via Azure Monitor, which enable
### Enable built-in Azure Monitor alerts
-To enable built-in Azure Monitor alerts for Azure Site Recovery, for a particular subscription, navigate to **Preview Features** in the [Azure portal](https://ms.portal.azure.com) and register the feature flag **EnableAzureSiteRecoveryAlertToAzureMonitor** for the selected subscription.
+To enable built-in Azure Monitor alerts for Azure Site Recovery, for a particular subscription, navigate to **Preview Features** in the [Azure portal](https://ms.portal.azure.com) and register the feature flag **EnableAzureSiteRecoveryAlertsToAzureMonitor** for the selected subscription.
> [!NOTE] > We recommend that you wait 24 hours for the registration to take effect before testing out the feature.
spring-apps How To Deploy With Custom Container Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-deploy-with-custom-container-image.md
To install the agents for other languages, refer to the official documentation f
New Relic:
-* Python: [Standard Python agent install](https://docs.newrelic.com/docs/apm/agents/python-agent/installation/standard-python-agent-install/)
+* Python: [Install the Python agent](https://docs.newrelic.com/install/python/)
* Node.js: [Install the Node.js agent](https://docs.newrelic.com/docs/apm/agents/nodejs-agent/installation-configuration/install-nodejs-agent/) Dynatrace:
storage Monitor Blob Storage Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/monitor-blob-storage-reference.md
Title: Azure Blob Storage monitoring data reference-
-description: Log and metrics reference for monitoring data from Azure Blob Storage.
-recommendations: false
---
+ Title: Monitoring data reference for Azure Blob Storage
+description: This article contains important reference material you need when you monitor Azure Blob Storage.
Last updated : 02/12/2024+ Previously updated : 06/06/2023+ -+
-# Azure Blob Storage monitoring data reference
-
-See [Monitoring Azure Storage](monitor-blob-storage.md) for details on collecting and analyzing monitoring data for Azure Storage.
-
-## Metrics
-
-The following tables list the platform metrics collected for Azure Storage.
-
-### Capacity metrics
-
-Capacity metrics values are refreshed daily (up to 24 Hours). The time grain defines the time interval for which metrics values are presented. The supported time grain for all capacity metrics is one hour (PT1H).
-
-Azure Storage provides the following capacity metrics in Azure Monitor.
+<!--
+IMPORTANT
+To make this template easier to use, first:
+1. Search and replace Azure Blob Storage with the official name of your service.
+2. Search and replace blob-storage with the service name to use in GitHub filenames.-->
+
+<!-- VERSION 3.0 2024_01_01
+For background about this template, see https://review.learn.microsoft.com/en-us/help/contribute/contribute-monitoring?branch=main -->
+
+<!-- Most services can use the following sections unchanged. All headings are required unless otherwise noted.
+The sections use #included text you don't have to maintain, which changes when Azure Monitor functionality changes. Add info into the designated service-specific places if necessary. Remove #includes or template content that aren't relevant to your service.
+At a minimum your service should have the following two articles:
+1. The primary monitoring article (based on the template monitor-service-template.md)
+ - Title: "Monitor Azure Blob Storage"
+ - TOC Title: "Monitor"
+ - Filename: "monitor-blob-storage.md"
+2. A reference article that lists all the metrics and logs for your service (based on this template).
+ - Title: "Azure Blob Storage monitoring data reference"
+ - TOC Title: "Monitoring data reference"
+ - Filename: "monitor-blob-storage-reference.md".
+-->
-#### Account Level
--
-#### Blob storage
-
-This table shows [Blob storage metrics](../../azure-monitor/essentials/metrics-supported.md#microsoftstoragestorageaccountsblobservices).
-
-| Metric | Description |
-| - | -- |
-| BlobCapacity | The total of Blob storage used in the storage account. <br/><br/> Unit: Bytes <br/> Aggregation Type: Average <br/> Value example: 1024 <br/> Dimensions: **BlobType**, and **Tier** ([Definition](#metrics-dimensions)) |
-| BlobCount | The number of blob objects stored in the storage account. <br/><br/> Unit: Count <br/> Aggregation Type: Average <br/> Value example: 1024 <br/> Dimensions: **BlobType**, and **Tier** ([Definition](#metrics-dimensions)) |
-| BlobProvisionedSize | The amount of storage provisioned in the storage account. This metric is applicable to premium storage accounts only. <br/><br/> Unit: bytes <br/> Aggregation Type: Average |
-| ContainerCount | The number of containers in the storage account. <br/><br/> Unit: Count <br/> Aggregation Type: Average <br/> Value example: 1024 |
-| IndexCapacity | The amount of storage used by ADLS Gen2 Hierarchical Index <br/><br/> Unit: Bytes <br/> Aggregation Type: Average <br/> Value example: 1024 |
+# Azure Blob Storage monitoring data reference
-### Transaction metrics
+<!-- Intro -->
-Transaction metrics are emitted on every request to a storage account from Azure Storage to Azure Monitor. In the case of no activity on your storage account, there will be no data on transaction metrics in the period. All transaction metrics are available at both account and Blob storage service level. The time grain defines the time interval that metric values are presented. The supported time grains for all transaction metrics are PT1H and PT1M.
+See [Monitor Azure Blob Storage](monitor-blob-storage.md) for details on the data you can collect for Azure Blob Storage and how to use it.
+<!-- ## Metrics. Required section. -->
+<a name="metrics-dimensions"></a>
-<a id="metrics-dimensions"></a>
+### Supported metrics for Microsoft.Storage/storageAccounts
+The following table lists the metrics available for the Microsoft.Storage/storageAccounts resource type.
-## Metrics dimensions
+### Supported metrics for Microsoft.Storage/storageAccounts/blobServices
+The following table lists the metrics available for the Microsoft.Storage/storageAccounts/blobServices resource type.
-Azure Storage supports following dimensions for metrics in Azure Monitor.
+<!-- ## Metric dimensions. Required section. -->
### Dimensions available to all storage services
Azure Storage supports following dimensions for metrics in Azure Monitor.
For the metrics supporting dimensions, you need to specify the dimension value to see the corresponding metrics values. For example, if you look at **Transactions** value for successful responses, you need to filter the **ResponseType** dimension with **Success**. If you look at **BlobCount** value for Block Blob, you need to filter the **BlobType** dimension with **BlockBlob**.
-<a id="resource-logs-preview"></a>
+<!-- ## Resource logs. Required section. -->
+<a name="resource-logs-preview"></a>
-## Resource logs
+### Supported resource logs for Microsoft.Storage/storageAccounts/blobServices
-The following table lists the properties for Azure Storage resource logs when they're collected in Azure Monitor Logs or Azure Storage. The properties describe the operation, the service, and the type of authorization that was used to perform the operation.
+<!-- ## Azure Monitor Logs tables. Required section. -->
+
+- [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity)
+- [AzureMetrics](/azure/azure-monitor/reference/tables/azuremetrics)
+- [StorageBlobLogs](/azure/azure-monitor/reference/tables/storagebloblogs)
+
+The following sections describe the properties for Azure Storage resource logs when they're collected in Azure Monitor Logs or Azure Storage. The properties describe the operation, the service, and the type of authorization that was used to perform the operation.
### Fields that describe the operation
The following table lists the properties for Azure Storage resource logs when th
"smbCommandMajor" : "0x6", "smbCommandMinor" : "DirectoryCloseAndDelete" }- } ``` [!INCLUDE [Account level capacity metrics](../../../includes/azure-storage-logs-properties-service.md)]
-## See also
+<!-- ## Activity log. Required section. -->
+- [Microsoft.Storage resource provider operations](/azure/role-based-access-control/resource-provider-operations#microsoftstorage)
+
+<!-- ## Other schemas. Optional section. Please keep heading in this order. If your service uses other schemas, add the following include and information.
+<!-- List other schemas and their usage here. These can be resource logs, alerts, event hub formats, etc. depending on what you think is important. You can put JSON messages, API responses not listed in the REST API docs, and other similar types of info here. -->
+
+## Related content
-- See [Monitoring Azure Storage](monitor-blob-storage.md) for a description of monitoring Azure Storage.-- See [Monitoring Azure resources with Azure Monitor](../../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
+- See [Monitor Azure Blob Storage](monitor-blob-storage.md) for a description of monitoring Azure Blob Storage.
+- See [Monitor Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
storage Monitor Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/monitor-blob-storage.md
Title: Monitoring Azure Blob Storage-
-recommendations: false
-description: Learn how to monitor the performance and availability of Azure Blob Storage. Monitor Azure Blob Storage data, learn about configuration, and analyze metric and log data.
---
+ Title: Monitor Azure Blob Storage
+description: Start here to learn how to monitor Azure Blob Storage.
Last updated : 02/07/2024+ Previously updated : 08/08/2023+ -+
-# Monitoring Azure Blob Storage
-
-When you have critical applications and business processes that rely on Azure resources, you want to monitor those resources for their availability, performance, and operation. This article describes the monitoring data that's generated by Azure Blob Storage and how you can use the features of Azure Monitor to analyze alerts on this data.
-
-## Monitoring overview page in Azure portal
-
-The **Overview** page in the Azure portal for each Blob storage resource includes a brief view of the resource usage, such as requests and hourly billing. This information is useful, but only a small amount of the monitoring data is available. Some of this data is collected automatically and is available for analysis as soon as you create the resource. You can enable additional types of data collection with some configuration.
--
-## What is Azure Monitor?
-
-Azure Blob Storage creates monitoring data by using [Azure Monitor](../../azure-monitor/overview.md), which is a full stack monitoring service in Azure. Azure Monitor provides a complete set of features to monitor your Azure resources and resources in other clouds and on-premises.
-
-Start with the article [Monitoring Azure resources with Azure Monitor](../../azure-monitor/essentials/monitor-azure-resource.md) which describes the following:
--- What is Azure Monitor?-- Costs associated with monitoring-- Monitoring data collected in Azure-- Configuring data collection-- Standard tools in Azure for analyzing and alerting on monitoring data-
-The following sections build on this article by describing the specific data gathered from Azure Storage. Examples show how to configure data collection and analyze this data with Azure tools.
+<!--
+IMPORTANT
+To make this template easier to use, first:
+1. Search and replace Azure Blob Storage with the official name of your service.
+2. Search and replace blob-storage with the service name to use in GitHub filenames.-->
+
+<!-- VERSION 3.0 2024_01_07
+For background about this template, see https://review.learn.microsoft.com/en-us/help/contribute/contribute-monitoring?branch=main -->
+
+<!-- Most services can use the following sections unchanged. The sections use #included text you don't have to maintain, which changes when Azure Monitor functionality changes. Add info into the designated service-specific places if necessary. Remove #includes or template content that aren't relevant to your service.
+At a minimum your service should have the following two articles:
+1. The primary monitoring article (based on this template)
+ - Title: "Monitor Azure Blob Storage"
+ - TOC Title: "Monitor"
+ - Filename: "monitor-blob-storage.md"
+2. A reference article that lists all the metrics and logs for your service (based on the template data-reference-template.md).
+ - Title: "Azure Blob Storage monitoring data reference"
+ - TOC Title: "Monitoring data reference"
+ - Filename: "monitor-blob-storage-reference.md".
+-->
+
+# Monitor Azure Blob Storage
+
+<!-- Intro -->
+
+>[!IMPORTANT]
+>Metrics and logs in Azure Monitor support only Azure Resource Manager storage accounts. Azure Monitor doesn't support classic storage accounts. If you want to use metrics or logs on a classic storage account, you need to migrate to an Azure Resource Manager storage account. For more information, see [Migrate to Azure Resource Manager](/azure/virtual-machines/migration-classic-resource-manager-overview).
+
+<!-- ## Insights. Optional section. If your service has insights, add the following include and information. -->
+<!-- Insights service-specific information. Add brief information about what your Azure Monitor insights provide here. You can refer to another article that gives details or add a screenshot. -->
+Azure Storage insights offer a unified view of storage performance, capacity, and availability. See [Monitor storage with Azure Monitor Storage insights](../common/storage-insights-overview.md).
+
+<!-- ## Resource types. Required section. -->
+
+<!-- ## Data storage. Required section. Optionally, add service-specific information about storing your monitoring data after the include. -->
+<!-- Add service-specific information about storing monitoring data here, if applicable. For example, SQL Server stores other monitoring data in its own databases. -->
+
+<!-- METRICS SECTION START -->
+
+<!-- ## Platform metrics. Required section.
+ - If your service doesn't collect platform metrics, use the following include: [!INCLUDE [horz-monitor-no-platform-metrics](~/articles/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-no-platform-metrics.md)]
+ - If your service collects platform metrics, add the following include, statement, and service-specific information as appropriate. -->
+For a list of available metrics for Azure Blob Storage, see [Azure Blob Storage monitoring data reference](monitor-blob-storage-reference.md#metrics).
+<!-- Platform metrics service-specific information. Add service-specific information about your platform metrics here.-->
+
+<!-- ## Prometheus/container metrics. Optional. If your service uses containers/Prometheus metrics, add the following include and information.
+<!-- Add service-specific information about your container/Prometheus metrics here.-->
+
+<!-- ## System metrics. Optional. If your service uses system-imported metrics, add the following include and information.
+<!-- Add service-specific information about your system-imported metrics here.-->
+
+<!-- ## Custom metrics. Optional. If your service uses custom imported metrics, add the following include and information.
+<!-- Custom imported service-specific information. Add service-specific information about your custom imported metrics here.-->
+
+<!-- ## Non-Azure Monitor metrics. Optional. If your service uses any non-Azure Monitor based metrics, add the following include and information.
+<!-- Non-Monitor metrics service-specific information. Add service-specific information about your non-Azure Monitor metrics here.-->
+
+<!-- METRICS SECTION END -->
+
+<!-- LOGS SECTION START -->
+
+<a name="collection-and-routing"></a>
+<!-- ## Resource logs. Required section.
+ - If your service doesn't collect resource logs, use the following include [!INCLUDE [horz-monitor-no-resource-logs](~/articles/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-no-resource-logs.md)]
+ - If your service collects resource logs, add the following include, statement, and service-specific information as appropriate. -->
+For the available resource log categories, their associated Log Analytics tables, and the logs schemas for Azure Blob Storage, see [Azure Blob Storage monitoring data reference](monitor-blob-storage-reference.md#resource-logs).
+<!-- Resource logs service-specific information. Add service-specific information about your resource logs here.
+NOTE: Azure Monitor already has general information on how to configure and route resource logs. See https://learn.microsoft.com/azure/azure-monitor/platform/diagnostic-settings. Ideally, don't repeat that information here. You can provide a single screenshot of the diagnostic settings portal experience if you want. -->
-## Monitoring data
-
-Azure Blob Storage collects the same kinds of monitoring data as other Azure resources, which are described in [Monitoring data from Azure resources](../../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data).
-
-See [Azure Blob Storage monitoring data reference](monitor-blob-storage-reference.md) for detailed information on the metrics and logs metrics created by Azure Blob Storage.
-
-Metrics and logs in Azure Monitor support only Azure Resource Manager storage accounts. Azure Monitor doesn't support classic storage accounts. If you want to use metrics or logs on a classic storage account, you need to migrate to an Azure Resource Manager storage account. For more information, see [Migrate to Azure Resource Manager](../../virtual-machines/migration-classic-resource-manager-overview.md).
-
-You can continue using classic metrics and logs if you want to. In fact, classic metrics and logs are available in parallel with metrics and logs in Azure Monitor. The support remains in place until Azure Storage ends the service on legacy metrics and logs.
+> [!NOTE]
+> Data Lake Storage Gen2 doesn't appear as a storage type because Data Lake Storage Gen2 is a set of capabilities available to Blob storage.
-## Collection and routing
+#### Destination limitations
-Platform metrics and the Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
+For general destination limitations, see [Destination limitations](/azure/azure-monitor/essentials/diagnostic-settings#destination-limitations). The following limitations apply only to monitoring Azure Storage accounts.
-Resource Logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations.
+- You can't send logs to the same storage account that you're monitoring with this setting.
+ This would lead to recursive logs in which a log entry describes the writing of another log entry. You must create an account or use another existing account to store log information.
-To collect resource logs, you must create a diagnostic setting. When you create the setting, choose **blob** as the type of storage that you want to enable logs for. Then, specify one of the following categories of operations for which you want to collect logs.
+- You can't set a retention policy.
-| Category | Description |
-|:|:|
-| StorageRead | Read operations on objects. |
-| StorageWrite | Write operations on objects. |
-| StorageDelete | Delete operations on objects. |
+ If you archive logs to a storage account, you can manage the retention policy of a log container by defining a lifecycle management policy. To learn how, see [Optimize costs by automating Azure Blob Storage access tiers](lifecycle-management-overview.md).
-The **audit** resource log category group allows you to collect the baseline of resource logs that Microsoft deems necessary for auditing your resource. What's collected is dynamic, and Microsoft may change it over time as new resource log categories become available. If you choose the **audit** category group, you can't specify any other resource categories, because the system will decide which logs to collect. For more information, see [Diagnostic settings in Azure Monitor: Resource logs](../../azure-monitor/essentials/diagnostic-settings.md#resource-logs).
+ If you send logs to Log Analytics, you can manage the data retention period of Log Analytics at the workspace level or even specify different retention settings by data type. To learn how, see [Change the data retention period](/azure/azure-monitor/logs/data-retention-archive).
-> [!NOTE]
-> Data Lake Storage Gen2 doesn't appear as a storage type. That's because Data Lake Storage Gen2 is a set of capabilities available to Blob storage.
+<!-- ## Activity log. Required section. Optionally, add service-specific information about your activity log after the include. -->
+<!-- Activity log service-specific information. Add service-specific information about your activity log here. -->
-See [Create diagnostic setting to collect platform logs and metrics in Azure](../../azure-monitor/platform/diagnostic-settings.md) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, and PowerShell. You can also find links to information about how to create a diagnostic setting by using an Azure Resource Manager template or an Azure Policy definition.
+<!-- ## Imported logs. Optional section. If your service uses imported logs, add the following include and information.
+<!-- Add service-specific information about your imported logs here. -->
-## Destination limitations
+<!-- ## Other logs. Optional section.
+If your service has other logs that aren't resource logs or in the activity log, add information that states what they are and what they cover here. You can describe how to route them in a later section. -->
-For general destination limitations, see [Destination limitations](../../azure-monitor/essentials/diagnostic-settings.md#destination-limitations). The following limitations apply only to monitoring Azure Storage accounts.
+<!-- LOGS SECTION END -->
-- You can't send logs to the same storage account that you're monitoring with this setting.
+<!-- ANALYSIS SECTION START -->
- This would lead to recursive logs in which a log entry describes the writing of another log entry. You must create an account or use another existing account to store log information.
+<!-- ## Analyze data. Required section. -->
+<a name="analyzing-logs"></a>
-- You can't set a retention policy.
+<!-- ### External tools. Required section. -->
- If you archive logs to a storage account, you can manage the retention policy of a log container by defining a lifecycle management policy. To learn how, see [Optimize costs by automating Azure Blob Storage access tiers](lifecycle-management-overview.md).
+### Analyze metrics for Azure Blob Storage
- If you send logs to Log Analytics, you can manage the data retention period of Log Analytics at the workspace level or even specify different retention settings by data type. To learn how, see [Change the data retention period](../../azure-monitor/logs/data-retention-archive.md).
+Metrics for Azure Blob Storage are in these namespaces:
-## Analyzing metrics
+- Microsoft.Storage/storageAccounts
+- Microsoft.Storage/storageAccounts/blobServices
-For a list of all Azure Monitor support metrics, which includes Azure Blob Storage, see [Azure Monitor supported metrics](../../azure-monitor/essentials/metrics-supported.md).
+For a complete list of the dimensions that Azure Storage supports, see [Metrics dimensions](monitor-blob-storage-reference.md#metrics-dimensions).
### [Azure portal](#tab/azure-portal)
-You can analyze metrics for Azure Storage with metrics from other Azure services by using Metrics Explorer. Open Metrics Explorer by choosing **Metrics** from the **Azure Monitor** menu. For details on using this tool, see [Analyze metrics with Azure Monitor metrics explorer](../../azure-monitor/essentials/analyze-metrics.md).
+You can analyze metrics for Azure Storage with metrics from other Azure services by using Metrics Explorer. Open Metrics Explorer by choosing **Metrics** from the **Azure Monitor** menu. For details on using this tool, see [Analyze metrics with Azure Monitor metrics explorer](/azure/azure-monitor/essentials/analyze-metrics).
This example shows how to view **Transactions** at the account level.
For metrics that support dimensions, you can filter the metric with the desired
![Screenshot of accessing metrics with dimension in the Azure portal](./media/monitor-blob-storage/access-metrics-portal-with-dimension.png)
-For a complete list of the dimensions that Azure Storage supports, see [Metrics dimensions](monitor-blob-storage-reference.md#metrics-dimensions).
-
-Metrics for Azure Blob Storage are in these namespaces:
--- Microsoft.Storage/storageAccounts-- Microsoft.Storage/storageAccounts/blobServices- ### [PowerShell](#tab/azure-powershell) #### List the metric definition
In this example, replace the `<resource-ID>` placeholder with the resource ID of
Get-AzMetricDefinition -ResourceId $resourceId ```
-#### Reading metric values
+#### Read metric values
You can read account-level metric values of your storage account or the Blob storage service. Use the [Get-AzMetric](/powershell/module/Az.Monitor/Get-AzMetric) cmdlet.
You can read account-level metric values of your storage account or the Blob sto
Get-AzMetric -ResourceId $resourceId -MetricName "UsedCapacity" -TimeGrain 01:00:00 ```
-#### Reading metric values with dimensions
+#### Read metric values with dimensions
When a metric supports dimensions, you can read metric values and filter them by using dimension values. Use the [Get-AzMetric](/powershell/module/Az.Monitor/Get-AzMetric) cmdlet.
Get-AzMetric -ResourceId $resourceId -MetricName Transactions -TimeGrain 01:00:0
### [Azure CLI](#tab/azure-cli)
-For a list of all Azure Monitor support metrics, which includes Azure Blob Storage, see [Azure Monitor supported metrics](../../azure-monitor/essentials/metrics-supported.md).
- #### List the account-level metric definition You can list the metric definition of your storage account or the Blob storage service. Use the [az monitor metrics list-definitions](/cli/azure/monitor/metrics#az-monitor-metrics-list-definitions) command.
You can read the metric values of your storage account or the Blob storage servi
az monitor metrics list --resource <resource-ID> --metric "UsedCapacity" --interval PT1H ```
-#### Reading metric values with dimensions
+#### Read metric values with dimensions
When a metric supports dimensions, you can read metric values and filter them by using dimension values. Use the [az monitor metrics list](/cli/azure/monitor/metrics#az-monitor-metrics-list) command.
The following example shows how to list a metric definition at the account level
```
-#### Reading account-level metric values
+#### Read account-level metric values
The following example shows how to read `UsedCapacity` data at the account level:
The following example shows how to read `UsedCapacity` data at the account level
```
-#### Reading multidimensional metric values
+#### Read multidimensional metric values
For multidimensional metrics, you need to define metadata filters if you want to read metric data on specific dimension values.
The following example shows how to read metric data on the metric supporting mul
-## Analyzing logs
+### Analyze logs for Azure Blob Storage
-You can access resource logs either as a blob in a storage account, as event data, or through Log Analytics queries. For information about how to find those logs, see [Azure resource logs](../../azure-monitor/essentials/resource-logs.md).
-
-All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../../azure-monitor/essentials/resource-logs-schema.md). The schema for Azure Blob Storage resource logs is found in [Azure Blob Storage monitoring data reference](monitor-blob-storage-reference.md).
+You can access resource logs either as a blob in a storage account, as event data, or through Log Analytics queries. For information about how to find those logs, see [Azure resource logs](/azure/azure-monitor/essentials/resource-logs).
To get the list of SMB and REST operations that are logged, see [Storage logged operations and status messages](/rest/api/storageservices/storage-analytics-logged-operations-and-status-messages). Log entries are created only if there are requests made against the service endpoint. For example, if a storage account has activity in its file endpoint but not in its table or queue endpoints, only logs that pertain to the Azure Blob Storage service are created. Azure Storage logs contain detailed information about successful and failed requests to a storage service. This information can be used to monitor individual requests and to diagnose issues with a storage service. Requests are logged on a best-effort basis.
-The [Activity log](../../azure-monitor/essentials/activity-log.md) is a type of platform log located in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
- When you view a storage account in the Azure portal, the operations called by the portal are also logged. For this reason, you may see operations logged in a storage account even though you haven't written any data to the account.
-### Log authenticated requests
+#### Log authenticated requests
The following types of authenticated requests are logged:
When you view a storage account in the Azure portal, the operations called by th
Requests made by the Blob storage service itself, such as log creation or deletion, aren't logged. For a full list of the logged data, see [Storage logged operations and status messages](/rest/api/storageservices/storage-analytics-logged-operations-and-status-messages) and [Storage log format](monitor-blob-storage-reference.md). > [!NOTE]
-> Azure Monitor currently filters out logs that describe activity in the "insights-logs-" container. You can track activities in that container by using storage analytics (classic logs).
+> Azure Monitor currently filters out logs that describe activity in the "insights-logs-" container.
-### Log anonymous requests
+#### Log anonymous requests
The following types of anonymous requests are logged:
Requests made by the Blob storage service itself, such as log creation or deleti
All other failed anonymous requests aren't logged. For a full list of the logged data, see [Storage logged operations and status messages](/rest/api/storageservices/storage-analytics-logged-operations-and-status-messages) and [Storage log format](monitor-blob-storage-reference.md).
-### Sample Kusto queries
-
-If you send logs to Log Analytics, you can access those logs by using Azure Monitor log queries. For more information, see [Log Analytics tutorial](../../azure-monitor/logs/log-analytics-tutorial.md).
-
+<!-- ### Sample Kusto queries. Required section. If you have sample Kusto queries for your service, add them after the include. -->
+<!-- Add sample Kusto queries for your service here. -->
Here are some queries that you can enter in the **Log search** bar to help you monitor your Blob storage. These queries work with the [new language](../../azure-monitor/logs/log-query-overview.md).
-> [!IMPORTANT]
-> When you select **Logs** from the storage account resource group menu, Log Analytics is opened with the query scope set to the current resource group. This means that log queries will only include data from that resource group. If you want to run a query that includes data from other resources or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](../../azure-monitor/logs/scope.md) for details.
-
-Use these queries to help you monitor your Azure Storage accounts:
- - To list the 10 most common errors over the last three days. ```kusto
Use these queries to help you monitor your Azure Storage accounts:
| render piechart ```
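You can also run this kind of query outside the portal. As a rough sketch, assuming the `log-analytics` CLI extension is installed and using a placeholder workspace GUID, the 10-most-common-errors query could be issued from the Azure CLI:
```azurecli
# Assumes: az extension add --name log-analytics
az monitor log-analytics query \
    --workspace <log-analytics-workspace-GUID> \
    --analytics-query "StorageBlobLogs | where StatusText !contains 'Success' | summarize count() by StatusText | top 10 by count_ desc" \
    --timespan P3D
```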
-## Alerts
+<!-- ### Azure Blob Storage service-specific analytics. Optional section.
+Add short information or links to specific articles that outline how to analyze data for your service. -->
-Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](../../azure-monitor/alerts/alerts-metric-overview.md), [logs](../../azure-monitor/alerts/alerts-unified-log.md), and the [activity log](../../azure-monitor/alerts/activity-log-alerts.md).
+<!-- ANALYSIS SECTION END -->
-The following table lists some example scenarios to monitor and the proper metric to use for the alert:
+<!-- ALERTS SECTION START -->
-| Scenario | Metric to use for alert |
-|-|-|
-| Blob Storage service is throttled. | Metric: Transactions<br>Dimension name: Response type |
-| Blob Storage requests are successful 99% of the time. | Metric: Availability<br>Dimension names: Geo type, API name, Authentication |
-| Blob Storage egress has exceeded 500 GiB in one day. | Metric: Egress<br>Dimension names: Geo type, API name, Authentication |
+<!-- ## Alerts. Required section. -->
-## Feature support
+<!-- ### Azure Blob Storage alert rules. Required section.
+**MUST HAVE** service-specific alert rules. Include useful alerts on metrics, logs, log conditions, or activity log.
+Fill in the following table with metric and log alerts that would be valuable for your service. Change the format as necessary for readability. You can instead link to an article that discusses your common alerts in detail.
+Ask your PMs if you don't know. This information is the BIGGEST request we get in Azure Monitor, so don't avoid it long term. People don't know what to monitor for best results. Be prescriptive. -->
+
+### Azure Blob Storage alert rules
+The following table lists common and recommended alert rules for Azure Blob Storage and the proper metric to use for the alert:
+
+| Alert type | Condition | Metric and dimensions |
+|-|-|-|
+| Metric | Blob Storage service is throttled. | Transactions<br>Dimension name: Response type |
+| Metric | Blob Storage requests are successful 99% of the time. | Availability<br>Dimension names: Geo type, API name, Authentication |
+| Metric | Blob Storage egress has exceeded 500 GiB in one day. | Egress<br>Dimension names: Geo type, API name, Authentication |
+
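As an illustrative sketch of the first rule, a metric alert on throttled transactions could be created with the Azure CLI; the alert name, time windows, and the `ServerBusyError` response type shown here are assumptions:
```azurecli
# Illustrative: alert when Blob Storage transactions return throttling responses
az monitor metrics alert create \
    --name "blob-throttling-alert" \
    --resource-group <resource-group> \
    --scopes <storage-account-resource-ID> \
    --condition "total Transactions > 0 where ResponseType includes ServerBusyError" \
    --window-size 5m \
    --evaluation-frequency 5m \
    --description "Blob Storage service is being throttled"
```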
+<!-- ### Advisor recommendations -->
+
+<!-- ALERTS SECTION END -->
+
+<!-- Note from v-thepet: I don't think the following section is relevant but maybe it's required in Blob docs?
+## Feature support
[!INCLUDE [Blob Storage feature support in Azure Storage accounts](../../../includes/azure-storage-feature-support.md)]
+-->
+## Related content
+<!-- You can change the wording and add more links if useful. -->
+
+Other Blob Storage monitoring content:
+- [Azure Blob Storage monitoring data reference](monitor-blob-storage-reference.md). A reference of the logs and metrics created by Azure Blob Storage.
+- [Best practices for monitoring Azure Blob Storage](blob-storage-monitoring-scenarios.md). Guidance for common monitoring and troubleshooting scenarios.
+- [Metrics and logs FAQ](storage-blob-faq.yml#metrics-and-logs).
+
+Overall Azure Storage monitoring content:
+- [Monitor storage with Azure Monitor Storage insights](../common/storage-insights-overview.md). Get a unified view of storage performance, capacity, and availability.
+- [Transition to metrics in Azure Monitor](../common/storage-metrics-migration.md). Move from Storage Analytics metrics to metrics in Azure Monitor.
+- [Troubleshoot performance issues](../common/troubleshoot-storage-performance.md?toc=/azure/storage/blobs/toc.json). See common performance issues and guidance about how to troubleshoot them.
+- [Troubleshoot availability issues](../common/troubleshoot-storage-availability.md?toc=/azure/storage/blobs/toc.json). See common availability issues and guidance about how to troubleshoot them.
+- [Troubleshoot client application errors](../common/troubleshoot-storage-client-application-errors.md?toc=/azure/storage/blobs/toc.json). See common issues with connecting clients and how to troubleshoot them.
+
+Azure Monitor content:
+- [Monitor Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource). General details on monitoring Azure resources.
+- [Azure Monitor Metrics overview](/azure/azure-monitor/essentials/data-platform-metrics). The basics of metrics and metric dimensions.
+- [Azure Monitor Logs overview](/azure/azure-monitor/logs/data-platform-logs). The basics of logs and how to collect and analyze them.
+- [Analyze metrics with Azure Monitor metrics explorer](/azure/azure-monitor/essentials/analyze-metrics). A tour of Metrics Explorer.
+- [Overview of Log Analytics in Azure Monitor](/azure/azure-monitor/logs/log-analytics-overview). A tour of Log Analytics.
+
+Training modules:
+- [Gather metrics from your Azure Blob Storage containers](/training/modules/gather-metrics-blob-storage/). Create charts that show metrics, with step-by-step guidance.
+- [Monitor, diagnose, and troubleshoot your Azure Storage](/training/modules/monitor-diagnose-and-troubleshoot-azure-storage/). Troubleshoot storage account issues, with step-by-step guidance.
-## Frequently asked questions (FAQ)
-
-See [Metrics and Logs FAQ](storage-blob-faq.yml#metrics-and-logs).
-
-## Next steps
-
-Get started with any of these guides.
-
-| Guide | Description |
-|||
-| [Gather metrics from your Azure Blob Storage containers](/training/modules/gather-metrics-blob-storage/) | Create charts that show metrics (Contains step-by-step guidance). |
-| [Monitor, diagnose, and troubleshoot your Azure Storage](/training/modules/monitor-diagnose-and-troubleshoot-azure-storage/) | Troubleshoot storage account issues (contains step-by-step guidance). |
-| [Monitor storage with Azure Monitor Storage insights](../common/storage-insights-overview.md) | A unified view of storage performance, capacity, and availability |
-| [Best practices for monitoring Azure Blob Storage](blob-storage-monitoring-scenarios.md) | Guidance for common monitoring and troubleshooting scenarios. |
-| [Analyze metrics with Azure Monitor metrics explorer](../../azure-monitor/essentials/analyze-metrics.md) | A tour of Metrics Explorer.
-| [Overview of Log Analytics in Azure Monitor](../../azure-monitor/logs/log-analytics-overview.md) | A tour of Log Analytics. |
-| [Azure Monitor Metrics overview](../../azure-monitor/essentials/data-platform-metrics.md) | The basics of metrics and metric dimensions |
-| [Azure Monitor Logs overview](../../azure-monitor/logs/data-platform-logs.md)| The basics of logs and how to collect and analyze them |
-| [Transition to metrics in Azure Monitor](../common/storage-metrics-migration.md) | Move from Storage Analytics metrics to metrics in Azure Monitor. |
-| [Azure Blob Storage monitoring data reference](monitor-blob-storage-reference.md) | A reference of the logs and metrics created by Azure Blob Storage |
-| [Troubleshoot performance issues](../common/troubleshoot-storage-performance.md?toc=/azure/storage/blobs/toc.json)| Common performance issues and guidance about how to troubleshoot them. |
-| [Troubleshoot availability issues](../common/troubleshoot-storage-availability.md?toc=/azure/storage/blobs/toc.json)| Common availability issues and guidance about how to troubleshoot them.|
-| [Troubleshoot client application errors](../common/troubleshoot-storage-client-application-errors.md?toc=/azure/storage/blobs/toc.json)| Common issues with connecting clients and how to troubleshoot them.|
storage Storage Quickstart Blobs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-cli.md
az login
## Create a resource group
-Create an Azure resource group with the [az group create](/cli/azure/group#az-group-create()) command. A resource group is a logical container into which Azure resources are deployed and managed.
+Create an Azure resource group with the [az group create](/cli/azure/group#az-group-create) command. A resource group is a logical container into which Azure resources are deployed and managed.
Remember to replace placeholder values in angle brackets with your own values:
az group create \
## Create a storage account
-Create a general-purpose storage account with the [az storage account create](/cli/azure/storage/account#az-storage-account-create()) command. The general-purpose storage account can be used for all four
+Create a general-purpose storage account with the [az storage account create](/cli/azure/storage/account#az-storage-account-create) command. The general-purpose storage account can be used for all four
Remember to replace placeholder values in angle brackets with your own values:
az storage account create \
## Create a container
-Blobs are always uploaded into a container. You can organize groups of blobs in containers similar to the way you organize your files on your computer in folders. Create a container for storing blobs with the [az storage container create](/cli/azure/storage/container#az-storage-container-create()) command.
+Blobs are always uploaded into a container. You can organize groups of blobs in containers similar to the way you organize your files on your computer in folders. Create a container for storing blobs with the [az storage container create](/cli/azure/storage/container#az-storage-container-create) command.
The following example uses your Microsoft Entra account to authorize the operation to create the container. Before you create the container, assign the [Storage Blob Data Contributor](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor) role to yourself. Even if you are the account owner, you need explicit permissions to perform data operations against the storage account. For more information about assigning Azure roles, see [Assign an Azure role for access to blob data](assign-azure-role-data-access.md).
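With that role in place, the container creation command might look like this sketch; the account and container names are placeholders, and `--auth-mode login` tells the CLI to authorize with your Microsoft Entra credentials:
```azurecli
az storage container create \
    --account-name <storage-account> \
    --name <container> \
    --auth-mode login
```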
vi helloworld
When the file opens, press **insert**. Type *Hello world*, then press **Esc**. Next, type *:x*, then press **Enter**.
-In this example, you upload a blob to the container you created in the last step using the [az storage blob upload](/cli/azure/storage/blob#az-storage-blob-upload()) command. It's not necessary to specify a file path since the file was created at the root directory. Remember to replace placeholder values in angle brackets with your own values:
+In this example, you upload a blob to the container you created in the last step using the [az storage blob upload](/cli/azure/storage/blob#az-storage-blob-upload) command. It's not necessary to specify a file path since the file was created at the root directory. Remember to replace placeholder values in angle brackets with your own values:
```azurecli az storage blob upload \
This operation creates the blob if it doesn't already exist, and overwrites it i
When you upload a blob by using the Azure CLI, the CLI issues the corresponding [REST API calls](/rest/api/storageservices/blob-service-rest-api) over HTTP and HTTPS.
-To upload multiple files at the same time, you can use the [az storage blob upload-batch](/cli/azure/storage/blob#az-storage-blob-upload-batch()) command.
+To upload multiple files at the same time, you can use the [az storage blob upload-batch](/cli/azure/storage/blob#az-storage-blob-upload-batch) command.
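A minimal sketch of that command follows; the account name, destination container, and source directory are placeholders, and `--auth-mode login` assumes you continue to use Microsoft Entra authorization:
```azurecli
az storage blob upload-batch \
    --account-name <storage-account> \
    --destination <container> \
    --source <local-directory> \
    --auth-mode login
```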
## List the blobs in a container
-List the blobs in the container with the [az storage blob list](/cli/azure/storage/blob#az-storage-blob-list()) command. Remember to replace placeholder values in angle brackets with your own values:
+List the blobs in the container with the [az storage blob list](/cli/azure/storage/blob#az-storage-blob-list) command. Remember to replace placeholder values in angle brackets with your own values:
```azurecli az storage blob list \
az storage blob list \
## Download a blob
-Use the [az storage blob download](/cli/azure/storage/blob#az-storage-blob-download()) command to download the blob you uploaded earlier. Remember to replace placeholder values in angle brackets with your own values:
+Use the [az storage blob download](/cli/azure/storage/blob#az-storage-blob-download) command to download the blob you uploaded earlier. Remember to replace placeholder values in angle brackets with your own values:
```azurecli az storage blob download \
azcopy copy 'C:\myDirectory\myFile.txt' 'https://mystorageaccount.blob.core.wind
## Clean up resources
-If you want to delete the resources you created as part of this quickstart, including the storage account, delete the resource group by using the [az group delete](/cli/azure/group#az-group-delete()) command. Remember to replace placeholder values in angle brackets with your own values:
+If you want to delete the resources you created as part of this quickstart, including the storage account, delete the resource group by using the [az group delete](/cli/azure/group#az-group-delete) command. Remember to replace placeholder values in angle brackets with your own values:
```azurecli az group delete \
storage Install Container Storage Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/install-container-storage-aks.md
Before you create your cluster, you should understand which back-end storage opt
### Data storage options
-* **[Azure Elastic SAN](../elastic-san/elastic-san-introduction.md)**: Azure Elastic SAN preview is a good fit for general purpose databases, streaming and messaging services, CD/CI environments, and other tier 1/tier 2 workloads. Storage is provisioned on demand per created volume and volume snapshot. Multiple clusters can access a single SAN concurrently, however persistent volumes can only be attached by one consumer at a time.
+* **[Azure Elastic SAN](../elastic-san/elastic-san-introduction.md)**: Azure Elastic SAN is a good fit for general-purpose databases, streaming and messaging services, CI/CD environments, and other tier 1/tier 2 workloads. Storage is provisioned on demand per created volume and volume snapshot. Multiple clusters can access a single SAN concurrently; however, persistent volumes can be attached by only one consumer at a time.
* **[Azure Disks](../../virtual-machines/managed-disks-overview.md)**: Azure Disks are a good fit for databases such as MySQL, MongoDB, and PostgreSQL. Storage is provisioned per target container storage pool size and maximum volume size.
storage Monitor Queue Storage Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/monitor-queue-storage-reference.md
Title: Azure Queue Storage monitoring data reference
-description: Log and metrics reference for monitoring data from Azure Queue Storage.
--- Previously updated : 04/20/2021
+ Title: Monitoring data reference for Azure Queue Storage
+description: This article contains important reference material you need when you monitor Azure Queue Storage.
Last updated : 02/12/2024+ -++ # Azure Queue Storage monitoring data reference
-See [Monitoring Azure Storage](monitor-queue-storage.md) for details on collecting and analyzing monitoring data for Azure Storage.
-
-## Metrics
-
-The following tables list the platform metrics collected for Azure Storage.
-
-### Capacity metrics
-
-Capacity metrics values are refreshed daily (up to 24 hours). The time grain defines the time interval for which metrics values are presented. The supported time grain for all capacity metrics is one hour (PT1H).
-
-Azure Storage provides the following capacity metrics in Azure Monitor.
-
-#### Account-level capacity metrics
+<!-- Intro -->
+See [Monitor Azure Queue Storage](monitor-queue-storage.md) for details on the data you can collect for Azure Queue Storage and how to use it.
-#### Queue Storage metrics
+<!-- ## Metrics. Required section. -->
-This table shows [Queue Storage metrics](../../azure-monitor/essentials/metrics-supported.md#microsoftstoragestorageaccountsqueueservices).
+### Supported metrics for Microsoft.Storage/storageAccounts
+The following table lists the metrics available for the Microsoft.Storage/storageAccounts resource type.
-| Metric | Description |
-| - | -- |
-| **QueueCapacity** | The amount of Queue Storage used by the storage account. <br><br> Unit: `Bytes` <br> Aggregation type: `Average` <br> Value example: `1024` |
-| **QueueCount** | The number of queues in the storage account. <br><br> Unit: `Count` <br> Aggregation type: `Average` <br> Value example: `1024` |
-| **QueueMessageCount** | The number of unexpired queue messages in the storage account. <br><br> Unit: `Count` <br> Aggregation type: `Average` <br> Value example: `1024` |
-
-### Transaction metrics
-
-Transaction metrics are emitted on every request to a storage account from Azure Storage to Azure Monitor. In the case of no activity on your storage account, there will be no data on transaction metrics in the period. All transaction metrics are available at both account and Queue Storage service level. The time grain defines the time interval that metric values are presented. The supported time grains for all transaction metrics are PT1H and PT1M.
-
+### Supported metrics for Microsoft.Storage/storageAccounts/queueServices
+The following table lists the metrics available for the Microsoft.Storage/storageAccounts/queueServices resource type.
<a id="metrics-dimensions"></a>-
-## Metrics dimensions
-
-Azure Storage supports following dimensions for metrics in Azure Monitor.
- [!INCLUDE [Metrics dimensions](../../../includes/azure-storage-account-metrics-dimensions.md)] <a id="resource-logs-preview"></a>+
+### Supported resource logs for Microsoft.Storage/storageAccounts/queueServices
+
+<!-- ## Azure Monitor Logs tables. Required section. -->
-## Resource logs
+- [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity)
+- [AzureMetrics](/azure/azure-monitor/reference/tables/azuremetrics)
+- [StorageQueueLogs](/azure/azure-monitor/reference/tables/storagequeuelogs)
-The following table lists the properties for Azure Storage resource logs when they're collected in Azure Monitor Logs or Azure Storage. The properties describe the operation, the service, and the type of authorization that was used to perform the operation.
+The following tables list the properties for Azure Storage resource logs when they're collected in Azure Monitor Logs or Azure Storage. The properties describe the operation, the service, and the type of authorization that was used to perform the operation.
### Fields that describe the operation
The following table lists the properties for Azure Storage resource logs when th
[!INCLUDE [Account level capacity metrics](../../../includes/azure-storage-logs-properties-service.md)]
-## See also
+<!-- ## Activity log. Required section. -->
+- [Microsoft.Storage resource provider operations](/azure/role-based-access-control/resource-provider-operations#microsoftstorage)
+
+## Related content
+
+- See [Monitor Azure Queue Storage](monitor-queue-storage.md) for a description of monitoring Azure Queue Storage.
+- See [Monitor Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
-- See [Monitoring Azure Queue Storage](monitor-queue-storage.md) for a description of monitoring Azure Queue Storage.-- See [Monitoring Azure resources with Azure Monitor](../../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
storage Monitor Queue Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/monitor-queue-storage.md
Title: Monitoring Azure Queue Storage
-description: Learn how to monitor the performance and availability of Azure Queue Storage. Monitor Azure Queue Storage data, learn about configuration, and analyze metric and log data.
+ Title: Monitor Azure Queue Storage
+description: Start here to learn how to monitor Azure Queue Storage.
Last updated : 02/12/2024++ - - Previously updated : 08/08/2023- ms.devlang: csharp # ms.devlang: csharp, powershell, azurecli-
-# Monitoring Azure Queue Storage
+<!--
+IMPORTANT
+To make this template easier to use, first:
+1. Search and replace Azure Queue Storage with the official name of your service.
+2. Search and replace queue-storage with the service name to use in GitHub filenames.-->
+
+<!-- VERSION 3.0 2024_01_07
+For background about this template, see https://review.learn.microsoft.com/en-us/help/contribute/contribute-monitoring?branch=main -->
-When you have critical applications and business processes that rely on Azure resources, you want to monitor those resources for their availability, performance, and operation. This article describes the monitoring data that's generated by Azure Queue Storage and how you can use the features of Azure Monitor to analyze alerts on this data.
+<!-- Most services can use the following sections unchanged. The sections use #included text you don't have to maintain, which changes when Azure Monitor functionality changes. Add info into the designated service-specific places if necessary. Remove #includes or template content that aren't relevant to your service.
+At a minimum your service should have the following two articles:
+1. The primary monitoring article (based on this template)
+ - Title: "Monitor Azure Queue Storage"
+ - TOC Title: "Monitor"
+ - Filename: "monitor-queue-storage.md"
+2. A reference article that lists all the metrics and logs for your service (based on the template data-reference-template.md).
+ - Title: "Azure Queue Storage monitoring data reference"
+ - TOC Title: "Monitoring data reference"
+ - Filename: "monitor-queue-storage-reference.md".
+-->
-## Monitor overview
+# Monitor Azure Queue Storage
-The **Overview** page in the Azure portal for each Queue Storage resource includes a brief view of the resource usage, such as requests and hourly billing. This information is useful, but only a small amount of the monitoring data is available. Some of this data is collected automatically and is available for analysis as soon as you create the resource. You can enable additional types of data collection with some configuration.
+<!-- Intro -->
-## What is Azure Monitor?
+> [!IMPORTANT]
+> Metrics and logs in Azure Monitor support only Azure Resource Manager storage accounts. Azure Monitor doesn't support classic storage accounts. If you want to use metrics or logs on a classic storage account, you need to migrate to an Azure Resource Manager storage account. For more information, see [Migrate to Azure Resource Manager](/azure/virtual-machines/migration-classic-resource-manager-overview).
+
+<!-- ## Insights. Optional section. If your service has insights, add the following include and information. -->
+<!-- Insights service-specific information. Add brief information about what your Azure Monitor insights provide here. You can refer to another article that gives details or add a screenshot. -->
+Azure Storage insights offer a unified view of storage performance, capacity, and availability. See [Monitor storage with Azure Monitor Storage insights](../common/storage-insights-overview.md).
-Azure Queue Storage creates monitoring data by using [Azure Monitor](../../azure-monitor/overview.md), which is a full stack monitoring service in Azure. Azure Monitor provides a complete set of features to monitor your Azure resources as well as resources in other clouds and on-premises.
+<!-- ## Resource types. Required section. -->
-Start with [Monitor Azure resources with Azure Monitor](../../azure-monitor/essentials/monitor-azure-resource.md) which describes the following:
+<!-- ## Data storage. Required section. Optionally, add service-specific information about storing your monitoring data after the include. -->
+<!-- Add service-specific information about storing monitoring data here, if applicable. For example, SQL Server stores other monitoring data in its own databases. -->
-- What is Azure Monitor?-- Costs associated with monitoring-- Monitoring data collected in Azure-- Configuring data collection-- Standard tools in Azure for analyzing and alerting on monitoring data
+<!-- METRICS SECTION START -->
-The following sections build on this article by describing the specific data gathered from Azure Storage. Examples show how to configure data collection and analyze this data with Azure tools.
+<!-- ## Platform metrics. Required section.
+ - If your service doesn't collect platform metrics, use the following include: [!INCLUDE [horz-monitor-no-platform-metrics](~/articles/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-no-platform-metrics.md)]
+ - If your service collects platform metrics, add the following include, statement, and service-specific information as appropriate. -->
+For a list of available metrics for Azure Queue Storage, see [Azure Queue Storage monitoring data reference](monitor-queue-storage-reference.md#metrics).
-## Monitoring data
+> [!IMPORTANT]
+> On **January 9, 2024** Storage Analytics metrics, also referred to as *classic metrics*, retired. If you used classic metrics, see [Move from Storage Analytics metrics to Azure Monitor metrics](../common/storage-metrics-migration.md) to transition to metrics in Azure Monitor.
-Azure Queue Storage collects the same kinds of monitoring data as other Azure resources, which are described in [Monitoring data from Azure resources](../../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data).
+> [!NOTE]
+> Azure Compute, not Azure Storage, supports metrics for managed disks or unmanaged disks. For more information, see [Per disk metrics for Managed and Unmanaged Disks](https://azure.microsoft.com/blog/per-disk-metrics-managed-disks/).
-See [Azure Queue Storage monitoring data reference](monitor-queue-storage-reference.md) for detailed information on the metrics and logs metrics created by Azure Queue Storage.
+<!-- Platform metrics service-specific information. Add service-specific information about your platform metrics here.-->
-Metrics and logs in Azure Monitor support only Azure Resource Manager storage accounts. Azure Monitor doesn't support classic storage accounts. If you want to use metrics or logs on a classic storage account, you need to migrate to an Azure Resource Manager storage account. See [Migrate to Azure Resource Manager](../../virtual-machines/migration-classic-resource-manager-overview.md).
+<!-- ## Prometheus/container metrics. Optional. If your service uses containers/Prometheus metrics, add the following include and information.
+<!-- Add service-specific information about your container/Prometheus metrics here.-->
-You can continue using classic metrics and logs if you want to. In fact, classic metrics and logs are available in parallel with metrics and logs in Azure Monitor. The support remains in place until Azure Storage ends the service on legacy metrics and logs.
+<!-- ## System metrics. Optional. If your service uses system-imported metrics, add the following include and information.
+<!-- Add service-specific information about your system-imported metrics here.-->
-## Collection and routing
+<!-- ## Custom metrics. Optional. If your service uses custom imported metrics, add the following include and information.
+<!-- Custom imported service-specific information. Add service-specific information about your custom imported metrics here.-->
-Platform metrics and the activity log are collected automatically, but can be routed to other locations by using a diagnostic setting.
+<!-- ## Non-Azure Monitor metrics. Optional. If your service uses any non-Azure Monitor based metrics, add the following include and information.
+<!-- Non-Monitor metrics service-specific information. Add service-specific information about your non-Azure Monitor metrics here.-->
-Resource Logs are not collected and stored until you create a diagnostic setting and route them to one or more locations.
+<!-- METRICS SECTION END -->
-To collect resource logs, you must create a diagnostic setting. When you create the setting, choose **queue** as the type of storage that you want to enable logs for. Then, specify one of the following categories of operations for which you want to collect logs.
+<!-- LOGS SECTION START -->
+
+<!-- ## Resource logs. Required section.
+ - If your service doesn't collect resource logs, use the following include [!INCLUDE [horz-monitor-no-resource-logs](~/articles/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-no-resource-logs.md)]
+ - If your service collects resource logs, add the following include, statement, and service-specific information as appropriate. -->
+For the available resource log categories, their associated Log Analytics tables, and the logs schemas for Azure Queue Storage, see [Azure Queue Storage monitoring data reference](monitor-queue-storage-reference.md#resource-logs).
+<!-- Resource logs service-specific information. Add service-specific information about your resource logs here.
+NOTE: Azure Monitor already has general information on how to configure and route resource logs. See https://learn.microsoft.com/azure/azure-monitor/platform/diagnostic-settings. Ideally, don't repeat that information here. You can provide a single screenshot of the diagnostic settings portal experience if you want. -->
+<a name="collection-and-routing"></a>
+### Azure Queue Storage diagnostic settings
+
+When you create the diagnostic setting, choose **queue** as the type of storage that you want to enable logs for. Then, specify one of the following categories of operations for which you want to collect logs.
| Category | Description | |:|:|
-| **StorageRead** | Read operations on objects. |
-| **StorageWrite** | Write operations on objects. |
-| **StorageDelete** | Delete operations on objects. |
+| StorageRead | Read operations on objects. |
+| StorageWrite | Write operations on objects. |
+| StorageDelete | Delete operations on objects. |
-The **audit** resource log category group allows you to collect the baseline of resource logs that Microsoft deems necessary for auditing your resource. What's collected is dynamic, and Microsoft may change it over time as new resource log categories become available. If you choose the **audit** category group, you can't specify any other resource categories, because the system will decide which logs to collect. For more information, see [Diagnostic settings in Azure Monitor: Resource logs](../../azure-monitor/essentials/diagnostic-settings.md#resource-logs).
+The **audit** resource log category group allows you to collect the baseline of resource logs that Microsoft deems necessary for auditing your resource. What's collected is dynamic, and Microsoft may change it over time as new resource log categories become available. If you choose the **audit** category group, you can't specify any other resource categories, because the system will decide which logs to collect. For more information, see [Diagnostic settings in Azure Monitor: Resource logs](/azure/azure-monitor/essentials/diagnostic-settings#resource-logs).
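One way to create such a diagnostic setting is with the Azure CLI, as in the following sketch; the setting name, the `/queueServices/default` resource path, and the workspace destination are illustrative assumptions:
```azurecli
# Illustrative: route queue resource logs to a Log Analytics workspace
az monitor diagnostic-settings create \
    --name QueueLogsToWorkspace \
    --resource "<storage-account-resource-ID>/queueServices/default" \
    --workspace <log-analytics-workspace-resource-ID> \
    --logs '[{"category":"StorageRead","enabled":true},{"category":"StorageWrite","enabled":true},{"category":"StorageDelete","enabled":true}]'
```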
-See [Create diagnostic setting to collect platform logs and metrics in Azure](../../azure-monitor/platform/diagnostic-settings.md) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, and PowerShell. You can also find links to information about how to create a diagnostic setting by using an Azure Resource Manager template or an Azure Policy definition.
+#### Destination limitations
-## Destination limitations
+For general destination limitations, see [Destination limitations](/azure/azure-monitor/essentials/diagnostic-settings#destination-limitations). The following limitations apply only to monitoring Azure Storage accounts.
-For general destination limitations, see [Destination limitations](../../azure-monitor/essentials/diagnostic-settings.md#destination-limitations). The following limitations apply only to monitoring Azure Storage accounts.
+- You can't send logs to the same storage account that you're monitoring with this setting. This situation would lead to recursive logs in which a log entry describes the writing of another log entry. You must create an account or use another existing account to store log information.
-- You can't send logs to the same storage account that you are monitoring with this setting.
+- You can't set a retention policy.
- This would lead to recursive logs in which a log entry describes the writing of another log entry. You must create an account or use another existing account to store log information.
+ If you archive logs to a storage account, you can manage the retention policy of a log container by defining a lifecycle management policy (see the sketch after this list). To learn how, see [Optimize costs by automatically managing the data lifecycle](../blobs/lifecycle-management-overview.md).
-- You can't set a retention policy.
+ If you send logs to Log Analytics, you can manage the data retention period of Log Analytics at the workspace level or even specify different retention settings by data type. To learn how, see [Change the data retention period](/azure/azure-monitor/logs/data-retention-archive).
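As a sketch of the lifecycle approach mentioned in the list above, the following policy deletes archived log blobs after 30 days; the container prefix, retention window, and rule name are assumptions you should adjust:
```azurecli
# Illustrative: delete resource log blobs older than 30 days
az storage account management-policy create \
    --account-name <storage-account> \
    --resource-group <resource-group> \
    --policy '{
      "rules": [{
        "enabled": true,
        "name": "delete-old-resource-logs",
        "type": "Lifecycle",
        "definition": {
          "actions": { "baseBlob": { "delete": { "daysAfterModificationGreaterThan": 30 } } },
          "filters": { "blobTypes": [ "appendBlob" ], "prefixMatch": [ "insights-logs-" ] }
        }
      }]
    }'
```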
+
+<!-- ## Activity log. Required section. Optionally, add service-specific information about your activity log after the include. -->
+<!-- Activity log service-specific information. Add service-specific information about your activity log here. -->
- If you archive logs to a storage account, you can manage the retention policy of a log container by defining a lifecycle management policy. To learn how, see [Optimize costs by automating Azure Blob Storage access tiers](../blobs/lifecycle-management-overview.md).
+<!-- ## Imported logs. Optional section. If your service uses imported logs, add the following include and information.
+<!-- Add service-specific information about your imported logs here. -->
- If you send logs to Log Analytics, you can manage the data retention period of Log Analytics at the workspace level or even specify different retention settings by data type. To learn how, see [Change the data retention period](../../azure-monitor/logs/data-retention-archive.md).
+<!-- ## Other logs. Optional section.
+If your service has other logs that aren't resource logs or in the activity log, add information that states what they are and what they cover here. You can describe how to route them in a later section. -->
-## Analyzing metrics
+<!-- LOGS SECTION END -->
-For a list of all Azure Monitor support metrics, which includes Azure Queue Storage, see [Azure Monitor supported metrics](../../azure-monitor/essentials/metrics-supported.md).
+<!-- ANALYSIS SECTION START -->
+
+<!-- ## Analyze data. Required section. -->
+
+<!-- ### External tools. Required section. -->
+
+### Analyze metrics for Azure Queue Storage
+
+Metrics for Azure Queue Storage are in these namespaces:
+
+- Microsoft.Storage/storageAccounts
+- Microsoft.Storage/storageAccounts/queueServices
+
+For a list of all Azure Monitor supported metrics, which includes Azure Queue Storage, see [Azure Monitor supported metrics](/azure/azure-monitor/essentials/metrics-supported).
### [Azure portal](#tab/azure-portal)
-You can analyze metrics for Azure Storage with metrics from other Azure services by using Azure Metrics Explorer. Open Metrics Explorer by choosing **Metrics** from the **Azure Monitor** menu. For details on using this tool, see [Analyze metrics with Azure Monitor metrics explorer](../../azure-monitor/essentials/analyze-metrics.md).
+You can analyze metrics for Azure Storage with metrics from other Azure services by using Metrics Explorer. Open Metrics Explorer by choosing **Metrics** from the **Azure Monitor** menu. For details on using this tool, see [Analyze metrics with Azure Monitor metrics explorer](/azure/azure-monitor/essentials/analyze-metrics).
This example shows how to view **Transactions** at the account level.
For metrics that support dimensions, you can filter the metric with the desired
For a complete list of the dimensions that Azure Storage supports, see [Metrics dimensions](monitor-queue-storage-reference.md#metrics-dimensions).
-Metrics for Azure Queue Storage are in these namespaces:
--- Microsoft.Storage/storageAccounts-- Microsoft.Storage/storageAccounts/queueServices- ### [PowerShell](#tab/azure-powershell) #### List the metric definition
In this example, replace the `<resource-ID>` placeholder with the resource ID of
Get-AzMetricDefinition -ResourceId $resourceId ```
-#### Reading metric values
+#### Read metric values
You can read account-level metric values of your storage account or the Queue Storage service. Use the [Get-AzMetric](/powershell/module/az.monitor/get-azmetric) cmdlet.
You can read account-level metric values of your storage account or the Queue St
Get-AzMetric -ResourceId $resourceId -MetricNames "UsedCapacity" -TimeGrain 01:00:00 ```
-#### Reading metric values with dimensions
+#### Read metric values with dimensions
When a metric supports dimensions, you can read metric values and filter them by using dimension values. Use the [Get-AzMetric](/powershell/module/Az.Monitor/Get-AzMetric) cmdlet.
You can read the metric values of your storage account or the Queue Storage serv
az monitor metrics list --resource <resource-ID> --metric "UsedCapacity" --interval PT1H ```
-#### Reading metric values with dimensions
+#### Read metric values with dimensions
When a metric supports dimensions, you can read metric values and filter them by using dimension values. Use the [az monitor metrics list](/cli/azure/monitor/metrics#az-monitor-metrics-list) command.
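For instance, filtering **Transactions** by the **ResponseType** dimension could look like the following sketch; the resource ID placeholder and the `Success` value are illustrative:
```azurecli
az monitor metrics list \
    --resource <queue-service-resource-ID> \
    --metric "Transactions" \
    --interval PT1H \
    --filter "ResponseType eq 'Success'" \
    --aggregation "Total"
```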
In these examples, replace the `<resource-ID>` placeholder with the resource ID
Replace the `<subscription-ID>` variable with the ID of your subscription. For guidance on how to obtain values for `<tenant-ID>`, `<application-ID>`, and `<AccessKey>`, see [Use the portal to create a Microsoft Entra application and service principal that can access resources](../../active-directory/develop/howto-create-service-principal-portal.md).
-### List the account-level metric definition
+#### List the account-level metric definition
The following example shows how to list a metric definition at the account level:
The following example shows how to list a metric definition at the account level
```
-### Reading account-level metric values
+#### Read account-level metric values
The following example shows how to read `UsedCapacity` data at the account level:
The following example shows how to read `UsedCapacity` data at the account level
```
-### Reading multidimensional metric values
+#### Read multidimensional metric values
For multidimensional metrics, you need to define metadata filters if you want to read metric data on specific dimension values.
The following example shows how to read metric data on the metric supporting mul
-## Analyzing logs
-
-****
-
-You can access resource logs either as a blob in a storage account, as event data, or through Log Analytic queries. For information about how to find those logs, see [Azure resource logs](../../azure-monitor/essentials/resource-logs.md).
+<a name="analyzing-logs"></a>
+### Analyze logs for Azure Queue Storage
-All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../../azure-monitor/essentials/resource-logs-schema.md). The schema for Azure Queue Storage resource logs is found in [Azure Queue Storage monitoring data reference](monitor-queue-storage-reference.md).
+You can access resource logs either as a blob in a storage account, as event data, or through Log Analytics queries. For information about how to find those logs, see [Azure resource logs](/azure/azure-monitor/essentials/resource-logs).
To get the list of SMB and REST operations that are logged, see [Storage logged operations and status messages](/rest/api/storageservices/storage-analytics-logged-operations-and-status-messages).
-Log entries are created only if there are requests made against the service endpoint. For example, if a storage account has activity in its file endpoint but not in its table or queue endpoints, only logs that pertain to the Azure Queue service are created. Azure Storage logs contain detailed information about successful and failed requests to a storage service. This information can be used to monitor individual requests and to diagnose issues with a storage service. Requests are logged on a best-effort basis.
- Log entries are created only if there are requests made against the service endpoint. For example, if a storage account has activity in its queue endpoint but not in its table or blob endpoints, only logs that pertain to Queue Storage are created. Azure Storage logs contain detailed information about successful and failed requests to a storage service. This information can be used to monitor individual requests and to diagnose issues with a storage service. Requests are logged on a best-effort basis.
-The [Activity log](../../azure-monitor/essentials/activity-log.md) is a type of platform log located in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
-
+When you view a storage account in the Azure portal, the operations called by the portal are also logged. For this reason, you may see operations logged in a storage account even though you haven't written any data to the account.
-### Log authenticated requests
+#### Log authenticated requests
-The following types of authenticated requests are logged:
+ The following types of authenticated requests are logged:
- Successful requests-- Failed requests, including timeout, throttling, network, authorization, and other errors
+- Failed requests, including time-out, throttling, network, authorization, and other errors
- Requests that use a shared access signature (SAS) or OAuth, including failed and successful requests - Requests to analytics data (classic log data in the **$logs** container and class metric data in the **$metric** tables) Requests made by the Queue Storage service itself, such as log creation or deletion, aren't logged. For a full list of the logged data, see [Storage logged operations and status messages](/rest/api/storageservices/storage-analytics-logged-operations-and-status-messages) and [Storage log format](monitor-queue-storage-reference.md).
-### Log anonymous requests
+#### Log anonymous requests
-The following types of anonymous requests are logged:
+ The following types of anonymous requests are logged:
- Successful requests - Server errors - Time out errors for both client and server-- Failed `GET` requests with the error code 304 (`Not Modified`)
+- Failed GET requests with the error code 304 (`Not Modified`)
-All other failed anonymous requests aren't logged. For a full list of the logged data, see [Storage logged operations and status messages](/rest/api/storageservices/storage-analytics-logged-operations-and-status-messages) and [Storage log format](monitor-queue-storage-reference.md).
-
-### Sample Kusto queries
-
-If you send logs to Log Analytics, you can access those logs by using Azure Monitor log queries. For more information, see [Log Analytics tutorial](../../azure-monitor/logs/log-analytics-tutorial.md).
-
-Here are some queries that you can enter in the **Log search** bar to help you monitor your queues. These queries work with the [new language](../../azure-monitor/logs/log-query-overview.md).
-
-> [!IMPORTANT]
-> When you select **Logs** from the storage account resource group menu, Log Analytics is opened with the query scope set to the current resource group. This means that log queries will only include data from that resource group. If you want to run a query that includes data from other resources or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](../../azure-monitor/logs/scope.md) for details.
-
-Use these queries to help you monitor your Azure Storage accounts:
+<!-- ### Sample Kusto queries. Required section. If you have sample Kusto queries for your service, add them after the include. -->
+<!-- Add sample Kusto queries for your service here. -->
+Here are some queries that you can enter in the **Log search** bar to help you monitor your Queue Storage. These queries work with the [new language](../../azure-monitor/logs/log-query-overview.md). For more information, see [Log Analytics tutorial](/azure/azure-monitor/logs/log-analytics-tutorial).
- To list the 10 most common errors over the last three days.
Use these queries to help you monitor your Azure Storage accounts:
| render piechart ```
-## Alerts
+<!-- ### Azure Queue Storage service-specific analytics. Optional section.
+Add short information or links to specific articles that outline how to analyze data for your service. -->
+
+<!-- ANALYSIS SECTION END -->
+
+<!-- ALERTS SECTION START -->
+
+<!-- ## Alerts. Required section. -->
+
+<!-- ### Azure Queue Storage alert rules. Required section.
+**MUST HAVE** service-specific alert rules. Include useful alerts on metrics, logs, log conditions, or activity log.
+Fill in the following table with metric and log alerts that would be valuable for your service. Change the format as necessary for readability. You can instead link to an article that discusses your common alerts in detail.
+Ask your PMs if you don't know. This information is the BIGGEST request we get in Azure Monitor, so don't avoid it long term. People don't know what to monitor for best results. Be prescriptive. -->
-Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](../../azure-monitor/alerts/alerts-metric-overview.md), [logs](../../azure-monitor/alerts/alerts-unified-log.md), and the [activity log](../../azure-monitor/alerts/activity-log-alerts.md).
+### Azure Queue Storage alert rules
+The following table lists common and recommended alert rules for Azure Queue Storage and the proper metric to use for the alert:
-The following table lists some example scenarios to monitor and the proper metric to use for the alert:
+| Alert type | Condition | Metric and dimensions |
+|-|-|-|
+| Metric | Queue Storage service is throttled. | Transactions<br>Dimension name: Response type |
+| Metric | Queue Storage requests are successful 99% of the time. | Availability<br>Dimension names: Geo type, API name, Authentication |
+| Metric | Queue Storage egress has exceeded 500 GiB in one day. | Egress<br>Dimension names: Geo type, API name, Authentication |
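As an illustrative sketch, the availability rule could be expressed as a CLI metric alert; the alert name, the scope on the queue service sub-resource, and the time windows are assumptions:
```azurecli
# Illustrative: alert when Queue Storage availability drops below 99 percent
az monitor metrics alert create \
    --name "queue-availability-alert" \
    --resource-group <resource-group> \
    --scopes "<storage-account-resource-ID>/queueServices/default" \
    --condition "avg Availability < 99" \
    --window-size 1h \
    --evaluation-frequency 15m \
    --description "Queue Storage requests are succeeding less than 99% of the time"
```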
-| Scenario | Metric to use for alert |
-|-|-|
-| Queue Storage service is throttled. | Metric: Transactions<br>Dimension name: Response type |
-| Queue Storage requests are successful 99% of the time. | Metric: Availability<br>Dimension names: Geo type, API name, Authentication |
-| Queue Storage egress has exceeded 500 GiB in one day. | Metric: Egress<br>Dimension names: Geo type, API name, Authentication |
+<!-- ### Advisor recommendations -->
-## FAQ
+<!-- ALERTS SECTION END -->
-**Does Azure Storage support metrics for managed disks or unmanaged disks?**
+## Related content
+<!-- You can change the wording and add more links if useful. -->
-No. Compute instances support the metrics on disks. For more information, see [Per disk metrics for managed and unmanaged disks](https://azure.microsoft.com/blog/per-disk-metrics-managed-disks/).
+Other Queue Storage monitoring content:
+- [Azure Queue Storage monitoring data reference](monitor-queue-storage-reference.md). A reference of the logs and metrics created by Azure Queue Storage.
+- [Performance and scalability checklist for Queue Storage](storage-performance-checklist.md)
-## Next steps
+Overall Azure Storage monitoring content:
+- [Monitor storage with Azure Monitor Storage insights](../common/storage-insights-overview.md). Get a unified view of storage performance, capacity, and availability.
+- [Transition to metrics in Azure Monitor](../common/storage-metrics-migration.md). Move from Storage Analytics metrics to metrics in Azure Monitor.
+- [Troubleshoot performance issues](../common/troubleshoot-storage-performance.md?toc=/azure/storage/blobs/toc.json). See common performance issues and guidance about how to troubleshoot them.
+- [Troubleshoot availability issues](../common/troubleshoot-storage-availability.md?toc=/azure/storage/blobs/toc.json). See common availability issues and guidance about how to troubleshoot them.
+- [Troubleshoot client application errors](../common/troubleshoot-storage-client-application-errors.md?toc=/azure/storage/blobs/toc.json). See common issues with connecting clients and how to troubleshoot them.
+- [Monitor, diagnose, and troubleshoot your Azure Storage (training module)](/training/modules/monitor-diagnose-and-troubleshoot-azure-storage/). Troubleshoot storage account issues, with step-by-step guidance.
-Get started with any of these guides.
+Azure Monitor content:
+- [Monitor Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource). General details on monitoring Azure resources.
+- [Azure Monitor Metrics overview](/azure/azure-monitor/essentials/data-platform-metrics). The basics of metrics and metric dimensions.
+- [Azure Monitor Logs overview](/azure/azure-monitor/logs/data-platform-logs). The basics of logs and how to collect and analyze them.
+- [Analyze metrics with Azure Monitor metrics explorer](/azure/azure-monitor/essentials/analyze-metrics). A tour of Metrics Explorer.
+- [Overview of Log Analytics in Azure Monitor](/azure/azure-monitor/logs/log-analytics-overview). A tour of Log Analytics.
-| Guide | Description |
-|||
-| [Monitor, diagnose, and troubleshoot your Azure Storage](/training/modules/monitor-diagnose-and-troubleshoot-azure-storage/) | Troubleshoot storage account issues (contains step-by-step guidance). |
-| [Monitor storage with Azure Monitor Storage insights](../common/storage-insights-overview.md) | A unified view of storage performance, capacity, and availability |
-| [Analyze metrics with Azure Monitor metrics explorer](../../azure-monitor/essentials/analyze-metrics.md) | A tour of Metrics Explorer.
-| [Overview of Log Analytics in Azure Monitor](../../azure-monitor/logs/log-analytics-overview.md) | A tour of Log Analytics. |
-| [Azure Monitor Metrics overview](../../azure-monitor/essentials/data-platform-metrics.md) | The basics of metrics and metric dimensions |
-| [Azure Monitor Logs overview](../../azure-monitor/logs/data-platform-logs.md)| The basics of logs and how to collect and analyze them |
-| [Transition to metrics in Azure Monitor](../common/storage-metrics-migration.md) | Move from Storage Analytics metrics to metrics in Azure Monitor. |
-| [Azure Queue Storage monitoring data reference](monitor-queue-storage-reference.md) | A reference of the logs and metrics created by Azure Queue Storage |
-| [Troubleshoot performance issues](../common/troubleshoot-storage-performance.md?toc=/azure/storage/queues/toc.json)| Common performance issues and guidance about how to troubleshoot them. |
-| [Troubleshoot availability issues](../common/troubleshoot-storage-availability.md?toc=/azure/storage/queues/toc.json)| Common availability issues and guidance about how to troubleshoot them.|
-| [Troubleshoot client application errors](../common/troubleshoot-storage-client-application-errors.md?toc=/azure/storage/queues/toc.json)| Common issues with connecting clients and how to troubleshoot them.|
storage Monitor Table Storage Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/monitor-table-storage-reference.md
Title: Azure Table storage monitoring data reference
-description: Log and metrics reference for monitoring data from Azure Table storage.
---
+ Title: Monitoring data reference for Azure Table Storage
+description: This article contains important reference material you need when you monitor Azure Table Storage.
Last updated : 02/13/2024+ Previously updated : 08/18/2022++ -
-# Azure Table storage monitoring data reference
-
-See [Monitoring Azure Storage](monitor-table-storage.md) for details on collecting and analyzing monitoring data for Azure Storage.
-
-## Metrics
-
-The following tables list the platform metrics collected for Azure Storage.
-
-### Capacity metrics
-
-Capacity metrics values are sent to Azure Monitor every hour. The values are refreshed daily. The time grain defines the time interval for which metrics values are presented. The supported time grain for all capacity metrics is one hour (PT1H).
-
-Azure Storage provides the following capacity metrics in Azure Monitor.
-
-#### Account-level metrics
+# Azure Table Storage monitoring data reference
+<!-- Intro -->
-#### Table storage
+See [Monitor Azure Table Storage](monitor-table-storage.md) for details on the data you can collect for Azure Table Storage and how to use it.
-This table shows [Table storage metrics](../../azure-monitor/essentials/metrics-supported.md#microsoftstoragestorageaccountstableservices).
+<!-- ## Metrics. Required section. -->
-| Metric | Description |
-| - | -- |
-| TableCapacity | The amount of Table storage used by the storage account. <br/><br/> Unit: Bytes <br/> Aggregation Type: Average <br/> Value example: 1024 |
-| TableCount | The number of tables in the storage account. <br/><br/> Unit: Count <br/> Aggregation Type: Average <br/> Value example: 1024 |
-| TableEntityCount | The number of table entities in the storage account. <br/><br/> Unit: Count <br/> Aggregation Type: Average <br/> Value example: 1024 |
+### Supported metrics for Microsoft.Storage/storageAccounts
+The following table lists the metrics available for the Microsoft.Storage/storageAccounts resource type.
-To learn how to calculate Table storage capacity, see [Calculate the size/capacity of storage account and it services](https://techcommunity.microsoft.com/t5/azure-paas-blog/calculate-the-size-capacity-of-storage-account-and-it-services/ba-p/1064046).
-
-### Transaction metrics
-
-Transaction metrics are emitted on every request to a storage account from Azure Storage to Azure Monitor. In the case of no activity on your storage account, there will be no data on transaction metrics in the period. All transaction metrics are available at both account and Table storage service level. The time grain defines the time interval that metric values are presented. The supported time grains for all transaction metrics are PT1H and PT1M.
-
+### Supported metrics for Microsoft.Storage/storageAccounts/tableServices
+The following table lists the metrics available for the Microsoft.Storage/storageAccounts/tableServices resource type.
<a id="metrics-dimensions"></a>-
-## Metrics dimensions
-
-Azure Storage supports following dimensions for metrics in Azure Monitor.
- [!INCLUDE [Metrics dimensions](../../../includes/azure-storage-account-metrics-dimensions.md)] <a id="resource-logs-preview"></a>+
+### Supported resource logs for Microsoft.Storage/storageAccounts/tableServices
+
+<!-- ## Azure Monitor Logs tables. Required section. -->
-## Resource logs
+- [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity)
+- [AzureMetrics](/azure/azure-monitor/reference/tables/azuremetrics)
+- [StorageTableLogs](/azure/azure-monitor/reference/tables/storagetablelogs)
-The following table lists the properties for Azure Storage resource logs when they're collected in Azure Monitor Logs or Azure Storage. The properties describe the operation, the service, and the type of authorization that was used to perform the operation.
+The following tables list the properties for Azure Storage resource logs when they're collected in Azure Monitor Logs or Azure Storage. The properties describe the operation, the service, and the type of authorization that was used to perform the operation.
### Fields that describe the operation
The following table lists the properties for Azure Storage resource logs when th
[!INCLUDE [Account level capacity metrics](../../../includes/azure-storage-logs-properties-service.md)]
-## See also
+<!-- ## Activity log. Required section. -->
+- [Microsoft.Storage resource provider operations](/azure/role-based-access-control/resource-provider-operations#microsoftstorage)
+
+## Related content
+
+- See [Monitor Azure Table Storage](monitor-table-storage.md) for a description of monitoring Azure Table Storage.
+- See [Monitor Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
-- See [Monitoring Azure Table storage](monitor-table-storage.md) for a description of monitoring Azure Storage.
-- See [Monitoring Azure resources with Azure Monitor](../../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
storage Monitor Table Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/monitor-table-storage.md
Title: Monitoring Azure Table storage
-description: Learn how to monitor the performance and availability of Azure Table storage. Monitor Azure Table storage data, learn about configuration, and analyze metric and log data.
---
+ Title: Monitor Azure Table Storage
+description: Start here to learn how to monitor Azure Table Storage.
Last updated : 02/13/2024+ Previously updated : 08/08/2023+ -+ ms.devlang: csharp-
+# ms.devlang: csharp, powershell, azurecli
-# Monitoring Azure Table storage
+<!--
+IMPORTANT
+To make this template easier to use, first:
+1. Search and replace Azure Table Storage with the official name of your service.
+2. Search and replace table-storage with the service name to use in GitHub filenames.-->
+
+<!-- VERSION 3.0 2024_01_07
+For background about this template, see https://review.learn.microsoft.com/en-us/help/contribute/contribute-monitoring?branch=main -->
-When you have critical applications and business processes that rely on Azure resources, you want to monitor those resources for their availability, performance, and operation. This article describes the monitoring data that's generated by Azure Table storage and how you can use the features of Azure Monitor to analyze alerts on this data.
+<!-- Most services can use the following sections unchanged. The sections use #included text you don't have to maintain, which changes when Azure Monitor functionality changes. Add info into the designated service-specific places if necessary. Remove #includes or template content that aren't relevant to your service.
+At a minimum your service should have the following two articles:
+1. The primary monitoring article (based on this template)
+ - Title: "Monitor Azure Table Storage"
+ - TOC Title: "Monitor"
+ - Filename: "monitor-table-storage.md"
+2. A reference article that lists all the metrics and logs for your service (based on the template data-reference-template.md).
+ - Title: "Azure Table Storage monitoring data reference"
+ - TOC Title: "Monitoring data reference"
+ - Filename: "monitor-table-storage-reference.md".
+-->
-## Monitor overview
+# Monitor Azure Table Storage
-The **Overview** page in the Azure portal for each Table storage resource includes a brief view of the resource usage, such as requests and hourly billing. This information is useful, but only a small amount of the monitoring data is available. Some of this data is collected automatically and is available for analysis as soon as you create the resource. You can enable additional types of data collection with some configuration.
+<!-- Intro -->
-## What is Azure Monitor?
-Azure Table storage creates monitoring data by using [Azure Monitor](../../azure-monitor/overview.md), which is a full stack monitoring service in Azure. Azure Monitor provides a complete set of features to monitor your Azure resources and resources in other clouds and on-premises.
+> [!IMPORTANT]
+> Metrics and logs in Azure Monitor support only Azure Resource Manager storage accounts. Azure Monitor doesn't support classic storage accounts. If you want to use metrics or logs on a classic storage account, you need to migrate to an Azure Resource Manager storage account. For more information, see [Migrate to Azure Resource Manager](/azure/virtual-machines/migration-classic-resource-manager-overview).
-Start with the article [Monitoring Azure resources with Azure Monitor](../../azure-monitor/essentials/monitor-azure-resource.md) which describes the following:
+<!-- ## Insights. Optional section. If your service has insights, add the following include and information. -->
+<!-- Insights service-specific information. Add brief information about what your Azure Monitor insights provide here. You can refer to another article that gives details or add a screenshot. -->
+Azure Storage insights offer a unified view of storage performance, capacity, and availability. See [Monitor storage with Azure Monitor Storage insights](../common/storage-insights-overview.md).
-- What is Azure Monitor?
-- Costs associated with monitoring
-- Monitoring data collected in Azure
-- Configuring data collection
-- Standard tools in Azure for analyzing and alerting on monitoring data
+<!-- ## Resource types. Required section. -->
+
+<!-- ## Data storage. Required section. Optionally, add service-specific information about storing your monitoring data after the include. -->
+<!-- Add service-specific information about storing monitoring data here, if applicable. For example, SQL Server stores other monitoring data in its own databases. -->
+
+<!-- METRICS SECTION START -->
+
+<!-- ## Platform metrics. Required section.
+ - If your service doesn't collect platform metrics, use the following include: [!INCLUDE [horz-monitor-no-platform-metrics](~/articles/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-no-platform-metrics.md)]
+ - If your service collects platform metrics, add the following include, statement, and service-specific information as appropriate. -->
+For a list of available metrics for Azure Table Storage, see [Azure Table Storage monitoring data reference](monitor-table-storage-reference.md#metrics).
+
+> [!IMPORTANT]
+> On **January 9, 2024** Storage Analytics metrics, also referred to as *classic metrics*, retired. If you used classic metrics, see [Move from Storage Analytics metrics to Azure Monitor metrics](../common/storage-metrics-migration.md) to transition to metrics in Azure Monitor. You can continue using classic logs if you want to. However, we recommend that you transition to using Azure Storage logs in Azure Monitor instead of Storage Analytics logs.
-The following sections build on this article by describing the specific data gathered from Azure Storage. Examples show how to configure data collection and analyze this data with Azure tools.
+> [!NOTE]
+> Azure Compute, not Azure Storage, supports metrics for managed disks or unmanaged disks. For more information, see [Per disk metrics for Managed and Unmanaged Disks](https://azure.microsoft.com/blog/per-disk-metrics-managed-disks/).
-## Monitoring data
+<!-- Platform metrics service-specific information. Add service-specific information about your platform metrics here.-->
-Azure Table storage collects the same kinds of monitoring data as other Azure resources, which are described in [Monitoring data from Azure resources](../../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data).
+<!-- ## Prometheus/container metrics. Optional. If your service uses containers/Prometheus metrics, add the following include and information.
+<!-- Add service-specific information about your container/Prometheus metrics here.-->
-See [Azure Table storage monitoring data reference](monitor-table-storage-reference.md) for detailed information on the metrics and logs metrics created by Azure Table storage.
+<!-- ## System metrics. Optional. If your service uses system-imported metrics, add the following include and information.
+<!-- Add service-specific information about your system-imported metrics here.-->
-Metrics and logs in Azure Monitor support only Azure Resource Manager storage accounts. Azure Monitor doesn't support classic storage accounts. If you want to use metrics or logs on a classic storage account, you need to migrate to an Azure Resource Manager storage account. See [Migrate to Azure Resource Manager](../../virtual-machines/migration-classic-resource-manager-overview.md).
+<!-- ## Custom metrics. Optional. If your service uses custom imported metrics, add the following include and information.
+<!-- Custom imported service-specific information. Add service-specific information about your custom imported metrics here.-->
-You can continue using classic metrics and logs if you want to. In fact, classic metrics and logs are available in parallel with metrics and logs in Azure Monitor. The support remains in place until Azure Storage ends the service on legacy metrics and logs.
+<!-- ## Non-Azure Monitor metrics. Optional. If your service uses any non-Azure Monitor based metrics, add the following include and information.
+<!-- Non-Monitor metrics service-specific information. Add service-specific information about your non-Azure Monitor metrics here.-->
-## Collection and routing
+<!-- METRICS SECTION END -->
-Platform metrics and the Activity log are collected automatically, but can be routed to other locations by using a diagnostic setting.
+<!-- LOGS SECTION START -->
-Resource Logs are not collected and stored until you create a diagnostic setting and route them to one or more locations.
+<!-- ## Resource logs. Required section.
+ - If your service doesn't collect resource logs, use the following include [!INCLUDE [horz-monitor-no-resource-logs](~/articles/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-no-resource-logs.md)]
+ - If your service collects resource logs, add the following include, statement, and service-specific information as appropriate. -->
+For the available resource log categories, their associated Log Analytics tables, and the logs schemas for Azure Table Storage, see [Azure Table Storage monitoring data reference](monitor-table-storage-reference.md#resource-logs).
+<!-- Resource logs service-specific information. Add service-specific information about your resource logs here.
+NOTE: Azure Monitor already has general information on how to configure and route resource logs. See https://learn.microsoft.com/azure/azure-monitor/platform/diagnostic-settings. Ideally, don't repeat that information here. You can provide a single screenshot of the diagnostic settings portal experience if you want. -->
+<a name="collection-and-routing"></a>
+### Azure Table Storage diagnostic settings
-To collect resource logs, you must create a diagnostic setting. When you create the setting, choose **table** as the type of storage that you want to enable logs for. Then, specify one of the following categories of operations for which you want to collect logs.
+When you create the diagnostic setting, choose **table** as the type of storage that you want to enable logs for. Then, specify one of the following categories of operations for which you want to collect logs.
| Category | Description |
|:|:|
To collect resource logs, you must create a diagnostic setting. When you create
| StorageWrite | Write operations on objects. |
| StorageDelete | Delete operations on objects. |
-The **audit** resource log category group allows you to collect the baseline of resource logs that Microsoft deems necessary for auditing your resource. What's collected is dynamic, and Microsoft may change it over time as new resource log categories become available. If you choose the **audit** category group, you can't specify any other resource categories, because the system will decide which logs to collect. For more information, see [Diagnostic settings in Azure Monitor: Resource logs](../../azure-monitor/essentials/diagnostic-settings.md#resource-logs).
+The **audit** resource log category group allows you to collect the baseline of resource logs that Microsoft deems necessary for auditing your resource. What's collected is dynamic, and Microsoft may change it over time as new resource log categories become available. If you choose the **audit** category group, you can't specify any other resource categories, because the system will decide which logs to collect. For more information, see [Diagnostic settings in Azure Monitor: Resource logs](/azure/azure-monitor/essentials/diagnostic-settings#resource-logs).
-See [Create diagnostic setting to collect platform logs and metrics in Azure](../../azure-monitor/platform/diagnostic-settings.md) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, and PowerShell. You can also find links to information about how to create a diagnostic setting by using an Azure Resource Manager template or an Azure Policy definition.
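
As a minimal sketch of that setup, assuming the Az.Monitor module (3.0 or later) and placeholder resource IDs that you'd replace with your own, a diagnostic setting that routes the three Table Storage log categories and transaction metrics to a Log Analytics workspace could look like this:

```powershell
# Sketch only: route Table Storage resource logs and transaction metrics to Log Analytics.
# Replace the placeholder IDs with your own values.
$tableServiceId = "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>/tableServices/default"
$workspaceId    = "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>"

# One log settings object per category to collect.
$logs = "StorageRead", "StorageWrite", "StorageDelete" |
    ForEach-Object { New-AzDiagnosticSettingLogSettingsObject -Enabled $true -Category $_ }
$metric = New-AzDiagnosticSettingMetricSettingsObject -Enabled $true -Category Transaction

New-AzDiagnosticSetting -Name "table-logs-to-law" -ResourceId $tableServiceId `
    -WorkspaceId $workspaceId -Log $logs -Metric $metric
```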
+#### Destination limitations
-## Destination limitations
+For general destination limitations, see [Destination limitations](/azure/azure-monitor/essentials/diagnostic-settings#destination-limitations). The following limitations apply only to monitoring Azure Storage accounts.
-For general destination limitations, see [Destination limitations](../../azure-monitor/essentials/diagnostic-settings.md#destination-limitations). The following limitations apply only to monitoring Azure Storage accounts.
+- You can't send logs to the same storage account that you're monitoring with this setting. This situation would lead to recursive logs in which a log entry describes the writing of another log entry. You must create an account or use another existing account to store log information.
-- You can't send logs to the same storage account that you are monitoring with this setting.
+- You can't set a retention policy.
- This would lead to recursive logs in which a log entry describes the writing of another log entry. You must create an account or use another existing account to store log information.
+ If you archive logs to a storage account, you can manage the retention policy of a log container by defining a lifecycle management policy. To learn how, see [Optimize costs by automatically managing the data lifecycle](../blobs/lifecycle-management-overview.md).
-- You can't set a retention policy.
+ If you send logs to Log Analytics, you can manage the data retention period of Log Analytics at the workspace level or even specify different retention settings by data type. To learn how, see [Change the data retention period](/azure/azure-monitor/logs/data-retention-archive).
+
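For the Log Analytics option, one minimal sketch, assuming the Az.OperationalInsights module and placeholder names, is to adjust the workspace-level retention; per-table settings follow the linked article:

```powershell
# Sketch only: set the default data retention for a Log Analytics workspace to 90 days.
# Replace the resource group and workspace names with your own.
Set-AzOperationalInsightsWorkspace -ResourceGroupName "<resource-group>" `
    -Name "<workspace-name>" -RetentionInDays 90
```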
+<!-- ## Activity log. Required section. Optionally, add service-specific information about your activity log after the include. -->
+<!-- Activity log service-specific information. Add service-specific information about your activity log here. -->
+
+<!-- ## Imported logs. Optional section. If your service uses imported logs, add the following include and information.
+<!-- Add service-specific information about your imported logs here. -->
+
+<!-- ## Other logs. Optional section.
+If your service has other logs that aren't resource logs or in the activity log, add information that states what they are and what they cover here. You can describe how to route them in a later section. -->
- If you archive logs to a storage account, you can manage the retention policy of a log container by defining a lifecycle management policy. To learn how, see [Optimize costs by automating Azure Blob Storage access tiers](../blobs/lifecycle-management-overview.md).
+<!-- LOGS SECTION END -->
- If you send logs to Log Analytics, you can manage the data retention period of Log Analytics at the workspace level or even specify different retention settings by data type. To learn how, see [Change the data retention period](../../azure-monitor/logs/data-retention-archive.md).
+<!-- ANALYSIS SECTION START -->
-## Analyzing metrics
+<!-- ## Analyze data. Required section. -->
+
+<!-- ### External tools. Required section. -->
+
+### Analyze metrics for Azure Table Storage
+
+Metrics for Azure Table Storage are in these namespaces:
+
+- Microsoft.Storage/storageAccounts
+- Microsoft.Storage/storageAccounts/tableServices
-For a list of all Azure Monitor support metrics, which includes Azure Table storage, see [Azure Monitor supported metrics](../../azure-monitor/essentials/metrics-supported.md).
+For a list of all Azure Monitor supported metrics, which includes Azure Table Storage, see [Azure Monitor supported metrics](/azure/azure-monitor/essentials/metrics-supported).
### [Azure portal](#tab/azure-portal)
-You can analyze metrics for Azure Storage with metrics from other Azure services by using Metrics Explorer. Open Metrics Explorer by choosing **Metrics** from the **Azure Monitor** menu. For details on using this tool, see [Analyze metrics with Azure Monitor metrics explorer](../../azure-monitor/essentials/analyze-metrics.md).
+You can analyze metrics for Azure Storage with metrics from other Azure services by using Metrics Explorer. Open Metrics Explorer by choosing **Metrics** from the **Azure Monitor** menu. For details on using this tool, see [Analyze metrics with Azure Monitor metrics explorer](/azure/azure-monitor/essentials/analyze-metrics).
This example shows how to view **Transactions** at the account level.
For metrics that support dimensions, you can filter the metric with the desired
For a complete list of the dimensions that Azure Storage supports, see [Metrics dimensions](monitor-table-storage-reference.md#metrics-dimensions).
-Metrics for Azure Table storage are in these namespaces:
-- Microsoft.Storage/storageAccounts
-- Microsoft.Storage/storageAccounts/tableServices
-
### [PowerShell](#tab/azure-powershell)

#### List the metric definition
-You can list the metric definition of your storage account or the Table storage service. Use the [Get-AzMetricDefinition](/powershell/module/az.monitor/get-azmetricdefinition) cmdlet.
+You can list the metric definition of your storage account or the Table Storage service. Use the [Get-AzMetricDefinition](/powershell/module/az.monitor/get-azmetricdefinition) cmdlet.
-In this example, replace the `<resource-ID>` placeholder with the resource ID of the entire storage account or the resource ID of the Table storage service. You can find these resource IDs on the **Properties** pages of your storage account in the Azure portal.
+In this example, replace the `<resource-ID>` placeholder with the resource ID of the entire storage account or the resource ID of the Table Storage service. You can find these resource IDs on the **Properties** pages of your storage account in the Azure portal.
```powershell
$resourceId = "<resource-ID>"
Get-AzMetricDefinition -ResourceId $resourceId
```
-#### Reading metric values
+#### Read metric values
-You can read account-level metric values of your storage account or the Table storage service. Use the [Get-AzMetric](/powershell/module/Az.Monitor/Get-AzMetric) cmdlet.
+You can read account-level metric values of your storage account or the Table Storage service. Use the [Get-AzMetric](/powershell/module/az.monitor/get-azmetric) cmdlet.
```powershell
$resourceId = "<resource-ID>"
Get-AzMetric -ResourceId $resourceId -MetricNames "UsedCapacity" -TimeGrain 01:00:00
```
-#### Reading metric values with dimensions
+#### Read metric values with dimensions
When a metric supports dimensions, you can read metric values and filter them by using dimension values. Use the [Get-AzMetric](/powershell/module/Az.Monitor/Get-AzMetric) cmdlet.
Get-AzMetric -ResourceId $resourceId -MetricName Transactions -TimeGrain 01:00:0
#### List the account-level metric definition
-You can list the metric definition of your storage account or the Table storage service. Use the [az monitor metrics list-definitions](/cli/azure/monitor/metrics#az-monitor-metrics-list-definitions) command.
-
-In this example, replace the `<resource-ID>` placeholder with the resource ID of the entire storage account or the resource ID of the Table storage service. You can find these resource IDs on the **Properties** pages of your storage account in the Azure portal.
+You can list the metric definition of your storage account or the Table Storage service. Use the [az monitor metrics list-definitions](/cli/azure/monitor/metrics#az-monitor-metrics-list-definitions) command.
+
+In this example, replace the `<resource-ID>` placeholder with the resource ID of the entire storage account or the resource ID of the Table Storage service. You can find these resource IDs on the **Properties** pages of your storage account in the Azure portal.
```azurecli-interactive
az monitor metrics list-definitions --resource <resource-ID>
```
In this example, replace the `<resource-ID>` placeholder with the resource ID of
#### Read account-level metric values
-You can read the metric values of your storage account or the Table storage service. Use the [az monitor metrics list](/cli/azure/monitor/metrics#az-monitor-metrics-list) command.
+You can read the metric values of your storage account or the Table Storage service. Use the [az monitor metrics list](/cli/azure/monitor/metrics#az-monitor-metrics-list) command.
```azurecli-interactive
az monitor metrics list --resource <resource-ID> --metric "UsedCapacity" --interval PT1H
```
-#### Reading metric values with dimensions
+#### Read metric values with dimensions
When a metric supports dimensions, you can read metric values and filter them by using dimension values. Use the [az monitor metrics list](/cli/azure/monitor/metrics#az-monitor-metrics-list) command.
az monitor metrics list --resource <resource-ID> --metric "Transactions" --inter
### [.NET](#tab/dotnet)
-Azure Monitor provides the [.NET SDK](https://www.nuget.org/packages/Microsoft.Azure.Management.Monitor/) to read metric definition and values. The [sample code](https://azure.microsoft.com/resources/samples/monitor-dotnet-metrics-api/) shows how to use the SDK with different parameters. You need to use `0.18.0-preview` or a later version for storage metrics.
-
-In these examples, replace the `<resource-ID>` placeholder with the resource ID of the entire storage account or the Table storage service. You can find these resource IDs on the **Properties** pages of your storage account in the Azure portal.
+Azure Monitor provides the [.NET SDK](https://www.nuget.org/packages/microsoft.azure.management.monitor/) to read metric definition and values. The [sample code](https://azure.microsoft.com/resources/samples/monitor-dotnet-metrics-api/) shows how to use the SDK with different parameters. You need to use `0.18.0-preview` or a later version for storage metrics.
-Replace the `<subscription-ID>` variable with the ID of your subscription. For guidance on how to obtain values for `<tenant-ID>`, `<application-ID>`, and `<AccessKey>`, see [Use the portal to create a Microsoft Entra application and service principal that can access resources](../../active-directory/develop/howto-create-service-principal-portal.md).
+In these examples, replace the `<resource-ID>` placeholder with the resource ID of the entire storage account or the Table Storage service. You can find these resource IDs on the **Properties** pages of your storage account in the Azure portal.
+
+Replace the `<subscription-ID>` variable with the ID of your subscription. For guidance on how to obtain values for `<tenant-ID>`, `<application-ID>`, and `<AccessKey>`, see [Use the portal to create a Microsoft Entra application and service principal that can access resources](/azure/active-directory/develop/howto-create-service-principal-portal).
#### List the account-level metric definition
The following example shows how to list a metric definition at the account level
var applicationId = "<application-ID>";
var accessKey = "<AccessKey>";
-
MonitorManagementClient readOnlyClient = AuthenticateWithReadOnlyClient(tenantId, applicationId, accessKey, subscriptionId).Result;
IEnumerable<MetricDefinition> metricDefinitions = await readOnlyClient.MetricDefinitions.ListAsync(resourceUri: resourceId, cancellationToken: new CancellationToken());
The following example shows how to list a metric definition at the account level
```
-#### Reading account-level metric values
+#### Read account-level metric values
The following example shows how to read `UsedCapacity` data at the account level:
The following example shows how to read `UsedCapacity` data at the account level
```
-#### Reading multidimensional metric values
+#### Read multidimensional metric values
For multidimensional metrics, you need to define metadata filters if you want to read metric data on specific dimension values.
The following example shows how to read metric data on the metric supporting mul
-## Analyzing logs
+<a name="analyzing-logs"></a>
+### Analyze logs for Azure Table Storage
-You can access resource logs either as a blob in a storage account, as event data, or through Log Analytic queries. For information about how to find those logs, see [Azure resource logs](../../azure-monitor/essentials/resource-logs.md).
-
-All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../../azure-monitor/essentials/resource-logs-schema.md). The schema for Azure Table Storage resource logs is found in [Azure Table storage monitoring data reference](monitor-table-storage-reference.md).
+You can access resource logs either as a blob in a storage account, as event data, or through Log Analytics queries. For information about how to find those logs, see [Azure resource logs](/azure/azure-monitor/essentials/resource-logs).
To get the list of SMB and REST operations that are logged, see [Storage logged operations and status messages](/rest/api/storageservices/storage-analytics-logged-operations-and-status-messages).
-Log entries are created only if there are requests made against the service endpoint. For example, if a storage account has activity in its file endpoint but not in its table or queue endpoints, only logs that pertain to the Azure Blob Storage service are created. Azure Storage logs contain detailed information about successful and failed requests to a storage service. This information can be used to monitor individual requests and to diagnose issues with a storage service. Requests are logged on a best-effort basis.
+Log entries are created only if there are requests made against the service endpoint. For example, if a storage account has activity in its table endpoint but not in its queue or blob endpoints, only logs that pertain to Table Storage are created. Azure Storage logs contain detailed information about successful and failed requests to a storage service. This information can be used to monitor individual requests and to diagnose issues with a storage service. Requests are logged on a best-effort basis.
-The [Activity log](../../azure-monitor/essentials/activity-log.md) is a type of platform log located in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
+When you view a storage account in the Azure portal, the operations called by the portal are also logged. For this reason, you may see operations logged in a storage account even though you haven't written any data to the account.
-### Log authenticated requests
+#### Log authenticated requests
The following types of authenticated requests are logged:
The [Activity log](../../azure-monitor/essentials/activity-log.md) is a type of
- Requests that use a shared access signature (SAS) or OAuth, including failed and successful requests
- Requests to analytics data (classic log data in the **$logs** container and classic metric data in the **$metric** tables)
-Requests made by the Table storage service itself, such as log creation or deletion, aren't logged. For a full list of the logged data, see [Storage logged operations and status messages](/rest/api/storageservices/storage-analytics-logged-operations-and-status-messages) and [Storage log format](monitor-table-storage-reference.md).
+Requests made by the Table Storage service itself, such as log creation or deletion, aren't logged. For a full list of the logged data, see [Storage logged operations and status messages](/rest/api/storageservices/storage-analytics-logged-operations-and-status-messages) and [Storage log format](monitor-table-storage-reference.md).
-### Log anonymous requests
+#### Log anonymous requests
The following types of anonymous requests are logged:

- Successful requests
- Server errors
- Time out errors for both client and server
-- Failed GET requests with the error code 304 (Not Modified)
-
-All other failed anonymous requests aren't logged. For a full list of the logged data, see [Storage logged operations and status messages](/rest/api/storageservices/storage-analytics-logged-operations-and-status-messages) and [Storage log format](monitor-table-storage-reference.md).
+- Failed GET requests with the error code 304 (`Not Modified`)
-### Sample Kusto queries
-
-If you send logs to Log Analytics, you can access those logs by using Azure Monitor log queries. For more information, see [Log Analytics tutorial](../../azure-monitor/logs/log-analytics-tutorial.md).
-
-Here are some queries that you can enter in the **Log search** bar to help you monitor your Blob storage. These queries work with the [new language](../../azure-monitor/logs/log-query-overview.md).
-
-> [!IMPORTANT]
-> When you select **Logs** from the storage account resource group menu, Log Analytics is opened with the query scope set to the current resource group. This means that log queries will only include data from that resource group. If you want to run a query that includes data from other resources or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](../../azure-monitor/logs/scope.md) for details.
-
-Use these queries to help you monitor your Azure Storage accounts:
+<!-- ### Sample Kusto queries. Required section. If you have sample Kusto queries for your service, add them after the include. -->
+<!-- Add sample Kusto queries for your service here. -->
+Here are some queries that you can enter in the **Log search** bar to help you monitor your Table Storage. These queries use the [Kusto Query Language (KQL)](../../azure-monitor/logs/log-query-overview.md). For more information, see [Log Analytics tutorial](/azure/azure-monitor/logs/log-analytics-tutorial).
* To list the 10 most common errors over the last three days.
Use these queries to help you monitor your Azure Storage accounts:
```
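
As a rough sketch of running a query like the one above outside the portal, assuming your logs land in the `StorageTableLogs` table of a workspace whose GUID you substitute for `<workspace-ID>`, you could call it from PowerShell:

```powershell
# Sketch only: list the 10 most common non-successful status codes over the last three days.
# Assumes StorageTableLogs data is present in the workspace identified by <workspace-ID>.
$query = @"
StorageTableLogs
| where TimeGenerated > ago(3d) and StatusText !contains "Success"
| summarize count() by StatusText
| top 10 by count_ desc
"@
Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-ID>" -Query $query |
    Select-Object -ExpandProperty Results
```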
-## Alerts
+<!-- ### Azure Table Storage service-specific analytics. Optional section.
+Add short information or links to specific articles that outline how to analyze data for your service. -->
+
+<!-- ANALYSIS SECTION END -->
+
+<!-- ALERTS SECTION START -->
+
+<!-- ## Alerts. Required section. -->
+
+<!-- ### Azure Table Storage alert rules. Required section.
+**MUST HAVE** service-specific alert rules. Include useful alerts on metrics, logs, log conditions, or activity log.
+Fill in the following table with metric and log alerts that would be valuable for your service. Change the format as necessary for readability. You can instead link to an article that discusses your common alerts in detail.
+Ask your PMs if you don't know. This information is the BIGGEST request we get in Azure Monitor, so don't avoid it long term. People don't know what to monitor for best results. Be prescriptive. -->
+
+### Azure Table Storage alert rules
+The following table lists common and recommended alert rules for Azure Table Storage and the proper metric to use for the alert:
-Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](../../azure-monitor/alerts/alerts-metric-overview.md), [logs](../../azure-monitor/alerts/alerts-unified-log.md), and the [activity log](../../azure-monitor/alerts/activity-log-alerts.md).
+| Alert type | Condition | Description |
+|-|-|-|
+| Metric | Table Storage service is throttled. | Transactions<br>Dimension name: Response type |
+| Metric | Table Storage requests are successful 99% of the time. | Availability<br>Dimension names: Geo type, API name, Authentication |
+| Metric | Table Storage egress has exceeded 500 GiB in one day. | Egress<br>Dimension names: Geo type, API name, Authentication |
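
A sketch of wiring up the first alert in this table with PowerShell, assuming the Az.Monitor module, an existing action group, and illustrative `ResponseType` throttling values, might look like the following:

```powershell
# Sketch only: alert when Table Storage transactions report throttling response types.
$accountId = "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>"
$dimension = New-AzMetricAlertRuleV2DimensionSelection -DimensionName "ResponseType" `
    -ValuesToInclude "ClientThrottlingError", "ServerBusyError"   # illustrative throttling values
$criteria  = New-AzMetricAlertRuleV2Criteria -MetricName "Transactions" -DimensionSelection $dimension `
    -TimeAggregation Total -Operator GreaterThan -Threshold 0
Add-AzMetricAlertRuleV2 -Name "table-throttling-alert" -ResourceGroupName "<rg>" `
    -TargetResourceId $accountId -WindowSize 00:05:00 -Frequency 00:05:00 -Condition $criteria `
    -ActionGroupId "<action-group-resource-id>" -Severity 2
```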
-The following table lists some example scenarios to monitor and the proper metric to use for the alert:
+<!-- ### Advisor recommendations -->
-| Scenario | Metric to use for alert |
-|-|-|
-| Table Storage service is throttled. | Metric: Transactions<br>Dimension name: Response type |
-| Table Storage requests are successful 99% of the time. | Metric: Availability<br>Dimension names: Geo type, API name, Authentication |
-| Table Storage egress has exceeded 500 GiB in one day. | Metric: Egress<br>Dimension names: Geo type, API name, Authentication |
+<!-- ALERTS SECTION END -->
-## FAQ
+## Related content
+<!-- You can change the wording and add more links if useful. -->
-**Does Azure Storage support metrics for Managed Disks or Unmanaged Disks?**
+Other Table Storage monitoring content:
+- [Azure Table Storage monitoring data reference](monitor-table-storage-reference.md). A reference of the logs and metrics created by Azure Table Storage.
+- [Performance and scalability checklist for Table Storage](storage-performance-checklist.md)
-No. Azure Compute supports the metrics on disks. For more information, see [Per disk metrics for Managed and Unmanaged Disks](https://azure.microsoft.com/blog/per-disk-metrics-managed-disks/).
+Overall Azure Storage monitoring content:
+- [Monitor storage with Azure Monitor Storage insights](../common/storage-insights-overview.md). Get a unified view of storage performance, capacity, and availability.
+- [Transition to metrics in Azure Monitor](../common/storage-metrics-migration.md). Move from Storage Analytics metrics to metrics in Azure Monitor.
+- [Troubleshoot performance issues](../common/troubleshoot-storage-performance.md?toc=/azure/storage/blobs/toc.json). See common performance issues and guidance about how to troubleshoot them.
+- [Troubleshoot availability issues](../common/troubleshoot-storage-availability.md?toc=/azure/storage/blobs/toc.json). See common availability issues and guidance about how to troubleshoot them.
+- [Troubleshoot client application errors](../common/troubleshoot-storage-client-application-errors.md?toc=/azure/storage/blobs/toc.json). See common issues with connecting clients and how to troubleshoot them.
+- [Monitor, diagnose, and troubleshoot your Azure Storage (training module)](/training/modules/monitor-diagnose-and-troubleshoot-azure-storage/). Troubleshoot storage account issues, with step-by-step guidance.
-## Next steps
+Azure Monitor content:
+- [Monitor Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource). General details on monitoring Azure resources.
+- [Azure Monitor Metrics overview](/azure/azure-monitor/essentials/data-platform-metrics). The basics of metrics and metric dimensions.
+- [Azure Monitor Logs overview](/azure/azure-monitor/logs/data-platform-logs). The basics of logs and how to collect and analyze them.
+- [Analyze metrics with Azure Monitor metrics explorer](/azure/azure-monitor/essentials/analyze-metrics). A tour of Metrics Explorer.
+- [Overview of Log Analytics in Azure Monitor](/azure/azure-monitor/logs/log-analytics-overview). A tour of Log Analytics.
-| Guide | Description |
-|||
-| [Monitor, diagnose, and troubleshoot your Azure Storage](/training/modules/monitor-diagnose-and-troubleshoot-azure-storage/) | Troubleshoot storage account issues (contains step-by-step guidance). |
-| [Monitor storage with Azure Monitor Storage insights](../common/storage-insights-overview.md) | A unified view of storage performance, capacity, and availability |
-| [Getting started with Azure Metrics Explorer](../../azure-monitor/essentials/metrics-getting-started.md) | A tour of Metrics Explorer.
-| [Overview of Log Analytics in Azure Monitor](../../azure-monitor/logs/log-analytics-overview.md) | A tour of Log Analytics. |
-| [Azure Monitor Metrics overview](../../azure-monitor/essentials/data-platform-metrics.md) | The basics of metrics and metric dimensions |
-| [Azure Monitor Logs overview](../../azure-monitor/logs/data-platform-logs.md)| The basics of logs and how to collect and analyze them |
-| [Transition to metrics in Azure Monitor](../common/storage-metrics-migration.md) | Move from Storage Analytics metrics to metrics in Azure Monitor. |
-| [Azure Table storage monitoring data reference](monitor-table-storage-reference.md)| A reference of the logs and metrics created by Azure Table Storage |
-| [Troubleshoot performance issues](../common/troubleshoot-storage-performance.md?toc=/azure/storage/tables/toc.json)| Common performance issues and guidance about how to troubleshoot them. |
-| [Troubleshoot availability issues](../common/troubleshoot-storage-availability.md?toc=/azure/storage/tables/toc.json)| Common availability issues and guidance about how to troubleshoot them.|
-| [Troubleshoot client application errors](../common/troubleshoot-storage-client-application-errors.md?toc=/azure/storage/tables/toc.json)| Common issues with connecting clients and how to troubleshoot them.|
stream-analytics Stream Analytics Real Time Fraud Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-real-time-fraud-detection.md
Previously updated : 02/27/2023 Last updated : 02/14/2024 #Customer intent: As an IT admin/developer, I want to run a Stream Analytics job to analyze phone call data and visualize results in a Power BI dashboard. # Tutorial: Analyze fraudulent call data with Stream Analytics and visualize results in Power BI dashboard
-This tutorial shows you how to analyze phone call data using Azure Stream Analytics. The phone call data, generated by a client application, contains fraudulent calls, which are detected by the Stream Analytics job. You can use the techniques from this tutorial for other types of fraud detection, such as credit card fraud or identity theft.
+This tutorial shows you how to analyze phone call data using Azure Stream Analytics. The phone call data, generated by a client application, contains fraudulent calls, which are detected by the Stream Analytics job. You can use techniques from this tutorial for other types of fraud detection, such as credit card fraud or identity theft.
-In this tutorial, you learn how to:
+In this tutorial, you perform the following tasks:
> [!div class="checklist"]
> * Generate sample phone call data and send it to Azure Event Hubs.
If you want to archive every event, you can use a pass-through query to read all
The Stream Analytics job runs the query against the sample data from the input and displays the output at the bottom of the window. The results indicate that the Event Hubs and the Streaming Analytics job are configured correctly.
- :::image type="content" source="media/stream-analytics-real-time-fraud-detection/sample-output-passthrough.png" alt-text="Sample output from test query":::
+ :::image type="content" source="media/stream-analytics-real-time-fraud-detection/sample-output-passthrough.png" alt-text="Sample output from test query.":::
- The exact number of records you see will depend on how many records were captured in the sample.
+ The exact number of records you see depends on how many records were captured in the sample.
### Reduce the number of fields using a column projection
FROM
Suppose you want to count the number of incoming calls per region. In streaming data, when you want to perform aggregate functions like counting, you need to segment the stream into temporal units, since the data stream itself is effectively endless. You do this using a Streaming Analytics [window function](stream-analytics-window-functions.md). You can then work with the data inside that window as a unit.
-For this transformation, you want a sequence of temporal windows that don't overlap: each window will have a discrete set of data that you can group and aggregate. This type of window is referred to as a *Tumbling window*. Within the Tumbling window, you can get a count of the incoming calls grouped by `SwitchNum`, which represents the country/region where the call originated.
+For this transformation, you want a sequence of temporal windows that don't overlap: each window has a discrete set of data that you can group and aggregate. This type of window is referred to as a *Tumbling window*. Within the Tumbling window, you can get a count of the incoming calls grouped by `SwitchNum`, which represents the country/region where the call originated.
1. Paste the following query in the query editor:
For this transformation, you want a sequence of temporal windows that don't over
The projection includes `System.Timestamp`, which returns a timestamp for the end of each window.
- To specify that you want to use a Tumbling window, you use the [TUMBLINGWINDOW](/stream-analytics-query/tumbling-window-azure-stream-analytics) function in the `GROUP BY` clause. In the function, you specify a time unit (anywhere from a microsecond to a day) and a window size (how many units). In this example, the Tumbling window consists of 5-second intervals, so you'll get a count by country/region for every 5 seconds' worth of calls.
+ To specify that you want to use a Tumbling window, you use the [TUMBLINGWINDOW](/stream-analytics-query/tumbling-window-azure-stream-analytics) function in the `GROUP BY` clause. In the function, you specify a time unit (anywhere from a microsecond to a day) and a window size (how many units). In this example, the Tumbling window consists of 5-second intervals, so you get a count by country/region for every 5 seconds' worth of calls.
2. Select **Test query**. In the results, notice that the timestamps under **WindowEnd** are in 5-second increments.
When you use a join with streaming data, the join must provide some limits on ho
* Add a value and select **fraudulent calls**.
* For **Time window to display**, select the last 10 minutes.
-7. Your dashboard should look like the example below once both tiles are added. Notice that, if your event hub sender application and Streaming Analytics application are running, your Power BI dashboard periodically updates as new data arrives.
+7. Your dashboard should look like the following example once both tiles are added. Notice that, if your event hub sender application and Streaming Analytics application are running, your Power BI dashboard periodically updates as new data arrives.
- ![View results in Power BI dashboard](media/stream-analytics-real-time-fraud-detection/power-bi-results-dashboard.png)
+ ![Screenshot of results in Power BI dashboard.](media/stream-analytics-real-time-fraud-detection/power-bi-results-dashboard.png)
## Embedding your Power BI Dashboard in a web application
-For this part of the tutorial, you'll use a sample [ASP.NET](https://asp.net/) web application created by the Power BI team to embed your dashboard. For more information about embedding dashboards, see [embedding with Power BI](/power-bi/developer/embedding) article.
+For this part of the tutorial, you use a sample [ASP.NET](https://asp.net/) web application created by the Power BI team to embed your dashboard. For more information about embedding dashboards, see the [embedding with Power BI](/power-bi/developer/embedding) article.
To set up the application, go to the [PowerBI-Developer-Samples](https://github.com/Microsoft/PowerBI-Developer-Samples) GitHub repository and follow the instructions under the **User Owns Data** section (use the redirect and homepage URLs under the **integrate-web-app** subsection). Since we're using the Dashboard example, use the **integrate-web-app** sample code located in the [GitHub repository](https://github.com/microsoft/PowerBI-Developer-Samples/tree/master/.NET%20Framework/Embed%20for%20your%20organization/).
-Once you've got the application running in your browser, follow these steps to embed the dashboard you created earlier into the web page:
+Once you have the application running in your browser, follow these steps to embed the dashboard you created earlier into the web page:
1. Select **Sign in to Power BI**, which grants the application access to the dashboards in your Power BI account.
update-manager Guidance Migration Automation Update Management Azure Update Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/guidance-migration-automation-update-management-azure-update-manager.md
description: Guidance overview on migration from Automation Update Management to
Previously updated : 02/01/2024 Last updated : 02/14/2024
Migration automation runbook ignores resources that aren't onboarded to Arc. It'
**B. Run the script**
- Download and run the PowerShell script `MigrationPrerequisiteScript` locally. This script takes AutomationAccountResourceId of the Automation account to be migrated as the input.
+ Download and run the PowerShell script [`MigrationPrerequisiteScript`](https://github.com/azureautomation/Preqrequisite-for-Migration-from-Azure-Automation-Update-Management-to-Azure-Update-Manager/blob/main/MigrationPrerequisites.ps1) locally. This script takes AutomationAccountResourceId of the Automation account to be migrated as the input.
:::image type="content" source="./media/guidance-migration-automation-update-management-azure-update-manager/run-script.png" alt-text="Screenshot that shows how to download and run the script." lightbox="./media/guidance-migration-automation-update-management-azure-update-manager/run-script.png":::
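
A hypothetical invocation of the prerequisite script, assuming the downloaded file name and a parameter named after the input described above, could look like this:

```powershell
# Sketch only: run the downloaded prerequisite script against one Automation account.
# The resource ID below is a placeholder.
.\MigrationPrerequisites.ps1 -AutomationAccountResourceId `
    "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Automation/automationAccounts/<account>"
```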
This step involves using an automation runbook to migrate all the machines and s
**Follow these steps:**
-1. Import migration runbook from the runbooks gallery and publish. Search for **azure automation update** from browse gallery, and import the migration runbook named **Migrate from Azure Automation Update Management to Azure Update Manager** and publish the runbook.
+1. Import [migration runbook](https://github.com/azureautomation/Migrate-from-Azure-Automation-Update-Management-to-Azure-Update-Manager/blob/main/Migration.ps1) from the runbooks gallery and publish. Search for **azure automation update** from browse gallery, and import the migration runbook named **Migrate from Azure Automation Update Management to Azure Update Manager** and publish the runbook.
:::image type="content" source="./media/guidance-migration-automation-update-management-azure-update-manager/migrate-from-automation-update-management.png" alt-text="Screenshot that shows how to migrate from Automation Update Management." lightbox="./media/guidance-migration-automation-update-management-azure-update-manager/migrate-from-automation-update-management.png":::
virtual-desktop Install Office On Wvd Master Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/install-office-on-wvd-master-image.md
This sample configuration XML we've provided will do the following things:
>[!NOTE]
>Visio's stencil search feature may not work as expected in Azure Virtual Desktop.
-Here's what this sample configuration XML won't do:
-
- - Install Skype for Business
- - Install OneDrive in per-user mode. To learn more, see [Install OneDrive in per-machine mode](#install-onedrive-in-per-machine-mode).
+This sample configuration XML won't install OneDrive in per-user mode. To learn more, see [Install OneDrive in per-machine mode](#install-onedrive-in-per-machine-mode).
>[!NOTE]
>Shared Computer Activation can be set up through Group Policy Objects (GPOs) or registry settings. The GPO is located at **Computer Configuration\\Policies\\Administrative Templates\\Microsoft Office 2016 (Machine)\\Licensing Settings**
OneDrive is normally installed per-user. In this environment, it should be insta
Here's how to install OneDrive in per-machine mode:
-1. First, create a location to stage the OneDrive installer. A local disk folder or [\\\\unc] (file://unc) location is fine.
+1. First, create a location to stage the OneDrive installer. A local disk folder or UNC path is fine.
2. Download [OneDriveSetup.exe](https://go.microsoft.com/fwlink/?linkid=844652) to your staged location.
Here's how to install OneDrive in per-machine mode:
> [!TIP]
> You can configure OneDrive so that it will attempt to automatically sign-in when a user connects to a session. For more information, see [Silently configure user accounts](/sharepoint/use-silent-account-configuration).
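
As a rough sketch of the staging and download steps above, assuming an arbitrary local staging folder and that `/allusers` is the per-machine installer switch:

```powershell
# Sketch only: stage OneDriveSetup.exe locally and run it in per-machine mode.
$staging = "C:\Install\OneDrive"   # arbitrary staging folder
New-Item -ItemType Directory -Path $staging -Force | Out-Null
Invoke-WebRequest -Uri "https://go.microsoft.com/fwlink/?linkid=844652" -OutFile "$staging\OneDriveSetup.exe"
Start-Process -FilePath "$staging\OneDriveSetup.exe" -ArgumentList "/allusers" -Wait
```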
-## Microsoft Teams and Skype for Business
-
-To learn how to install Microsoft Teams, see [Use Microsoft Teams on Azure Virtual desktop](./teams-on-avd.md).
+## Microsoft Teams
-Azure Virtual Desktop doesn't support Skype for Business.
+To learn how to install Microsoft Teams, see [Use Microsoft Teams on Azure Virtual desktop](./teams-on-avd.md). Azure Virtual Desktop doesn't support Skype for Business.
## Next steps
virtual-desktop Install Windows Client Per User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/install-windows-client-per-user.md
Here's how to install the client on a per-user basis using a PowerShell script w
| Name | Enter `Remote Desktop`. |
| Publisher | Enter `Microsoft Corporation`. |
| Install command | `powershell.exe -ExecutionPolicy Bypass -WindowStyle Hidden -File .\Install.ps1` |
- | Install command | `powershell.exe -ExecutionPolicy Bypass -WindowStyle Hidden -File .\Install.ps1` |
| Uninstall command | `powershell.exe -ExecutionPolicy Bypass -WindowStyle Hidden -File .\Uninstall.ps1` |
| Install behavior | Select **User**. |
| Operating system architecture | Select **64-bit** or **32-bit**, depending on the version of the Remote Desktop client you downloaded. |
virtual-desktop Whats New Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-windows.md
description: Learn about recent changes to the Remote Desktop client for Windows
Previously updated : 02/13/2024 Last updated : 02/14/2024 # What's new in the Remote Desktop client for Windows
The following table lists the current versions available for the public and Insi
| Release | Latest version | Download |
||-|-|
| Public | 1.2.5112 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139369) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139456)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139370) |
-| Insider | 1.2.5126 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139233) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139144)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139368) |
+| Insider | 1.2.5248 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139233) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139144)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139368) |
-## Updates for version 1.2.5126 (Insider)
+## Updates for version 1.2.5248 (Insider)
-*Published: January 24, 2024*
+*Date published: February 13, 2024*
+
+Download: [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139233), [Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139144), [Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139368)
+
+In this release, we've made the following changes:
-Download: [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139233), [Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139144), [Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139368)
+- Fixed an issue that caused artifacts to appear on the screen during RemoteApp sessions.
+- Fixed an issue where resizing the Teams video call window caused the client to temporarily stop responding.
+- Fixed an issue that made Teams calls echo after expanding a two-person call to meeting call.
+- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
+
+## Updates for version 1.2.5126
+
+*Published: January 24, 2024*
In this release, we've made the following changes:
virtual-machines Custom Script Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/custom-script-windows.md
If your script is on a local server, you might still need to open other firewall
### Tips - Output is limited to the last 4,096 bytes.-- Properly escaping characters will help ensure that strings are parsed correctly. For example, you always need two backslashes to escape a single literal backslash when dealing with file paths. Sample: {"commandToExecute": "C:\\Windows\\System32\\systeminfo.exe >> D:\\test.txt"}
+- Properly escaping characters will help ensure that strings are parsed correctly. For example, you always need two backslashes to escape a single literal backslash when dealing with file paths. Sample: `{"commandToExecute": "C:\\Windows\\System32\\systeminfo.exe >> D:\\test.txt"}`
- The highest failure rate for this extension is due to syntax errors in the script. Verify that the script runs without errors. Put more logging into the script to make it easier to find failures. - Write scripts that are idempotent, so that running them more than once accidentally doesn't cause system changes. - Ensure that the scripts don't require user input when they run.
virtual-machines Image Builder Api Update Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/image-builder-api-update-release-notes.md
description: This article offers the latest release notes, known issues, bug fix
Previously updated : 11/10/2023- Last updated : 02/13/2024+
Azure Image Builder is enabling Isolated Image Builds using Azure Container Inst
You might observe a different set of transient Azure resources appear temporarily in the staging resource group but that does not impact your actual builds or the way you interact with Azure Image Builder. For more information, please see [Isolated Image Builds](./security-isolated-image-builds-image-builder.md). > [!IMPORTANT]
-> Make sure your subscription is registered for the `Microsoft.ContainerInstance` provider.
+>Make sure your subscription is registered for the `Microsoft.ContainerInstance` provider and there are no policies blocking deployment of Azure Container Instances resources. Also ensure that quota is available for Azure Container Instances resources.
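
A quick sketch for checking and, if needed, fixing the provider registration from PowerShell (policy and quota still need to be reviewed separately, and the Az.Resources module is assumed):

```powershell
# Sketch only: verify and, if needed, register the Microsoft.ContainerInstance provider.
$state = (Get-AzResourceProvider -ProviderNamespace Microsoft.ContainerInstance |
    Select-Object -First 1).RegistrationState
if ($state -ne "Registered") {
    Register-AzResourceProvider -ProviderNamespace Microsoft.ContainerInstance
}
```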
### April 2023

New portal functionality has been added for Azure Image Builder. Search "Image Templates" in Azure portal, then click "Create". You can also [get started here](https://ms.portal.azure.com/#create/Microsoft.ImageTemplate) with building and validating custom images inside the portal.
virtual-machines Image Builder Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/image-builder-overview.md
Title: Azure VM Image Builder overview
description: In this article, you learn about VM Image Builder for virtual machines in Azure. Previously updated : 12/20/2023 Last updated : 02/13/2024 -+ # Azure VM Image Builder overview
virtual-machines Security Isolated Image Builds Image Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/security-isolated-image-builds-image-builder.md
Title: Isolated Image Builds for Azure VM Image Builder description: Isolated Image Builds are achieved by transitioning the core process of VM image customization/validation from shared infrastructure to dedicated Azure Container Instances resources in your subscription, providing compute and network isolation. Previously updated : 11/10/2023 Last updated : 02/13/2024 -+
Isolated Image Builds enable defense-in-depth by limiting network access of your
This is a platform-level change and doesn't affect AIB's interfaces, so your existing Image Template and Trigger resources continue to function and there's no change in the way you deploy new resources of these types. Similarly, customization logs continue to be available in the storage account.
-You might observe a few new resources temporarily appear in the staging resource group (for example, Azure Container Instance, and Private Endpoint) while some other resource will no longer appear (for example, Public IP Address). As earlier, these temporary resources exist only during the build and will be deleted by Image Builder thereafter.
+You might observe a few new resources temporarily appear in the staging resource group (for example, Azure Container Instance, Virtual Network, Network Security Group, and Private Endpoint), while some other resources may no longer appear (for example, Public IP Address). As before, these temporary resources exist only during the build and are deleted by Image Builder thereafter.
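If you want to see which transient resources are present during a build, a simple listing of the staging resource group works. The group name below is a placeholder following the usual `IT_<destinationResourceGroup>_<templateName>_<guid>` pattern; substitute the name Image Builder created for your template.

```azurecli
# Placeholder staging resource group name; lists the resources currently deployed in it.
az resource list --resource-group IT_myResourceGroup_myImageTemplate_guid --output table
```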
Your image builds will automatically be migrated to Isolated Image Builds, and you don't need to take any action to opt in.
> [!NOTE]
-> Image Builder is in the process of rolling this change out to all locations and customers. Some of these details might change as the process is fine-tuned based on service telemetry and feedback. Please refer to the [troubleshooting guide](./linux/image-builder-troubleshoot.md#troubleshoot-build-failures) for more information.
+> Image Builder is in the process of rolling this change out to all locations and customers. Some of these details (especially around deployment of new Networking related resources) might change as the process is fine-tuned based on service telemetry and feedback. Please refer to the [troubleshooting guide](./linux/image-builder-troubleshoot.md#troubleshoot-build-failures) for more information.
> [!IMPORTANT]
> Make sure your subscription is registered for the `Microsoft.ContainerInstance` provider:
> - Azure CLI: `az provider register -n Microsoft.ContainerInstance`
> - PowerShell: `Register-AzResourceProvider -ProviderNamespace Microsoft.ContainerInstance`
-
+>
+> After successfully registering your subscription, make sure there are no Azure Policies in your subscription that deny deployment of the required resources. A policy that allows only a restricted set of resource types will block deployment if Azure Container Instances isn't in that set.
+>
+> Ensure that your subscription also has sufficient [quota of resources](../container-instances/container-instances-resource-and-quota-limits.md) for deploying Azure Container Instances resources.
+>
+
+> [!IMPORTANT]
+> Image Builder may need to deploy temporary networking-related resources in the staging resource group in your subscription. Ensure that no Azure Policies deny the deployment of such resources (Virtual Network with subnets, Network Security Group, Private Endpoint) in the resource group.
+>
+> If you have Azure Policies applying DDoS protection plans to any newly created Virtual Network, either relax the Policy for the resource group or ensure that the Template Managed Identity has permissions to join the plan.
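To check for policy assignments that might deny Container Instances or networking resources in the staging resource group, a sketch like the following can help. The resource group name is a placeholder, and `--disable-scope-strict-match` includes assignments inherited from the subscription or management group.

```azurecli
# Placeholder staging resource group; lists policy assignments that apply at or above this scope.
az policy assignment list \
  --resource-group IT_myResourceGroup_myImageTemplate_guid \
  --disable-scope-strict-match \
  --output table
```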
## Next steps
virtual-wan Virtual Wan Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-faq.md
There are several things to note with the virtual hub router upgrade:
* If you have a network virtual appliance (NVA) in the virtual hub, you'll have to work with your NVA partner to obtain instructions on how to upgrade your Virtual WAN hub.
+* If your virtual hub is configured with more than 15 routing infrastructure units, please scale in your virtual hub to 2 routing infrastructure units before attempting to upgrade. You can scale back out your hub to more than 15 routing infrastructure units after upgrading your hub.
+ If the update fails for any reason, your hub will be auto-recovered to the old version to ensure there's still a working setup. Additional things to note:
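If you need to scale in the hub before upgrading, as described above, one way is to set the routing infrastructure unit count through the hub's `virtualRouterAutoScaleConfiguration.minCapacity` property with a generic resource update. This is a minimal sketch: the resource group and hub names are placeholders, and you should confirm the property path against your hub's current configuration before applying it.

```azurecli
# Placeholder names; sets the hub's routing infrastructure units (minCapacity) to 2 before the upgrade.
az resource update \
  --resource-group myResourceGroup \
  --name myVirtualHub \
  --resource-type Microsoft.Network/virtualHubs \
  --set properties.virtualRouterAutoScaleConfiguration.minCapacity=2
```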