Updates from: 07/01/2024 01:06:40
Service Microsoft Docs article Related commit history on GitHub Change details
ai-services Protected Material https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/concepts/protected-material.md
+
+ Title: "Protected material detection in Azure AI Content Safety"
+
+description: Learn about Protected material detection and the related flags that the Azure AI Content Safety service returns.
+++++ Last updated : 06/24/2024+
+keywords:
+++
+# Protected material detection
+
+The [Protected material text API](../quickstart-protected-material.md) flags known text content (for example, song lyrics, articles, recipes, and selected web content) that might be output by large language models. This guide provides details about the kind of content that the protected material API detects.
+
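For orientation, a minimal Python sketch of calling the detection API might look like the following. The `text:detectProtectedMaterial` route, the `2024-02-15-preview` API version, and the environment variable names are assumptions for illustration only — check the quickstart for the current values:

```python
import os
import requests

# Assumed endpoint shape, route, and API version -- verify against the quickstart.
endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]  # e.g. https://<resource-name>.cognitiveservices.azure.com
key = os.environ["CONTENT_SAFETY_KEY"]

url = f"{endpoint}/contentsafety/text:detectProtectedMaterial?api-version=2024-02-15-preview"
headers = {"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"}
body = {"text": "<LLM completion text to check>"}

response = requests.post(url, headers=headers, json=body)
response.raise_for_status()

# Expected response shape (assumption): {"protectedMaterialAnalysis": {"detected": true}}
print(response.json())
```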
+## Protected material examples
+
+Refer to this table for details of the major categories of protected material text detection. All four categories are applied when you call the API.
+
+| Category | Scope | Considered acceptable | Considered harmful |
+|--|--|--|--|
+| Recipes | Copyrighted content related to Recipes. <br><br> Other harmful or sensitive text is out of scope for this task, unless it intersects with Recipes IP copyright harm. | <ul><li>Links to web pages that contain information about recipes  </li><li>Any content from recipes that have no or low IP/Copyright protections: <ul><li>Lists of ingredients</li><li>Basic instructions for combining and cooking ingredients</li></ul></li><li>Rejection or refusal to provide copyrighted content: <ul><li>Changing a topic to avoid sharing copyrighted content</li><li>Refusal to share copyrighted content</li><li>Providing nonresponsive information</li></ul></li></ul> | <ul><li>Other literary content in a recipe <ul><li>Matching anecdotes, stories, or personal commentary about the recipe (40 characters or more)</li><li>Creative names for the recipe that are not limited to the well-known name of the dish, or a plain descriptive summary of the dish indicating what the primary ingredient is (40 characters or more)</li><li>Creative descriptions of the ingredients or steps for combining or cooking ingredients, including descriptions that contain more information than needed to create the dish, rely on imprecise wording, or contain profanity (40 characters or more)</li></ul></li><li>Methods to access copyrighted content:<ul><li>Ways to bypass paywalls to access recipes</li></ul></li></ul> |
+| Web Content | All websites that have `webmd.com` as their URL domain name. Only focuses on issues of copyrighted content around Selected Web Content. <br><br> Other harmful or sensitive text is out of scope for this task, unless it intersects Selected Web Content harm. | <ul><li>Links to web pages </li><li>Short excerpts or snippets of Selected Web Content as long as:<ul><li>They are relevant to the user's query</li><li>They are fewer than 200 characters</li></ul></li></ul> | <ul><li>Substantial content of Selected Web Content  <ul><li>Response sections longer than 200 characters that bear substantial similarity to a block of text from the Selected Web Content</li><li>Excerpts from Selected Web Content that are longer than 200 characters</li><li>Quotes from Selected Web Content that are longer than 200 characters</li></ul></li><li>Methods to access copyrighted content:<ul><li>Ways to bypass paywalls or DRM protections to access copyrighted Selected Web Content</li></ul></li></ul> |
+| News | Only focus on issues of copyrighted content around News. <br><br> Other harmful or sensitive text is out of scope for this task, unless it intersects News IP Copyright harm. | <ul><li>Links to web pages that host news or information about news, magazines, or blog articles as long as:<ul><li>They have legitimate permissions</li><li>They have licensed news coverage</li><li>They are authorized platforms</li></ul></li><li>Links to authorized web pages that contain embedded audio/video players as long as:<ul><li>They have legitimate permissions</li><li>They have licensed news coverage</li><li>They are authorized streaming platforms</li><li>They are official YouTube channels</li></ul></li><li>Short excerpts/snippets like headlines or captions from news articles as long as:<ul><li>They are relevant to the user's query</li><li>They are not a substantial part of the article</li><li>They are not the entire article</li></ul></li><li>Summary of news articles as long as:<ul><li>It is relevant to the user's query</li><li>It is brief and factual</li><li>It does not copy/paraphrase a substantial part of the article</li><li>It is clearly and visibly cited as a summary</li></ul></li><li>Analysis/Critique/Review of news articles as long as:<ul><li>It is relevant to the user's query</li><li>It is brief and factual</li><li>It does not copy/paraphrase a substantial part of the article</li><li>It is clearly and visibly cited as an analysis/critique/review</li></ul></li><li>Any news content that has no IP/Copyright protections:<ul><li>News/Magazines/Blogs that are in the public domain</li><li>News/Magazines/Blogs for which Copyright protection has elapsed, been surrendered, or never existed</li></ul></li><li>Rejection or refusal to provide copyrighted content:<ul><li>Changing topic to avoid sharing copyrighted content</li><li>Refusal to share copyrighted content</li><li>Providing nonresponsive information</li></ul></li></ul> | <ul><li>Links to pdf or any other file containing full text of news/magazine/blog articles, unless:<ul><li>They are sourced from authorized platforms with legitimate permissions and licenses</li></ul></li><li>News content<ul><li>More than 200 characters taken verbatim from any news article</li><li>More than 200 characters substantially similar to a block of text from any news article</li><li>Direct access to news/magazine/blog articles that are behind paywalls</li></ul></li><li>Methods to access copyrighted content:<ul><li>Steps to download news from an unauthorized website</li><li>Ways to bypass paywalls or DRM protections to access copyrighted news or videos</li></ul></li></ul> |
+| Lyrics | Only focuses on issues of copyrighted content around Songs. <br><br> Other harmful or sensitive text is out of scope for this task, unless it intersects Songs IP Copyright harm. | <ul><li>Links to web pages that contain information about songs such as:<ul><li>Lyrics of the songs</li><li>Chords or tabs of the associated music</li><li>Analysis or reviews of the song/music</li></ul></li><li>Links to authorized web pages that contain embedded audio/video players as long as:<ul><li>They have legitimate permissions</li><li>They have licensed music</li><li>They are authorized streaming platforms</li><li>They are official YouTube channels</li></ul></li><li>Short excerpts or snippets from lyrics of the songs as long as:<ul><li>They are relevant to the user's query</li><li>They are not a substantial part of the lyrics</li><li>They are not the entire lyrics</li><li>They are not more than 11 words long</li></ul></li><li>Short excerpts or snippets from chords/tabs of the songs as long as:<ul><li>They are relevant to the user's query</li><li>They are not a substantial part of the chords/tabs</li><li>They are not the entire chords/tabs</li></ul></li><li>Any content from songs that have no IP/Copyright protections:<ul><li>Songs/Lyrics/Chords/Tabs that are in the public domain</li><li>Songs/Lyrics/Chords/Tabs for which Copyright protection has elapsed, been surrendered, or never existed</li></ul></li><li>Rejection or refusal to provide copyrighted content:<ul><li>Changing topic to avoid sharing copyrighted content</li><li>Refusal to share copyrighted content</li><li>Providing nonresponsive information</li></ul></li></ul> | <ul><li>Lyrics of a song<ul><li>Entire lyrics</li><li>Substantial part of the lyrics</li><li>Part of lyrics that contain more than 11 words</li></ul></li><li>Chords or Tabs of a song<ul><li>Entire chords/tabs</li><li>Substantial part of the chords/tabs</li></ul></li><li>Links to webpages that contain embedded audio/video players that:<ul><li>Do not have legitimate permissions</li><li>Do not have licensed music</li><li>Are not authorized streaming platforms</li><li>Are not official YouTube channels</li></ul></li><li>Methods to access copyrighted content:<ul><li>Steps to download songs from an unauthorized website</li><li>Ways to bypass paywalls or DRM protections to access copyrighted songs or videos</li></ul></li></ul> |
+++
+## Next steps
+
+Follow the quickstart to get started using Azure AI Content Safety to detect protected material.
+
+> [!div class="nextstepaction"]
+> [Detect protected material](../quickstart-protected-material.md)
++++
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/language-support.md
> [!NOTE] > **Language auto-detection** >
-> You don't need to specify a language code for text moderation. The service automatically detects your input language.
+> You don't need to specify a language code for text moderation and Prompt Shields. The service automatically detects your input language.
| Language name | Language code | Supported Languages | Specially trained languages| |--||--|--|
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/overview.md
There are different types of analysis available from this service. The following
| :-- | :- | | [Prompt Shields](/rest/api/cognitiveservices/contentsafety/text-operations/detect-text-jailbreak) (preview) | Scans text for the risk of a [User input attack](./concepts/jailbreak-detection.md) on a Large Language Model. [Quickstart](./quickstart-jailbreak.md) | | [Groundedness detection](/rest/api/cognitiveservices/contentsafety/text-groundedness-detection-operations/detect-groundedness-options) (preview) | Detects whether the text responses of large language models (LLMs) are grounded in the source materials provided by the users. [Quickstart](./quickstart-groundedness.md) |
-| [Protected material text detection](/rest/api/cognitiveservices/contentsafety/text-operations/detect-text-protected-material) (preview) | Scans AI-generated text for known text content (for example, song lyrics, articles, recipes, selected web content). [Quickstart](./quickstart-protected-material.md)|
+| [Protected material text detection](/rest/api/cognitiveservices/contentsafety/text-operations/detect-text-protected-material) (preview) | Scans AI-generated text for [known text content](./concepts/protected-material.md) (for example, song lyrics, articles, recipes, selected web content). [Quickstart](./quickstart-protected-material.md)|
| Custom categories (rapid) API (preview) | Lets you define [emerging harmful content patterns](./concepts/custom-categories-rapid.md) and scan text and images for matches. [How-to guide](./how-to/custom-categories-rapid.md) | | [Analyze text](/rest/api/cognitiveservices/contentsafety/text-operations/analyze-text) API | Scans text for sexual content, violence, hate, and self harm with multi-severity levels. | | [Analyze image](/rest/api/cognitiveservices/contentsafety/image-operations/analyze-image) API | Scans images for sexual content, violence, hate, and self harm with multi-severity levels. |
See the following list for the input requirements for each feature.
- Maximum text and query length: 7,500 characters. - **Protected material detection (preview)**: - Default maximum length: 1K characters.
- - Minimum length: 111 characters (for scanning LLM completions, not user prompts).
+ - Default minimum length: 110 characters (for scanning LLM completions, not user prompts).
### Language support
ai-services Quickstart Protected Material https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/quickstart-protected-material.md
# Quickstart: Detect protected material (preview)
-The protected material text describes language that matches known text content (for example, song lyrics, articles, recipes, selected web content). This feature can be used to identify and block known text content from being displayed in language model output (English content only).
+Protected material text describes language that matches known text content (for example, song lyrics, articles, recipes, selected web content). This feature can be used to identify and block known text content from being displayed in language model output (English content only).
## Prerequisites
ai-services Fine Tuning Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/fine-tuning-functions.md
As with the example before, this example is artificially expanded for readabilit
## Next steps -- Explore the fine-tuning capabilities in the [Azure OpenAI fine-tuning tutorial](../tutorials/fine-tune.md).-- Review fine-tuning [model regional availability](../concepts/models.md#fine-tuning-models)
+* [Function calling fine-tuning scenarios](https://techcommunity.microsoft.com/t5/ai-azure-ai-services-blog/fine-tuning-with-function-calling-on-azure-openai-service/ba-p/4065968).
+* Explore the fine-tuning capabilities in the [Azure OpenAI fine-tuning tutorial](../tutorials/fine-tune.md).
+* Review fine-tuning [model regional availability](../concepts/models.md#fine-tuning-models).
ai-services Function Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/function-calling.md
Previously updated : 12/04/2023 Last updated : 06/28/2024
At a high level you can break down working with functions into three steps:
> [!IMPORTANT] > The `functions` and `function_call` parameters have been deprecated with the release of the [`2023-12-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json) version of the API. The replacement for `functions` is the [`tools`](../reference.md#chat-completions) parameter. The replacement for `function_call` is the [`tool_choice`](../reference.md#chat-completions) parameter.
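As a minimal sketch of the replacement parameters (the deployment name and the `get_current_weather` definition below are placeholders, not part of the article), the same function definition moves from `functions`/`function_call` into `tools`/`tool_choice`:

```python
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-05-01-preview",
)

weather_function = {
    "name": "get_current_weather",
    "description": "Get the current weather in a given location",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {"type": "string", "description": "The city name, e.g. San Francisco"}
        },
        "required": ["location"],
    },
}

# Deprecated form (for reference): functions=[weather_function], function_call="auto"
# Current form: each definition is wrapped in a tools entry, and tool_choice replaces function_call.
response = client.chat.completions.create(
    model="<YOUR_DEPLOYMENT_NAME_HERE>",  # placeholder deployment name
    messages=[{"role": "user", "content": "What's the weather in San Francisco?"}],
    tools=[{"type": "function", "function": weather_function}],
    tool_choice="auto",
)
print(response.choices[0].message)
```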
-## Parallel function calling
+## Function calling support
-Parallel function calls are supported with:
-
-### Supported models
+### Parallel function calling
* `gpt-35-turbo` (1106) * `gpt-35-turbo` (0125) * `gpt-4` (1106-Preview) * `gpt-4` (0125-Preview)-
-### API support
+* `gpt-4` (vision-preview)
+* `gpt-4` (2024-04-09)
+* `gpt-4o` (2024-05-13)
Support for parallel function calling was first added in API version [`2023-12-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json).
-Parallel function calls allow you to perform multiple function calls together, allowing for parallel execution and retrieval of results. This reduces the number of calls to the API that need to be made and can improve overall performance.
-
-For example for a simple weather app you may want to retrieve the weather in multiple locations at the same time. This will result in a chat completion message with three function calls in the `tool_calls` array, each with a unique `id`. If you wanted to respond to these function calls, you would add 3 new messages to the conversation, each containing the result of one function call, with a `tool_call_id` referencing the `id` from `tools_calls`.
+### Basic function calling with tools
-Below we provide a modified version of OpenAI's `get_current_weather` example. This example as with the original from OpenAI is to provide the basic structure, but is not a fully functioning standalone example. Attempting to execute this code without further modification would result in an error.
+* All the models that support parallel function calling
+* `gpt-4` (0613)
+* `gpt-4-32k` (0613)
+* `gpt-35-turbo-16k` (0613)
+* `gpt-35-turbo` (0613)
-In this example, a single function get_current_weather is defined. The model calls the function multiple times, and after sending the function response back to the model, it decides the next step. It responds with a user-facing message which was telling the user the temperature in San Francisco, Tokyo, and Paris. Depending on the query, it may choose to call a function again.
+## Single tool/function calling example
-To force the model to call a specific function set the `tool_choice` parameter with a specific function name. You can also force the model to generate a user-facing message by setting `tool_choice: "none"`.
-
-> [!NOTE]
-> The default behavior (`tool_choice: "auto"`) is for the model to decide on its own whether to call a function and if so which function to call.
-
-#### [Non-streaming](#tab/non-streaming)
+First we will demonstrate a simple toy function call that can check the time in three hardcoded locations with a single tool/function defined. We have added print statements to help make the code execution easier to follow:
```python import os
-from openai import AzureOpenAI
import json
+from openai import AzureOpenAI
+from datetime import datetime
+from zoneinfo import ZoneInfo
+# Initialize the Azure OpenAI client
client = AzureOpenAI(
- azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT"),
- api_key=os.getenv("AZURE_OPENAI_API_KEY"),
- api_version="2024-03-01-preview"
+ azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
+ api_version="2024-05-01-preview"
)
+# Define the deployment you want to use for your chat completions API calls
-# Example function hard coded to return the same weather
-# In production, this could be your backend API or an external API
-def get_current_weather(location, unit="fahrenheit"):
- """Get the current weather in a given location"""
- if "tokyo" in location.lower():
- return json.dumps({"location": "Tokyo", "temperature": "10", "unit": unit})
- elif "san francisco" in location.lower():
- return json.dumps({"location": "San Francisco", "temperature": "72", "unit": unit})
- elif "paris" in location.lower():
- return json.dumps({"location": "Paris", "temperature": "22", "unit": unit})
- else:
- return json.dumps({"location": location, "temperature": "unknown"})
+deployment_name = "<YOUR_DEPLOYMENT_NAME_HERE>"
+
+# Simplified timezone data
+TIMEZONE_DATA = {
+ "tokyo": "Asia/Tokyo",
+ "san francisco": "America/Los_Angeles",
+ "paris": "Europe/Paris"
+}
+
+def get_current_time(location):
+ """Get the current time for a given location"""
+ print(f"get_current_time called with location: {location}")
+ location_lower = location.lower()
+
+ for key, timezone in TIMEZONE_DATA.items():
+ if key in location_lower:
+ print(f"Timezone found for {key}")
+ current_time = datetime.now(ZoneInfo(timezone)).strftime("%I:%M %p")
+ return json.dumps({
+ "location": location,
+ "current_time": current_time
+ })
+
+ print(f"No timezone data found for {location_lower}")
+ return json.dumps({"location": location, "current_time": "unknown"})
def run_conversation():
- # Step 1: send the conversation and available functions to the model
- messages = [{"role": "user", "content": "What's the weather like in San Francisco, Tokyo, and Paris?"}]
+ # Initial user message
+ messages = [{"role": "user", "content": "What's the current time in San Francisco"}] # Single function call
+ #messages = [{"role": "user", "content": "What's the current time in San Francisco, Tokyo, and Paris?"}] # Parallel function call with a single tool/function defined
+
+ # Define the function for the model
tools = [ { "type": "function", "function": {
- "name": "get_current_weather",
- "description": "Get the current weather in a given location",
+ "name": "get_current_time",
+ "description": "Get the current time in a given location",
"parameters": { "type": "object", "properties": { "location": { "type": "string",
- "description": "The city and state, e.g. San Francisco, CA",
+ "description": "The city name, e.g. San Francisco",
},
- "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
}, "required": ["location"], },
- },
+ }
} ]+
+ # First API call: Ask the model to use the function
response = client.chat.completions.create(
- model="<REPLACE_WITH_YOUR_MODEL_DEPLOYMENT_NAME>",
+ model=deployment_name,
messages=messages, tools=tools,
- tool_choice="auto", # auto is default, but we'll be explicit
+ tool_choice="auto",
)+
+ # Process the model's response
response_message = response.choices[0].message
- tool_calls = response_message.tool_calls
- # Step 2: check if the model wanted to call a function
- if tool_calls:
- # Step 3: call the function
- # Note: the JSON response may not always be valid; be sure to handle errors
- available_functions = {
- "get_current_weather": get_current_weather,
- } # only one function in this example, but you can have multiple
- messages.append(response_message) # extend conversation with assistant's reply
- # Step 4: send the info for each function call and function response to the model
- for tool_call in tool_calls:
- function_name = tool_call.function.name
- function_to_call = available_functions[function_name]
- function_args = json.loads(tool_call.function.arguments)
- function_response = function_to_call(
- location=function_args.get("location"),
- unit=function_args.get("unit"),
- )
- messages.append(
- {
+ messages.append(response_message)
+
+ print("Model's response:")
+ print(response_message)
+
+ # Handle function calls
+ if response_message.tool_calls:
+ for tool_call in response_message.tool_calls:
+ if tool_call.function.name == "get_current_time":
+ function_args = json.loads(tool_call.function.arguments)
+ print(f"Function arguments: {function_args}")
+ time_response = get_current_time(
+ location=function_args.get("location")
+ )
+ messages.append({
"tool_call_id": tool_call.id, "role": "tool",
- "name": function_name,
- "content": function_response,
- }
- ) # extend conversation with function response
- second_response = client.chat.completions.create(
- model="<REPLACE_WITH_YOUR_1106_MODEL_DEPLOYMENT_NAME>",
- messages=messages,
- ) # get a new response from the model where it can see the function response
- return second_response
-print(run_conversation())
-```
+ "name": "get_current_time",
+ "content": time_response,
+ })
+ else:
+ print("No tool calls were made by the model.")
-#### [Streaming](#tab/streaming)
+ # Second API call: Get the final response from the model
+ final_response = client.chat.completions.create(
+ model=deployment_name,
+ messages=messages,
+ )
-```python
-import os
-from openai import AzureOpenAI
-import json
+ return final_response.choices[0].message.content
-client = AzureOpenAI(
- azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT"),
- api_key=os.getenv("AZURE_OPENAI_API_KEY"),
- api_version="2024-03-01-preview"
-)
+# Run the conversation and print the result
+print(run_conversation())
-from typing_extensions import override
-from openai import AssistantEventHandler
-
-class EventHandler(AssistantEventHandler):
- @override
- def on_event(self, event):
- # Retrieve events that are denoted with 'requires_action'
- # since these will have our tool_calls
- if event.event == 'thread.run.requires_action':
- run_id = event.data.id # Retrieve the run ID from the event data
- self.handle_requires_action(event.data, run_id)
-
- def handle_requires_action(self, data, run_id):
- tool_outputs = []
-
- for tool in data.required_action.submit_tool_outputs.tool_calls:
- if tool.function.name == "get_current_temperature":
- tool_outputs.append({"tool_call_id": tool.id, "output": "57"})
- elif tool.function.name == "get_rain_probability":
- tool_outputs.append({"tool_call_id": tool.id, "output": "0.06"})
-
- # Submit all tool_outputs at the same time
- self.submit_tool_outputs(tool_outputs, run_id)
-
- def submit_tool_outputs(self, tool_outputs, run_id):
- # Use the submit_tool_outputs_stream helper
- with client.beta.threads.runs.submit_tool_outputs_stream(
- thread_id=self.current_run.thread_id,
- run_id=self.current_run.id,
- tool_outputs=tool_outputs,
- event_handler=EventHandler(),
- ) as stream:
- for text in stream.text_deltas:
- print(text, end="", flush=True)
- print()
-
-
-with client.beta.threads.runs.stream(
- thread_id=thread.id,
- assistant_id=assistant.id,
- event_handler=EventHandler()
-) as stream:
- stream.until_done()
```
-
-
-## Using function in the chat completions API (Deprecated)
-
-Function calling is available in the `2023-07-01-preview` API version and works with version 0613 of gpt-35-turbo, gpt-35-turbo-16k, gpt-4, and gpt-4-32k.
-
-To use function calling with the Chat Completions API, you need to include two new properties in your request: `functions` and `function_call`. You can include one or more `functions` in your request and you can learn more about how to define functions in the [defining functions](#defining-functions) section. Keep in mind that functions are injected into the system message under the hood so functions count against your token usage.
-
-When functions are provided, by default the `function_call` is set to `"auto"` and the model decides whether or not a function should be called. Alternatively, you can set the `function_call` parameter to `{"name": "<insert-function-name>"}` to force the API to call a specific function or you can set the parameter to `"none"` to prevent the model from calling any functions.
+**Output:**
-# [OpenAI Python 0.28.1](#tab/python)
+```output
+Model's response:
+ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_pOsKdUlqvdyttYB67MOj434b', function=Function(arguments='{"location":"San Francisco"}', name='get_current_time'), type='function')])
+Function arguments: {'location': 'San Francisco'}
+get_current_time called with location: San Francisco
+Timezone found for san francisco
+The current time in San Francisco is 09:24 AM.
+```
+If we are using a model deployment that supports parallel function calls, we could convert this into a parallel function calling example by changing the messages array to ask for the time in multiple locations instead of one.
+To accomplish this, swap the comments in these two lines:
```python-
-import os
-import openai
-
-openai.api_key = os.getenv("AZURE_OPENAI_API_KEY")
-openai.api_version = "2023-07-01-preview"
-openai.api_type = "azure"
-openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT")
-
-messages= [
- {"role": "user", "content": "Find beachfront hotels in San Diego for less than $300 a month with free breakfast."}
-]
-
-functions= [
- {
- "name": "search_hotels",
- "description": "Retrieves hotels from the search index based on the parameters provided",
- "parameters": {
- "type": "object",
- "properties": {
- "location": {
- "type": "string",
- "description": "The location of the hotel (i.e. Seattle, WA)"
- },
- "max_price": {
- "type": "number",
- "description": "The maximum price for the hotel"
- },
- "features": {
- "type": "string",
- "description": "A comma separated list of features (i.e. beachfront, free wifi, etc.)"
- }
- },
- "required": ["location"]
- }
- }
-]
-
-response = openai.ChatCompletion.create(
- engine="gpt-35-turbo-0613", # engine = "deployment_name"
- messages=messages,
- functions=functions,
- function_call="auto",
-)
-
-print(response['choices'][0]['message'])
+ messages = [{"role": "user", "content": "What's the current time in San Francisco"}] # Single function call
+ #messages = [{"role": "user", "content": "What's the current time in San Francisco, Tokyo, and Paris?"}] # Parallel function call with a single tool/function defined
```
-```json
-{
- "role": "assistant",
- "function_call": {
- "name": "search_hotels",
- "arguments": "{\n \"location\": \"San Diego\",\n \"max_price\": 300,\n \"features\": \"beachfront,free breakfast\"\n}"
- }
-}
-```
-
-# [OpenAI Python 1.x](#tab/python-new)
+So that they look like this, and then run the code again:
```python
-import os
-from openai import AzureOpenAI
-
-client = AzureOpenAI(
- api_key=os.getenv("AZURE_OPENAI_API_KEY"),
- api_version="2023-10-01-preview",
- azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
-)
-
-messages= [
- {"role": "user", "content": "Find beachfront hotels in San Diego for less than $300 a month with free breakfast."}
-]
-
-functions= [
- {
- "name": "search_hotels",
- "description": "Retrieves hotels from the search index based on the parameters provided",
- "parameters": {
- "type": "object",
- "properties": {
- "location": {
- "type": "string",
- "description": "The location of the hotel (i.e. Seattle, WA)"
- },
- "max_price": {
- "type": "number",
- "description": "The maximum price for the hotel"
- },
- "features": {
- "type": "string",
- "description": "A comma separated list of features (i.e. beachfront, free wifi, etc.)"
- }
- },
- "required": ["location"]
- }
- }
-]
-
-response = client.chat.completions.create(
- model="gpt-35-turbo-0613", # model = "deployment_name"
- messages= messages,
- functions = functions,
- function_call="auto",
-)
-
-print(response.choices[0].message.model_dump_json(indent=2))
-```
-
-```json
-{
- "content": null,
- "role": "assistant",
- "function_call": {
- "arguments": "{\n \"location\": \"San Diego\",\n \"max_price\": 300,\n \"features\": \"beachfront, free breakfast\"\n}",
- "name": "search_hotels"
- }
-}
+ #messages = [{"role": "user", "content": "What's the current time in San Francisco"}] # Single function call
+ messages = [{"role": "user", "content": "What's the current time in San Francisco, Tokyo, and Paris?"}] # Parallel function call with a single tool/function defined
```
-# [PowerShell](#tab/powershell)
-
-```powershell-interactive
-$openai = @{
- api_key = $Env:AZURE_OPENAI_API_KEY
- api_base = $Env:AZURE_OPENAI_ENDPOINT # should look like https:/YOUR_RESOURCE_NAME.openai.azure.com/
- api_version = '2023-10-01-preview' # may change in the future
- name = 'YOUR-DEPLOYMENT-NAME-HERE' # the custom name you chose for your deployment
-}
-
-$headers = [ordered]@{
- 'api-key' = $openai.api_key
-}
-
-$messages = @()
-$messages += [ordered]@{
- role = 'user'
- content = 'Find beachfront hotels in San Diego for less than $300 a month with free breakfast.'
-}
-
-$functions = @()
-$functions += [ordered]@{
- name = 'search_hotels'
- description = 'Retrieves hotels from the search index based on the parameters provided'
- parameters = @{
- type = 'object'
- properties = @{
- location = @{
- type = 'string'
- description = 'The location of the hotel (i.e. Seattle, WA)'
- }
- max_price = @{
- type = 'number'
- description = 'The maximum price for the hotel'
- }
- features = @{
- type = 'string'
- description = 'A comma separated list of features (i.e. beachfront, free wifi, etc.)'
- }
- }
- required = @('location')
- }
-}
-
-# these API arguments are introduced in model version 0613
-$body = [ordered]@{
- messages = $messages
- functions = $functions
- function_call = 'auto'
-} | ConvertTo-Json -depth 6
+This generates the following output:
-$url = "$($openai.api_base)/openai/deployments/$($openai.name)/chat/completions?api-version=$($openai.api_version)"
+**Output:**
-$response = Invoke-RestMethod -Uri $url -Headers $headers -Body $body -Method Post -ContentType 'application/json'
-$response.choices[0].message | ConvertTo-Json
-```
-
-```json
-{
- "role": "assistant",
- "function_call": {
- "name": "search_hotels",
- "arguments": "{\n \"max_price\": 300,\n \"features\": \"beachfront, free breakfast\",\n \"location\": \"San Diego\"\n}"
- }
-}
+```output
+Model's response:
+ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_IjcAVz9JOv5BXwUx1jd076C1', function=Function(arguments='{"location": "San Francisco"}', name='get_current_time'), type='function'), ChatCompletionMessageToolCall(id='call_XIPQYTCtKIaNCCPTdvwjkaSN', function=Function(arguments='{"location": "Tokyo"}', name='get_current_time'), type='function'), ChatCompletionMessageToolCall(id='call_OHIB5aJzO8HGqanmsdzfytvp', function=Function(arguments='{"location": "Paris"}', name='get_current_time'), type='function')])
+Function arguments: {'location': 'San Francisco'}
+get_current_time called with location: San Francisco
+Timezone found for san francisco
+Function arguments: {'location': 'Tokyo'}
+get_current_time called with location: Tokyo
+Timezone found for tokyo
+Function arguments: {'location': 'Paris'}
+get_current_time called with location: Paris
+Timezone found for paris
+As of now, the current times are:
+
+- **San Francisco:** 11:15 AM
+- **Tokyo:** 03:15 AM (next day)
+- **Paris:** 08:15 PM
``` -
-The response from the API includes a `function_call` property if the model determines that a function should be called. The `function_call` property includes the name of the function to call and the arguments to pass to the function. The arguments are a JSON string that you can parse and use to call your function.
+Parallel function calls let you perform multiple function calls together, allowing for parallel execution and retrieval of results. This reduces the number of API calls that need to be made and can improve overall performance.
-In some cases, the model generates both `content` and a `function_call`. For example, for the prompt above the content could say something like "Sure, I can help you find some hotels in San Diego that match your criteria" along with the function_call.
+For example, in our simple time app, we retrieved the current time for multiple locations at once. This resulted in a chat completion message with three function calls in the `tool_calls` array, each with a unique `id`. If you wanted to respond to these function calls, you would add three new messages to the conversation, each containing the result of one function call, with a `tool_call_id` referencing the `id` from `tool_calls`.
-## Working with function calling
-The following section goes into more detail on how to effectively use functions with the Chat Completions API.
+To force the model to call a specific function set the `tool_choice` parameter with a specific function name. You can also force the model to generate a user-facing message by setting `tool_choice: "none"`.
-### Defining functions
+> [!NOTE]
+> The default behavior (`tool_choice: "auto"`) is for the model to decide on its own whether to call a function and if so which function to call.
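As a brief sketch that reuses the `client`, `deployment_name`, `messages`, and `tools` variables from the example above, forcing a specific function or suppressing tool calls looks like this:

```python
# Force the model to call one specific tool/function by name
response = client.chat.completions.create(
    model=deployment_name,
    messages=messages,
    tools=tools,
    tool_choice={"type": "function", "function": {"name": "get_current_time"}},
)

# Prevent any tool call so the model returns a plain user-facing message
response = client.chat.completions.create(
    model=deployment_name,
    messages=messages,
    tools=tools,
    tool_choice="none",
)
```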
-A function has three main parameters: `name`, `description`, and `parameters`. The `description` parameter is used by the model to determine when and how to call the function so it's important to give a meaningful description of what the function does.
+## Parallel function calling with multiple functions
-`parameters` is a JSON schema object that describes the parameters that the function accepts. You can learn more about JSON schema objects in the [JSON schema reference](https://json-schema.org/understanding-json-schema/).
+Now we will demonstrate another toy function calling example, this time with two different tools/functions defined.
-If you want to describe a function that doesn't accept any parameters, use `{"type": "object", "properties": {}}` as the value for the `parameters` property.
+```python
+import os
+import json
+from openai import AzureOpenAI
+from datetime import datetime, timedelta
+from zoneinfo import ZoneInfo
-### Managing the flow with functions
+# Initialize the Azure OpenAI client
+client = AzureOpenAI(
+ azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT"),
+ api_key=os.getenv("AZURE_OPENAI_API_KEY"),
+ api_version="2024-05-01-preview"
+)
-Example in Python.
+# Provide the model deployment name you want to use for this example
-```python
+deployment_name = "YOUR_DEPLOYMENT_NAME_HERE"
-response = openai.ChatCompletion.create(
- deployment_id="gpt-35-turbo-0613",
- messages=messages,
- functions=functions,
- function_call="auto",
-)
-response_message = response["choices"][0]["message"]
+# Simplified weather data
+WEATHER_DATA = {
+ "tokyo": {"temperature": "10", "unit": "celsius"},
+ "san francisco": {"temperature": "72", "unit": "fahrenheit"},
+ "paris": {"temperature": "22", "unit": "celsius"}
+}
-# Check if the model wants to call a function
-if response_message.get("function_call"):
+# Simplified timezone data
+TIMEZONE_DATA = {
+ "tokyo": "Asia/Tokyo",
+ "san francisco": "America/Los_Angeles",
+ "paris": "Europe/Paris"
+}
- # Call the function. The JSON response may not always be valid so make sure to handle errors
- function_name = response_message["function_call"]["name"]
+def get_current_weather(location, unit=None):
+ """Get the current weather for a given location"""
+ print(f"get_current_weather called with location: {location}, unit: {unit}")
+    location_lower = location.lower()
+
+ for key in WEATHER_DATA:
+ if key in location_lower:
+ print(f"Weather data found for {key}")
+ weather = WEATHER_DATA[key]
+ return json.dumps({
+ "location": location,
+ "temperature": weather["temperature"],
+ "unit": unit if unit else weather["unit"]
+ })
+
+ print(f"No weather data found for {location_lower}")
+ return json.dumps({"location": location, "temperature": "unknown"})
- available_functions = {
- "search_hotels": search_hotels,
- }
- function_to_call = available_functions[function_name]
+def get_current_time(location):
+ """Get the current time for a given location"""
+ print(f"get_current_time called with location: {location}")
+ location_lower = location.lower()
+
+ for key, timezone in TIMEZONE_DATA.items():
+ if key in location_lower:
+ print(f"Timezone found for {key}")
+ current_time = datetime.now(ZoneInfo(timezone)).strftime("%I:%M %p")
+ return json.dumps({
+ "location": location,
+ "current_time": current_time
+ })
+
+ print(f"No timezone data found for {location_lower}")
+ return json.dumps({"location": location, "current_time": "unknown"})
- function_args = json.loads(response_message["function_call"]["arguments"])
- function_response = function_to_call(**function_args)
+def run_conversation():
+ # Initial user message
+ messages = [{"role": "user", "content": "What's the weather and current time in San Francisco, Tokyo, and Paris?"}]
- # Add the assistant response and function response to the messages
- messages.append( # adding assistant response to messages
+ # Define the functions for the model
+ tools = [
{
- "role": response_message["role"],
- "function_call": {
- "name": function_name,
- "arguments": response_message["function_call"]["arguments"],
- },
- "content": None
- }
- )
- messages.append( # adding function response to messages
+ "type": "function",
+ "function": {
+ "name": "get_current_weather",
+ "description": "Get the current weather in a given location",
+ "parameters": {
+ "type": "object",
+ "properties": {
+ "location": {
+ "type": "string",
+ "description": "The city name, e.g. San Francisco",
+ },
+ "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
+ },
+ "required": ["location"],
+ },
+ }
+ },
{
- "role": "function",
- "name": function_name,
- "content": function_response,
+ "type": "function",
+ "function": {
+ "name": "get_current_time",
+ "description": "Get the current time in a given location",
+ "parameters": {
+ "type": "object",
+ "properties": {
+ "location": {
+ "type": "string",
+ "description": "The city name, e.g. San Francisco",
+ },
+ },
+ "required": ["location"],
+ },
+ }
}
- )
-
- # Call the API again to get the final response from the model
- second_response = openai.ChatCompletion.create(
- messages=messages,
- deployment_id="gpt-35-turbo-0613"
- # optionally, you could provide functions in the second call as well
- )
- print(second_response["choices"][0]["message"])
-else:
- print(response["choices"][0]["message"])
-```
-
-Example in PowerShell.
+ ]
-```powershell-interactive
-# continues from the previous PowerShell example
+ # First API call: Ask the model to use the functions
+ response = client.chat.completions.create(
+ model=deployment_name,
+ messages=messages,
+ tools=tools,
+ tool_choice="auto",
+ )
-$response = Invoke-RestMethod -Uri $url -Headers $headers -Body $body -Method Post -ContentType 'application/json'
-$response.choices[0].message | ConvertTo-Json
+ # Process the model's response
+ response_message = response.choices[0].message
+ messages.append(response_message)
-# Check if the model wants to call a function
-if ($null -ne $response.choices[0].message.function_call) {
+ print("Model's response:")
+ print(response_message)
- $functionName = $response.choices[0].message.function_call.name
- $functionArgs = $response.choices[0].message.function_call.arguments
+ # Handle function calls
+ if response_message.tool_calls:
+ for tool_call in response_message.tool_calls:
+ function_name = tool_call.function.name
+ function_args = json.loads(tool_call.function.arguments)
+ print(f"Function call: {function_name}")
+ print(f"Function arguments: {function_args}")
+
+ if function_name == "get_current_weather":
+ function_response = get_current_weather(
+ location=function_args.get("location"),
+ unit=function_args.get("unit")
+ )
+ elif function_name == "get_current_time":
+ function_response = get_current_time(
+ location=function_args.get("location")
+ )
+ else:
+ function_response = json.dumps({"error": "Unknown function"})
+
+ messages.append({
+ "tool_call_id": tool_call.id,
+ "role": "tool",
+ "name": function_name,
+ "content": function_response,
+ })
+ else:
+ print("No tool calls were made by the model.")
- # Add the assistant response and function response to the messages
- $messages += @{
- role = $response.choices[0].message.role
- function_call = @{
- name = $functionName
- arguments = $functionArgs
- }
- content = 'None'
- }
- $messages += @{
- role = 'function'
- name = $response.choices[0].message.function_call.name
- content = "$functionName($functionArgs)"
- }
-
- # Call the API again to get the final response from the model
-
- # these API arguments are introduced in model version 0613
- $body = [ordered]@{
- messages = $messages
- functions = $functions
- function_call = 'auto'
- } | ConvertTo-Json -depth 6
+ # Second API call: Get the final response from the model
+ final_response = client.chat.completions.create(
+ model=deployment_name,
+ messages=messages,
+ )
- $url = "$($openai.api_base)/openai/deployments/$($openai.name)/chat/completions?api-version=$($openai.api_version)"
+ return final_response.choices[0].message.content
- $secondResponse = Invoke-RestMethod -Uri $url -Headers $headers -Body $body -Method Post -ContentType 'application/json'
- $secondResponse.choices[0].message | ConvertTo-Json
-}
+# Run the conversation and print the result
+print(run_conversation())
```
-Example output.
-
-```output
-{
- "role": "assistant",
- "content": "I'm sorry, but I couldn't find any beachfront hotels in San Diego for less than $300 a month with free breakfast."
-}
+**Output**
+
+```Output
+Model's response:
+ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_djHAeQP0DFEVZ2qptrO0CYC4', function=Function(arguments='{"location": "San Francisco", "unit": "celsius"}', name='get_current_weather'), type='function'), ChatCompletionMessageToolCall(id='call_q2f1HPKKUUj81yUa3ITLOZFs', function=Function(arguments='{"location": "Tokyo", "unit": "celsius"}', name='get_current_weather'), type='function'), ChatCompletionMessageToolCall(id='call_6TEY5Imtr17PaB4UhWDaPxiX', function=Function(arguments='{"location": "Paris", "unit": "celsius"}', name='get_current_weather'), type='function'), ChatCompletionMessageToolCall(id='call_vpzJ3jElpKZXA9abdbVMoauu', function=Function(arguments='{"location": "San Francisco"}', name='get_current_time'), type='function'), ChatCompletionMessageToolCall(id='call_1ag0MCIsEjlwbpAqIXJbZcQj', function=Function(arguments='{"location": "Tokyo"}', name='get_current_time'), type='function'), ChatCompletionMessageToolCall(id='call_ukOu3kfYOZR8lpxGRpdkhhdD', function=Function(arguments='{"location": "Paris"}', name='get_current_time'), type='function')])
+Function call: get_current_weather
+Function arguments: {'location': 'San Francisco', 'unit': 'celsius'}
+get_current_weather called with location: San Francisco, unit: celsius
+Weather data found for san francisco
+Function call: get_current_weather
+Function arguments: {'location': 'Tokyo', 'unit': 'celsius'}
+get_current_weather called with location: Tokyo, unit: celsius
+Weather data found for tokyo
+Function call: get_current_weather
+Function arguments: {'location': 'Paris', 'unit': 'celsius'}
+get_current_weather called with location: Paris, unit: celsius
+Weather data found for paris
+Function call: get_current_time
+Function arguments: {'location': 'San Francisco'}
+get_current_time called with location: San Francisco
+Timezone found for san francisco
+Function call: get_current_time
+Function arguments: {'location': 'Tokyo'}
+get_current_time called with location: Tokyo
+Timezone found for tokyo
+Function call: get_current_time
+Function arguments: {'location': 'Paris'}
+get_current_time called with location: Paris
+Timezone found for paris
+Here's the current information for the three cities:
+
+### San Francisco
+- **Time:** 09:13 AM
+- **Weather:** 72°C (quite warm!)
+
+### Tokyo
+- **Time:** 01:13 AM (next day)
+- **Weather:** 10°C
+
+### Paris
+- **Time:** 06:13 PM
+- **Weather:** 22°C
+
+Is there anything else you need?
```
-In the examples, we don't do any validation or error handling so you'll want to make sure to add that to your code.
-
-For a full example of working with functions, see the [sample notebook on function calling](https://aka.ms/oai/functions-samples). You can also apply more complex logic to chain multiple function calls together, which is covered in the sample as well.
+> [!IMPORTANT]
+> The JSON response might not always be valid, so you need to add additional logic to your code to handle errors. For some use cases, you might find that you need to use fine-tuning to improve [function calling performance](./fine-tuning-functions.md).
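One minimal way to harden the examples above against malformed arguments is to wrap the parsing step; this is a sketch of the idea, not a prescribed pattern:

```python
import json

def parse_tool_arguments(tool_call):
    """Return the parsed arguments for a tool call, or None if the JSON is invalid."""
    try:
        return json.loads(tool_call.function.arguments)
    except json.JSONDecodeError as err:
        print(f"Could not parse arguments for {tool_call.function.name}: {err}")
        return None
```

In the tool-call loops shown earlier, you could then skip the call (or append an error message as the tool result) whenever `parse_tool_arguments` returns `None`.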
-### Prompt engineering with functions
+## Prompt engineering with functions
When you define a function as part of your request, the details are injected into the system message using specific syntax that the model has been trained on. This means that functions consume tokens in your prompt and that you can apply prompt engineering techniques to optimize the performance of your function calls. The model uses the full context of the prompt to determine if a function should be called including function definition, the system message, and the user messages. #### Improving quality and reliability+ If the model isn't calling your function when or how you expect, there are a few things you can try to improve the quality. ##### Provide more details in your function definition+ It's important that you provide a meaningful `description` of the function and provide descriptions for any parameter that might not be obvious to the model. For example, in the description for the `location` parameter, you could include extra details and examples on the format of the location. ```json "location": {
It's important that you provide a meaningful `description` of the function and p
``` ##### Provide more context in the system message+ The system message can also be used to provide more context to the model. For example, if you have a function called `search_hotels` you could include a system message like the following to instruct the model to call the function when a user asks for help with finding a hotel. ```json {"role": "system", "content": "You're an AI assistant designed to help users search for hotels. When a user asks for help finding a hotel, you should call the search_hotels function."} ``` ##### Instruct the model to ask clarifying questions+ In some cases, you want to instruct the model to ask clarifying questions to prevent making assumptions about what values to use with functions. For example, with `search_hotels` you would want the model to ask for clarification if the user request didn't include details on `location`. To instruct the model to ask a clarifying question, you could include content like the next example in your system message. ```json {"role": "system", "content": "Don't make assumptions about what values to use with functions. Ask for clarification if a user request is ambiguous."}
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/overview.md
Previously updated : 02/15/2024 Last updated : 06/30/2024 recommendations: false # What is Azure OpenAI Service?
-Azure OpenAI Service provides REST API access to OpenAI's powerful language models including the GPT-4, GPT-4 Turbo with Vision, GPT-3.5-Turbo, and Embeddings model series. In addition, the new GPT-4 and GPT-3.5-Turbo model series have now reached general availability. These models can be easily adapted to your specific task including but not limited to content generation, summarization, image understanding, semantic search, and natural language to code translation. Users can access the service through REST APIs, Python SDK, or our web-based interface in the Azure OpenAI Studio.
+Azure OpenAI Service provides REST API access to OpenAI's powerful language models including GPT-4o, GPT-4 Turbo with Vision, GPT-4, GPT-3.5-Turbo, and Embeddings model series. These models can be easily adapted to your specific task including but not limited to content generation, summarization, image understanding, semantic search, and natural language to code translation. Users can access the service through REST APIs, Python SDK, or our web-based interface in the Azure OpenAI Studio.
### Features overview | Feature | Azure OpenAI | | | |
-| Models available | **GPT-4 series (including GPT-4 Turbo with Vision)** <br>**GPT-3.5-Turbo series**<br> Embeddings series <br> Learn more in our [Models](./concepts/models.md) page.|
-| Fine-tuning (preview) | `GPT-3.5-Turbo` (0613) <br> `babbage-002` <br> `davinci-002`.|
+| Models available | **GPT-4o**<br> **GPT-4 series (including GPT-4 Turbo with Vision)** <br>**GPT-3.5-Turbo series**<br> Embeddings series <br> Learn more in our [Models](./concepts/models.md) page.|
+| Fine-tuning | `GPT-4` (preview) <br>`GPT-3.5-Turbo` (0613) <br> `babbage-002` <br> `davinci-002`.|
| Price | [Available here](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) <br> For details on GPT-4 Turbo with Vision, see the [special pricing information](../openai/concepts/gpt-with-vision.md#special-pricing-information).| | Virtual network support & private link support | Yes, unless using [Azure OpenAI on your data](./concepts/use-your-data.md). | | Managed Identity| Yes, via Microsoft Entra ID |
ai-studio Index Lookup Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/index-lookup-tool.md
The prompt flow Index Lookup tool enables the use of common vector indices (such
## Build with the Index Lookup tool
+1. If you have a flow that contains one of the deprecated legacy index tools (the Vector Index Lookup tool, Vector DB Lookup tool, or Faiss Index Lookup tool), you first need to [upgrade your flow](#upgrade-your-tools).
1. Create or open a flow in [Azure AI Studio](https://ai.azure.com). For more information, see [Create a flow](../flow-develop.md). 1. Select **+ More tools** > **Index Lookup** to add the Index Lookup tool to your flow.
The following JSON format response is an example returned by the tool that inclu
] ```- ## Migrate from legacy tools to the Index Lookup tool The Index Lookup tool looks to replace the three deprecated legacy index tools: the Vector Index Lookup tool, the Vector DB Lookup tool, and the Faiss Index Lookup tool.
aks App Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/app-routing.md
az aks create \
### Enable on an existing cluster
-To enable application routing on an existing cluster, use the [`az aks approuting enable`][az-aks-approuting-enable] command.
+To enable application routing on an existing cluster, use the [`az aks approuting enable`][az-aks-approuting-enable] or the [`az aks enable-addons`][az-aks-enable-addons] command with the `--addons` parameter set to `http_application_routing`.
```azurecli-interactive
+# az aks approuting enable
az aks approuting enable --resource-group <ResourceGroupName> --name <ClusterName>+
+# az aks enable-addons
+az aks enable-addons --resource-group <ResourceGroupName> --name <ClusterName> --addons http_application_routing
``` # [Open Service Mesh (OSM) (retired)](#tab/with-osm)
app-service Configure Ssl Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-ssl-certificate.md
After the certificate renews inside your key vault, App Service automatically sy
## Frequently asked questions -- [How can I automate adding a bring-your-owncertificate to an app?](#how-can-i-automate-adding-a-bring-your-owncertificate-to-an-app)-- [Frequently asked questions for App Service certificates](configure-ssl-app-service-certificate.md#frequently-asked-questions)
-#### How can I automate adding a bring-your-owncertificate to an app?
+### How can I automate adding a bring-your-own certificate to an app?
- [Azure CLI: Bind a custom TLS/SSL certificate to a web app](scripts/cli-configure-ssl-certificate.md) - [Azure PowerShell Bind a custom TLS/SSL certificate to a web app using PowerShell](scripts/powershell-configure-ssl-certificate.md)
-#### Can I configure a private CA certificate on my app?
-
-App Service has a list of Trusted Root Certificates which you cannot modify in the multi-tenant variant version of App Service, but you can load your own CA certificate in the Trusted Root Store in an App Service Environment (ASE), which is a single-tenant environment in App Service. (The Free, Basic, Standard, and Premium App Service Plans are all multi-tenant, and the Isolated Plans are single-tenant.)
-- [Private client certificate](environment/overview-certificates.md)-
+### Can I use a private CA (certificate authority) certificate for inbound TLS on my app?
+You can use a private CA certificate for inbound TLS in an [App Service Environment version 3 (ASEv3)](./environment/overview-certificates.md). This isn't possible in App Service (multi-tenant). For more information on App Service multi-tenant vs. single-tenant, see [App Service Environment v3 and App Service public multitenant comparison](./environment/ase-multi-tenant-comparison.md).
+
+### Can I make outbound calls using a private CA (certificate authority) client certificate from my app?
+This is only supported for Windows container apps in multi-tenant App Service. In addition, you can make outbound calls using a private CA client certificate with both code-based and container-based apps in an [App Service Environment version 3 (ASEv3)](./environment/overview-certificates.md). For more information on App Service multi-tenant vs. single-tenant, see [App Service Environment v3 and App Service public multitenant comparison](./environment/ase-multi-tenant-comparison.md).
+
+### Can I load a private CA (certificate authority) certificate in my App Service Trusted Root Store?
+You can load your own CA certificate into the Trusted Root Store in an [App Service Environment version 3 (ASEv3)](./environment/overview-certificates.md). You can't modify the list of Trusted Root Certificates in App Service (multi-tenant). For more information on App Service multi-tenant vs. single-tenant, see [App Service Environment v3 and App Service public multitenant comparison](./environment/ase-multi-tenant-comparison.md).
## More resources
automation Automation Linux Hrw Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-linux-hrw-install.md
description: This article tells how to install an agent-based Hybrid Runbook Wo
Previously updated : 04/21/2024 Last updated : 06/29/2024
The Hybrid Runbook Worker role requires the [Log Analytics agent](../azure-monit
The Hybrid Runbook Worker feature supports the following distributions. All operating systems are assumed to be x64. x86 isn't supported for any operating system. * Amazon Linux 2012.09 to 2015.09
-* CentOS Linux 5, 6, 7, and 8
* Oracle Linux 6, 7, and 8 * Red Hat Enterprise Linux Server 5, 6, 7, and 8 * Debian GNU/Linux 6, 7, and 8
Linux Hybrid Runbook Workers support a limited set of runbook types in Azure Aut
|Runbook type | Supported | |-|--|
-|Python 3 (preview)|Yes, required for these distros only: SUSE LES 15, RHEL 8, and CentOS 8|
+|Python 3 (preview)|Yes, required for these distros only: SUSE Linux Enterprise Server (SLES) 15, RHEL 8|
|Python 2 |Yes, for any distro that doesn't require Python 3<sup>1</sup> | |PowerShell |Yes<sup>2</sup> | |PowerShell Workflow |No |
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/overview.md
Title: Azure Automation Change Tracking and Inventory overview
description: This article describes the Change Tracking and Inventory feature, which helps you identify software and Microsoft service changes in your environment. Previously updated : 12/13/2023 Last updated : 06/30/2024
Change Tracking and Inventory now support Python 2 and Python 3. If your machine
> To use the OMS agent compatible with Python 3, ensure that you first uninstall Python 2; otherwise, the OMS agent will continue to run with python 2 by default. #### [Python 2](#tab/python-2)-- Red Hat, CentOS, Oracle:
+- Red Hat, Oracle:
```bash sudo yum install -y python2
Change Tracking and Inventory now support Python 2 and Python 3. If your machine
#### [Python 3](#tab/python-3) -- Red Hat, CentOS, Oracle:
+- Red Hat, Oracle:
```bash sudo yum install -y python3
automation Extension Based Hybrid Runbook Worker Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/extension-based-hybrid-runbook-worker-install.md
description: This article provides information about deploying the extension-bas
Previously updated : 05/22/2024 Last updated : 06/29/2024 #Customer intent: As a developer, I want to learn about extension so that I can efficiently deploy Hybrid Runbook Workers.
Azure Automation stores and manages runbooks and then delivers them to one or mo
| Windows (x64) | Linux (x64) | |||
-| &#9679; Windows Server 2022 (including Server Core) <br> &#9679; Windows Server 2019 (including Server Core) <br> &#9679; Windows Server 2016, version 1709, and 1803 (excluding Server Core) <br> &#9679; Windows Server 2012, 2012 R2 (excluding Server Core) <br> &#9679; Windows 10 Enterprise (including multi-session) and Pro | &#9679; Debian GNU/Linux 8, 9, 10, and 11 <br> &#9679; Ubuntu 18.04 LTS, 20.04 LTS, and 22.04 LTS <br> &#9679; SUSE Linux Enterprise Server 15.2, and 15.3 <br> &#9679; Red Hat Enterprise Linux Server 7, 8, and 9 <br> &#9679; CentOS Linux 7 and 8 <br> &#9679; SUSE Linux Enterprise Server (SLES) 15 <br> &#9679; Rocky Linux 9 <br> &#9679; Oracle Linux 7 and 8 <br> *Hybrid Worker extension would follow support timelines of the OS vendor*.|
+| &#9679; Windows Server 2022 (including Server Core) <br> &#9679; Windows Server 2019 (including Server Core) <br> &#9679; Windows Server 2016, version 1709, and 1803 (excluding Server Core) <br> &#9679; Windows Server 2012, 2012 R2 (excluding Server Core) <br> &#9679; Windows 10 Enterprise (including multi-session) and Pro | &#9679; Debian GNU/Linux 8, 9, 10, and 11 <br> &#9679; Ubuntu 18.04 LTS, 20.04 LTS, and 22.04 LTS <br> &#9679; SUSE Linux Enterprise Server 15.2, and 15.3 <br> &#9679; Red Hat Enterprise Linux Server 7, 8, and 9 <br> &#9679; SUSE Linux Enterprise Server (SLES) 15 <br> &#9679; Rocky Linux 9 <br> &#9679; Oracle Linux 7 and 8 <br> *Hybrid Worker extension would follow support timelines of the OS vendor*.|
### Other Requirements
automation Migrate Existing Agent Based Hybrid Worker To Extension Based Workers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md
Title: Migrate an existing agent-based hybrid workers to extension-based-workers
description: This article provides information on how to migrate an existing agent-based hybrid worker to extension based workers. Previously updated : 05/17/2024 Last updated : 06/29/2024 #Customer intent: As a developer, I want to learn about extension so that I can efficiently migrate agent based hybrid workers to extension based workers.
The purpose of the Extension-based approach is to simplify the installation and
| Windows (x64) | Linux (x64) | |||
-| &#9679; Windows Server 2022 (including Server Core) <br> &#9679; Windows Server 2019 (including Server Core) <br> &#9679; Windows Server 2016, version 1709 and 1803 (excluding Server Core) <br> &#9679; Windows Server 2012, 2012 R2 (excluding Server Core) <br> &#9679; Windows 10 Enterprise (including multi-session) and Pro| &#9679; Debian GNU/Linux 8, 9, 10, and 11 <br> &#9679; Ubuntu 18.04 LTS, 20.04 LTS, and 22.04 LTS <br> &#9679; SUSE Linux Enterprise Server 15.2, and 15.3 <br> &#9679; Red Hat Enterprise Linux Server 7, 8, and 9 <br> &#9679; CentOS Linux 7 and 8 <br> &#9679; SUSE Linux Enterprise Server (SLES) 15 <br> &#9679; Rocky Linux 9 <br> &#9679; Oracle Linux 7 and 8 <br> *Hybrid Worker extension would follow support timelines of the OS vendor*. |
+| &#9679; Windows Server 2022 (including Server Core) <br> &#9679; Windows Server 2019 (including Server Core) <br> &#9679; Windows Server 2016, version 1709 and 1803 (excluding Server Core) <br> &#9679; Windows Server 2012, 2012 R2 (excluding Server Core) <br> &#9679; Windows 10 Enterprise (including multi-session) and Pro| &#9679; Debian GNU/Linux 8, 9, 10, and 11 <br> &#9679; Ubuntu 18.04 LTS, 20.04 LTS, and 22.04 LTS <br> &#9679; SUSE Linux Enterprise Server 15.2, and 15.3 <br> &#9679; Red Hat Enterprise Linux Server 7, 8, and 9 <br> &#9679; SUSE Linux Enterprise Server (SLES) 15 <br> &#9679; Rocky Linux 9 <br> &#9679; Oracle Linux 7 and 8 <br> *Hybrid Worker extension would follow support timelines of the OS vendor*. |
### Other Requirements
automation Dsc Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/quickstarts/dsc-configuration.md
By enabling Azure Automation State Configuration, you can manage and monitor the
To complete this quickstart, you need: * An Azure subscription. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/).
-* An Azure Resource Manager virtual machine running Red Hat Enterprise Linux, CentOS, or Oracle Linux. For instructions on creating a VM, see [Create your first Linux virtual machine in the Azure portal](../../virtual-machines/linux/quick-create-portal.md)
+* An Azure Resource Manager virtual machine running Red Hat Enterprise Linux or Oracle Linux. For instructions on creating a VM, see [Create your first Linux virtual machine in the Azure portal](../../virtual-machines/linux/quick-create-portal.md)
## Sign in to Azure Sign in to the [Azure portal](https://portal.azure.com).
automation Update Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/update-management.md
Title: Troubleshoot Azure Automation Update Management issues
description: This article tells how to troubleshoot and resolve issues with Azure Automation Update Management. Previously updated : 05/26/2023 Last updated : 06/29/2024
When an assessment of OS updates pending for your Linux machine is done, [Open V
You can manually check the Linux machine, the applicable updates, and their classification per the distro's package manager. To understand which updates are classified as **Security** by your package manager, run the following commands.
-For YUM, the following command returns a non-zero list of updates categorized as **Security** by Red Hat. Note that in the case of CentOS, it always returns an empty list and no security classification occurs.
+For YUM, the following command returns a non-zero list of updates categorized as **Security** by Red Hat.
```bash sudo yum -q --security check-update
Updates are often superseded by other updates. For more information, see [Update
### Installing updates by classification on Linux
-Deploying updates to Linux by classification ("Critical and security updates") has important caveats, especially for CentOS. These limitations are documented on the [Update Management overview page](../update-management/overview.md#update-classifications).
+Deploying updates to Linux by classification ("Critical and security updates") has important caveats. These limitations are documented on the [Update Management overview page](../update-management/overview.md#update-classifications).
### KB2267602 is consistently missing
automation Deploy Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/deploy-updates.md
Title: How to create update deployments for Azure Automation Update Management
description: This article describes how to schedule update deployments and review their status. Previously updated : 11/05/2021 Last updated : 06/30/2024
automation Manage Updates For Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/manage-updates-for-vm.md
Previously updated : 08/25/2021 Last updated : 06/30/2024 # Manage updates and patches for your VMs
automation Operating System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/operating-system-requirements.md
description: This article describes the supported Windows and Linux operating sy
Previously updated : 08/01/2023 Last updated : 06/30/2024
All operating systems are assumed to be x64. x86 is not supported for any operat
|Operating system |Notes | |||
-|CentOS 6, 7, and 8 | Linux agents require access to an update repository. Classification-based patching requires `yum` to return security data that CentOS doesn't have in its RTM releases. For more information on classification-based patching on CentOS, see [Update classifications on Linux](view-update-assessments.md#linux). |
|Oracle Linux 6.x, 7.x, 8x | Linux agents require access to an update repository. | |Red Hat Enterprise 6, 7, and 8 | Linux agents require access to an update repository. | |SUSE Linux Enterprise Server 12, 15, and 15.1 | Linux agents require access to an update repository. |
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/overview.md
description: This article provides an overview of the Update Management feature
Previously updated : 12/13/2023 Last updated : 06/30/2024
azure-monitor Basic Logs Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md
The following table summarizes the Basic and Analytics log data plans.
|:|:|:| | Ingestion | Regular ingestion cost. | Reduced ingestion cost. | | Log queries | Full query capabilities<br/>No extra cost. | [Basic query capabilities](basic-logs-query.md#limitations).<br/>Pay-per-use.|
-| Retention | [Configure retention from 30 days to two years](data-retention-archive.md). | Retention fixed at eight days.<br/>When you change an existing table's plan to Basic logs, [Azure archives data](data-retention-archive.md) that's more than eight days old but still within the table's original retention period. |
+| Retention | [Configure retention from 30 days to two years](data-retention-archive.md). | Retention fixed at thirty days.<br/>When you change an existing table's plan to Basic logs, [Azure archives data](data-retention-archive.md) that's more than thirty days old but still within the table's original retention period. |
| Alerts | Supported. | Not supported. | > [!NOTE]
By default, all tables in your Log Analytics workspace are Analytics tables, and
Configure a table for Basic logs if: -- You don't require more than eight days of data retention for the table.
+- You don't require more than thirty days of data retention for the table.
- You only require basic queries of the data using a [limited version of the query language](basic-logs-query.md#limitations). - The cost savings for data ingestion exceed the expected cost for any expected queries. - The table [supports Basic logs](#supported-tables). ## Set a table's log data plan
-When you change a table's plan from Analytics to Basic, Log Analytics immediately archives any data that's older than eight days and up to original data retention of the table. In other words, the total retention period of the table remains unchanged, unless you explicitly [modify the archive period](../logs/data-retention-archive.md).
+When you change a table's plan from Analytics to Basic, Log Analytics immediately archives any data that's older than thirty days and up to the original data retention of the table. In other words, the total retention period of the table remains unchanged, unless you explicitly [modify the archive period](../logs/data-retention-archive.md).
When you change a table's plan from Basic to Analytics, the changes take effect on existing data in the table immediately.
Status code: 200
```http { "properties": {
- "retentionInDays": 8,
+ "retentionInDays": 30,
"totalRetentionInDays": 30, "archiveRetentionInDays": 22, "plan": "Basic",
Update-AzOperationalInsightsTable -ResourceGroupName RG-NAME -WorkspaceName WOR
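The same plan change can also be scripted with the Azure CLI; a minimal sketch, assuming a placeholder resource group and workspace and a table that supports Basic logs:

```azurecli
# Switch an eligible table to the Basic plan; resource and table names are placeholders.
az monitor log-analytics workspace table update \
    --resource-group ContosoRG \
    --workspace-name ContosoWorkspace \
    --name ContainerLogV2 \
    --plan Basic
```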
+ ## Supported tables These tables currently support Basic logs:
backup Backup Azure Auto Enable Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-auto-enable-backup.md
Title: Auto-Enable Backup on VM Creation using Azure Policy description: 'An article describing how to use Azure Policy to auto-enable backup for all VMs created in a given scope' Previously updated : 10/17/2022 Last updated : 06/29/2024 +
-# Auto-Enable Backup on VM Creation using Azure Policy
+# Auto-enable backup on VM creation using Azure Policy
One of the key responsibilities of a Backup or Compliance Admin in an organization is to ensure that all business-critical machines are backed up with the appropriate retention.
The below steps describe the end-to-end process of assigning Policy 1: **Configu
> [!NOTE] >
-> Azure Policy can also be used on existing VMs, using [remediation](../governance/policy/how-to/remediate-resources.md).
+> - Azure Policy can also be used on existing VMs, using [remediation](../governance/policy/how-to/remediate-resources.md).
+> - It's recommended that this policy not be assigned to more than 200 VMs at a time. If the policy is assigned to more than 200 VMs, it can result in the backup being triggered a few hours later than that specified by the schedule.
-> [!NOTE]
->
-> It's recommended that this policy not be assigned to more than 200 VMs at a time. If the policy is assigned to more than 200 VMs, it can result in the backup being triggered a few hours later than that specified by the schedule.
-
-## Next Steps
+## Next step
[Learn more about Azure Policy](../governance/policy/overview.md)
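As a rough illustration of the remediation path mentioned in the note above, the following Azure CLI sketch creates a remediation task for an existing policy assignment so that VMs created before the assignment are also configured for backup. The assignment name and resource group are hypothetical.

```azurecli
# Create a remediation task for an existing assignment (names are placeholders).
az policy remediation create \
    --name enable-backup-remediation \
    --policy-assignment enable-backup-assignment \
    --resource-group ContosoVMs
```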
backup Backup Azure Diagnostics Mode Data Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-diagnostics-mode-data-model.md
Title: Azure Monitor logs data model description: In this article, learn about the Azure Monitor Log Analytics data model details for Azure Backup data. Previously updated : 11/30/2022 Last updated : 06/29/2024 -+
To update your queries to remove dependency on V1 schema, follow these steps:
| distinct BackupItemUniqueId_s, ProtectedContainerUniqueId_s ````
-## Next steps
+## Next step
After the data model review is complete, start [creating custom queries](../azure-monitor/visualize/tutorial-logs-dashboards.md) in Azure Monitor logs to build your own dashboard.
backup Manage Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/manage-telemetry.md
Title: Manage telemetry settings in Microsoft Azure Backup Server (MABS) description: This article provides information about how to manage the telemetry settings in MABS. Previously updated : 07/27/2021- Last updated : 06/28/2024+
batch Batch Aad Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-aad-auth.md
Title: Authenticate Azure Batch services with Microsoft Entra ID description: Learn how to authenticate Azure Batch service applications with Microsoft Entra ID by using integrated authentication or a service principal. Previously updated : 06/25/2024 Last updated : 06/27/2024
To authenticate with a service principal from Batch .NET:
1. Call this method by using the following code. The `.default` scope ensures that the application has permission to access all the scopes for the resource. ```csharp
- var token = await GetAccessToken(new string[] { "BatchResourceId/.default" });
+ var token = await GetAccessToken(new string[] { $"{BatchResourceUri}/.default" });
``` 1. Construct a **BatchTokenCredentials** object that takes the delegate as a parameter. Use those credentials to open a **BatchClient** object. Then use the **BatchClient** object for subsequent operations against the Batch service:
batch Batch Apis Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-apis-tools.md
Title: APIs and tools for developers description: Learn about the APIs and tools available for developing solutions with the Azure Batch service. Previously updated : 06/26/2024 Last updated : 06/27/2024
Your applications and services can issue direct REST API calls or use one or mor
| | | | | | | | **Batch REST** |[Azure REST API - Docs](/rest/api/batchservice/) |N/A |- |- | [Supported versions](/rest/api/batchservice/batch-service-rest-api-versioning) | | **Batch .NET** |[Azure SDK for .NET - Docs](/dotnet/api/overview/azure/batch) |[NuGet](https://www.nuget.org/packages/Microsoft.Azure.Batch/) |[Tutorial](tutorial-parallel-dotnet.md) |[GitHub](https://github.com/Azure-Samples/azure-batch-samples/tree/master/CSharp) | [Release notes](https://aka.ms/batch-net-dataplane-changelog) |
-| **Batch Python** |[Azure SDK for Python - Docs](/python/api/overview/azure/mgmt-datafactory-readme?view=azure-python&preserve-view=true) |[PyPI](https://pypi.org/project/azure-batch/) |[Tutorial](tutorial-parallel-python.md)|[GitHub](https://github.com/Azure-Samples/azure-batch-samples/tree/master/Python/Batch) | [Readme](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/batch/azure-batch/README.md) |
+| **Batch Python** |[Azure SDK for Python - Docs](/python/api/overview/azure/batch) |[PyPI](https://pypi.org/project/azure-batch/) |[Tutorial](tutorial-parallel-python.md)|[GitHub](https://github.com/Azure-Samples/azure-batch-samples/tree/master/Python/Batch) | [Readme](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/batch/azure-batch/README.md) |
| **Batch JavaScript** |[Azure SDK for JavaScript - Docs](/javascript/api/overview/azure/batch) |[npm](https://www.npmjs.com/package/@azure/batch) |[Tutorial](batch-js-get-started.md) |- | [Readme](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/batch/batch) | | **Batch Java** |[Azure SDK for Java - Docs](/java/api/overview/azure/batch) |[Maven](https://search.maven.org/search?q=a:azure-batch) |- |[GitHub](https://github.com/Azure-Samples/azure-batch-samples/tree/master/Java) | [Readme](https://github.com/Azure/azure-batch-sdk-for-java)|
batch Batch Pool Node Error Checking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-pool-node-error-checking.md
Title: Pool and node errors description: Learn about background operations, errors to check for, and how to avoid errors when you create Azure Batch pools and nodes. Previously updated : 06/10/2024 Last updated : 06/27/2024
After you make sure to retrieve any data you need from the node or upload it to
You can delete old completed jobs or tasks whose task data is still on the nodes. Look in the `recentTasks` collection in the [taskInformation](/rest/api/batchservice/computenode/get#taskinformation) on the node, or use the [File - List From Compute Node](/rest/api/batchservice/file/listfromcomputenode) API. Deleting a job deletes all the tasks in the job. Deleting the tasks in the job triggers deletion of data in the task directories on the nodes, and frees up space. Once you've freed up enough space, reboot the node. The node should move out of `unusable` state and into `idle` again.
-To recover an unusable node in [VirtualMachineConfiguration](/rest/api/batchservice/pool/add#virtualmachineconfiguration) pools, you can remove the node from the pool by using the [Pool - Remove Nodes](/rest/api/batchservice/pool/removenodes) API. Then you can grow the pool again to replace the bad node with a fresh one. For [CloudServiceConfiguration](/rest/api/batchservice/pool/add#cloudserviceconfiguration) pools, you can reimage the node by using the [Compute Node - Reimage](/rest/api/batchservice/computenode/reimage) API to clean the entire disk. Reimage isn't currently supported for [VirtualMachineConfiguration](/rest/api/batchservice/pool/add#virtualmachineconfiguration) pools.
+To recover an unusable node in [VirtualMachineConfiguration](/rest/api/batchservice/pool/add#virtualmachineconfiguration) pools, you can remove the node from the pool by using the [Pool - Remove Nodes](/rest/api/batchservice/pool/removenodes) API. Then you can grow the pool again to replace the bad node with a fresh one.
+
+> [!Important]
+> Reimage isn't currently supported for [VirtualMachineConfiguration](/rest/api/batchservice/pool/add#virtualmachineconfiguration) pools.
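A minimal Azure CLI sketch of the remove-and-regrow approach described above, assuming the Batch account credentials are already set (for example, with `az batch account login`) and using placeholder pool and node IDs:

```azurecli
# Remove the unusable node from the pool (pool and node IDs are placeholders).
az batch node delete --pool-id mypool --node-list tvmps_000000000001

# Grow the pool again so a fresh node replaces the bad one.
az batch pool resize --pool-id mypool --target-dedicated-nodes 4
```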
## Next steps
batch Batch Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-virtual-network.md
Title: Provision a pool in a virtual network description: Learn how to create a Batch pool in an Azure virtual network so that compute nodes can communicate securely with other VMs in the network, such as a file server. Previously updated : 12/06/2023 Last updated : 06/27/2024 # Create an Azure Batch pool in a virtual network
To allow compute nodes to communicate securely with other virtual machines, or w
* Multiple pools can be created in the same virtual network or in the same subnet (as long as it has sufficient address space). A single pool can't exist across multiple virtual networks or subnets.
-Other virtual network requirements differ, depending on whether the Batch pool is in the `VirtualMachineConfiguration`
-or `CloudServiceConfiguration`. `VirtualMachineConfiguration` for Batch pools is recommended, because `CloudServiceConfiguration`
-pools are [deprecated](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/).
- > [!IMPORTANT] > Batch pools can be configured in one of two node communication modes. Classic node communication mode is > where the Batch service initiates communication to the compute nodes.
You can also disable default remote access on these ports through configuring [p
Outbound to BatchNodeManagement.*region* service tag is required in `classic` pool communication mode if you're using Job Manager tasks or if your tasks must communicate back to the Batch service. For outbound to BatchNodeManagement.*region* in `simplified` pool communication mode, the Batch service currently only uses TCP protocol, but UDP might be required for future compatibility. For [pools without public IP addresses](simplified-node-communication-pool-no-public-ip.md) using `simplified` communication mode and with a node management private endpoint, an NSG isn't needed. For more information about outbound security rules for the BatchNodeManagement.*region* service tag, see [Use simplified compute node communication](simplified-compute-node-communication.md).
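As one possible sketch of the outbound rule described above, the following Azure CLI command allows TCP 443 to the BatchNodeManagement.*region* service tag. The NSG name, resource group, region, and priority are placeholders, not values from the source article.

```azurecli
# Allow outbound TCP 443 from the pool subnet to the Batch node management service tag.
# NSG name, resource group, region, and priority are placeholders.
az network nsg rule create \
    --resource-group myResourceGroup \
    --nsg-name myBatchSubnetNsg \
    --name AllowBatchNodeManagementOutbound \
    --priority 200 \
    --direction Outbound \
    --access Allow \
    --protocol Tcp \
    --destination-address-prefixes BatchNodeManagement.eastus \
    --destination-port-ranges 443
```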
-## Pools in the Cloud Services Configuration
-
-> [!WARNING]
-> Cloud Services Configuration pools are [deprecated](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/). Use Virtual Machine Configuration pools instead.
-
-Requirements:
--- Supported Virtual Networks: Classic Virtual Networks only.-- Subnet ID: when specifying the subnet using the Batch APIs, use the *resource identifier* of the subnet. The subnet identifier is of the form:-
- `/subscriptions/{subscription}/resourceGroups/{group}/providers/Microsoft.ClassicNetwork/virtualNetworks/{network}/subnets/{subnet}`
--- Permissions: the `Microsoft Azure Batch` service principal must have the `Classic Virtual Machine Contributor` Azure role for the specified Virtual Network.-
-### Network security groups for Cloud Services Configuration pools
-
-The subnet must allow inbound communication from the Batch service to be able to schedule tasks on the compute nodes, and it must allow outbound communication to communicate with Azure Storage or other resources.
-
-You don't need to specify an NSG, because Batch configures inbound communication only from Batch IP addresses to the pool nodes. However, If the specified subnet has associated NSGs and/or a firewall, configure the inbound and outbound security rules as shown in the following tables. If communication to the compute nodes in the specified subnet is denied by an NSG, the Batch service sets the state of the compute nodes to **unusable**.
-
-Configure inbound traffic on port 3389 for Windows if you need to permit RDP access to the pool nodes. This rule isn't required for the pool nodes to be usable.
-
-**Inbound security rules**
-
-| Source IP addresses | Source ports | Destination | Destination ports | Protocol | Action |
-| | | | | | |
-| Any <br /><br />Although this rule effectively requires *allow all*, the Batch service applies an ACL rule at the level of each node that filters out all non-Batch service IP addresses. | * | Any | 10100, 20100, 30100 | TCP | Allow |
-| Optional, to allow RDP access to compute nodes. | * | Any | 3389 | TCP | Allow |
-
-**Outbound security rules**
-
-| Source | Source ports | Destination | Destination ports | Protocol | Action |
-| | | | | | |
-| Any | * | Any | 443 | Any | Allow |
- ## Create a pool with a Virtual Network in the Azure portal After you've created your Virtual Network and assigned a subnet to it, you can create a Batch pool with that Virtual Network. Follow these steps to create a pool from the Azure portal: 
batch Credential Access Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/credential-access-key-vault.md
Title: Use certificates and securely access Azure Key Vault with Batch description: Learn how to programmatically access your credentials from Key Vault using Azure Batch. Previously updated : 06/13/2024 Last updated : 06/27/2024
> [!WARNING] > Batch account certificates as detailed in this article are [deprecated](batch-certificate-migration-guide.md). To securely access Azure Key Vault, simply use [Pool managed identities](managed-identity-pools.md) with the appropriate access permissions configured for the user-assigned managed identity to access your Key Vault. If you need to provision certificates on Batch nodes, please utilize the available Azure Key Vault VM extension in conjunction with pool Managed Identity to install and manage certificates on your Batch pool. For more information on deploying certificates from Azure Key Vault with Managed Identity on Batch pools, see [Enable automatic certificate rotation in a Batch pool](automatic-certificate-rotation.md).
->
-> `CloudServiceConfiguration` pools do not provide the ability to specify either Managed Identity or the Azure Key Vault VM extension, and these pools are [deprecated](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/). You should migrate to `VirtualMachineConfiguration` pools which provide the aforementioned alternatives.
In this article, you'll learn how to set up Batch nodes with certificates to securely access credentials stored in [Azure Key Vault](../key-vault/general/overview.md).
business-continuity-center Tutorial Recover Deleted Item https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/business-continuity-center/tutorial-recover-deleted-item.md
Title: Recover deleted item description: Learn how to recover deleted item- Previously updated : 11/15/2023+ Last updated : 06/28/2024
chaos-studio Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/troubleshooting.md
From the **Experiments** list in the Azure portal, select the experiment name to
![Screenshot that shows experiment history.](images/run-experiment-history.png)
+Alternatively, use the REST API to obtain the experiment's execution details. Learn more in the [REST API sample article](chaos-studio-samples-rest-api.md).
+
+```azurecli
+az rest --method post --url "https://management.azure.com/{experimentId}/executions/{executionDetailsId}/getExecutionDetails?api-version={apiVersion}"
+```
+ ### My agent-based fault failed with the error "Verify that the target is correctly added and proper read permissions are provided to the experiment msi" This error might happen if you added the agent by using the Azure portal, which has a known issue. Enabling an agent-based target doesn't assign the user-assigned managed identity to the VM or virtual machine scale set.
To resolve this problem, go to the VM or virtual machine scale set in the Azure
This error will happen if you try to run multiple agent faults at the same time. Today the agent only supports running a single agent-fault at a time, and will fail if you define an experiment that runs multiple agent faults at the same time.
+### The experiment didn't start or failed immediately
+
+After starting an experiment, you might see an error message like: `The long-running operation has failed. InternalServerError. The target resource(s) could not be resolved. Error Code: OperationFailedException`. Usually, this indicates that the experiment's identity doesn't have the necessary permissions.
+
+To resolve this error, ensure that the experiment's system-assigned or user-assigned managed identity has permission to all resources in the experiment. Learn more about permissions here: [Permissions and security in Azure Chaos Studio](chaos-studio-permissions-security.md). For example, if the experiment targets a virtual machine, navigate to the virtual machine's identity page and assign the "Virtual Machine Contributor" role to the experiment's managed identity.
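For example, a minimal Azure CLI sketch of that role assignment, using placeholder values for the experiment's principal ID and the target VM's resource ID:

```azurecli
# Grant the experiment's managed identity access to the target VM (IDs are placeholders).
az role assignment create \
    --assignee "<experiment-principal-id>" \
    --role "Virtual Machine Contributor" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Compute/virtualMachines/<vm-name>"
```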
+ ## Problems when setting up a managed identity ### When I try to add a system-assigned/user-assigned managed identity to my existing experiment, it fails to save.
cosmos-db Ai Agents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/ai-agents.md
Title: AI agents
+ Title: AI agent
description: AI agent key concepts and implementation of AI agent memory system.
Last updated 06/26/2024
-# AI agents
+# AI agent
AI agents are designed to perform specific tasks, answer questions, and automate processes for users. These agents vary widely in complexity, ranging from simple chatbots, to copilots, to advanced AI assistants in the form of digital or robotic systems that can execute complex workflows autonomously. This article provides conceptual overviews and detailed implementation samples on AI agents. ## What are AI Agents?
-Unlike standalone large language models (LLMs) or rule-based software/hardware systems, AI agents possess the follow common features:
+Unlike standalone large language models (LLMs) or rule-based software/hardware systems, an AI agent possesses the following common features:
-- [Planning](#reasoning-and-planning). AI agents can plan and sequence actions to achieve specific goals. The integration of LLMs has revolutionized their planning capabilities.-- [Tool usage](#frameworks). Advanced AI agents can utilize various tools, such as code execution, search, and computation capabilities, to perform tasks effectively. Tool usage is often done through function calling.-- [Perception](#frameworks). AI agents can perceive and process information from their environment, including visual, auditory, and other sensory data, making them more interactive and context aware.-- [Memory](#ai-agent-memory-system). AI agents possess the ability to remember past interactions (tool usage and perception) and behaviors (tool usage and planning). They store these experiences and even perform self-reflection to inform future actions. This memory component allows for continuity and improvement in agent performance over time.
+- [Planning](#reasoning-and-planning). An AI agent can plan and sequence actions to achieve specific goals. The integration of LLMs has revolutionized its planning capabilities.
+- [Tool usage](#frameworks). An advanced AI agent can utilize various tools, such as code execution, search, and computation capabilities, to perform tasks effectively. Tool usage is often done through function calling.
+- [Perception](#frameworks). An AI agent can perceive and process information from its environment, including visual, auditory, and other sensory data, making it more interactive and context aware.
+- [Memory](#ai-agent-memory-system). An AI agent possesses the ability to remember past interactions (tool usage and perception) and behaviors (tool usage and planning). It stores these experiences and even performs self-reflection to inform future actions. This memory component allows for continuity and improvement in agent performance over time.
> [!NOTE]
-> The usage of the term "memory" in the context of AI agents should not be confused with the concept of computer memory (like volatile, non-volatile, and persistent memory).
+> The usage of the term "memory" in the context of an AI agent should not be confused with the concept of computer memory (like volatile, non-volatile, and persistent memory).
### Copilots
A multi-agent system provides the following advantages over a copilot or a singl
- Sophisticated abilities: Multi-agent systems can handle complex or large-scale problems by conducting thorough decision-making processes and distributing tasks among multiple agents. - Enhanced memory: Multi-agent systems with memory can overcome large language models' context windows, enabling better understanding and information retention.
-## Implement AI agents
+## Implement AI agent
### Reasoning and planning
Reflexion agents verbally reflect on task feedback signals, then maintain their
### Frameworks
-Various frameworks and tools can facilitate the development and deployment of AI agents.
+Various frameworks and tools can facilitate the development and deployment of AI agents.
For tool usage and perception that do not require sophisticated planning and memory, some popular LLM orchestrator frameworks are LangChain, LlamaIndex, Prompt Flow, and Semantic Kernel.
For advanced and autonomous planning and execution workflows, [AutoGen](https://
The prevalent practice for experimenting with AI-enhanced applications in 2022 through 2024 has been using standalone database management systems for various data workflows or types. For example, an in-memory database for caching, a relational database for operational data (including tracing/activity logs and LLM conversation history), and a [pure vector database](vector-database.md#integrated-vector-database-vs-pure-vector-database) for embedding management.
-However, this practice of using a complex web of standalone databases can hurt AI agent's performance. Integrating all these disparate databases into a cohesive, interoperable, and resilient memory system for AI agents is a significant challenge in and of itself. Moreover, many of the frequently used database services are not optimal for the speed and scalability that AI agent systems need. These databases' individual weaknesses are exacerbated in multi-agent systems:
+However, this practice of using a complex web of standalone databases can hurt an AI agent's performance. Integrating all these disparate databases into a cohesive, interoperable, and resilient memory system for AI agents is a significant challenge in and of itself. Moreover, many of the frequently used database services are not optimal for the speed and scalability that AI agent systems need. These databases' individual weaknesses are exacerbated in multi-agent systems:
#### In-memory databases
-In-memory databases are excellent for speed but may struggle with the large-scale data persistence that AI agents require.
+In-memory databases are excellent for speed but may struggle with the large-scale data persistence that AI agents require.
#### Relational databases Relational databases are not ideal for the varied modalities and fluid schemas of data handled by agents. Moreover, relational databases require manual efforts and even downtime to manage provisioning, partitioning, and sharding.
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/introduction.md
Today's applications are required to be highly responsive and always online. The
The surge of AI-powered applications created another layer of complexity, because many of these applications integrate a multitude of data stores. For example, some organizations built applications that simultaneously connect to MongoDB, Postgres, Redis, and Gremlin. These databases differ in implementation workflow and operational performances, posing extra complexity for scaling applications.
-Azure Cosmos DB simplifies and expedites your application development by being the single database for your operational data needs, from [geo-replicated distributed caching](https://medium.com/@marcodesanctis2/using-azure-cosmos-db-as-your-persistent-geo-replicated-distributed-cache-b381ad80f8a0) to backup to [vector indexing and search](vector-database.md). It provides the data infrastructure for modern applications like [AI agents](ai-agents.md), digital commerce, Internet of Things, and booking management. It can accommodate all your operational data models, including relational, document, vector, key-value, graph, and table.
+Azure Cosmos DB simplifies and expedites your application development by being the single database for your operational data needs, from [geo-replicated distributed caching](https://medium.com/@marcodesanctis2/using-azure-cosmos-db-as-your-persistent-geo-replicated-distributed-cache-b381ad80f8a0) to backup to [vector indexing and search](vector-database.md). It provides the data infrastructure for modern applications like [AI agent](ai-agents.md), digital commerce, Internet of Things, and booking management. It can accommodate all your operational data models, including relational, document, vector, key-value, graph, and table.
## An AI database providing industry-leading capabilities...
cosmos-db Vector Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/vector-database.md
Title: Vector database- description: Vector database functionalities, implementation, and comparison.
defender-for-cloud Ai Security Posture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/ai-security-posture.md
Title: AI security posture management (Preview) description: Learn about AI security posture management in Microsoft Defender for Cloud and how it protects resources from AI threats. Previously updated : 05/05/2024 Last updated : 06/30/2024
The Defender Cloud Security Posture Management (CSPM) plan in Microsoft Defender
:::image type="content" source="media/ai-security-posture/ai-lifecycle.png" alt-text="Diagram of the development lifecycle that is covered by Defender for Cloud's AI security posture management.":::
+> [!IMPORTANT]
+> To enable AI security posture management's capabilities on an AWS account that already:
+> - Is connected to your Azure account.
+> - Has Defender CSPM enabled.
+> - Has permissions type set as **Least privilege access**.
+>
+> You must reconfigure the permissions on that connector to enable the relevant permissions using these steps:
+> 1. In the Azure portal, navigate to the Environment Settings page and select the appropriate AWS connector.
+> 1. Select **Configure access**.
+> 1. Ensure the permissions type is set to **Least privilege access**.
+> 1. [Follow steps 5 - 8](quickstart-onboard-aws.md#select-defender-plans) to finish the configuration.
+ ## Discovering generative AI apps Defender for Cloud discovers AI workloads and identifies details of your organization's AI BOM. This visibility allows you to identify and address vulnerabilities and protect generative AI applications from potential threats.
defender-for-cloud Concept Cloud Security Posture Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-cloud-security-posture-management.md
Title: Cloud Security Posture Management (CSPM) description: Learn more about Cloud Security Posture Management (CSPM) in Microsoft Defender for Cloud and how it helps improve your security posture. Previously updated : 05/23/2024 Last updated : 06/30/2024 #customer intent: As a reader, I want to understand the concept of Cloud Security Posture Management (CSPM) in Microsoft Defender for Cloud.
The following table summarizes each plan and their cloud availability.
| [ServiceNow Integration](integration-servicenow.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP | | [Critical assets protection](critical-assets-protection.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP | | [Governance to drive remediation at-scale](governance-rules.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP |
-| [Data security posture management (DSPM), Sensitive data scanning](concept-data-security-posture.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP* |
+| [Data security posture management (DSPM), Sensitive data scanning](concept-data-security-posture.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP<sup>[1](#footnote1)</sup> |
| [Agentless discovery for Kubernetes](concept-agentless-containers.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP | | [Agentless code-to-cloud containers vulnerability assessment](agentless-vulnerability-assessment-azure.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP |
-(*) In GCP sensitive data discovery [only supports Cloud Storage](concept-data-security-posture-prepare.md#whats-supported).
+<sup><a name="footnote1"></a>1</sup>: GCP sensitive data discovery [only supports Cloud Storage](concept-data-security-posture-prepare.md#whats-supported).
> [!NOTE] > Starting March 7, 2024, Defender CSPM must be enabled to have premium DevOps security capabilities that include code-to-cloud contextualization powering security explorer and attack paths and pull request annotations for Infrastructure-as-Code security findings. See DevOps security [support and prerequisites](devops-support.md) to learn more.
defender-for-iot Quickstart Onboard Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/quickstart-onboard-iot-hub.md
This article explains how to enable Microsoft Defender for IoT on an Azure IoT h
- The ability to create a standard tier IoT Hub.
+- For the [resource group and access management setup process](#allow-access-to-the-iot-hub), you need the following roles:
+
+ - To add role assignments, you need the Owner, Role Based Access Control Administrator and User Access Administrator roles.
+  - To register resource providers, you need the Owner and Contributor roles.
+
+ Learn more about [privileged administrator roles in Azure](../../role-based-access-control/role-assignments-steps.md#privileged-administrator-roles).
+ > [!NOTE] > Defender for IoT currently only supports standard tier IoT Hubs.
You can create a hub in the Azure portal. For all new IoT hubs, Defender for IoT
:::image type="content" source="media/quickstart-onboard-iot-hub/management-tab.png" alt-text="Ensure the Defender for IoT toggle is set to on.":::
+1. Follow these steps to [allow access to the IoT Hub](#allow-access-to-the-iot-hub).
+ ## Enable Defender for IoT on an existing IoT Hub You can onboard Defender for IoT to an existing IoT Hub, where you can then monitor the device identity management, device to cloud, and cloud to device communication patterns.
You can onboard Defender for IoT to an existing IoT Hub, where you can then moni
1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Follow these steps to [allow access to the IoT Hub](#allow-access-to-the-iot-hub).
+ 1. Navigate to **IoT Hub** > **`Your hub`** > **Defender for IoT** > **Overview**. 1. Select **Secure your IoT solution**, and complete the onboarding form. :::image type="content" source="media/quickstart-onboard-iot-hub/secure-your-iot-solution.png" alt-text="Select the secure your IoT solution button to secure your solution." lightbox="media/quickstart-onboard-iot-hub/secure-your-iot-solution-expanded.png":::
-The **Secure your IoT solution** button will only appear if the IoT Hub hasn't already been onboarded, or if you set the Defender for IoT toggle to **Off** while onboarding.
+ The **Secure your IoT solution** button will only appear if the IoT Hub hasn't already been onboarded, or if you set the Defender for IoT toggle to **Off** while onboarding.
+ :::image type="content" source="media/quickstart-onboard-iot-hub/toggle-is-off.png" alt-text="If your toggle was set to off during onboarding.":::
## Verify that Defender for IoT is enabled
Configure data collection settings for Defender for IoT in your IoT hub, such as
1. Select **Save** to save your settings.
+## Set up resource providers and access control
+
+To set up permissions needed to access the IoT hub:
+
+1. [Set up resource providers and access control for the IoT hub](#allow-access-to-the-iot-hub).
+1. To allow access to a Log Analytics workspace, also [set up resource providers and access control for Log Analytics workspace](#allow-access-to-a-log-analytics-workspace).
+
+Learn more about [resource providers and resource types](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider).
+
+### Allow access to the IoT Hub
+
+To allow access to the IoT Hub:
+
+#### Set up resource providers for the IoT hub
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) and navigate to the **Subscriptions** page.
+
+1. In the subscriptions table, select your subscription.
+
+1. In the subscription page that opens, from the left menu bar, select **Resource providers**.
+
+1. In the search bar, type: *Microsoft.iot*.
+
+1. Select the **Microsoft.IoTSecurity** provider and verify that its status is **Registered**.
+
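If you prefer the Azure CLI over the portal steps above, a minimal sketch for registering and verifying the provider on the current subscription:

```azurecli
# Register the Microsoft.IoTSecurity resource provider.
az provider register --namespace Microsoft.IoTSecurity

# Confirm the provider shows as Registered.
az provider show --namespace Microsoft.IoTSecurity --query registrationState --output tsv
```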
+#### Set up access control for the IoT hub
+
+1. In your IoT hub, from the left menu bar, select **Access control (IAM)**, and from the top menu, select **Add > Add role assignment**.
+
+1. In the **Role tab**, select the **Privileged administrator roles** tab, and select the **Contributor** role.
+
+1. Select the **Members** tab, and next to **Members**, select **Select members**.
+
+1. In the **Select members** page, in the **Select** field, type *Azure security*, select **Azure Security for IoT**, and select **Select** at the bottom.
+
+1. Back in the **Members** tab, select **Review + assign** at the bottom of the tab. In the **Review and assign** tab, select **Review + assign** at the bottom again.
+
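The same assignment can be sketched with the Azure CLI, assuming the **Azure Security for IoT** service principal can be looked up by its display name and using placeholder values for the hub's resource ID:

```azurecli
# Look up the object ID of the Azure Security for IoT service principal (assumed display name).
az ad sp list --display-name "Azure Security for IoT" --query "[0].id" --output tsv

# Assign Contributor on the IoT hub to that object ID (scope uses placeholder values).
az role assignment create \
    --assignee-object-id "<principal-object-id>" \
    --assignee-principal-type ServicePrincipal \
    --role Contributor \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Devices/IotHubs/<hub-name>"
```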
+### Allow access to a Log Analytics workspace
+
+To connect to a Log Analytics workspace:
+
+#### Set up resource providers for the Log Analytics workspace
+
+1. In the Azure portal, navigate to the **Subscriptions** page.
+
+1. In the subscriptions table, select your subscription.
+
+1. In the subscription page that opens, from the left menu bar, select **Resource providers**.
+
+1. In the search bar, type: *Microsoft.OperationsManagement*.
+
+1. Select the **Microsoft.OperationsManagement** provider and verify that its status is **Registered**.
+
+#### Set up access control for the Log Analytics workspace
+
+1. In the Azure portal, search for and navigate to the **Log analytics workspaces** page, select your workspace, and from the left menu, select **Access control (IAM)**.
+
+1. From the top menu, select **Add > Add role assignment**.
+
+1. In the **Role tab**, under **Job function roles**, search for *Log analytics*, and select the **Log Analytics Contributor** role.
+
+1. Select the **Members** tab, and next to **Members**, select **Select members**.
+
+1. In the **Select members** page, in the **Select** field, type *Azure security*, select **Azure Security for IoT**, and select **Select** at the bottom.
+
+1. Back in the **Members** tab, select **Review + assign** at the bottom of the tab. In the **Review and assign** tab, select **Review + assign** at the bottom again.
+
+#### Enable Defender for IoT
+
+1. In your IoT hub, from the left menu, select **Settings**, and in the **Settings page**, select **Data Collection**.
+
+1. Toggle on **Enable Microsoft Defender for IoT**, and select **Save** at the bottom.
+
+1. Under **Choose the Log Analytics workspace you want to connect to**, set the toggle to **On**.
+
+1. Select the subscription for which you [set up the resource provider](#set-up-resource-providers-for-the-log-analytics-workspace) and workspace.
+ ## Next steps Advance to the next article to add a resource group to your solution.
defender-for-iot Integrate Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/integrate-overview.md
Integrate Microsoft Defender for IoT with partner services to view data from acr
|Name |Description |Support scope |Supported by |Learn more | |||||| | **Aruba ClearPass** (cloud) | View Defender for IoT data together with Aruba ClearPass data, using Microsoft Sentinel to create custom dashboards, custom alerts, and improve your investigation ability.<br><br> Connect to [Microsoft Sentinel](concept-sentinel-integration.md), and install the [Aruba ClearPass data connector](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-arubaclearpass?tab=Overview). | - OT networks <br>- Cloud-connected or locally managed OT sensors | Microsoft | [Microsoft Sentinel documentation](/azure/sentinel/data-connectors/aruba-clearpass) |
-| **Aruba ClearPass** (on-premises) | View Defender for IoT data together with Aruba ClearPass data by doing one of the following:<br><br>- Configure your sensor to send syslog files directly to ClearPass. <br>- | - OT networks <br>- Cloud-connected or locally managed OT sensors | Microsoft | [Forward on-premises OT alert information](how-to-forward-alert-information-to-partners.md) <br><br>[Defender for IoT API reference](references-work-with-defender-for-iot-apis.md)|
-|**Aruba ClearPass** (legacy) | Share Defender for IoT data directly with ClearPass Security Exchange and update the ClearPass Policy Manager Endpoint Database with Defender for IoT data. | - OT networks<br>- Locally managed sensors and on-premises management consoles | Microsoft | [Integrate ClearPass with Microsoft Defender for IoT](tutorial-clearpass.md) |
+| **Aruba ClearPass** (on-premises) | View Defender for IoT data together with Aruba ClearPass data by doing one of the following:<br><br>- Configure your sensor to send syslog files directly to ClearPass.<br>- Use the Defender for IoT API. | - OT networks <br>- Cloud-connected or locally managed OT sensors | Microsoft | [Forward on-premises OT alert information](how-to-forward-alert-information-to-partners.md) <br><br>[Defender for IoT API reference](references-work-with-defender-for-iot-apis.md)|
## Axonius
defender-for-iot Tutorial Clearpass https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-clearpass.md
- Title: Integrate ClearPass with Microsoft Defender for IoT
-description: In this tutorial, you learn how to integrate Microsoft Defender for IoT with ClearPass using Defender for IoT's legacy, on-premises integration.
- Previously updated : 09/06/2023---
-# Integrate ClearPass with Microsoft Defender for IoT
-
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
--
-This article describes how to integrate Aruba ClearPass with Microsoft Defender for IoT, in order to view both ClearPass and Defender for IoT information in a single place.
-
-Viewing both Defender for IoT and ClearPass information together provides SOC analysts with multidimensional visibility into the specialized OT protocols and devices deployed in industrial environments, along with ICS-aware behavioral analytics to rapidly detect suspicious or anomalous behavior.
-
-## Cloud-based integrations
-
-> [!TIP]
-> Cloud-based security integrations provide several benefits over on-premises solutions, such as centralized, simpler sensor management and centralized security monitoring.
->
-> Other benefits include real-time monitoring, efficient resource use, increased scalability and robustness, improved protection against security threats, simplified maintenance and updates, and seamless integration with third-party solutions.
->
-
-If you're integrating a cloud-connected OT sensor with Aruba ClearPass, we recommend that you connect to [Microsoft Sentinel](concept-sentinel-integration.md), and then install the [Aruba ClearPass data connector](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-arubaclearpass?tab=Overview).
-
-Microsoft Sentinel is a scalable cloud service for security information event management (SIEM) security orchestration automated response (SOAR). SOC teams can use the integration between Microsoft Defender for IoT and Microsoft Sentinel to collect data across networks, detect and investigate threats, and respond to incidents.
-
-In Microsoft Sentinel, the Defender for IoT data connector and solution brings out-of-the-box security content to SOC teams, helping them to view, analyze and respond to OT security alerts, and understand the generated incidents in the broader organizational threat contents.
-
-For more information, see:
--- [Tutorial: Connect Microsoft Defender for IoT with Microsoft Sentinel](iot-solution.md)-- [Tutorial: Investigate and detect threats for IoT devices](iot-advanced-threat-monitoring.md)-- [Microsoft Sentinel documentation](/azure/sentinel/data-connectors/aruba-clearpass).-
-## On-premises integrations
-
-If you're working with an air-gapped, locally managed OT sensor, you'll need an on-premises solution to view Defender for IoT and Splunk information in the same place.
-
-In such cases, we recommend that you configure your OT sensor to send syslog files directly to ClearPass, or use Defender for IoT's built-in API.
-
-For more information, see:
--- [Forward on-premises OT alert information](how-to-forward-alert-information-to-partners.md)-- [Defender for IoT API reference](references-work-with-defender-for-iot-apis.md)--
-## On-premises integration (legacy)
-
-This section describes how to integrate Defender for IoT and ClearPass Policy Manager (CPPM) using the legacy, on-premises integration.
-
-> [!IMPORTANT]
-> The legacy Aruba ClearPass integration is supported through October 2024 using sensor version 23.1.3, and won't be supported in upcoming major software versions.. For customers using the legacy integration, we recommend moving to one of the following methods:
->
-> - If you're integrating your security solution with cloud-based systems, we recommend that you use data connectors through [Microsoft Sentinel](#cloud-based-integrations).
-> - For on-premises integrations, we recommend that you either configure your OT sensor to [forward syslog events, or use Defender for IoT APIs](#on-premises-integrations).
->
-
-### Prerequisites
-
-Before you begin, make sure that you have the following prerequisites:
-
-|Prerequisite |Description |
-|||
-|**Aruba ClearPass requirements** | CPPM runs on hardware appliances with pre-installed software or as a Virtual Machine under the following hypervisors. <br>- VMware ESXi 5.5, 6.0, 6.5, 6.6 or higher. <br>- Microsoft Hyper-V Server 2012 R2 or 2016 R2. <br>- Hyper-V on Microsoft Windows Server 2012 R2 or 2016 R2. <br>- KVM on CentOS 7.5 or later. <br><br>Hypervisors that run on a client computer such as VMware Player aren't supported. |
-|**Defender for IoT requirements** | - Defender for IoT version 2.5.1 or higher. <br>- Access to a Defender for IoT OT sensor as an [Admin user](roles-on-premises.md). |
-
-### Create a ClearPass API user
-
-As part of the communications channel between the two products, Defender for IoT uses many APIs (both TIPS, and REST). Access to the TIPS APIs is validated via username and password combination credentials. This user ID needs to have minimum levels of access. Don't use a Super Administrator profile, but instead use API Administrator as shown below.
-
-**To create a ClearPass API user**:
-
-1. Select **Administration** > **Users and Privileges**, and then select **ADD**.
-
-1. In the **Add Admin User** dialog box, set the following parameters:
-
- | Parameter | Description |
- |--|--|
- | **UserID** | Enter the user ID. |
- | **Name** | Enter the user name. |
- | **Password** | Enter the password. |
- | **Enable User** | Verify that this option is enabled. |
- | **Privilege Level** | Select **API Administrator**. |
-
-1. Select **Add**.
-
-### Create a ClearPass operator profile
-
-Defender for IoT uses the REST API as part of the integration. REST APIs are authenticated under an OAuth framework. To sync with Defender for IoT, you need to create an API Client.
-
-In order to secure access to the REST API for the API Client, create a restricted access operator profile.
-
-**To create a ClearPass operator profile**:
-
-1. Navigate to the **Edit Operator Profile** window.
-
-1. Set all of the options to **No Access** except for the following:
-
- | Parameter | Description |
- |--|--|
- | **API Services** | Set to **Allow Access** |
- | **Policy Manager** | Set the following: <br />- **Dictionaries**: **Attributes** set to **Read, Write, Delete**<br />- **Dictionaries**: **Fingerprints** set to **Read, Write, Delete**<br />- **Identity**: **Endpoints** set to **Read, Write, Delete** |
-
-### Create a ClearPass OAuth API client
-
-1. In the main window, select **Administration** > **API Services** > **API Clients**.
-
-1. In the **Create API Client** tab, set the following parameters:
-
-   - **Operating Mode**: This parameter is used for API calls to ClearPass. Select **ClearPass REST API – Client**.
-
- - **Operator Profile**: Use the profile you created previously.
-
- - **Grant Type**: Set **Client credentials (grant_type = client_credentials)**.
-
-1. Ensure you record the **Client Secret** and the **Client ID**. For example, `defender-rest`.
-
-1. In the Policy Manager, ensure you collected the following list of information before proceeding to the next step.
-
- - CPPM UserID
-
- - CPPM UserId Password
-
- - CPPM OAuth2 API Client ID
-
- - CPPM OAuth2 API Client Secret
-
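Before you configure the sensor side of this legacy integration, you can optionally verify that the operator profile and API client work by requesting an OAuth token directly. The following PowerShell sketch is illustrative only; the `/api/oauth` token endpoint path, the JSON payload format, and the host and credential values are assumptions that you should confirm against the ClearPass API reference for your version.

```powershell
# Placeholder values - replace with your CPPM host and the API client you created
$cppmHost     = "clearpass.example.com"
$clientId     = "defender-rest"
$clientSecret = "<your-client-secret>"

# Request an OAuth 2.0 access token using the client_credentials grant
# (the endpoint path and JSON body format are assumptions - verify in the ClearPass API docs)
$body = @{
    grant_type    = "client_credentials"
    client_id     = $clientId
    client_secret = $clientSecret
} | ConvertTo-Json

$response = Invoke-RestMethod -Method Post -Uri "https://$cppmHost/api/oauth" `
    -ContentType "application/json" -Body $body

# A non-empty access_token indicates that the operator profile and API client are set up correctly
$response.access_token
```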
-### Configure Defender for IoT to integrate with ClearPass
-
-To enable viewing the device inventory in ClearPass, you need to set up Defender for IoT-ClearPass sync. When the sync configuration is complete, the Defender for IoT platform updates the ClearPass Policy Manager EndpointDb as it discovers new endpoints.
-
-**To configure ClearPass sync on the Defender for IoT sensor**:
-
-1. In the Defender for IoT sensor, select **System settings** > **Integrations** > **ClearPass**.
-
-1. Set the following parameters:
-
- | Parameter | Description |
- |--|--|
- | **Enable Sync** | Toggle on to enable the sync between Defender for IoT and ClearPass. |
- | **Sync Frequency (minutes)** | Define the sync frequency in minutes. The default is 60 minutes. The minimum is 5 minutes. |
- | **ClearPass Host** | The IP address of the ClearPass system with which Defender for IoT is in sync. |
- | **Client ID** | The client ID that was created on ClearPass for syncing the data with Defender for IoT. |
- | **Client Secret** | The client secret that was created on ClearPass for syncing the data with Defender for IoT. |
- | **Username** | The ClearPass administrator user. |
- | **Password** | The ClearPass administrator password. |
-
-1. Select **Save**.
-
-### Define a ClearPass forwarding rule
-
-To enable viewing the alerts discovered by Defender for IoT in Aruba ClearPass, you need to set a forwarding rule. This rule defines which information about the ICS and SCADA security threats identified by Defender for IoT security engines is sent to ClearPass.
-
-For more information, see [On-premises integrations](#on-premises-integrations).
-
-### Monitor ClearPass and Defender for IoT communication
-
-Once the sync has started, endpoint data is populated directly into the Policy Manager EndpointDb. You can view the last update time from the integration configuration screen.
-
-**To review the last sync time to ClearPass**:
-
-1. Sign in to the Defender for IoT sensor.
-
-1. Select **System settings** > **Integrations** > **ClearPass**.
-
- :::image type="content" source="media/tutorial-clearpass/last-sync.png" alt-text="Screenshot of the view the time and date of your last sync." lightbox="media/tutorial-clearpass/last-sync.png":::
-
-If the sync isn't working, or shows an error, then it's likely you've missed capturing some of the information. Recheck the data recorded.
-
-Additionally, you can view the API calls between Defender for IoT and ClearPass from **Guest** > **Administration** > **Support** > **Application Log**.
-
-For example, API logs between Defender for IoT and ClearPass:
--
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Integrations with Microsoft and partner services](integrate-overview.md)
dns Private Dns Autoregistration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-autoregistration.md
Previously updated : 11/30/2023 Last updated : 06/28/2024
To enable auto registration, select the checkbox for "Enable auto registration"
* Auto registration works only for virtual machines. For all other resources like internal load balancers, you can create DNS records manually in the private DNS zone linked to the virtual network.
* DNS records are created automatically only for the primary virtual machine NIC. If your virtual machines have more than one NIC, you can manually create the DNS records for other network interfaces.
-* DNS records are created automatically only if the primary virtual machine NIC is using DHCP. If you're using static IPs, such as a configuration with [multiple IP addresses in Azure](../virtual-network/ip-services/virtual-network-multiple-ip-addresses-portal.md), auto registration doesn't create records for that virtual machine.
-* A specific virtual network can be linked to only one private DNS zone when automatic VM DNS registration is enabled. You can, however, link multiple virtual networks to a single DNS zone.
+* A specific virtual network can be linked to only one private DNS zone when automatic registration is enabled. You can, however, link multiple virtual networks to a single DNS zone.
## Next steps
dns Private Dns Getstarted Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-getstarted-portal.md
Title: Quickstart - Create an Azure private DNS zone using the Azure portal
-description: In this quickstart, you create and test a private DNS zone and record in Azure DNS. This is a step-by-step guide to create and manage your first private DNS zone and record using the Azure portal.
+description: In this quickstart, you create and test a private DNS zone and record in Azure DNS. This article is a step-by-step guide to create and manage your first private DNS zone and record using the Azure portal.
Previously updated : 06/19/2023 Last updated : 06/20/2024
# Quickstart: Create an Azure private DNS zone using the Azure portal
-This quickstart walks you through the steps to create your first private DNS zone and record using the Azure portal.
+This quickstart walks you through the steps to create your first private DNS zone and record using the Azure portal.
-A DNS zone is used to host the DNS records for a particular domain. To start hosting your domain in Azure DNS, you need to create a DNS zone for that domain name. Each DNS record for your domain is then created inside this DNS zone. To publish a private DNS zone to your virtual network, you specify the list of virtual networks that are allowed to resolve records within the zone. These are called *linked* virtual networks. When autoregistration is enabled, Azure DNS also updates the zone records whenever a virtual machine is created, changes its IP address, or is deleted.
+A DNS zone is used to host the DNS records for a particular domain. Public DNS zones have unique names and are visible on the Internet. However, a private DNS zone name must only be unique within its resource group and the DNS records are not visible on the Internet. To start hosting your private domain in Azure Private DNS, you first need to create a DNS zone for that domain name. Next, the DNS records for your private domain are created inside this DNS zone.
> [!IMPORTANT]
-> When you create a private DNS zone, Azure stores the zone data as a global resource. This means that the private zone is not dependent on a single VNet or region. You can link the same private zone to multiple VNets in different regions. If service is interrupted in one VNet, your private zone is still available. For more information, see [Azure Private DNS zone resiliency](private-dns-resiliency.md).
+> When you create a private DNS zone, Azure stores the zone data as a global resource. This means that the private zone isn't dependent on a single virtual network or region. You can link the same private zone to multiple virtual networks in different regions. If service is interrupted in one virtual network, your private zone is still available. For more information, see [Azure Private DNS zone resiliency](private-dns-resiliency.md).
-In this article, two VMs are used in a single VNet linked to your private DNS zone with autoregistration enabled. The setup is summarized in the following figure.
+## Virtual network links
+
+To resolve DNS records in a private DNS zone, resources must typically be *linked* to the private zone. Linking is accomplished by creating a [virtual network link](private-dns-virtual-network-links.md) that associates the virtual network to the private zone.
+
+When you create a virtual network link, you can (optionally) enable autoregistration of DNS records for devices in the virtual network. If autoregistration is enabled, Azure private DNS updates DNS records whenever a virtual machine inside the linked virtual network is created, changes its IP address, or is deleted. For more information, see [What is the autoregistration feature in Azure DNS private zones](private-dns-autoregistration.md).
+
+> [!NOTE]
+> Other methods are available for resolving DNS records in private DNS zones that don't always require a virtual network link. These methods are beyond the scope of this quickstart article. For more information, see [What is Azure DNS Private Resolver](dns-private-resolver-overview.md).
+
+In this article, a virtual machine is used in a single virtual network. The virtual network is linked to your private DNS zone with autoregistration enabled. The setup is summarized in the following figure.
:::image type="content" source="media/private-dns-portal/private-dns-quickstart-summary.png" alt-text="Summary diagram of the quickstart setup." border="false" lightbox="media/private-dns-portal/private-dns-quickstart-summary.png":::
If you prefer, you can complete this quickstart using [Azure PowerShell](private
## Create a private DNS zone
-The following example creates a DNS zone called **private.contoso.com** in a resource group called **MyAzureResourceGroup**.
+The following example creates a DNS zone called **private.contoso.com** in a resource group called **MyResourceGroup**.
-A DNS zone contains the DNS entries for a domain. To start hosting your domain in Azure DNS, you create a DNS zone for that domain name.
+1. On the portal search bar, type **private dns zones** in the search text box and press **Enter**.
+2. Under **Marketplace**, select **Private DNS zone**. The **Create Private DNS Zone** page opens.
+ ![Screenshot of private DNS zones search.](./media/private-dns-portal/search-private-dns.png)
-1. On the portal search bar, type **private dns zones** in the search text box and press **Enter**.
-1. Select **Private DNS zone**.
-1. Select **Create private dns zone**.
+3. On the **Create Private DNS Zone** page, type or select the following values:
+ - **Resource group**: Select an existing resource group, or choose **Create new**. Enter a resource group name, and then select **OK**. For example: **MyResourceGroup**. The resource group name must be unique within the Azure subscription.
+ - **Name**: Type **private.contoso.com** for this example.
+4. The **Resource group location** is selected already if you use an existing resource group. If you created a new resource group, choose a location, for example: **(US) West US**.
-1. On the **Create Private DNS zone** page, type or select the following values:
+ ![Screenshot of creating a private DNS zone.](./media/private-dns-portal/create-private-zone.png)
- - **Resource group**: Select **Create new**, enter *MyAzureResourceGroup*, and select **OK**. The resource group name must be unique within the Azure subscription.
- - **Name**: Type *private.contoso.com* for this example.
-1. For **Resource group location**, select **West Central US**.
+5. Select **Review + Create** and then select **Create**. It might take a few minutes to create the zone.
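If you prefer scripting, an equivalent zone can be created with Azure PowerShell. The following is a minimal sketch that assumes the Az.PrivateDns module is installed and signed in to your subscription, and uses the same names as the portal example (with **westus** standing in for **(US) West US**):

```powershell
# Create the resource group (skip if it already exists) and the private DNS zone
New-AzResourceGroup -Name "MyResourceGroup" -Location "westus"
New-AzPrivateDnsZone -ResourceGroupName "MyResourceGroup" -Name "private.contoso.com"
```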
-1. Select **Review + Create**.
+## Create the virtual network and subnet
-1. Select **Create**.
+1. From the Azure portal home page, select **Create a resource** > **Networking** > **Virtual network**, or search for **Virtual network** in the search box and then select **+ Create**.
+2. On the **Create virtual network** page, enter the following:
+- **Subscription**: Select your Azure subscription.
+- **Resource group**: Select an existing resource group or create a new one. The resource group doesn't need to be the same as the one used for the private DNS zone. In this example the same resource group is used (**MyResourceGroup**).
+- **Virtual network name**: Enter a name for the new virtual network. **MyVNet** is used in this example.
+- **Region**: If you created a new resource group, choose a location. **(US) West US** is used in this example.
-It may take a few minutes to create the zone.
+ ![Screenshot of creating a virtual network basics tab.](./media/private-dns-portal/create-virtual-network-basics.png)
-## Virtual network and parameters
+3. Select the **IP addresses** tab, and under **Add IPv4 address space** edit the default address space by entering **10.2.0.0/16**.
-In this section you'll need to replace the following parameters in the steps with the information below:
+ ![Screenshot of specifying VNet IPv4 address space.](./media/private-dns-portal/vnet-ipv4-space.png)
-| Parameter | Value |
-|--|-|
-| **\<resource-group-name>** | MyAzureResourceGroup (Select existing resource group) |
-| **\<virtual-network-name>** | MyAzureVNet |
-| **\<region-name>** | West Central US |
-| **\<IPv4-address-space>** | 10.2.0.0/16 |
-| **\<subnet-name>** | MyAzureSubnet |
-| **\<subnet-address-range>** | 10.2.0.0/24 |
+4. In the subnets area, select the pen icon to edit the name of the default subnet, or delete the default subnet and select **+ Add a subnet**. The **Edit subnet** or **Add a subnet** pane opens, respectively. The Edit subnet pane is shown in this example.
+5. Next to **Name**, enter **mySubnet** and verify that the **Subnet address range** is **10.2.0.0 - 10.2.0.255**. The **Size** should be **/24 (256 addresses)**. These values are set by default based on the parent VNet address range.
+ ![Screenshot of specifying subnet IPv4 address space.](./media/private-dns-portal/subnet-ipv4-space.png)
+6. Select **Save**, select **Review + create**, and then select **Create**.
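The same virtual network and subnet can also be created from Azure PowerShell. This is a sketch using the values from this example (assuming the Az.Network module and **westus** as the region):

```powershell
# Define the subnet, then create the virtual network that contains it
$subnet = New-AzVirtualNetworkSubnetConfig -Name "mySubnet" -AddressPrefix "10.2.0.0/24"
New-AzVirtualNetwork -ResourceGroupName "MyResourceGroup" -Name "MyVNet" `
    -Location "westus" -AddressPrefix "10.2.0.0/16" -Subnet $subnet
```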
## Link the virtual network
-To link the private DNS zone to a virtual network, you create a virtual network link.
+Next, link the private DNS zone to the virtual network by adding a virtual network link.
+1. Search for and select **Private DNS zones** and then select your private zone. For example: **private.contoso.com**.
+2. Under **DNS Management**, select **Virtual Network Links** and then select **+ Add**.
+3. Enter the following parameters:
+- **Link name**: Provide a name for the link, for example: **MyVNetLink**.
+- **Subscription**: Select your subscription.
+- **Virtual Network**: Select the virtual network that you created, for example: **MyVNet**.
+4. Under **Configuration**, select the checkbox next to **Enable auto registration**.
+ ![Screenshot of adding a virtual network link.](./media/private-dns-portal/dns-add-virtual-network-link.png)
-1. Open the **MyAzureResourceGroup** resource group and select the **private.contoso.com** private zone.
-1. On the left pane, select **Virtual network links**.
-1. Select **Add**.
-1. Type **myLink** for the **Link name**.
-1. For **Virtual network**, select **myAzureVNet**.
-1. Select the **Enable auto registration** check box.
-1. Select **OK**.
+5. Select **Create**, wait until the virtual link is created, and then verify that it is listed on the **Virtual Network Links** page.
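If you're scripting this step instead, the following Azure PowerShell sketch creates the same virtual network link with autoregistration enabled (assuming the Az.PrivateDns and Az.Network modules, and the names used in this quickstart):

```powershell
# Get the virtual network, then link it to the private zone with autoregistration enabled
$vnet = Get-AzVirtualNetwork -ResourceGroupName "MyResourceGroup" -Name "MyVNet"
New-AzPrivateDnsVirtualNetworkLink -ResourceGroupName "MyResourceGroup" `
    -ZoneName "private.contoso.com" -Name "MyVNetLink" `
    -VirtualNetworkId $vnet.Id -EnableRegistration
```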
-## Create the test virtual machines
+## Create the test virtual machine
-Now, create two virtual machines so you can test your private DNS zone:
+Now, create a virtual machine to test autoregistration in your private DNS zone:
-1. On the portal page upper left, select **Create a resource**, and then select **Windows Server 2016 Datacenter**.
-1. Select **MyAzureResourceGroup** for the resource group.
-1. Type **myVM01** - for the name of the virtual machine.
-1. Select **West Central US** for the **Region**.
-1. Enter a name for the administrator user name.
-1. Enter a password and confirm the password.
-1. For **Public inbound ports**, select **Allow selected ports**, and then select **RDP (3389)** for **Select inbound ports**.
-1. Accept the other defaults for the page and then click **Next: Disks >**.
-1. Accept the defaults on the **Disks** page, then click **Next: Networking >**.
-1. Make sure that **myAzureVNet** is selected for the virtual network.
-1. Accept the other defaults for the page, and then click **Next: Management >**.
-1. For **Boot diagnostics**, select **Disable**, accept the other defaults, and then select **Review + create**.
-1. Review the settings and then click **Create**.
+1. On the portal page upper left, select **Create a resource**, and then select **Windows Server 2019 Datacenter**.
+2. Select **MyResourceGroup** for the resource group.
+3. Type **myVM01** - for the name of the virtual machine.
+4. Select **(US) West US** for the **Region**.
+5. Enter a name for the administrator user name.
+6. Enter a password and confirm the password.
+7. For **Public inbound ports**, select **Allow selected ports**, and then select **RDP (3389)** for **Select inbound ports**.
+8. Accept the other defaults for the page and then click **Next: Disks >**.
+9. Accept the defaults on the **Disks** page, then click **Next: Networking >**.
+10. Make sure that **MyVNet** is selected for the virtual network.
+11. Accept the other defaults for the page, and then click **Next: Management >**.
+12. For **Boot diagnostics**, select **Disable**, accept the other defaults, and then select **Review + create**.
+13. Review the settings and then click **Create**. It will take a few minutes for the virtual machine allocation to complete.
+14. Search for and select **Virtual machines** and then verify that the VM status is **Running**. If it isn't running, start the virtual machine.
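To check the power state from a shell instead of the portal, a quick Azure PowerShell sketch (using the names from this example) is:

```powershell
# Show the VM power state; start the VM if it isn't running
Get-AzVM -ResourceGroupName "MyResourceGroup" -Name "myVM01" -Status |
    Select-Object -ExpandProperty Statuses
Start-AzVM -ResourceGroupName "MyResourceGroup" -Name "myVM01"
```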
-Repeat these steps and create another virtual machine named **myVM02**.
+## Review autoregistration
-It will take a few minutes for both virtual machines to complete.
+1. Search for or select **Private DNS zones** and then select the **private.contoso.com** zone.
+2. Under **DNS Management**, select **Recordsets**.
+3. Verify that a DNS record of **Type** **A** exists with an **Auto registered** value of **True**. See the following example:
-## Create an additional DNS record
+ ![Screenshot of an auto registered DNS record.](./media/private-dns-portal/create-dns-record.png)
- The following example creates a record with the relative name **db** in the DNS Zone **private.contoso.com**, in resource group **MyAzureResourceGroup**. The fully qualified name of the record set is **db.private.contoso.com**. The record type is "A", with the IP address of **myVM01**.
+## Create another DNS record
-1. Open the **MyAzureResourceGroup** resource group and select the **private.contoso.com** private zone.
-1. Select **+ Record set**.
-1. For **Name**, type **db**.
-1. For **IP Address**, type the IP address you see for **myVM01**. This should be auto registered when the virtual machine started.
-1. Select **OK**.
+ You can also add records to the private DNS zone manually. The following example creates a record with the hostname **db** in the DNS Zone **private.contoso.com**. The fully qualified name of the record set is **db.private.contoso.com**. The record type is **A**, with an IP address corresponding to the autoregistered IP address of **myVM01.private.contoso.com**.
+1. Search for or select **Private DNS zones** and then select the **private.contoso.com** zone.
+2. Under **DNS Management**, select **Recordsets**.
+3. Select **+ Add**.
+4. Under **Name**, enter **db**.
+5. Next to **IP Address**, type the IP address you see for **myVM01**.
+6. Select **OK**.
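The same record can be created with Azure PowerShell. This sketch assumes the Az.PrivateDns module and that **myVM01** was autoregistered with the address 10.2.0.5; adjust the IP address to match what you see in the portal:

```powershell
# Create the A record db.private.contoso.com pointing at the VM's private IP address
New-AzPrivateDnsRecordSet -ResourceGroupName "MyResourceGroup" `
    -ZoneName "private.contoso.com" -Name "db" -RecordType A -Ttl 3600 `
    -PrivateDnsRecords (New-AzPrivateDnsRecordConfig -IPv4Address "10.2.0.5")
```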
-## Test the private zone
+## Search and display records
-Now you can test the name resolution for your **private.contoso.com** private zone.
+By default, the **Recordsets** node displays all record sets in the zone. A record set is a collection of records that have the same name and are the same type. Record sets are automatically fetched in batches of 100 as you scroll through the list.
-### Configure VMs to allow inbound ICMP
+You can also search and display specific DNS record sets in the zone by entering a value in the search box. In the following example, one record with the name **db** is displayed:
-You can use the ping command to test name resolution. So, configure the firewall on both virtual machines to allow inbound ICMP packets.
+ ![Screenshot of searching for a DNS record.](./media/private-dns-portal/search-for-record.png)
-1. Connect to myVM01, and open a Windows PowerShell window with administrator privileges.
-1. Run the following command:
+You can search by name, type, TTL, value, or autoregistration status. For example, the record **db** in this example is also displayed by searching for **A** (display all records of type A), **3600** (the record's TTL value), **10.2.0.5** (the IP address of the A record), or **False** (non-autoregistered records). All records in the zone that match the search criteria are displayed in batches of 100.
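You can perform a similar query from Azure PowerShell. The following sketch (assuming the Az.PrivateDns module) lists all A record sets in the zone and then retrieves the **db** record set by name:

```powershell
# List every A record set in the zone
Get-AzPrivateDnsRecordSet -ResourceGroupName "MyResourceGroup" `
    -ZoneName "private.contoso.com" -RecordType A

# Retrieve just the db record set
Get-AzPrivateDnsRecordSet -ResourceGroupName "MyResourceGroup" `
    -ZoneName "private.contoso.com" -RecordType A -Name "db"
```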
- ```powershell
-   New-NetFirewallRule -DisplayName "Allow ICMPv4-In" -Protocol ICMPv4
- ```
+## Test the private zone
-Repeat for myVM02.
+Now you can test the name resolution for your **private.contoso.com** private zone.
-### Ping the VMs by name
+You can use the ping command to test name resolution. You can do this by connecting to the virtual machine and opening a command prompt, or by using the **Run command** on this virtual machine.
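If you'd rather run the test remotely from a shell, the same check can be scripted with the Run Command feature through Azure PowerShell. This is a sketch (assuming the Az.Compute module) that writes the ping test to a local script file and executes it on the VM:

```powershell
# Write a one-line test script, then execute it on the VM with Run Command
Set-Content -Path .\ping-test.ps1 -Value "ping db.private.contoso.com"
Invoke-AzVMRunCommand -ResourceGroupName "MyResourceGroup" -VMName "myVM01" `
    -CommandId "RunPowerShellScript" -ScriptPath .\ping-test.ps1
```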
-1. From the myVM02 Windows PowerShell command prompt, ping myVM01 using the automatically registered host name:
- ```
- ping myVM01.private.contoso.com
- ```
- You should see output that looks similar to this:
- ```
- PS C:\> ping myvm01.private.contoso.com
+To use the Run command:
- Pinging myvm01.private.contoso.com [10.2.0.4] with 32 bytes of data:
- Reply from 10.2.0.4: bytes=32 time<1ms TTL=128
- Reply from 10.2.0.4: bytes=32 time=1ms TTL=128
- Reply from 10.2.0.4: bytes=32 time<1ms TTL=128
- Reply from 10.2.0.4: bytes=32 time<1ms TTL=128
+1. Select **Virtual machines**, select your virtual machine, and then under **Operations** select **Run command**.
+2. Select **RunPowerShellScript**, under **Run Command Script** enter **ping myvm01.private.contoso.com** and then select **Run**. See the following example:
- Ping statistics for 10.2.0.4:
- Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
- Approximate round trip times in milli-seconds:
- Minimum = 0ms, Maximum = 1ms, Average = 0ms
- PS C:\>
- ```
-1. Now ping the **db** name you created previously:
- ```
- ping db.private.contoso.com
- ```
- You should see output that looks similar to this:
- ```
- PS C:\> ping db.private.contoso.com
+ [ ![Screenshot of the ping command.](./media/private-dns-portal/ping-vm.png) ](./media/private-dns-portal/ping-vm.png#lightbox)
- Pinging db.private.contoso.com [10.2.0.4] with 32 bytes of data:
- Reply from 10.2.0.4: bytes=32 time<1ms TTL=128
- Reply from 10.2.0.4: bytes=32 time<1ms TTL=128
- Reply from 10.2.0.4: bytes=32 time<1ms TTL=128
- Reply from 10.2.0.4: bytes=32 time<1ms TTL=128
+3. Now ping the **db** name you created previously:
+ ```
+ Pinging db.private.contoso.com [10.10.2.5] with 32 bytes of data:
+ Reply from 10.10.2.5: bytes=32 time<1ms TTL=128
+ Reply from 10.10.2.5: bytes=32 time<1ms TTL=128
+ Reply from 10.10.2.5: bytes=32 time<1ms TTL=128
+ Reply from 10.10.2.5: bytes=32 time<1ms TTL=128
- Ping statistics for 10.2.0.4:
+ Ping statistics for 10.10.2.5:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 0ms, Maximum = 0ms, Average = 0ms
- PS C:\>
```

## Clean up resources
-When no longer needed, delete the **MyAzureResourceGroup** resource group to delete the resources created in this quickstart.
+When no longer needed, delete the **MyResourceGroup** resource group to delete the resources created in this quickstart.
## Next steps
dns Private Dns Import Export Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-import-export-portal.md
+
+ Title: Import and export a private DNS zone file - Azure portal
+
+description: Learn how to import and export a private DNS (Domain Name System) zone file to Azure DNS by using Azure portal.
+++ Last updated : 06/20/2024++++
+# Import and export a private DNS zone file using the Azure portal
+
+In this article, you learn how to import and export a DNS zone file in Azure Private DNS using Azure portal. You can also [import and export a zone file using Azure CLI](private-dns-import-export.md).
+
+## Introduction to DNS zone migration
+
+A DNS zone file is a text file containing information about every DNS record in the zone. It follows a standard format, making it suitable for transferring DNS records between DNS systems. Using a zone file is a fast and convenient way to import DNS zones into Azure DNS. You can also export a zone file from Azure DNS to use with other DNS systems.
+
+Azure DNS supports importing and exporting zone files via the Azure CLI and the Azure portal.
+
+## Obtain your existing DNS zone file
+
+Before you import a DNS zone file into Azure DNS, you need to obtain a copy of the zone file. The source of this file depends on where the DNS zone is hosted.
+
+* If your DNS zone is hosted by a partner service, the service should provide a way for you to download the DNS zone file. Partner services include domain registrars, dedicated DNS hosting providers, and alternative cloud providers.
+* If your DNS zone is hosted on Windows DNS, the default folder for the zone files is **%systemroot%\system32\dns**. The full path to each zone file is also shown on the **General** tab of the DNS console.
+* If your DNS zone is hosted using BIND, the location of the zone file for each zone gets specified in the BIND configuration file **named.conf**.
+
+> [!IMPORTANT]
+> If the zone file that you import contains CNAME entries that point to names in another zone, Azure DNS must be able to resolve resource records in the other zone.
+
+## Import a DNS zone file into Azure DNS
+
+Importing a zone file creates a new zone in Azure DNS if the zone doesn't already exist. If the zone exists, then the record sets in the zone file are merged with the existing record sets.
+
+### Merge behavior
+
+* By default, the new record sets get merged with the existing record sets. Identical records within a merged record set aren't duplicated.
+* When record sets are merged, the time to live (TTL) of pre-existing record sets is used.
+* Start of Authority (SOA) parameters, except `host`, are always taken from the imported zone file. The name server record set at the zone apex also always uses the TTL taken from the imported zone file.
+* An imported CNAME record will replace the existing CNAME record that has the same name.
+* When a conflict happens between a CNAME record and another record with the same name but a different type, the existing record gets used.
+
+### Additional information about importing
+
+The following notes provide more details about the zone import process.
+
+* The `$TTL` directive is optional, and is supported. When no `$TTL` directive is given, records without an explicit TTL are imported with a default TTL of 3600 seconds. When two records in the same record set specify different TTLs, the lower value is used.
+* The `$ORIGIN` directive is optional, and is supported. When no `$ORIGIN` is set, the default value used is the zone name as specified on the command line, including the ending dot (.).
+* The `$INCLUDE` and `$GENERATE` directives aren't supported.
+* The following record types are supported: A, AAAA, CAA, CNAME, MX, NS, SOA, SRV, and TXT.
+* The SOA record is created automatically by Azure DNS when a zone is created. When you import a zone file, all SOA parameters are taken from the zone file *except* the `host` parameter. This parameter uses the value provided by Azure DNS because it needs to refer to the primary name server provided by Azure DNS.
+* The name server record set at the zone apex is also created automatically by Azure DNS when the zone is created. Only the TTL of this record set is imported. These records contain the name server names provided by Azure DNS. The record data isn't overwritten by the values contained in the imported zone file.
+* Azure DNS supports only single-string TXT records. Multistring TXT records are concatenated and truncated to 255 characters.
+* The zone file to be imported must contain 10,000 or fewer lines, with no more than 3,000 record sets.
+
+## Import a zone file
+
+1. Obtain a copy of the zone file for the zone you wish to import.
+
+ > [!NOTE]
+ > If the Start of Authority (SOA) record is present in the zone, it is overwritten with values that are compatible with Azure Private DNS. Nameserver (NS) records must be removed prior to import. Compatible resource record types for Azure Private DNS include A, AAAA, CNAME, MX, PTR, SOA, SRV, and TXT. Incompatible records are underlined in red in the Private DNS Zone Editor.
+
+ The following small zone file and resource records are used in this example:
+
+ ```text
+ ; MX Records
+
+ ; A Records
+ aa1 3600 IN A 10.10.0.1
+ db1002 3600 IN A 10.1.1.2
+ myvm 10 IN A 10.10.2.5
+ server1 3600 IN A 10.0.1.1
+ server2 3600 IN A 10.0.1.2
+
+ ; AAAA Records
+
+ ; CNAME Records
+ app1 3600 IN CNAME aa1.private.contoso.com.
+
+ ; PTR Records
+
+ ; TXT Records
+
+ ; SRV Records
+ ```
+ Names used:
+ - Origin zone name: **private.contoso.com**
+ - Destination zone name: **private.contoso.com**
+ - Zone filename: **private.contoso.com.txt**
+ - Resource group: **myresourcegroup**
+2. Open the **Private DNS zones** overview page and select **Create**.
+3. On the **Create DNS zone** page, type or select the following values:
+ - **Resource group**: Choose an existing resource group, or select **Create new**, enter **myresourcegroup**, and select **OK**.
+ - **Name**: Type **private.contoso.com** for this example.
+4. Select the **Private DNS Zone Editor** tab and then drag and drop or browse and select the **private.contoso.com.txt** file. The **Private DNS Zone Editor** opens.
+5. If changes to the zone are needed, you can edit the values that are displayed.
+
+ ![Screenshot showing the private.contoso.com zone displayed in the DNS Zone Editor.](./media/private-dns-import-export-portal/dns-zone-editor.png)
+
+6. Select **Review + Create** and then select **Create**.
+7. When deployment is complete, select **Go to resource** and then select **Recordsets**. An SOA record compatible with Azure Private DNS is automatically added to the zone. See the following example:
+
+ [ ![Screenshot showing the private.contoso.com zone record sets.](./media/private-dns-import-export-portal/recordsets.png) ](./media/private-dns-import-export-portal/recordsets.png#lightbox)
+
+## Export a zone file
+
+1. Open the **Private DNS zones** overview page and select the zone you wish to export. For example, **private.contoso.com**. See the following example:
+
+ ![Screenshot showing the private.contoso.com zone is ready to export.](./media/private-dns-import-export-portal/export.png)
+
+2. Select **Export**. The file is downloaded to your default downloads directory as a text file with the name AzurePrivateDnsZone-private.contoso.com-`number`.txt where `number` is an autogenerated index number.
+3. Open the file to view the contents. See the following example:
+
+ ```text
+ ; Exported zone file from Azure Private DNS
+ ; Zone name: private.contoso.com
+ ; Date and time (UTC): Mon, 17 Jun 2024 20:35:47 GMT
+
+ $TTL 10
+ $ORIGIN private.contoso.com
+
+ ; SOA Record
+ @ 3600 IN SOA azureprivatedns.net azureprivatedns-host.microsoft.com (
+ 1 ;serial
+ 3600 ;refresh
+ 300 ;retry
+ 2419200 ;expire
+ 10 ;minimum ttl
+ )
+
+ ; MX Records
+
+ ; A Records
+ aa1 3600 IN A 10.10.0.1
+ db1002 3600 IN A 10.1.1.2
+ myvm 10 IN A 10.10.2.5
+ server1 3600 IN A 10.0.1.1
+ server2 3600 IN A 10.0.1.2
+
+ ; AAAA Records
+
+ ; CNAME Records
+ app1 3600 IN CNAME aa1.private.contoso.com.
+
+ ; PTR Records
+
+ ; TXT Records
+
+ ; SRV Records
+ ```
+
+## Next steps
+
+* Learn how to [manage record sets and records](./dns-getstarted-cli.md) in your DNS zone.
dns Private Dns Import Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-import-export.md
Previously updated : 10/20/2023 Last updated : 06/13/2024
dns Private Dns Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-overview.md
Previously updated : 06/09/2023 Last updated : 06/21/2024 #Customer intent: As an administrator, I want to evaluate Azure Private DNS so I can determine if I want to use it instead of my current DNS service. # What is Azure Private DNS?
-The Domain Name System, or DNS, is responsible for translating (or resolving) a service name to an IP address. Azure DNS is a hosting service for domains and provides naming resolution using the Microsoft Azure infrastructure. Azure DNS not only supports internet-facing DNS domains, but it also supports private DNS zones.
+The Domain Name System (DNS) is responsible for translating (resolving) a service name to an IP address. Azure DNS is a hosting service for domains and provides naming resolution using the Microsoft Azure infrastructure. Azure DNS not only supports internet-facing DNS domains, but it also supports private DNS zones.
Azure Private DNS provides a reliable and secure DNS service for your virtual networks. Azure Private DNS manages and resolves domain names in the virtual network without the need to configure a custom DNS solution. By using private DNS zones, you can use your own custom domain name instead of the Azure-provided names during deployment. Using a custom domain name helps you tailor your virtual network architecture to best suit your organization's needs. It provides name resolution for virtual machines (VMs) within a virtual network and connected virtual networks. Additionally, you can configure zone names with a split-horizon view, which allows a private and a public DNS zone to share the name.
-To resolve the records of a private DNS zone from your virtual network, you must link the virtual network with the zone. Linked virtual networks have full access and can resolve all DNS records published in the private zone. You can also enable autoregistration on a virtual network link. When you enable autoregistration on a virtual network link, the DNS records for the virtual machines in that virtual network are registered in the private zone. When autoregistration gets enabled, Azure DNS will update the zone record whenever a virtual machine gets created, changes its' IP address, or gets deleted.
+To resolve the records of a private DNS zone from your virtual network, you must link the virtual network with the zone. Linked virtual networks have full access and can resolve all DNS records published in the private zone. You can also enable [autoregistration](private-dns-autoregistration.md) on a [virtual network link](private-dns-virtual-network-links.md). When you enable autoregistration on a virtual network link, the DNS records for the virtual machines in that virtual network are registered in the private zone. When autoregistration gets enabled, Azure DNS will update the zone record whenever a virtual machine gets created, changes its IP address, or gets deleted.
![DNS overview](./media/private-dns-overview/scenario.png)
dns Private Reverse Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-reverse-dns.md
description: Learn how to use Azure Private DNS to create reverse DNS lookup zon
Previously updated : 03/22/2024 Last updated : 06/20/2024
firewall Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/features.md
Forced Tunnel mode can't be configured at run time. You can either redeploy the
## Outbound SNAT support
-All outbound virtual network traffic IP addresses are translated to the Azure Firewall public IP (Source Network Address Translation). You can identify and allow traffic originating from your virtual network to remote Internet destinations. When Azure Firewall has multiple public IPs configured for providing outbound connectivity, it will use IPs as needed based on available ports. It will only use the next available public IP once the connections cannot be made from the current public IP.
+All outbound virtual network traffic IP addresses are translated to the Azure Firewall public IP (Source Network Address Translation). You can identify and allow traffic originating from your virtual network to remote Internet destinations. When Azure Firewall has multiple public IPs configured for providing outbound connectivity, it will use the Public IPs as needed based on available ports. It will **randomly pick the first Public IP** and only use the **next available Public IP** after no more connections can be made from the current public IP **due to SNAT port exhaustion**.
-In scenarios where you have high throughput or dynamic traffic patterns, it is recommended to us an [Azure NAT Gateway](/azure/nat-gateway/nat-overview). Azure NAT Gateway dynamically selects SNAT ports for providing outbound connectivity,
-so all the SNAT ports provided by its associated IP addresses is available on demand. To learn more about how to integrate NAT Gateway with Azure Firewall, see [Scale SNAT ports with Azure NAT Gateway](/azure/firewall/integrate-with-nat-gateway).
+In scenarios where you have high throughput or dynamic traffic patterns, it is recommended to use an [Azure NAT Gateway](/azure/nat-gateway/nat-overview). Azure NAT Gateway dynamically selects public IPs for providing outbound connectivity. To learn more about how to integrate NAT Gateway with Azure Firewall, see [Scale SNAT ports with Azure NAT Gateway](/azure/firewall/integrate-with-nat-gateway).
Azure NAT Gateway can be used with Azure Firewall by associating NAT Gateway to the Azure Firewall subnet. See the [Integrate NAT gateway with Azure Firewall](/azure/nat-gateway/tutorial-hub-spoke-nat-firewall) tutorial for guidance on this configuration.
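As an illustration only, that association can also be scripted with Azure PowerShell. The following is a sketch under stated assumptions: an existing NAT gateway, the standard **AzureFirewallSubnet** name, hypothetical resource names (**MyResourceGroup**, **MyFirewallVNet**, **MyNatGateway**), a placeholder address prefix, and availability of the `-NatGateway` parameter in your Az.Network module version. Adjust all of these to your environment and verify against the linked tutorial.

```powershell
# Attach an existing NAT gateway to the firewall subnet so outbound SNAT flows through the NAT gateway
$vnet  = Get-AzVirtualNetwork -ResourceGroupName "MyResourceGroup" -Name "MyFirewallVNet"
$natGw = Get-AzNatGateway -ResourceGroupName "MyResourceGroup" -Name "MyNatGateway"

# Re-specify the subnet's existing address prefix when updating its configuration
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "AzureFirewallSubnet" `
    -AddressPrefix "10.0.1.0/26" -NatGateway $natGw

# Commit the change to the virtual network
$vnet | Set-AzVirtualNetwork
```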
firewall Tutorial Firewall Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/tutorial-firewall-deploy-portal.md
This virtual network has two subnets.
1. Select **Review + create**. 1. Select **Create**.
+> [!NOTE]
+> Azure Firewall uses public IPs as needed based on available ports. After randomly selecting a public IP to connect outbound from, it will only use the next available public IP after no more connections can be made from the current public IP. In scenarios with high traffic volume and throughput, it is recommended to use a NAT Gateway to provide outbound connectivity. SNAT ports are dynamically allocated across all public IPs associated with NAT Gateway. To learn more, see [integrate NAT Gateway with Azure Firewall](/azure/firewall/integrate-with-nat-gateway).
+
### Create a virtual machine

Now create the workload virtual machine, and place it in the **Workload-SN** subnet.
machine-learning Concept Onnx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-onnx.md
For the complete Python API reference, see the [ONNX Runtime reference docs](htt
## Examples -- For example Python notebooks that create and deploy ONNX models, see [how-to-use-azureml/deployment/onnx](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/onnx). - [!INCLUDE [aml-clone-in-azure-notebook](includes/aml-clone-for-examples.md)] - For samples that show ONNX usage in other languages, see the [ONNX Runtime GitHub](https://github.com/microsoft/onnxruntime/tree/master/samples).
sap Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/get-started.md
Previously updated : 04/01/2024 Last updated : 06/28/2024
In the SAP workload documentation space, you can find the following areas:
## Change Log
+- June 26, 2024: Adapt [Azure Storage types for SAP workload](./planning-guide-storage.md) to latest features, like snapshot capabilities for Premium SSD v2 and Ultra disk. Adapt ANF to support a mix of NFS and block storage between /hana/data and /hana/log
+- June 26, 2024: Fix wrong memory stated for some VMs in [SAP HANA Azure virtual machine Premium SSD storage configurations](./hana-vm-premium-ssd-v1.md) and [SAP HANA Azure virtual machine Premium SSD v2 storage configurations](./hana-vm-premium-ssd-v2.md)
- May 21, 2024: Update timeouts and added start delay for pacemaker scheduled events in [Set up Pacemaker on RHEL in Azure](./high-availability-guide-rhel-pacemaker.md) and [Set up Pacemaker on SUSE Linux Enterprise Server (SLES) in Azure](./high-availability-guide-suse-pacemaker.md). - April 1, 2024: Reference the considerations section for sizing HANA shared file system in [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md), [SAP HANA Azure virtual machine Premium SSD storage configurations](./hana-vm-premium-ssd-v1.md), [SAP HANA Azure virtual machine Premium SSD v2 storage configurations](./hana-vm-premium-ssd-v2.md), and [Azure Files NFS for SAP](planning-guide-storage-azure-files.md) - March 18, 2024: Added considerations for sizing the HANA shared file system in [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md)
sap Hana Vm Premium Ssd V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/hana-vm-premium-ssd-v1.md
keywords: 'SAP, Azure HANA, Storage Ultra disk, Premium storage'
Previously updated : 04/01/2024 Last updated : 06/28/2024 # SAP HANA Azure virtual machine Premium SSD storage configurations
-This document is about HANA storage configurations for Azure premium storage or premium ssd as it was introduced years back as low latency storage for DBMS and other applications that need low latency storage. For general considerations around stripe sizes when using LVM, HANA data volume partitioning or other considerations that are independent of the particular storage type, check these two documents:
+This document is about HANA storage configurations for Azure premium storage, or Premium SSD, which was introduced years back as low-latency storage for database management systems (DBMS) and other applications that need low-latency storage. For general considerations around stripe sizes when using Logical Volume Manager (LVM), HANA data volume partitioning, or other considerations that are independent of the particular storage type, check these two documents:
- [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md) - [Azure Storage types for SAP workload](./planning-guide-storage.md) > [!IMPORTANT]
-> The suggestions for the storage configurations in this document are meant as directions to start with. Running workload and analyzing storage utilization patterns, you might realize that you aren't utilizing all the storage bandwidth or IOPS provided. You might consider downsizing on storage then. Or in contrary, your workload might need more storage throughput than suggested with these configurations. As a result, you might need to deploy more capacity, IOPS or throughput. In the field of tension between storage capacity required, storage latency needed, storage throughput and IOPS required and least expensive configuration, Azure offers enough different storage types with different capabilities and different price points to find and adjust to the right compromise for you and your HANA workload.
+> The suggestions for the storage configurations in this document are meant as directions to start with. As you run your workload and analyze storage utilization patterns, you might realize that you aren't utilizing all the storage bandwidth or IOPS (I/O operations per second) provided. You might consider downsizing on storage then. Or, on the contrary, your workload might need more storage throughput than suggested with these configurations. As a result, you might need to deploy more capacity, IOPS, or throughput. To balance the required storage capacity, latency, throughput, and IOPS against the least expensive configuration, Azure offers enough different storage types with different capabilities and different price points to find and adjust to the right compromise for you and your HANA workload.
## Solutions with premium storage and Azure Write Accelerator for Azure M-Series virtual machines
-Azure Write Accelerator is a functionality that is available for Azure M-Series VMs exclusively in combination with Azure premium storage. As the name states, the purpose of the functionality is to improve I/O latency of writes against the Azure premium storage. For SAP HANA, Write Accelerator is supposed to be used against the **/hana/log** volume only. Therefore, the **/hana/data** and **/hana/log** are separate volumes with Azure Write Accelerator supporting the **/hana/log** volume only.
+Azure Write Accelerator is a functionality that is available for Azure M-Series Virtual Machines (VM) exclusively in combination with Azure premium storage. As the name states, the purpose of the functionality is to improve I/O latency of writes against the Azure premium storage. For SAP HANA, Write Accelerator is supposed to be used against the **/hana/log** volume only. Therefore, the **/hana/data** and **/hana/log** are separate volumes with Azure Write Accelerator supporting the **/hana/log** volume only.
> [!IMPORTANT] > When using Azure premium storage, the usage of Azure [Write Accelerator](../../virtual-machines/how-to-enable-write-accelerator.md) for the **/hana/log** volume is mandatory. Write Accelerator is available for premium storage and M-Series and Mv2-Series VMs only. Write Accelerator is not working in combination with other Azure VM families, like Esv3 or Edsv4.
The caching recommendations for Azure premium disks below are assuming the I/O c
### Azure burst functionality for premium storage
-For Azure premium storage disks smaller or equal to 512 GiB in capacity, burst functionality is offered. The exact way how disk bursting works is described in the article [Disk bursting](../../virtual-machines/disk-bursting.md). When you read the article, you understand the concept of accruing IOPS and throughput in the times when your I/O workload is below the nominal IOPS and throughput of the disks (for details on the nominal throughput see [Managed Disk pricing](https://azure.microsoft.com/pricing/details/managed-disks/)). You're going to accrue the delta of IOPS and throughput between your current usage and the nominal values of the disk. The bursts are limited to a maximum of 30 minutes.
+For Azure premium storage disks smaller or equal to 512 GiB in capacity, burst functionality is offered. The exact way how disk bursting works is described in the article [Disk bursting](../../virtual-machines/disk-bursting.md). When you read the article, you understand the concept of accruing I/O Operations per second (IOPS) and throughput in the times when your I/O workload is below the nominal IOPS and throughput of the disks (for details on the nominal throughput see [Managed Disk pricing](https://azure.microsoft.com/pricing/details/managed-disks/)). You're going to accrue the delta of IOPS and throughput between your current usage and the nominal values of the disk. The bursts are limited to a maximum of 30 minutes.
The ideal cases where this burst functionality can be planned in is likely going to be the volumes or disks that contain data files for the different DBMS. The I/O workload expected against those volumes, especially with small to mid-ranged systems is expected to look like: - Low to moderate read workload since data ideally is cached in memory, or like with SAP HANA should be completely in memory-- Bursts of write triggered by database checkpoints or savepoints that are issued on a regular basis
+- Bursts of write triggered by database checkpoints or savepoints that are issued regularly
- Backup workload that reads in a continuous stream in cases where backups aren't executed via storage snapshots - For SAP HANA, load of the data into memory after an instance restart
Configuration for SAP **/hana/data** volume:
| M192i(d)s_v2 | 2,048 GiB | 2,000 MBps | 4 x P20 | 600 MBps | 680 MBps | 9,200| 14,000 | | M128ms, M128(d)ms_v2 | 3,892 GiB | 2,000 MBps | 4 x P30 | 800 MBps | no bursting | 20,000 | no bursting | | M176(d)s_3_v3 | 2,794 GiB | 4,000 MBps | 4 x P30 | 800 MBps | no bursting | 20,000| no bursting |
-| M176(d)s_4_v3 | 3,750 GiB | 4,000 MBps | 4 x P30 | 800 MBps | no bursting | 20,000 | no bursting |
+| M176(d)s_4_v3 | 3,892 GiB | 4,000 MBps | 4 x P30 | 800 MBps | no bursting | 20,000 | no bursting |
| M192i(d)ms_v2 | 4,096 GiB | 2,000 MBps | 4 x P30 | 800 MBps | no bursting | 20,000 | no bursting | | M208s_v2 | 2,850 GiB | 1,000 MBps | 4 x P30 | 800 MBps | no bursting | 20,000| no bursting | | M208ms_v2 | 5,700 GiB | 1,000 MBps | 4 x P40 | 1,000 MBps | no bursting | 30,000 | no bursting |
Configuration for SAP **/hana/data** volume:
| M832ixs<sup>1</sup> | 14,902 GiB | larger than 2,000 Mbps | 4 x P60<sup>1</sup> | 2,000 MBps | no bursting | 64,000 | no bursting | | M832ixs_v2<sup>1</sup> | 23,088 GiB | larger than 2,000 Mbps | 4 x P60<sup>1</sup> | 2,000 MBps | no bursting | 64,000 | no bursting |
-<sup>1</sup> VM type not available by default. Please contact your Microsoft account team
+<sup>1</sup> VM type not available by default. Contact your Microsoft account team
<sup>2</sup> Maximum throughput provided by the VM and throughput requirement by SAP HANA workload, especially savepoint activity, can force you to deploy significant more premium storage v1 capacity.
For the **/hana/log** volume. the configuration would look like:
| M128s, M128(d)s_v2 | 2,048 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500| | M192i(d)s_v2 | 2,048 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500| | M176(d)s_3_v3 | 2,794 GiB | 4,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
-| M176(d)s_4_v3 | 3,750 GiB | 4,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
+| M176(d)s_4_v3 | 3,892 GiB | 4,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
| M192i(d)ms_v2 | 4,096 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 | | M208s_v2 | 2,850 GiB | 1,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 | | M208ms_v2 | 5,700 GiB | 1,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 |
For the **/hana/log** volume. the configuration would look like:
| M832ixs<sup>1</sup> | 14,902 GiB | larger than 2,000 Mbps | 4 x P20 | 600 MBps | 680 MBps | 9,200 | 14,000 | | M832ixs_v2<sup>1</sup> | 23,088 GiB | larger than 2,000 Mbps | 4 x P20 | 600 MBps | 680 MBps | 9,200 | 14,000 |
-<sup>1</sup> VM type not available by default. Please contact your Microsoft account team
+<sup>1</sup> VM type not available by default. Contact your Microsoft account team
For the other volumes, the configuration would look like:
For the other volumes, the configuration would look like:
| M128s, M128(d)s_v2 | 2,048 GiB | 2,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 | | M192i(d)s_v2 | 2,048 GiB | 2,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 | | M176(d)s_3_v3 | 2,794 GiB | 4,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 |
-| M176(d)s_4_v3 | 3,750 GiB | 4,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 |
+| M176(d)s_4_v3 | 3,892 GiB | 4,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 |
| M128ms, M128dms_v2, M128ms_v2 | 3,892 GiB | 2,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 | | M192i(d)ms_v2 | 4,096 GiB | 2,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 | | M208s_v2 | 2,850 GiB | 1,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 |
For the other volumes, the configuration would look like:
| M832ixs<sup>1</sup> | 14,902 GiB | larger than 2,000 Mbps | 1 x P30 | 1 x P10 | 1 x P6 | | M832ixs_v2<sup>1</sup> | 23,088 GiB | larger than 2,000 Mbps |1 x P30 | 1 x P10 | 1 x P6 |
-<sup>1</sup> VM type not available by default. Please contact your Microsoft account team
+<sup>1</sup> VM type not available by default. Contact your Microsoft account team
<sup>2</sup> Review carefully the [considerations for sizing **/han#considerations-for-the-hana-shared-file-system) Check whether the storage throughput for the different suggested volumes meets the workload that you want to run. If the workload requires higher volumes for **/hana/data** and **/hana/log**, you need to increase the number of Azure premium storage VHDs. Sizing a volume with more VHDs than listed increases the IOPS and I/O throughput within the limits of the Azure virtual machine type.
A less costly alternative for such configurations could look like:
| M96(d)s_2_v3 | 1,946 GiB | 3,120 MBps | 6 x P20 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 20,000<sup>2</sup> | | M128s, M128(d)s_v2 | 2,048 GiB | 2,000 MB/s |6 x P20 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 20,000<sup>2</sup> | | M192i(d)s_v2 | 2,048 GiB | 2,000 MB/s |6 x P20 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 20,000<sup>2</sup> |
-| M128ms, M128(d)ms_v2 | 3,800 GiB | 2,000 MB/s | 5 x P30 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 20,000<sup>2</sup> |
+| M128ms, M128(d)ms_v2 | 3,892 GiB | 2,000 MB/s | 5 x P30 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 20,000<sup>2</sup> |
| M176(d)s_3_v3 | 2,794 GiB | 4,000 MBps | 4 x P30 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 10,000<sup>2</sup> |
-| M176(d)s_4_v3 | 3,750 GiB | 4,000 MBps | 5 x P30 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 20,000<sup>2</sup> |
+| M176(d)s_4_v3 | 3,892 GiB | 4,000 MBps | 5 x P30 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 20,000<sup>2</sup> |
| M192i(d)ms_v2 | 4,096 GiB | 2,000 MB/s | 5 x P30 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 20,000<sup>2</sup> | | M208s_v2 | 2,850 GiB | 1,000 MB/s | 4 x P30 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 10,000<sup>2</sup> | | M208ms_v2 | 5,700 GiB | 1,000 MB/s | 4 x P40 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 10,000<sup>2</sup> |
sap Hana Vm Premium Ssd V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/hana-vm-premium-ssd-v2.md
keywords: 'SAP, Azure HANA, Storage Ultra disk, Premium storage, Premium SSD v2'
Previously updated : 04/01/2024 Last updated : 06/28/2024 # SAP HANA Azure virtual machine Premium SSD v2 storage configurations
-This document is about HANA storage configurations for Azure Premium SSD v2. Azure Premium SSD v2 is a new storage that was developed to more flexible block storage with submillisecond latency for general purpose and DBMS workload. Premium SSD v2 simplifies the way how you build storage architectures and let's you tailor and adapt the storage capabilities to your workload. Premium SSD v2 allows you to configure and pay for capacity, IOPS, and throughput independent of each other.
+This document is about HANA storage configurations for Azure Premium SSD v2. Azure Premium SSD v2 is a new storage type that was developed to provide more flexible block storage with submillisecond latency for general purpose and DBMS workloads. Premium SSD v2 simplifies the way you build storage architectures and lets you tailor and adapt the storage capabilities to your workload. Premium SSD v2 allows you to configure and pay for capacity, IOPS (I/O operations per second), and throughput independent of each other.
For general considerations around stripe sizes when using LVM, HANA data volume partitioning or other considerations that are independent of the particular storage type, check these two documents:
The major difference of Premium SSD v2 to the existing netWeaver and HANA certif
- Latency of Premium SSD v2 is lower than premium storage, but higher than Ultra disk. It is still submillisecond, so it passes the SAP HANA KPIs without the help of any other functionality, like Azure Write Accelerator.
- **Like with Ultra disk, you can use Premium SSD v2 for /hana/data and /hana/log volumes without the need of any accelerators or other caches.**
- Like Ultra disk, Azure Premium SSD v2 doesn't offer caching options as premium storage does.
-- With Premium SSD v2, the same storage configuration applies to the HANA certified Ev4, Ev5, and M-series VMs that offer the same memory
+- With Premium SSD v2, the same storage configuration applies to the HANA certified Ev4, Ev5, and M-series virtual machines (VMs) that offer the same memory
- Unlike premium storage, there's no disk bursting for Premium SSD v2

Not having Azure Write Accelerator support or support by other caches makes the configuration of Premium SSD v2 for the different VM families easier and more unified, and avoids variations that need to be considered in deployment automation. Not having bursting capabilities makes the delivered throughput and IOPS more deterministic and reliable. Since Premium SSD v2 is a new storage type, there are still some restrictions related to its features and capabilities. To read up on these limitations and differences between the different storage types, start with the document [Azure managed disk types](../../virtual-machines/disks-types.md).
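Because Premium SSD v2 decouples capacity, IOPS, and throughput, the recommendations in the table that follows can be captured as plain data for deployment automation. The following Python sketch encodes a few of the /hana/data rows and looks up the configuration for a VM type; the class and dictionary names are hypothetical and not part of any Azure SDK.

```python
# Minimal sketch: encode Premium SSD v2 /hana/data recommendations as data so
# that deployment automation can provision capacity, IOPS, and throughput
# independently. Values are copied from a few rows of the table that follows;
# names and structure are illustrative only and not part of any Azure SDK.
from dataclasses import dataclass

@dataclass(frozen=True)
class HanaDataDiskConfig:
    capacity_gb: int       # provisioned capacity
    throughput_mbps: int   # provisioned throughput
    iops: int              # provisioned IOPS

HANA_DATA_PSSD_V2 = {
    "M32ts":  HanaDataDiskConfig(capacity_gb=224,   throughput_mbps=425, iops=3_000),
    "M64ls":  HanaDataDiskConfig(capacity_gb=608,   throughput_mbps=425, iops=3_000),
    "M128s":  HanaDataDiskConfig(capacity_gb=2_464, throughput_mbps=800, iops=12_000),
}

def hana_data_config(vm_type: str) -> HanaDataDiskConfig:
    """Look up the recommended /hana/data disk configuration for a VM type."""
    return HANA_DATA_PSSD_V2[vm_type]

print(hana_data_config("M64ls"))
```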
Configuration for SAP **/hana/data** volume:
| M32ts | 192 GiB | 500 MBps | 20,000 | 224 GB | 425 MBps | 3,000 |
| M32ls | 256 GiB | 500 MBps | 20,000 | 304 GB | 425 MBps | 3,000 |
| M64ls | 512 GiB | 1,000 MBps | 40,000 | 608 GB | 425 MBps | 3,000 |
-| M32(d)ms_v2 | 875 GiB | 500 MBps | 30,000 | 1056 GB | 425 MBps | 3,000 |
-| M48(d)s_1_v3, M96(d)s_1_v3 | 974 GiB | 1,560 MBps | 65,000 | 1232 GB | 600 MBps | 5,000 |
-| M64s, M64(d)s_v2 | 1,024 GiB | 1,000 MBps | 40,000 | 1232 GB | 600 MBps | 5,000 |
-| M64ms, M64(d)ms_v2 | 1,792 GiB | 1,000 MBps | 50,000 | 2144 GB | 600 MBps | 5,000 |
-| M96(d)s_2_v3 | 1,946 GiB | 3,120 MBps | 130,000 | 2464 GB | 800 MBps | 12,000|
-| M128s, M128(d)s_v2 | 2,048 GiB | 2,000 MBps | 80,000 | 2464 GB | 800 MBps | 12,000|
-| M192i(d)s_v2 | 2,048 GiB | 2,000 MBps | 80,000| 2464 GB | 800 MBps | 12,000|
-| M176(d)s_3_v3 | 2,794 GiB | 4,000 MBps | 130,000 | 3424 GB | 1,000 MBps| 15,000 |
-| M176(d)s_4_v3 | 3,750 GiB | 4,000 MBps | 130,000 | 4672 GB | 800 MBps | 12,000 |
-| M128ms, M128(d)ms_v2 | 3,892 GiB | 2,000 MBps | 80,000 | 4672 GB | 800 MBps | 12,000 |
-| M192i(d)ms_v2 | 4,096 GiB | 2,000 MBps | 80,000 | 4912 GB | 800 MBps | 12,000 |
-| M208s_v2 | 2,850 GiB | 1,000 MBps | 40,000 | 3424 GB | 1,000 MBps| 15,000 |
+| M32(d)ms_v2 | 875 GiB | 500 MBps | 30,000 | 1,056 GB | 425 MBps | 3,000 |
+| M48(d)s_1_v3, M96(d)s_1_v3 | 974 GiB | 1,560 MBps | 65,000 | 1,232 GB | 600 MBps | 5,000 |
+| M64s, M64(d)s_v2 | 1,024 GiB | 1,000 MBps | 40,000 | 1,232 GB | 600 MBps | 5,000 |
+| M64ms, M64(d)ms_v2 | 1,792 GiB | 1,000 MBps | 50,000 | 2,144 GB | 600 MBps | 5,000 |
+| M96(d)s_2_v3 | 1,946 GiB | 3,120 MBps | 130,000 | 2,464 GB | 800 MBps | 12,000|
+| M128s, M128(d)s_v2 | 2,048 GiB | 2,000 MBps | 80,000 | 2,464 GB | 800 MBps | 12,000|
+| M192i(d)s_v2 | 2,048 GiB | 2,000 MBps | 80,000| 2,464 GB | 800 MBps | 12,000|
+| M176(d)s_3_v3 | 2,794 GiB | 4,000 MBps | 130,000 | 3,424 GB | 1,000 MBps| 15,000 |
+| M176(d)s_4_v3 | 3,892 GiB | 4,000 MBps | 130,000 | 4,672 GB | 800 MBps | 12,000 |
+| M128ms, M128(d)ms_v2 | 3,892 GiB | 2,000 MBps | 80,000 | 4,672 GB | 800 MBps | 12,000 |
+| M192i(d)ms_v2 | 4,096 GiB | 2,000 MBps | 80,000 | 4,912 GB | 800 MBps | 12,000 |
+| M208s_v2 | 2,850 GiB | 1,000 MBps | 40,000 | 3,424 GB | 1,000 MBps| 15,000 |
| M208ms_v2 | 5,700 GiB | 1,000 MBps | 40,000 | 6,848 GB | 1,000 MBps | 15,000 |
| M416s_v2 | 5,700 GiB | 2,000 MBps | 80,000 | 6,848 GB | 1,200 MBps | 17,000 |
| M416s_8_v2 | 7,600 GiB | 2,000 MBps | 80,000 | 9,120 GB | 1,250 MBps | 20,000 |
Configuration for SAP **/hana/data** volume:
| M832ixs<sup>1</sup> | 14,902 GiB | larger than 2,000 MBps | 80,000 | 19,200 GB | 2,000 MBps<sup>2</sup> | 40,000 |
| M832ixs_v2<sup>1</sup> | 23,088 GiB | larger than 2,000 MBps | 80,000 | 28,400 GB | 2,000 MBps<sup>2</sup> | 60,000 |
-<sup>1</sup> VM type not available by default. Please contact your Microsoft account team
+<sup>1</sup> VM type not available by default. Contact your Microsoft account team
<sup>2</sup> The maximum throughput provided by the VM and the throughput required by the SAP HANA workload, especially savepoint activity, can force you to deploy significantly more throughput and IOPS
For the **/hana/log** volume, the configuration would look like:
| M128s, M128(d)s_v2 | 2,048 GiB | 2,000 MBps | 80,000 | 512 GB | 300 MBps | 4,000 | 1,024 GB |
| M192i(d)s_v2 | 2,048 GiB | 2,000 MBps | 80,000 | 512 GB | 300 MBps | 4,000 | 1,024 GB |
| M176(d)s_3_v3 | 2,794 GiB | 4,000 MBps | 130,000 | 512 GB | 300 MBps | 4,000 | 1,024 GB |
-| M176(d)s_4_v3 | 3,750 GiB | 4,000 MBps | 130,000 | 512 GB | 300 MBps | 4,000 | 1,024 GB |
+| M176(d)s_4_v3 | 3,892 GiB | 4,000 MBps | 130,000 | 512 GB | 300 MBps | 4,000 | 1,024 GB |
| M128ms, M128(d)ms_v2 | 3,892 GiB | 2,000 MBps | 80,000 | 512 GB | 300 MBps | 4,000 | 1,024 GB |
| M192i(d)ms_v2 | 4,096 GiB | 2,000 MBps | 80,000 | 512 GB | 300 MBps | 4,000 | 1,024 GB |
| M208s_v2 | 2,850 GiB | 1,000 MBps | 40,000 | 512 GB | 300 MBps | 4,000 | 1,024 GB |
For the **/hana/log** volume, the configuration would look like:
| M832ixs<sup>1</sup> | 14,902 GiB | larger than 2,000 MBps | 80,000 | 512 GB | 600 MBps | 9,000 | 1,024 GB |
| M832ixs_v2<sup>1</sup> | 23,088 GiB | larger than 2,000 MBps | 80,000 | 512 GB | 600 MBps | 9,000 | 1,024 GB |
-<sup>1</sup> VM type not available by default. Please contact your Microsoft account team
+<sup>1</sup> VM type not available by default. Contact your Microsoft account team
<sup>2</sup> Carefully review the [considerations for sizing **/hana/shared**](#considerations-for-the-hana-shared-file-system)
A few examples on how combining multiple Premium SSD v2 disks with a stripe set
| M416ms_v2 | 11,400 GiB | 1 | 13,680 | 25,000 | 3,000 | 22,000 | 1,200 MBps | 125 MBps | 1,075 MBps |
| M416ms_v2 | 11,400 GiB | 2 | 6,840 | 25,000 | 6,000 | 19,000 | 1,200 MBps | 250 MBps | 950 MBps |
| M416ms_v2 | 11,400 GiB | 4 | 3,420 | 25,000 | 12,000 | 13,000 | 1,200 MBps | 500 MBps | 700 MBps |
-| M832ixs<sup>1</sup> | 14,902 GiB | 2 | 7,451 GB | 40,000 | 6,000 | 34,000 | 2,000 MBps | 250 MBps | 1750 MBps |
-| M832ixs<sup>1</sup> | 14,902 GiB | 4 | 3,726 GB | 40,000 | 12,000 | 28,000 | 2,000 MBps | 500 MBps | 1500 MBps |
-| M832ixs<sup>1</sup> | 14,902 GiB | 8 | 1,863 GB | 40,000 | 24,000 | 16,000 | 2,000 MBps | 1,000 MBps | 1000 MBps |
+| M832ixs<sup>1</sup> | 14,902 GiB | 2 | 7,451 GB | 40,000 | 6,000 | 34,000 | 2,000 MBps | 250 MBps | 1,750 MBps |
+| M832ixs<sup>1</sup> | 14,902 GiB | 4 | 3,726 GB | 40,000 | 12,000 | 28,000 | 2,000 MBps | 500 MBps | 1,500 MBps |
+| M832ixs<sup>1</sup> | 14,902 GiB | 8 | 1,863 GB | 40,000 | 24,000 | 16,000 | 2,000 MBps | 1,000 MBps | 1,000 MBps |
-<sup>1</sup> VM type not available by default. Please contact your Microsoft account team
+<sup>1</sup> VM type not available by default. Contact your Microsoft account team
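The rows above follow from simple arithmetic: each Premium SSD v2 disk includes a baseline allowance (commonly documented as 3,000 IOPS and 125 MBps) at no extra charge, so striping more disks raises the free baseline and shrinks the portion of the target you pay for. Here's a minimal Python sketch of that calculation, assuming that baseline; verify the exact allowance against current Azure pricing.

```python
# Minimal sketch: split a target /hana/data configuration across N Premium
# SSD v2 disks and work out the free vs. additionally provisioned (paid)
# IOPS and throughput. Assumes the commonly documented baseline of 3,000 IOPS
# and 125 MBps included per disk; verify against current Azure pricing.

FREE_IOPS_PER_DISK = 3_000
FREE_MBPS_PER_DISK = 125

def stripe_split(total_capacity_gb: int, total_iops: int, total_mbps: int,
                 disk_count: int) -> dict:
    free_iops = FREE_IOPS_PER_DISK * disk_count
    free_mbps = FREE_MBPS_PER_DISK * disk_count
    return {
        "capacity_per_disk_gb": total_capacity_gb // disk_count,
        "free_iops": free_iops,
        "paid_iops": max(total_iops - free_iops, 0),
        "free_mbps": free_mbps,
        "paid_mbps": max(total_mbps - free_mbps, 0),
    }

# Example: M416ms_v2 /hana/data target of 13,680 GB, 25,000 IOPS, 1,200 MBps
# spread over 2 disks (compare with the table above).
print(stripe_split(13_680, 25_000, 1_200, disk_count=2))
```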
For **/hana/log**, a similar approach of using two disks could look like:
sap Planning Guide Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/planning-guide-storage.md
ms.assetid: d7c59cc1-b2d0-4d90-9126-628f9c7a5538
Previously updated : 07/13/2023 Last updated : 06/26/2024
Before going into the details, we're presenting the summary and recommendations
| OS disk | Not suitable | Restricted suitable (non-prod) | Recommended | Not possible | Not possible | Not possible | Not possible |
| Global transport Directory | Not supported | Not supported | Recommended | Recommended | Recommended | Recommended | Highly Recommended |
| /sapmnt | Not suitable | Restricted suitable (non-prod) | Recommended | Recommended | Recommended | Recommended | Highly Recommended |
-| DBMS Data volume SAP HANA M/Mv2 VM families | Not supported | Not supported | Recommended | Recommended | Recommended | Recommended<sup>2</sup> | Not supported |
-| DBMS log volume SAP HANA M/Mv2 VM families | Not supported | Not supported | Recommended<sup>1</sup> | Recommended | Recommended | Recommended<sup>2</sup> | Not supported |
-| DBMS Data volume SAP HANA Esv3/Edsv4 VM families | Not supported | Not supported | Recommended | Recommended | Recommended | Recommended<sup>2</sup> | Not supported |
-| DBMS log volume SAP HANA Esv3/Edsv4 VM families | Not supported | Not supported | Not supported | Recommended | Recommended | Recommended<sup>2</sup> | Not supported |
-| HANA shared volume | Not supported | Not supported | Recommended | Recommended | Recommended | Recommended | Recommended<sup>3</sup> |
+| DBMS Data volume SAP HANA M/Mv2 VM families | Not supported | Not supported | Recommended | Recommended | Recommended | Recommended | Not supported |
+| DBMS log volume SAP HANA M/Mv2 VM families | Not supported | Not supported | Recommended<sup>1</sup> | Recommended | Recommended | Recommended | Not supported |
+| DBMS Data volume SAP HANA Esv3/Edsv4 VM families | Not supported | Not supported | Recommended | Recommended | Recommended | Recommended | Not supported |
+| DBMS log volume SAP HANA Esv3/Edsv4 VM families | Not supported | Not supported | Not supported | Recommended | Recommended | Recommended | Not supported |
+| HANA shared volume | Not supported | Not supported | Recommended | Recommended | Recommended | Recommended | Recommended |
| DBMS Data volume non-HANA | Not supported | Restricted suitable (non-prod) | Recommended | Recommended | Recommended | Only for specific Oracle releases on Oracle Linux, Db2 and SAP ASE on SLES/RHEL Linux | Not supported |
| DBMS log volume non-HANA M/Mv2 VM families | Not supported | Restricted suitable (non-prod) | Recommended<sup>1</sup> | Recommended | Recommended | Only for specific Oracle releases on Oracle Linux, Db2 and SAP ASE on SLES/RHEL Linux | Not supported |
| DBMS log volume non-HANA non-M/Mv2 VM families | Not supported | Restricted suitable (non-prod) | Suitable for up to medium workload | Recommended | Recommended | Only for specific Oracle releases on Oracle Linux, Db2 and SAP ASE on SLES/RHEL Linux | Not supported |
Before going into the details, we're presenting the summary and recommendations
<sup>1</sup> With usage of [Azure Write Accelerator](../../virtual-machines/how-to-enable-write-accelerator.md) for M/Mv2 VM families for log/redo log volumes
-<sup>2</sup> Using Azure NetApp Files requires /hana/data and /hana/log to be on Azure NetApp Files
-
-<sup>3</sup> So far tested on SLES only
Characteristics you can expect from the different storage types list like:
| Latency Reads | High | Medium to high | Low | submillisecond | submillisecond | submillisecond | low |
| Latency Writes | High | Medium to high | Low (submillisecond<sup>1</sup>) | submillisecond | submillisecond | submillisecond | low |
| HANA supported | No | No | yes<sup>1</sup> | Yes | Yes | Yes | No |
-| Disk snapshots possible | Yes | Yes | Yes | No | No | Yes | No |
+| Disk snapshots possible | Yes | Yes | Yes | Yes<sup>3</sup> | No<sup>2</sup> | Yes | No |
| Allocation of disks on different storage clusters when using availability sets | Through managed disks | Through managed disks | Through managed disks | Disk type not supported with VMs deployed through availability sets | Disk type not supported with VMs deployed through availability sets | No<sup>3</sup> | No |
| Aligned with Availability Zones | Yes | Yes | Yes | Yes | Yes | In public preview | No |
| Synchronous Zonal redundancy | Not for managed disks | Not for managed disks | Not supported for DBMS | No | No | No | Yes |
Characteristics you can expect from the different storage types list like:
<sup>1</sup> With usage of [Azure Write Accelerator](../../virtual-machines/how-to-enable-write-accelerator.md) for M/Mv2 VM families for log/redo log volumes
-<sup>2</sup> Costs depend on provisioned IOPS and throughput
+<sup>2</sup> Creation of different Azure NetApp Files capacity pools doesn't guarantee deployment of capacity pools onto different storage units
-<sup>3</sup> Creation of different Azure NetApp Files capacity pools doesn't guarantee deployment of capacity pools onto different storage units
+<sup>3</sup> (Incremental) Snapshots of a Premium SSD v2 or an Ultra disk can't be used immediately after they're created. The background copy must complete before you can create a disk from the snapshot
> [!IMPORTANT]
The capability matrix for SAP workload looks like:
| HANA certified | Yes | - |
| Azure Write Accelerator support | No | - |
| Disk bursting | No | - |
-| Disk snapshots possible | No | - |
-| Azure Backup VM snapshots possible | No | - |
+| Disk snapshots possible | Yes<sup>1</sup> | - |
+| Azure Backup VM snapshots possible | Yes | - |
| Costs | Medium | - |
+<sup>1</sup> (Incremental) Snapshots of a Premium SSD v2 or an Ultra disk can't be used immediately after they're created. The background copy must complete before you can create a disk from the snapshot
+ In contrast to Azure premium storage, Azure Premium SSD v2 fulfills the SAP HANA storage latency KPIs. As a result, you **DON'T need to use Azure Write Accelerator caching** as described in the article [Enable Write Accelerator](../../virtual-machines/how-to-enable-write-accelerator.md).

**Summary:** Azure Premium SSD v2 is the block storage that offers the best price/performance ratio for SAP workloads. Azure Premium SSD v2 is suited to handle database workloads. The submillisecond latency makes it ideal storage for demanding DBMS workloads. However, it's a newer storage type that was released in November 2022, so there still might be some limitations that are going to go away over the next few months.
The capability matrix for SAP workload looks like:
| Throughput linear to capacity | Semi linear in brackets | [Managed Disk pricing](https://azure.microsoft.com/pricing/details/managed-disks/) |
| HANA certified | Yes | - |
| Azure Write Accelerator support | No | - |
-| Disk bursting | No | - |
-| Disk snapshots possible | No | - |
-| Azure Backup VM snapshots possible | No | - |
+| Disk bursting | Yes | - |
+| Disk snapshots possible | Yes<sup>1</sup> | - |
+| Azure Backup VM snapshots possible | Yes | - |
| Costs | Higher than Premium storage | - |
+<sup>1</sup> (Incremental) Snapshots of a Premium SSD v2 or an Ultra disk can't be used immediately after they're created. The background copy must complete before you can create a disk from the snapshot
-**Summary:** Azure ultra disks are a suitable storage with low submillisecond latency for all kinds of SAP workload. So far, Ultra disk can only be used in combinations with VMs that have been deployed through Availability Zones (zonal deployment). Ultra disk isn't supporting storage snapshots. In opposite to all other storage, Ultra disk can't be used for the base VHD disk. Ultra disk is ideal for cases where I/O workload fluctuates a lot and you want to adapt deployed storage throughput or IOPS to storage workload patterns instead of sizing for maximum usage of bandwidth and IOPS.
+**Summary:** Azure Ultra disks are suitable storage with low submillisecond latency for all kinds of SAP workload. So far, Ultra disk can only be used in combination with VMs that have been deployed through Availability Zones (zonal deployment). In contrast to all other storage, Ultra disk can't be used for the base VHD disk. Ultra disk is ideal for cases where I/O workload fluctuates a lot and you want to adapt deployed storage throughput or IOPS to storage workload patterns instead of sizing for maximum usage of bandwidth and IOPS.
## Azure NetApp Files
Azure NetApp Files is currently supported for several SAP workload scenarios:
- [High availability for SAP NetWeaver on Azure VMs on Windows with Azure NetApp Files(SMB) for SAP applications](./high-availability-guide-windows-netapp-files-smb.md)
- [High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server with Azure NetApp Files for SAP applications](./high-availability-guide-suse-netapp-files.md)
- [Azure Virtual Machines high availability for SAP NetWeaver on Red Hat Enterprise Linux with Azure NetApp Files for SAP applications](./high-availability-guide-rhel-netapp-files.md)
-- SAP HANA deployments using NFS v4.1 shares for /han)
+- SAP HANA deployments using NFS v4.1 shares for /han)
- IBM Db2 in SUSE or Red Hat Linux guest OS
- Oracle deployments in Oracle Linux guest OS using [dNFS](https://docs.oracle.com/en/database/oracle/oracle-database/19/ntdbi/creating-an-oracle-database-on-direct-nfs.html#GUID-2A0CCBAB-9335-45A8-B8E3-7E8C4B889DEA) for Oracle data and redo log volumes. Some more details can be found in the article [Azure Virtual Machines Oracle DBMS deployment for SAP workload](./dbms-guide-oracle.md)
- SAP ASE in SUSE or Red Hat Linux guest OS
Other built-in functionality of Azure NetApp Files storage:
> Specifically for database deployments you want to achieve low latencies for at least your redo logs. Especially for SAP HANA, SAP requires a latency of less than 1 millisecond for HANA redo log writes of smaller sizes. To get to such latencies, see the possibilities below.

> [!IMPORTANT]
-> Even for non-DBMS usage, you should use the preview functionality that allows you to create the NFS share in the same Azure Availability Zones as you placed your VM(s) that should mount the NFS shares into. This functionality is documented in the article [Manage availability zone volume placement for Azure NetApp Files](../../azure-netapp-files/manage-availability-zone-volume-placement.md). The motivation to have this type of Availability Zone alignment is the reduction of risk surface by having the NFS shares yet in another AvZone where you don't run VMs in.
+> Even for non-DBMS usage, you should use the functionality that allows you to create the NFS share in the same Azure Availability Zone as the VM(s) that mount the NFS shares. This functionality is documented in the article [Manage availability zone volume placement for Azure NetApp Files](../../azure-netapp-files/manage-availability-zone-volume-placement.md). The motivation for this type of Availability Zone alignment is to reduce the risk surface that comes from having the NFS shares in yet another Availability Zone where you don't run VMs.
- You can arrange the closest proximity between VM and NFS share by using [Application Volume Groups](../../azure-netapp-files/application-volume-group-introduction.md). The advantage of Application Volume Groups, besides allocating the best proximity and with that creating the lowest latency, is that your different NFS shares for SAP HANA deployments are distributed across different controllers in the Azure NetApp Files backend clusters. The disadvantage of this method is that you need to go through a pinning process again, a process that ends up restricting your VM deployment to a single datacenter instead of an Availability Zone as in the first method introduced. This means less flexibility in changing VM sizes and VM families of the VMs that have the NFS volumes mounted.
The capability matrix for SAP workload looks like:
| Throughput SLA | Yes | - |
| Throughput linear to capacity | Strictly linear | - |
| HANA certified | No | - |
-| Disk snapshots possible | No | - |
+| Disk snapshots possible | Yes | - |
| Azure Backup VM snapshots possible | No | - |
| Costs | Low | - |
Creating a stripe set out of multiple Azure disks into one larger volume allows
Some rules need to be followed on striping:

-- No in-VM configured storage should be used since Azure storage keeps the data redundant already
+- No in-VM configured storage redundancy should be used since Azure storage keeps the data disk redundant already at the Azure storage backend
- The disks the stripe set is applied to need to be of the same size
- With Premium SSD v2 and Ultra disk, the capacity, provisioned IOPS, and provisioned throughput need to be the same
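As a small illustration of these rules, the following Python sketch validates a proposed stripe set: all disks must have the same size and, for Premium SSD v2 or Ultra disk, the same provisioned IOPS and throughput. The data structures and names are hypothetical, for illustration only.

```python
# Minimal sketch: check a proposed stripe set against the striping rules above.
# Disk records and function names are hypothetical; this is not an Azure API.
from dataclasses import dataclass
from typing import List, Optional

@dataclass(frozen=True)
class StripeDisk:
    size_gb: int
    iops: Optional[int] = None   # provisioned IOPS (Premium SSD v2 / Ultra disk)
    mbps: Optional[int] = None   # provisioned throughput in MBps

def validate_stripe_set(disks: List[StripeDisk]) -> List[str]:
    """Return a list of rule violations; an empty list means the set is valid."""
    problems = []
    if len({d.size_gb for d in disks}) > 1:
        problems.append("All disks in the stripe set must have the same size.")
    if len({(d.iops, d.mbps) for d in disks}) > 1:
        problems.append("Premium SSD v2/Ultra disks in the set must have the "
                        "same provisioned IOPS and throughput.")
    return problems

# Example: two identically provisioned Premium SSD v2 disks for /hana/data.
disks = [StripeDisk(size_gb=6_840, iops=12_500, mbps=600) for _ in range(2)]
print(validate_stripe_set(disks) or "Stripe set follows the rules.")
```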
security Cyber Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/cyber-services.md
ms.assetid: 925ba3c6-fe35-413a-98ea-e1a1461f3022
Previously updated : 04/03/2023 Last updated : 06/28/2024
Microsoft services can create solutions that integrate, and enhance the latest s
Our team of technical professionals consists of highly trained experts who offer a wealth of security and identity experience.
-[Learn more](https://aka.ms/cyberserv) about Microsoft Services Security consulting services.
+[Learn more](https://www.microsoft.com/security/business/partnerships) about Microsoft Services Security consulting services.
security End To End https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/end-to-end.md
ms.assetid: a5a7f60a-97e2-49b4-a8c5-7c010ff27ef8
Previously updated : 01/29/2023 Last updated : 06/28/2024
The [Microsoft cloud security benchmark](/security/benchmark/azure/introduction)
## Secure and protect

| Service | Description |
| --- | --- |
| [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) | A unified infrastructure security management system that strengthens the security posture of your data centers, and provides advanced threat protection across your hybrid workloads in the cloud - whether they're in Azure or not - as well as on premises. |
The [Microsoft cloud security benchmark](/security/benchmark/azure/introduction)
## Detect threats

| Service | Description |
| --- | --- |
| [Microsoft Defender for Cloud](../../security-center/azure-defender.md) | Brings advanced, intelligent protection of your Azure and hybrid resources and workloads. The workload protection dashboard in Defender for Cloud provides visibility and control of the cloud workload protection features for your environment. |
The [Microsoft cloud security benchmark](/security/benchmark/azure/introduction)
## Investigate and respond

| Service | Description |
| --- | --- |
| [Microsoft Sentinel](../../sentinel/hunting.md) | Powerful search and query tools to hunt for security threats across your organization's data sources. |
security Infrastructure Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/infrastructure-availability.md
ms.assetid: 61e95a87-39c5-48f5-aee6-6f90ddcd336e
Previously updated : 01/20/2023 Last updated : 06/28/2024
security Infrastructure Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/infrastructure-components.md
ms.assetid: 61e95a87-39c5-48f5-aee6-6f90ddcd336e
Previously updated : 02/09/2023 Last updated : 06/28/2024
security Paas Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/paas-deployments.md
Previously updated : 03/31/2023 Last updated : 06/27/2024
security Ransomware Prepare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/ransomware-prepare.md
Previously updated : 01/10/2022 Last updated : 06/28/2024 # Prepare for a ransomware attack
Ultimately, the Framework is aimed at reducing and better managing cybersecurity
## Prioritize mitigation
-Based on our experience with ransomware attacks, we've found that prioritization should focus on: 1) prepare, 2) limit, 3) prevent. This may seem counterintuitive, since most people want to prevent an attack and move on. Unfortunately, we must assume breach (a key Zero Trust principle) and focus on reliably mitigating the most damage first. This prioritization is critical because of the high likelihood of a worst-case scenario with ransomware. While it's not a pleasant truth to accept, we're facing creative and motivated human attackers who are adept at finding a way to control the complex real-world environments in which we operate. Against that reality, it's important to prepare for the worst and establish frameworks to contain and prevent attackers' ability to get what they're after.
+Based on our experience with ransomware attacks, we find that prioritization should focus on: 1) prepare, 2) limit, 3) prevent. This may seem counterintuitive, since most people want to prevent an attack and move on. Unfortunately, we must assume breach (a key Zero Trust principle) and focus on reliably mitigating the most damage first. This prioritization is critical because of the high likelihood of a worst-case scenario with ransomware. While it's not a pleasant truth to accept, we're facing creative and motivated human attackers who are adept at finding a way to control the complex real-world environments in which we operate. Against that reality, it's important to prepare for the worst and establish frameworks to contain and prevent attackers' ability to get what they're after.
-While these priorities should govern what to do first, we encourage organizations to run as many steps in parallel as possible (including pulling quick wins forward from step 1 whenever you can).
+While these priorities should govern what to do first, we encourage organizations to run steps in parallel where possible, including pulling quick wins forward from step 1 when you can.
## Make it harder to get in
-Prevent a ransomware attacker from entering your environment and rapidly respond to incidents to remove attacker access before they can steal and encrypt data. This will cause attackers to fail earlier and more often, undermining the profit of their attacks. While prevention is the preferred outcome, it is a continuous journey and may not be possible to achieve 100% prevention and rapid response across a real-world organizations (complex multi-platform and multi-cloud estate with distributed IT responsibilities).
+Prevent a ransomware attacker from entering your environment and rapidly respond to incidents to remove attacker access before they can steal and encrypt data. This causes attackers to fail earlier and more often, undermining the profit of their attacks. While prevention is the preferred outcome, it's a continuous journey, and it may not be possible to achieve 100% prevention and rapid response across a real-world organization (complex multi-platform and multicloud estate with distributed IT responsibilities).
-To achieve this, organizations should identify and execute quick wins to strengthen security controls to prevent entry and rapidly detect/evict attackers while implementing a sustained program that helps them stay secure. Microsoft recommends organizations follow the principles outlined in the Zero Trust strategy [here](https://aka.ms/zerotrust). Specifically, against Ransomware, organizations should prioritize:
+To achieve this, organizations should identify and execute quick wins to strengthen security controls to prevent entry, and rapidly detect/evict attackers while implementing a sustained program that helps them stay secure. Microsoft recommends organizations follow the principles outlined in the Zero Trust strategy [here](https://aka.ms/zerotrust). Specifically, against Ransomware, organizations should prioritize:
- Improving security hygiene by focusing efforts on attack surface reduction and threat and vulnerability management for assets in their estate.
-- Implementing Protection, Detection and Response controls for their digital assets that can protect against commodity and advanced threats, provide visibility and alerting on attacker activity and respond to active threats.
+- Implementing Protection, Detection and Response controls for their digital assets that can protect against commodity and advanced threats, provide visibility and alerting on attacker activity and respond to active threats.
## Limit scope of damage
-Ensure you have strong controls (prevent, detect, respond) for privileged accounts like IT Admins and other roles with control of business-critical systems. This slows and/or blocks attackers from gaining complete access to your resources to steal and encrypt them. Taking away the attackers' ability to use IT Admin accounts as a shortcut to resources will drastically lower the chances they are successful at attacking you and demanding payment / profiting.
+Ensure you have strong controls (prevent, detect, respond) for privileged accounts like IT Admins and other roles with control of business-critical systems. This slows and/or blocks attackers from gaining complete access to your resources to steal and encrypt them. Taking away the attackers' ability to use IT Admin accounts as a shortcut to resources drastically lowers the chances they're successful at attacking you and demanding payment / profiting.
Organizations should have elevated security for privileged accounts (tightly protect, closely monitor, and rapidly respond to incidents related to these roles). See Microsoft's Security rapid modernization plan, which covers:

- End to End Session Security (including multifactor authentication (MFA) for admins)
Organizations should have elevated security for privileged accounts (tightly pro
## Prepare for the worst
-Plan for the worst-case scenario and expect that it will happen (at all levels of the organization). This will both help your organization and others in the world you depend on:
+Plan for the worst-case scenario and expect that it happens (at all levels of the organization). This helps your organization and others in the world you depend on:
-- Limits damage for the worst-case scenario – While restoring all systems from backups is highly disruptive to business, this is more effective and efficient than trying to recovery using (low quality) attacker-provided decryption tools after paying to get the key. Note: Paying is an uncertain path – You have no formal or legal guarantee that the key works on all files, the tools work will work effectively, or that the attacker (who may be an amateur affiliate using a professional's toolkit) will act in good faith.
-- Limit the financial return for attackers – If an organization can restore business operations without paying the attackers, the attack has effectively failed and resulted in zero return on investment (ROI) for the attackers. This makes it less likely that they will target the organization in the future (and deprives them of additional funding to attack others).
+- Limits damage for the worst-case scenario – While restoring all systems from backups is highly disruptive to business, this is more effective and efficient than trying to recover using (low quality) attacker-provided decryption tools after paying to get the key. Note: Paying is an uncertain path – You have no formal or legal guarantee that the key works on all files, the tools work effectively, or that the attacker (who may be an amateur affiliate using a professional's toolkit) will act in good faith.
+- Limit the financial return for attackers – If an organization can restore business operations without paying the attackers, the attack fails and results in zero return on investment (ROI) for the attackers. This makes it less likely that they'll target the organization in the future (and deprives them of more funding to attack others).
The attackers may still attempt to extort the organization through data disclosure or abusing/selling the stolen data, but this gives them less leverage than if they have the only access path to your data and systems.
To realize this, organizations should ensure they:
- Register Risk - Add ransomware to the risk register as a high likelihood and high impact scenario. Track mitigation status via the Enterprise Risk Management (ERM) assessment cycle.
- Define and Backup Critical Business Assets – Define systems required for critical business operations and automatically back them up on a regular schedule (including correct backup of critical dependencies like Active Directory). Protect backups against deliberate erasure and encryption with offline storage, immutable storage, and/or out of band steps (MFA or PIN) before modifying/erasing online backups.
-- Test 'Recover from Zero' Scenario – test to ensure your business continuity / disaster recovery (BC/DR) can rapidly bring critical business operations online from zero functionality (all systems down). Conduct practice exercise(s) to validate cross-team processes and technical procedures, including out-of-band employee and customer communications (assume all email/chat/etc. is down).
- It is critical to protect (or print) supporting documents and systems required for recovery including restoration procedure documents, CMDBs, network diagrams, SolarWinds instances, etc. Attackers destroy these regularly.
+- Test 'Recover from Zero' Scenario ΓÇô test to ensure your business continuity / disaster recovery (BC/DR) can rapidly bring critical business operations online from zero functionality (all systems down). Conduct practice exercises to validate cross-team processes and technical procedures, including out-of-band employee and customer communications (assume all email/chat/etc. is down).
+ It's critical to protect (or print) supporting documents and systems required for recovery including restoration procedure documents, CMDBs, network diagrams, SolarWinds instances, etc. Attackers destroy these regularly.
- Reduce on-premises exposure – by moving data to cloud services with automatic backup & self-service rollback.
-## Promote awareness and ensure there is no knowledge gap
+## Promote awareness and ensure there's no knowledge gap
There are a number of activities that may be undertaken to prepare for potential ransomware incidents.

### Educate end users on the dangers of ransomware
-As most ransomware variants rely on end-users to install the ransomware or connect to compromised Web sites, all end users should be educated about the dangers. This would typically be part of annual security awareness training as well as ad hoc training available through the company's learning management systems. The awareness training should also extend to the company's customers via the company's portals or other appropriate channels.
+As most ransomware variants rely on end-users to install the ransomware or connect to compromised Web sites, all end users should be educated about the dangers. This would typically be part of annual security awareness training as well as ad hoc training available through the company's learning management systems. The awareness training should also extend to the company's customers via the company's portals or other appropriate channels.
### Educate security operations center (SOC) analysts and others on how to respond to ransomware incidents
-SOC analysts and others involved in ransomware incidents should know the fundamentals of malicious software and ransomware specifically. They should be aware of major variants/families of ransomware, along with some of their typical characteristics. Customer call center staff should also be aware of how to handle ransomware reports from the company's end users and customers.
+SOC analysts and others involved in ransomware incidents should know the fundamentals of malicious software and ransomware specifically. They should be aware of major variants/families of ransomware, along with some of their typical characteristics. Customer call center staff should also be aware of how to handle ransomware reports from the company's end users and customers.
## Ensure that you have appropriate technical controls in place
-There are a wide variety of technical controls that should be in place to protect, detect, and respond to ransomware incidents with a strong emphasis on prevention. At a minimum, SOC analysts should have access to the telemetry generated by antimalware systems in the company, understand what preventive measures are in place, understand the infrastructure targeted by ransomware, and be able to assist the company teams to take appropriate action.
+There are a wide variety of technical controls that should be in place to protect, detect, and respond to ransomware incidents with a strong emphasis on prevention. At a minimum, SOC analysts should have access to the telemetry generated by antimalware systems in the company, understand what preventive measures are in place, understand the infrastructure targeted by ransomware, and be able to assist the company teams to take appropriate action.
This should include some or all of the following essential tools:
- Enterprise server antimalware product suites (such as Microsoft Defender for Cloud)
- Network antimalware solutions (such as Azure Anti-malware)
- Security data analytics platforms (such as Azure Monitor, Sentinel)
- - Next generation intrusion detection and prevention systems
+ - Next generation intrusion detection and prevention systems
- Next generation firewall (NGFW)
- Malware analysis and response toolkits
- Automated malware analysis systems with support for most major end-user and server operating systems in the organization
- - Static and dynamic malware analysis tools
+ - Static and dynamic malware analysis tools
- Digital forensics software and hardware
- Non-organizational Internet access (for example, 4G dongle)
- For maximum effectiveness, SOC analysts should have extensive access to almost all antimalware platforms through their native interfaces in addition to unified telemetry within the security data analysis platforms. The platform for Azure native Antimalware for Azure Cloud Services and Virtual Machines provides step-by-step guides on how to accomplish this.
- Enrichment and intelligence sources
  - Online and offline threat and malware intelligence sources (such as Sentinel, Azure Network Watcher)
  - Active Directory and other authentication systems (and related logs)
+ - Internal Configuration Management Databases (CMDBs) containing endpoint device info
-- Data protection
+- Data protection
  - Implement data protection to ensure rapid and reliable recovery from a ransomware attack + block some techniques.
  - Designate Protected Folders – to make it more difficult for unauthorized applications to modify the data in these folders.
  - Review Permissions – to reduce risk from broad access enabling ransomware
This should include some or all of the following essential tools:
## Establish an incident handling process
-Ensure your organization undertakes a number of activities roughly following the incident response steps and guidance described in the US National Institute of Standards and Technology (NIST) Computer Security Incident Handling Guide (Special Publication 800-61r2) to prepare for potential ransomware incidents. These steps include:
+Ensure your organization undertakes a number of activities roughly following the incident response steps and guidance described in the US National Institute of Standards and Technology (NIST) Computer Security Incident Handling Guide (Special Publication 800-61r2) to prepare for potential ransomware incidents. These steps include:
-1. **Preparation**: This stage describes the various measures that should be put into place prior to an incident. This may include both technical preparations (such as the implementation of suitable security controls and other technologies) and non-technical preparations (such as the preparation of processes and procedures).
-1. **Triggers / Detection**: This stage describes how this type of incident may be detected and what triggers may be available that should be used to initiate either further investigation or the declaration of an incident. These are generally separated into high-confidence and low-confidence triggers.
-1. **Investigation / Analysis**: This stage describes the activities that should be undertaken to investigate and analyze available data when it isnΓÇÖt clear that an incident has occurred, with the goal of either confirming that an incident should be declared or concluded that an incident hasn't occurred.
-1. **Incident Declaration**: This stage covers the steps that must be taken to declare an incident, typically with the raising of a ticket within the enterprise incident management (ticketing) system and directing the ticket to the appropriate personnel for further evaluation and action.
-1. **Containment / Mitigation**: This stage covers the steps that may be taken either by the Security Operations Center (SOC), or by others, to contain or mitigate (stop) the incident from continuing to occur or limiting the effect of the incident using available tools, techniques, and procedures.
-1. **Remediation / Recovery**: This stage covers the steps that may be taken to remediate or recover from damage that was caused by the incident before it was contained and mitigated.
-1. **Post-Incident Activity**: This stage covers the activities that should be performed once the incident has been closed. This can include capturing the final narrative associated with the incident as well as identifying lessons learned.
+1. **Preparation**: This stage describes the various measures that should be put into place prior to an incident. This may include both technical preparations (such as the implementation of suitable security controls and other technologies) and non-technical preparations (such as the preparation of processes and procedures).
+1. **Triggers / Detection**: This stage describes how this type of incident may be detected and what triggers may be available that should be used to initiate either further investigation or the declaration of an incident. These are generally separated into high-confidence and low-confidence triggers.
+1. **Investigation / Analysis**: This stage describes the activities that should be undertaken to investigate and analyze available data when it isn't clear that an incident has occurred, with the goal of either confirming that an incident should be declared or concluded that an incident hasn't occurred.
+1. **Incident Declaration**: This stage covers the steps that must be taken to declare an incident, typically with the raising of a ticket within the enterprise incident management (ticketing) system and directing the ticket to the appropriate personnel for further evaluation and action.
+1. **Containment / Mitigation**: This stage covers the steps that may be taken either by the Security Operations Center (SOC), or by others, to contain or mitigate (stop) the incident from continuing to occur or limiting the effect of the incident using available tools, techniques, and procedures.
+1. **Remediation / Recovery**: This stage covers the steps that may be taken to remediate or recover from damage that was caused by the incident before it was contained and mitigated.
+1. **Post-Incident Activity**: This stage covers the activities that should be performed once the incident has been closed. This can include capturing the final narrative associated with the incident as well as identifying lessons learned.
:::image type="content" source="./media/ransomware/ransomware-17.png" alt-text="Flowchart of an incident handling process":::

## Prepare for a quick recovery
-Ensure that you have appropriate processes and procedures in place. Almost all ransomware incidents result in the need to restore compromised systems. So appropriate and tested backup and restore processes and procedures should be in place for most systems. There should also be suitable containment strategies in place with suitable procedures to stop ransomware from spreading and recovery from ransomware attacks.
+Ensure that you have appropriate processes and procedures in place. Almost all ransomware incidents result in the need to restore compromised systems. So appropriate and tested backup and restore processes and procedures should be in place for most systems. There should also be suitable containment strategies in place with suitable procedures to stop ransomware from spreading and recovery from ransomware attacks.
-Ensure that you have well-documented procedures for engaging any third-party support, particularly support from threat intelligence providers, antimalware solution providers and from the malware analysis provider. These contacts may be useful if the ransomware variant may have known weaknesses or decryption tools may be available.
+Ensure that you have well-documented procedures for engaging any third-party support, particularly support from threat intelligence providers, antimalware solution providers and from the malware analysis provider. These contacts may be useful if the ransomware variant may have known weaknesses or decryption tools may be available.
-The Azure platform provides backup and recovery options through Azure Backup as well built-in within various data services and workloads.
+The Azure platform provides backup and recovery options through Azure Backup, as well as built-in options within various data services and workloads.
Isolated backups with [Azure Backup](../../backup/backup-azure-security-feature.md#prevent-attacks)

- Azure Virtual Machines
- Databases in Azure VMs: SQL, SAP HANA
- Azure Database for PostgreSQL
-- On-prem Windows Servers (back up to cloud using MARS agent)
+- On-premises Windows Servers (back up to cloud using MARS agent)
Local (operational) backups with Azure Backup

- Azure Files
security Ransomware Protection With Azure Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/ransomware-protection-with-azure-firewall.md
ms.assetid: 9dcb190e-e534-4787-bf82-8ce73bf47dba
Previously updated : 02/24/2022 Last updated : 06/28/2024 # Improve your security defenses for ransomware attacks with Azure Firewall Premium
security Ransomware Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/ransomware-protection.md
Previously updated : 8/31/2023 Last updated : 06/28/2024 # Ransomware protection in Azure
-Ransomware and extortion are a high profit, low-cost business, which has a debilitating impact on targeted organizations, national/regional security, economic security, and public health and safety. What started as simple, single-PC ransomware has grown to include a variety of extortion techniques directed at all types of corporate networks and cloud platforms.
+Ransomware and extortion are a high profit, low-cost business, which has a debilitating impact on targeted organizations, national/regional security, economic security, and public health and safety. What started as simple, single-PC ransomware grew to include various extortion techniques directed at all types of corporate networks and cloud platforms.
-To ensure customers running on Azure are protected against ransomware attacks, Microsoft has invested heavily on the security of our cloud platforms, and provides security controls you need to protect your Azure cloud workloads
+To ensure customers running on Azure are protected against ransomware attacks, Microsoft invests heavily in the security of our cloud platforms, and provides security controls you need to protect your Azure cloud workloads.
-By leveraging Azure native ransomware protections and implementing the best practices recommended in this article, you're taking measures that ensure your organization is optimally positioned to prevent, protect, and detect potential ransomware attacks on your Azure assets.
+By using Azure native ransomware protections and implementing the best practices recommended in this article, you're taking measures that position your organization to prevent, protect, and detect potential ransomware attacks on your Azure assets.
-This article lays out key Azure native capabilities and defenses for ransomware attacks and guidance on how to proactively leverage these to protect your assets on Azure cloud.
+This article lays out key Azure native capabilities and defenses for ransomware attacks, and guidance on how to proactively use them to protect your assets on Azure cloud.
## A growing threat
-Ransomware attacks have become one of the biggest security challenges facing businesses today. When successful, ransomware attacks can disable a business core IT infrastructure, and cause destruction that could have a debilitating impact on the physical, economic security or safety of a business. Ransomware attacks are targeted to businesses of all types. This requires that all businesses take preventive measures to ensure protection.
+Ransomware attacks are one of the biggest security challenges facing businesses today. When successful, ransomware attacks can disable a business core IT infrastructure, and cause destruction that could have a debilitating impact on the physical, economic security or safety of a business. Ransomware attacks are targeted to businesses of all types. This requires that all businesses take preventive measures to ensure protection.
-Recent trends on the number of attacks are quite alarming. While 2020 wasn't a good year for ransomware attacks on businesses, 2021 started on a bad trajectory. On May 7, the Colonial pipeline (Colonial) attack shut down services such as pipeline transportation of diesel, gasoline, and jet fuel were temporary halted. Colonial shut the critical fuel network supplying the populous eastern states.
+Recent trends in the number of attacks are alarming. While 2020 wasn't a good year for ransomware attacks on businesses, 2021 started on a bad trajectory. On May 7, the Colonial Pipeline (Colonial) attack forced a shutdown: services such as pipeline transportation of diesel, gasoline, and jet fuel were temporarily halted. Colonial shut down the critical fuel network supplying the populous eastern states.
-Historically, cyberattacks were seen as a sophisticated set of actions targeting particular industries, which left the remaining industries believing they were outside the scope of cybercrime, and without context about which cybersecurity threats they should prepare for. Ransomware represents a major shift in this threat landscape, and it's made cyberattacks a very real and omnipresent danger for everyone. Encrypted and lost files and threatening ransom notes have now become the top-of-mind fear for most executive teams.
+Historically, cyberattacks were seen as a sophisticated set of actions targeting particular industries, which left the remaining industries believing they were outside the scope of cybercrime, and without context about which cybersecurity threats they should prepare for. Ransomware represents a major shift in this threat landscape, and it's made cyberattacks a real and omnipresent danger for everyone. Encrypted and lost files and threatening ransom notes have now become the top-of-mind fear for most executive teams.
Ransomware's economic model capitalizes on the misperception that a ransomware attack is solely a malware incident, whereas in reality ransomware is a breach involving human adversaries attacking a network.
For many organizations, the cost to rebuild from scratch after a ransomware inci
## What is ransomware
-Ransomware is a type of malware that infects a computer and restricts a user's access to the infected system or specific files in order to extort them for money. After the target system has been compromised, it typically locks out most interaction and displays an on-screen alert, typically stating that the system has been locked or that all of their files have been encrypted. It then demands a substantial ransom be paid before the system is released or files decrypted.
+Ransomware is a type of malware that infects a computer and restricts a user's access to the infected system or specific files in order to extort them for money. After the target system is compromised, it typically locks out most interaction and displays an on-screen alert, typically stating that the system is locked or that all of their files have been encrypted. It then demands a substantial ransom be paid before the system is released or files decrypted.
-Ransomware will typically exploit the weaknesses or vulnerabilities in your organization's IT systems or infrastructures to succeed. The attacks are so obvious that it does not take much investigation to confirm that your business has been attacked or that an incident should be declared. The exception would be a spam email that demands ransom in exchange for supposedly compromising materials. In this case, these types of incidents should be dealt with as spam unless the email contains highly specific information.
+Ransomware will typically exploit the weaknesses or vulnerabilities in your organization's IT systems or infrastructures to succeed. The attacks are so obvious that it doesn't take much investigation to confirm that your business has been attacked or that an incident should be declared. The exception would be a spam email that demands ransom in exchange for supposedly compromising materials. In this case, these types of incidents should be dealt with as spam unless the email contains highly specific information.
Any business or organization that operates an IT system with data in it can be attacked. Although individuals can be targeted in a ransomware attack, most attacks are targeted at businesses. While the Colonial ransomware attack of May 2021 drew considerable public attention, our Detection and Response team (DART)'s ransomware engagement data shows that the energy sector represents one of the most targeted sectors, along with the financial, healthcare, and entertainment sectors. And despite continued promises not to attack hospitals or healthcare companies during a pandemic, healthcare remains the number one target of human operated ransomware.
Any business or organization that operates an IT system with data in it can be a
When attacking cloud infrastructure, adversaries often attack multiple resources to try to obtain access to customer data or company secrets. The cloud "kill chain" model explains how attackers attempt to gain access to any of your resources running in the public cloud through a four-step process: exposure, access, lateral movement, and actions.
-1. Exposure is where attackers look for opportunities to gain access to your infrastructure. For example, attackers know customer-facing applications must be open for legitimate users to access them. Those applications are exposed to the Internet and therefore susceptible to attacks.
-1. Attackers will try to exploit an exposure to gain access to your public cloud infrastructure. This can be done through compromised user credentials, compromised instances, or misconfigured resources.
-1. During the lateral movement stage, attackers discover what resources they have access to and what the scope of that access is. Successful attacks on instances give attackers access to databases and other sensitive information. The attacker then searches for additional credentials. Our Microsoft Defender for Cloud data shows that without a security tool to quickly notify you of the attack, it takes organizations on average 101 days to discover a breach. Meanwhile, in just 24-48 hours after a breach, the attacker will usually have complete control of the network.
+1. Exposure is where attackers look for opportunities to gain access to your infrastructure. For example, attackers know customer-facing applications must be open for legitimate users to access them. Those applications are exposed to the Internet and therefore susceptible to attacks.
+1. Attackers try to exploit an exposure to gain access to your public cloud infrastructure. This can be done through compromised user credentials, compromised instances, or misconfigured resources.
+1. During the lateral movement stage, attackers discover what resources they have access to and what the scope of that access is. Successful attacks on instances give attackers access to databases and other sensitive information. The attacker then searches for other credentials. Our Microsoft Defender for Cloud data shows that without a security tool to quickly notify you of the attack, it takes organizations on average 101 days to discover a breach. Meanwhile, in just 24-48 hours after a breach, the attacker usually has complete control of the network.
1. The actions an attacker takes after lateral movement are largely dependent on the resources they were able to gain access to during the lateral movement phase. Attackers can take actions that cause data exfiltration or data loss, or that launch other attacks. For enterprises, the average financial impact of data loss is now reaching $1.23 million.

:::image type="content" source="./media/ransomware/ransomware-2.png" alt-text="Flowchart showing how cloud infrastructure is attacked: Exposure, Access, Lateral movement, and Actions":::
When attacking cloud infrastructure, adversaries often attack multiple resources
There are several reasons why ransomware attacks succeed. Businesses that are vulnerable often fall victim to ransomware attacks. The following are some of the attack's critical success factors: -- The attack surface has increased as more and more businesses offer more services through digital outlets
+- The attack surface has increased as more businesses offer more services through digital outlets
- There's a considerable ease of obtaining off-the-shelf malware, Ransomware-as-a-Service (RaaS)-- The option to use cryptocurrency for blackmail payments has opened new avenues for exploit
+- The option to use cryptocurrency for blackmail payments opens new avenues for exploitation
- Expansion of computers and their usage in different workplaces (local school districts, police departments, police squad cars, etc.) each of which is a potential access point for malware, resulting in potential attack surface-- Prevalence of old, outdated, and antiquated infrastructure systems and software
+- Prevalence of old, outdated, and antiquated infrastructure systems and software
- Poor patch-management regimens-- Outdated or very old operating systems that are close to or have gone beyond end-of-support dates
+- Outdated or old operating systems that are close to or have gone beyond end-of-support dates
- Lack of resources to modernize the IT footprint-- Knowledge gap
+- Knowledge gap
- Lack of skilled staff and key personnel overdependency - Poor security architecture
Attackers use different techniques, such as Remote Desktop Protocol (RDP) brute
## Should you pay?
-There are varying opinions on what the best option is when confronted with this vexing demand. The Federal Bureau of Investigation (FBI) advises victims not to pay ransom but to instead be vigilant and take proactive measures to secure their data before an attack. They contend that paying doesn't guarantee that locked systems and encrypted data will be released again. The FBI says another reason not to pay is that payments to cyber criminals incentivizes them to continue to attack organizations.
+There are varying opinions on what the best option is when confronted with this vexing demand. The Federal Bureau of Investigation (FBI) advises victims not to pay ransom but to instead be vigilant and take proactive measures to secure their data before an attack. They contend that paying doesn't guarantee that locked systems and encrypted data are released again. The FBI says another reason not to pay is that payments to cyber criminals incentivize them to continue to attack organizations.
Nevertheless, some victims elect to pay the ransom demand even though system and data access isn't guaranteed after paying the ransom. By paying, such organizations take the calculated risk to pay in hopes of getting back their system and data and quickly resuming normal operations. Part of the calculation is reduction in collateral costs such as lost productivity, decreased revenue over time, exposure of sensitive data, and potential reputational damage.
-The best way to prevent paying ransom is not to fall victim by implementing preventive measures and having tool saturation to protect your organization from every step that attacker takes wholly or incrementally to hack into your system. In addition, having the ability to recover impacted assets will ensure restoration of business operations in a timely fashion. Azure Cloud has a robust set of tools to guide you all the way.
+The best way to avoid paying ransom is to not fall victim in the first place. Implement preventive measures and maintain tool saturation to protect your organization from every step, whole or incremental, that an attacker takes to hack into your system. In addition, the ability to recover impacted assets ensures restoration of business operations in a timely fashion. Azure Cloud has a robust set of tools to guide you all the way.
### What is the typical cost to a business?
The impact of a ransomware attack on any organization is difficult to quantify a
- Intellectual property theft - Compromised customer trust and a tarnished reputation
-Colonial Pipeline paid about $4.4 Million in ransom to have their data released. This doesn't include the cost of downtime, lost productive, lost sales and the cost of restoring services. More broadly, a significant impact is the "knock-on effect" of impacting high numbers of businesses and organizations of all kinds including towns and cities in their local areas. The financial impact is also staggering. According to Microsoft, the global cost associated with ransomware recovery is projected to exceed $20 billion in 2021.
+Colonial Pipeline paid about $4.4 million in ransom to have their data released. This doesn't include the cost of downtime, lost productivity, lost sales, and the cost of restoring services. More broadly, a significant impact is the "knock-on effect" of impacting high numbers of businesses and organizations of all kinds including towns and cities in their local areas. The financial impact is also staggering. According to Microsoft, the global cost associated with ransomware recovery is projected to exceed $20 billion in 2021.
:::image type="content" source="./media/ransomware/ransomware-4.png" alt-text="Bar chart showing impact to business":::
security Technical Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/technical-capabilities.md
Previously updated : 01/20/2023 Last updated : 06/28/2024
security Zero Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/zero-trust.md
Previously updated : 03/31/2023 Last updated : 06/28/2024 # Zero Trust security
sentinel Connect Log Forwarder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-log-forwarder.md
Title: Deploy a log forwarder to ingest Syslog and CEF logs to Microsoft Sentinel | Microsoft Docs description: Learn how to deploy a log forwarder, consisting of a Syslog daemon and the Log Analytics agent, as part of the process of ingesting Syslog and CEF logs to Microsoft Sentinel.-++ Previously updated : 01/09/2023- Last updated : 06/18/2024 # Deploy a log forwarder to ingest Syslog and CEF logs to Microsoft Sentinel > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
To ingest Syslog and CEF logs into Microsoft Sentinel, particularly from devices and appliances onto which you can't install the Log Analytics agent directly, you'll need to designate and configure a Linux machine that will collect the logs from your devices and forward them to your Microsoft Sentinel workspace. This machine can be a physical or virtual machine in your on-premises environment, an Azure VM, or a VM in another cloud.
Your machine must meet the following requirements:
- **Operating system**
- - CentOS 7 and 8 (not 6), including minor versions (64-bit/32-bit)
- Amazon Linux 2 (64-bit only) - Oracle Linux 7, 8 (64-bit/32-bit) - Red Hat Enterprise Linux (RHEL) Server 7 and 8 (not 6), including minor versions (64-bit/32-bit) - Debian GNU/Linux 8 and 9 (64-bit/32-bit) - Ubuntu Linux 20.04 LTS (64-bit only) - SUSE Linux Enterprise Server 12, 15 (64-bit only)
+ - CentOS distributions **are no longer supported**, as they have reached End Of Life (EOL) status. See note at the beginning of this article.
- **Daemon versions**
sentinel Connect Syslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-syslog.md
Title: Connect Syslog data to Microsoft Sentinel description: Connect any machine or appliance that supports Syslog to Microsoft Sentinel by using an agent on a Linux machine between the appliance and Microsoft Sentinel. + Previously updated : 06/14/2023- Last updated : 06/18/2024 # Collect data from Linux-based sources using Syslog > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
[!INCLUDE [reference-to-feature-availability](includes/reference-to-feature-availability.md)]
sentinel Data Connectors Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors-reference.md
Title: Find your Microsoft Sentinel data connector | Microsoft Docs
description: Learn about specific configuration steps for Microsoft Sentinel data connectors. Previously updated : 06/27/2024 Last updated : 06/28/2024 appliesto:
Contact the solution provider for more information or where information is unava
- [Cisco Meraki](data-connectors/cisco-meraki.md) - [Cisco Secure Endpoint (AMP) (using Azure Functions)](data-connectors/cisco-secure-endpoint-amp.md) - [Cisco Secure Cloud Analytics](data-connectors/cisco-secure-cloud-analytics.md)-- [Cisco Stealthwatch](data-connectors/cisco-stealthwatch.md) - [Cisco UCS](data-connectors/cisco-ucs.md) - [Cisco Umbrella (using Azure Functions)](data-connectors/cisco-umbrella.md) - [Cisco Web Security Appliance](data-connectors/cisco-web-security-appliance.md)
Contact the solution provider for more information or where information is unava
- [Holm Security Asset Data (using Azure Functions)](data-connectors/holm-security-asset-data.md)
-## HYAS Infosec Inc
--- [HYAS Protect (using Azure Functions)](data-connectors/hyas-protect.md)- ## Illumio - [[Deprecated] Illumio Core via Legacy Agent](data-connectors/deprecated-illumio-core-via-legacy-agent.md)
sentinel Cisco Stealthwatch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/cisco-stealthwatch.md
- Title: "Cisco Stealthwatch connector for Microsoft Sentinel"
-description: "Learn how to install the connector Cisco Stealthwatch to connect your data source to Microsoft Sentinel."
-- Previously updated : 04/26/2024-----
-# Cisco Stealthwatch connector for Microsoft Sentinel
-
-The [Cisco Stealthwatch](https://www.cisco.com/c/en/us/products/security/stealthwatch/https://docsupdatetracker.net/index.html) data connector provides the capability to ingest Cisco Stealthwatch events into Microsoft Sentinel.
-
-This is autogenerated content. For changes, contact the solution provider.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | Syslog (StealthwatchEvent)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
-| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
-
-## Query samples
-
-**Top 10 Sources**
-
- ```kusto
-StealthwatchEvent
-
- | summarize count() by tostring(DvcHostname)
-
- | top 10 by count_
- ```
---
-## Vendor installation instructions
--
-> [!NOTE]
- > This data connector depends on a parser based on a Kusto Function to work as expected [**StealthwatchEvent**](https://aka.ms/sentinel-stealthwatch-parser) which is deployed with the Microsoft Sentinel Solution.
--
-> [!NOTE]
- > This data connector has been developed using Cisco Stealthwatch version 7.3.2
-
-1. Install and onboard the agent for Linux or Windows
-
-Install the agent on the Server where the Cisco Stealthwatch logs are forwarded.
-
-> Logs from Cisco Stealthwatch Server deployed on Linux or Windows servers are collected by **Linux** or **Windows** agents.
----
-2. Configure Cisco Stealthwatch event forwarding
-
-Follow the configuration steps below to get Cisco Stealthwatch logs into Microsoft Sentinel.
-1. Log in to the Stealthwatch Management Console (SMC) as an administrator.
-2. In the menu bar, click **Configuration** **>** **Response Management**.
-3. From the **Actions** section in the **Response Management** menu, click **Add > Syslog Message**.
-4. In the Add Syslog Message Action window, configure parameters.
-5. Enter the following custom format:
-|Lancope|Stealthwatch|7.3|{alarm_type_id}|0x7C|src={source_ip}|dst={target_ip}|dstPort={port}|proto={protocol}|msg={alarm_type_description}|fullmessage={details}|start={start_active_time}|end={end_active_time}|cat={alarm_category_name}|alarmID={alarm_id}|sourceHG={source_host_group_names}|targetHG={target_host_group_names}|sourceHostSnapshot={source_url}|targetHostSnapshot={target_url}|flowCollectorName={device_name}|flowCollectorIP={device_ip}|domain={domain_name}|exporterName={exporter_hostname}|exporterIPAddress={exporter_ip}|exporterInfo={exporter_label}|targetUser={target_username}|targetHostname={target_hostname}|sourceUser={source_username}|alarmStatus={alarm_status}|alarmSev={alarm_severity_name}
-
-6. Select the custom format from the list and click **OK**
-7. Click **Response Management > Rules**.
-8. Click **Add** and select **Host Alarm**.
-9. Provide a rule name in the **Name** field.
-10. Create rules by selecting values from the Type and Options menus. To add more rules, click the ellipsis icon. For a Host Alarm, combine as many possible types in a statement as possible.
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-ciscostealthwatch?tab=Overview) in the Azure Marketplace.
sentinel Hyas Protect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/hyas-protect.md
- Title: "HYAS Protect (using Azure Functions) connector for Microsoft Sentinel"
-description: "Learn how to install the connector HYAS Protect (using Azure Functions) to connect your data source to Microsoft Sentinel."
-- Previously updated : 04/26/2024-----
-# HYAS Protect (using Azure Functions) connector for Microsoft Sentinel
-
-HYAS Protect provide logs based on reputation values - Blocked, Malicious, Permitted, Suspicious.
-
-This is autogenerated content. For changes, contact the solution provider.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Azure function app code** | https://aka.ms/sentinel-HYASProtect-functionapp |
-| **Log Analytics table(s)** | HYASProtectDnsSecurityLogs_CL<br/> |
-| **Data collection rules support** | Not currently supported |
-| **Supported by** | [HYAS](https://www.hyas.com/contact) |
-
-## Query samples
-
-**All Logs**
-
- ```kusto
-HYASProtectDnsSecurityLogs_CL
- ```
---
-## Prerequisites
-
-To integrate with HYAS Protect (using Azure Functions) make sure you have:
--- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/).-- **REST API Credentials/permissions**: **HYAS API Key** is required for making API calls.--
-## Vendor installation instructions
--
-> [!NOTE]
- > This connector uses Azure Functions to connect to the HYAS API to pull Logs into Microsoft Sentinel. This might result in additional costs for data ingestion and for storing data in Azure Blob Storage costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) and [Azure Blob Storage pricing page](https://azure.microsoft.com/pricing/details/storage/blobs/) for details.
--
->**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
----
-Option 1 - Azure Resource Manager (ARM) Template
-
-Use this method for automated deployment of the HYAS Protect data connector using an ARM Tempate.
-
-1. Click the **Deploy to Azure** button below.
-
- [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-HYASProtect-azuredeploy)
-2. Select the preferred **Subscription**, **Resource Group** and **Location**.
-3. Enter the **Function Name**, **Table Name**, **Workspace ID**, **Workspace Key**, **API Key**, **TimeInterval**, **FetchBlockedDomains**, **FetchMaliciousDomains**, **FetchSuspiciousDomains**, **FetchPermittedDomains** and deploy.
-4. Mark the checkbox labeled **I agree to the terms and conditions stated above**.
-5. Click **Purchase** to deploy.
-
-Option 2 - Manual Deployment of Azure Functions
-
-Use the following step-by-step instructions to deploy the HYAS Protect Logs data connector manually with Azure Functions (Deployment via Visual Studio Code).
--
-**1. Deploy a Function App**
-
-> NOTE:You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development.
-
-1. Download the [Azure Function App](https://aka.ms/sentinel-HYASProtect-functionapp) file. Extract archive to your local development computer.
-2. Start VS Code. Choose File in the main menu and select Open Folder.
-3. Select the top level folder from extracted files.
-4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button.
-If you aren't already signed in, choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose **Sign in to Azure**
-If you're already signed in, go to the next step.
-5. Provide the following information at the prompts:
-
- a. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app.
-
- b. **Select Subscription:** Choose the subscription to use.
-
- c. Select **Create new Function App in Azure** (Don't choose the Advanced option)
-
- d. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. (e.g. HyasProtectLogsXXX).
-
- e. **Select a runtime:** Choose Python 3.8.
-
- f. Select a location for new resources. For better performance and lower costs choose the same [region](https://azure.microsoft.com/regions/) where Microsoft sentinel is located.
-
-6. Deployment will begin. A notification is displayed after your function app is created and the deployment package is applied.
-7. Go to Azure Portal for the Function App configuration.
--
-**2. Configure the Function App**
-
-1. In the Function App, select the Function App Name and select **Configuration**.
-2. In the **Application settings** tab, select **+ New application setting**.
-3. Add each of the following application settings individually, with their respective string values (case-sensitive):
- APIKey
- Polling
- WorkspaceID
- WorkspaceKey
-. Once all application settings have been entered, click **Save**.
sentinel Troubleshooting Cef Syslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/troubleshooting-cef-syslog.md
Title: Troubleshoot a connection between Microsoft Sentinel and a CEF or Syslog data connector| Microsoft Docs description: Learn how to troubleshoot issues with your Microsoft Sentinel CEF or Syslog data connector.-++ Previously updated : 01/09/2023- Last updated : 06/18/2024 # Troubleshoot your CEF or Syslog data connector > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article describes common methods for verifying and troubleshooting a CEF or Syslog data connector for Microsoft Sentinel.
-For example, if your logs are not appearing in Microsoft Sentinel, either in the Syslog or the Common Security Log tables, your data source may be failing to connect or there may be another reason your data is not being ingested.
+For example, if your log messages aren't appearing in the *Syslog* or *CommonSecurityLog* tables, your data source might not be connecting properly. There might also be another reason your data isn't being received.
-Other symptoms of a failed connector deployment include when either the **security_events.conf** or the **security-omsagent.config.conf** files are missing, or if the rsyslog server is not listening on port 514.
+Other symptoms of a failed connector deployment include when either the **security_events.conf** or the **security-omsagent.config.conf** files are missing, or if the rsyslog server isn't listening on port 514.
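To quickly check whether the daemon is listening on port 514, and whether the Log Analytics agent is listening on its local port 25226, you can run a check like the following on the log forwarder. This is a minimal sketch that assumes the `ss` utility from util-linux is available; on older distributions, `netstat -lnptu` gives similar output.

```bash
# Check whether a syslog daemon (rsyslog or syslog-ng) is listening on port 514 (TCP and UDP)
sudo ss -lnptu | grep ':514 '

# Check whether the Log Analytics (OMS) agent is listening on its local port 25226
sudo ss -lnpt | grep ':25226 '
```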
For more information, see [Connect your external solution using Common Event Format](connect-common-event-format.md) and [Collect data from Linux-based sources using Syslog](connect-syslog.md).
-If you've deployed your connector using a method different than the documented procedure and are having issues, we recommend that you purge the deployment and install again as documented.
+If you deployed your connector using a method other than the documented procedure and you're having issues, we recommend that you scrap the deployment and start over, this time following the documented instructions.
This article shows you how to troubleshoot CEF or Syslog connectors with the Log Analytics agent. For troubleshooting information related to ingesting CEF logs via the Azure Monitor Agent (AMA), review the [Common Event Format (CEF) via AMA](connect-cef-ama.md) connector instructions.
This article shows you how to troubleshoot CEF or Syslog connectors with the Log
## How to use this article
-When information in this article is relevant only for Syslog or only for CEF connectors, we've organized the page into tabs. Make sure that you're using the instructions on the correct tab for your connector type.
+When information in this article is relevant only for Syslog or only for CEF connectors, it's presented in separate tabs. Make sure that you're using the instructions on the correct tab for your connector type.
-For example, if you're troubleshooting a CEF connector, start with [Validate CEF connectivity](#validate-cef-connectivity). If you're troubleshooting a Syslog connector, start below, with [Verify your data connector prerequisites](#verify-your-data-connector-prerequisites).
+For example, if you're troubleshooting a CEF connector, start with [Validate CEF connectivity](#validate-cef-connectivity). If you're troubleshooting a Syslog connector, start with [Verify your data connector prerequisites](#verify-your-data-connector-prerequisites).
# [CEF](#tab/cef) ### Validate CEF connectivity
-After you've [deployed your log forwarder](connect-common-event-format.md) and [configured your security solution to send it CEF messages](./connect-common-event-format.md), use the steps in this section to verify connectivity between your security solution and Microsoft Sentinel.
+After you [deploy your log forwarder](connect-common-event-format.md) and [configure your security solution to send it CEF messages](./connect-common-event-format.md), use the steps in this section to verify connectivity between your security solution and Microsoft Sentinel.
This procedure is relevant only for CEF connections, and is *not* relevant for Syslog connections.
This procedure is relevant only for CEF connections, and is *not* relevant for S
- You must have **python 2.7** or **3** installed on your log forwarder machine. Use the `python --version` command to check.
- - You may need the Workspace ID and Workspace Primary Key at some point in this process. You can find them in the workspace resource, under **Agents management**.
+ - You might need the Workspace ID and Workspace Primary Key at some point in this process. You can find them in the workspace resource, under **Agents management**.
-1. From the Microsoft Sentinel navigation menu, open **Logs**. Run a query using the **CommonSecurityLog** schema to see if you are receiving logs from your security solution.
+1. From the Microsoft Sentinel navigation menu, open **Logs**. Run a query using the **CommonSecurityLog** schema to see if you're receiving logs from your security solution.
- It may take about 20 minutes until your logs start to appear in **Log Analytics**.
+ It might take about 20 minutes until your logs start to appear in **Log Analytics**.
-1. If you don't see any results from the query, verify that events are being generated from your security solution, or try generating some, and verify they are being forwarded to the Syslog forwarder machine you designated.
+1. If you don't see any results from the query, verify that your security solution is generating log messages. Or, try taking some actions to generate log messages (see the sketch after this procedure for one quick way to do that), and verify that the messages are forwarded to your designated Syslog forwarder machine.
-1. Run the following script on the log forwarder (applying the Workspace ID in place of the placeholder) to check connectivity between your security solution, the log forwarder, and Microsoft Sentinel. This script checks that the daemon is listening on the correct ports, that the forwarding is properly configured, and that nothing is blocking communication between the daemon and the Log Analytics agent. It also sends mock messages 'TestCommonEventFormat' to check end-to-end connectivity. <br>
+1. To check connectivity between your security solution, the log forwarder, and Microsoft Sentinel, run the following script on the log forwarder (applying the Workspace ID in place of the placeholder). This script checks that the daemon is listening on the correct ports, that the forwarding is properly configured, and that nothing is blocking communication between the daemon and the Log Analytics agent. It also sends mock messages 'TestCommonEventFormat' to check end-to-end connectivity. <br>
```bash sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py [WorkspaceID] ```
- - You may get a message directing you to run a command to correct an issue with the **mapping of the *Computer* field**. See the [explanation in the validation script](#mapping-command) for details.
+ - You might get a message directing you to run a command to correct an issue with the **mapping of the *Computer* field**. See the [explanation in the validation script](#mapping-command) for details.
- - You may get a message directing you to run a command to correct an issue with the **parsing of Cisco ASA firewall logs**. See the [explanation in the validation script](#parsing-command) for details.
+ - You might get a message directing you to run a command to correct an issue with the **parsing of Cisco ASA firewall logs**. See the [explanation in the validation script](#parsing-command) for details.
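If you need to generate traffic yourself, the standard `logger` utility is one quick, hedged option. The facility, tag, and message below are placeholder values, not anything required by Microsoft Sentinel, and `<forwarder-ip>` is a placeholder for your designated log forwarder.

```bash
# Send a test syslog message to the local daemon on the forwarder itself
logger -p local4.warning -t TestApp "Test syslog message from the log forwarder"

# Send a test message over TCP to the forwarder's port 514 from another machine
logger -n <forwarder-ip> -P 514 -T -t TestApp "Test message sent over TCP to port 514"
```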
### CEF validation script explained
For an rsyslog daemon, the CEF validation script runs the following checks:
grep -i "return ident if ident.include?('%ASA')" /opt/microsoft/omsagent/plugin/security_lib.rb ```
- - <a name="parsing-command"></a>If there is an issue with the parsing, the script will produce an error message directing you to **manually run the following command** (applying the Workspace ID in place of the placeholder). The command will ensure the correct parsing and restart the agent.
+ - <a name="parsing-command"></a>If there's an issue with the parsing, the script produces an error message directing you to **manually run the following command** (applying the Workspace ID in place of the placeholder). The command ensures the correct parsing and restarts the agent.
```bash # Cisco ASA parsing fix
For an rsyslog daemon, the CEF validation script runs the following checks:
grep -i "'Host' => record\['host'\]" /opt/microsoft/omsagent/plugin/filter_syslog_security.rb ```
- - <a name="mapping-command"></a>If there is an issue with the mapping, the script will produce an error message directing you to **manually run the following command** (applying the Workspace ID in place of the placeholder). The command will ensure the correct mapping and restart the agent.
+ - <a name="mapping-command"></a>If there's an issue with the mapping, the script produces an error message directing you to **manually run the following command** (applying the Workspace ID in place of the placeholder). The command ensures the correct mapping and restarts the agent.
```bash # Computer field mapping fix
For a syslog-ng daemon, the CEF validation script runs the following checks:
grep -i "return ident if ident.include?('%ASA')" /opt/microsoft/omsagent/plugin/security_lib.rb ```
- - <a name="parsing-command"></a>If there is an issue with the parsing, the script will produce an error message directing you to **manually run the following command** (applying the Workspace ID in place of the placeholder). The command will ensure the correct parsing and restart the agent.
+ - <a name="parsing-command"></a>If there's an issue with the parsing, the script produces an error message directing you to **manually run the following command** (applying the Workspace ID in place of the placeholder). The command ensures the correct parsing and restarts the agent.
```bash # Cisco ASA parsing fix
For a syslog-ng daemon, the CEF validation script runs the following checks:
grep -i "'Host' => record\['host'\]" /opt/microsoft/omsagent/plugin/filter_syslog_security.rb ```
- - <a name="mapping-command"></a>If there is an issue with the mapping, the script will produce an error message directing you to **manually run the following command** (applying the Workspace ID in place of the placeholder). The command will ensure the correct mapping and restart the agent.
+ - <a name="mapping-command"></a>If there's an issue with the mapping, the script produces an error message directing you to **manually run the following command** (applying the Workspace ID in place of the placeholder). The command ensures the correct mapping and restarts the agent.
```bash # Computer field mapping fix
For a syslog-ng daemon, the CEF validation script runs the following checks:
### Troubleshooting Syslog data connectors
-If you are troubleshooting a Syslog data connector, start with verifying your prerequisites in the section [below](#verify-your-data-connector-prerequisites), using the information in the **Syslog** tab.
+If you're troubleshooting a Syslog data connector, start with verifying your prerequisites in the [next section](#verify-your-data-connector-prerequisites), using the information in the **Syslog** tab.
If you're using an Azure Virtual Machine as a CEF collector, verify the followin
### On-premises or a non-Azure Virtual Machine
-If you are using an on-premises machine or a non-Azure virtual machine for your data connector, make sure that you've run the installation script on a fresh installation of a supported Linux operating system:
+If you're using an on-premises machine or a non-Azure virtual machine for your data connector, make sure that you've run the installation script on a fresh installation of a supported Linux operating system:
> [!TIP] > You can also find this script from the **Common Event Format** data connector page in Microsoft Sentinel.
127.0.0.1:25226
If you're using an Azure Virtual Machine as a Syslog collector, verify the following: -- While you are setting up your Syslog data connector, make sure to turn off your [Microsoft Defender for Cloud auto-provisioning settings](../security-center/security-center-enable-data-collection.md) for the [MM#connector-options).
+- While you're setting up your Syslog data connector, make sure to turn off your [Microsoft Defender for Cloud auto-provisioning settings](../security-center/security-center-enable-data-collection.md) for the [MM#connector-options).
You can turn them back on after your data connector is completely set up.
This section describes how to troubleshoot issues that are certainly derived fro
1. Do one of the following:
- - If you do not see any packets arriving, confirm the NSG security group permissions and the routing path to the Syslog Collector.
+ - If you don't see any packets arriving, confirm the NSG security group permissions and the routing path to the Syslog Collector.
- - If you do see packets arriving, confirm that they are not being rejected.
+ - If you do see packets arriving, confirm that they aren't being rejected.
- If you see rejected packets, confirm that the IP tables are not blocking the connections.
+ If you see rejected packets, confirm that the IP tables aren't blocking the connections.
- To confirm that packets are not being rejected, run:
+ To confirm that packets aren't being rejected, run:
```config watch -n 2 -d iptables -nvL
This section describes how to troubleshoot issues that are certainly derived fro
0 127.0.0.1:36120 127.0.0.1:25226 ESTABLISHED 1055/rsyslogd ```
- If the connection is blocked, you may have a [blocked SELinux connection to the OMS agent](#selinux-blocking-connection-to-the-oms-agent), or a [blocked firewall process](#blocked-firewall-policy). Use the relevant instructions below to determine the issue.
+ If the connection is blocked, you may have a [blocked SELinux connection to the OMS agent](#selinux-blocking-connection-to-the-oms-agent), or a [blocked firewall process](#blocked-firewall-policy). Use the relevant instructions further on to determine the issue.
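As a quick first check before working through those sections, you can look at the SELinux mode and the firewall state on the forwarder. This is a minimal sketch that assumes a distribution with SELinux tooling and firewalld installed; adapt it to your environment.

```bash
# Show the current SELinux mode (Enforcing, Permissive, or Disabled)
getenforce

# Show whether firewalld is running
sudo firewall-cmd --state

# List iptables rules that mention the agent's local port 25226
sudo iptables -nvL | grep 25226
```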
# [Syslog](#tab/syslog)
This section describes how to troubleshoot issues that are certainly derived fro
1. Do one of the following:
- - If you do not see any packets arriving, confirm the NSG security group permissions and the routing path to the Syslog Collector.
+ - If you don't see any packets arriving, confirm the NSG security group permissions and the routing path to the Syslog Collector.
- - If you do see packets arriving, confirm that they are not being rejected.
+ - If you do see packets arriving, confirm that they aren't being rejected.
- If you see rejected packets, confirm that the IP tables are not blocking the connections.
+ If you see rejected packets, confirm that the IP tables aren't blocking the connections.
- To confirm that packets are not being rejected, run:
+ To confirm that packets aren't being rejected, run:
```config watch -n 2 -d iptables -nvL
This procedure describes how to verify whether a firewall policy is blocking the
# [CEF](#tab/cef)
-If the steps described earlier in this article do not solve your issue, you may have a connectivity problem between the OMS Agent and the Microsoft Sentinel workspace.
+If the steps described earlier in this article don't solve your issue, you may have a connectivity problem between the OMS Agent and the Microsoft Sentinel workspace.
In such cases, continue troubleshooting by verifying the following:
A log entry is returned if the agent is communicating successfully. Otherwise, t
# [Syslog](#tab/syslog)
-If the steps described earlier in this article do not solve your issue, you may have a connectivity problem between the OMS Agent and the Microsoft Sentinel workspace.
+If the steps described earlier in this article don't solve your issue, you may have a connectivity problem between the OMS Agent and the Microsoft Sentinel workspace.
In such cases, continue troubleshooting by verifying the following:
A log entry is returned if the agent is communicating successfully. Otherwise, t
## Next steps
-If the troubleshooting steps in this article have not helped your issue, open a support ticket or use the Microsoft Sentinel community resources. For more information, see [Useful resources for working with Microsoft Sentinel](resources.md).
+If the troubleshooting steps in this article haven't helped your issue, open a support ticket or use the Microsoft Sentinel community resources. For more information, see [Useful resources for working with Microsoft Sentinel](resources.md).
To learn more about Microsoft Sentinel, see the following articles:
update-manager Manage Pre Post Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/manage-pre-post-events.md
Title: Manage the pre and post maintenance configuration events (preview) in Azure Update Manager description: The article provides the steps to manage the pre and post maintenance events in Azure Update Manager. Previously updated : 02/03/2024 Last updated : 06/29/2024
# Manage pre and post events (preview)
-**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers.
+**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers :heavy_check_mark: Azure VMs.
++
+Pre and post events allow you to execute user-defined actions before and after a scheduled maintenance configuration. For more information, see the [workings of a pre and post event in Azure Update Manager](pre-post-scripts-overview.md).
+This article describes how to create and manage pre and post events in Azure Update Manager.
+
+## Event Grid in scheduled maintenance configurations
+
+Azure Update Manager uses Event Grid to create and manage pre and post events. For more information, see the [overview of Event Grid](../event-grid/overview.md). To trigger an event either before or after a scheduled maintenance window, you need the following:
+
+1. **Scheduled maintenance configuration** - You can create pre and post events for a scheduled maintenance configuration in Azure Update Manager. For more information, see [schedule updates using maintenance configurations](scheduled-patching.md).
+1. **Actions to be performed in the pre or post event** - You can use the [Event handlers](../event-grid/event-handlers.md) (endpoints) supported by Event Grid to define actions or tasks. Here are examples of how to create Azure Automation Runbooks via webhooks and Azure Functions. Within these event handlers/endpoints, you must define the actions that should be performed as part of pre and post events.
+ 1. **Webhook** - Create a PowerShell 7.2 Runbook ([learn more](../automation/automation-runbook-types.md#powershell-runbooks)) and link the Runbook to a webhook ([learn more](../automation/automation-webhooks.md)).
+ 1. **Azure Function** - Create an Azure Function. For more information, see [Create your first function in the Azure portal](../azure-functions/functions-create-function-app-portal.md).
+1. **Pre and post event** - You can follow the steps shared in the following section to create a pre and post event for a scheduled maintenance configuration; a command-line sketch of the Event Grid wiring also follows this list. For more information in the Basics tab of Event
+++
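The following Azure CLI sketch shows one way the Event Grid wiring can look from the command line. The resource names, subscription ID, and webhook URL are placeholders, and the `Microsoft.Maintenance.MaintenanceConfigurations` topic type is an assumption to verify against your environment; the portal steps in the following sections remain the documented path.

```bash
# Create an Event Grid system topic for an existing maintenance configuration (all names are placeholders)
az eventgrid system-topic create \
  --resource-group myResourceGroup \
  --name myMaintenanceTopic \
  --location <location-of-the-maintenance-configuration> \
  --topic-type Microsoft.Maintenance.MaintenanceConfigurations \
  --source /subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Maintenance/maintenanceConfigurations/myMaintenanceConfig

# Subscribe a webhook (for example, an Azure Automation runbook webhook) to the topic for pre/post events
az eventgrid system-topic event-subscription create \
  --resource-group myResourceGroup \
  --system-topic-name myMaintenanceTopic \
  --name myPreOrPostEventSubscription \
  --endpoint "<webhook-url>"
```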
-Pre and post events allows you to execute user-defined actions before and after the schedule patch installation. This article describes on how to create, view, and cancel the pre and post events in Azure Update Manager.
## Register your subscription for public preview
update-manager Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/support-matrix.md
description: This article provides a summary of supported regions and operating
Previously updated : 05/24/2024 Last updated : 06/30/2024
We support VMs created from customized images (including images uploaded to [Azu
|**Linux operating system**| ||
- |CentOS 7 |
|Oracle Linux 7.x, 8x| |Red Hat Enterprise 7, 8, 9| |SUSE Linux Enterprise Server 12.x, 15.0-15.4|
The following table lists the operating systems supported on [Azure Arc-enabled
| Amazon Linux 2023 | | Windows Server 2012 R2 and higher (including Server Core) | | Ubuntu 16.04, 18.04, 20.04, and 22.04 LTS |
- | CentOS Linux 7 and 8 (x64) |
| SUSE Linux Enterprise Server (SLES) 12 and 15 (x64) | | Red Hat Enterprise Linux (RHEL) 7, 8, 9 (x64) | | Amazon Linux 2 (x64) |
virtual-machines Azure Hpc Vm Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/azure-hpc-vm-images.md
This article shares some information on HPC VM images to be used to launch Infin
The Azure HPC team is pleased to announce the availability of optimized and pre-configured Linux VM images for HPC and AI workloads. These VM images are: -- Based on the vanilla Ubuntu and AlmaLinux marketplace VM images.
+- Based on upstream Ubuntu and AlmaLinux marketplace VM images.
- Pre-configured with NVIDIA Mellanox OFED driver for InfiniBand, NVIDIA GPU drivers, popular MPI libraries, vendor tuned HPC libraries, and recommended performance optimizations. - Including optimizations and recommended configurations to deliver optimal performance, consistency, and reliability.
virtual-machines Iaas Antimalware Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/iaas-antimalware-windows.md
vm-windows Previously updated : 04/10/2023 Last updated : 06/28/2024 # Microsoft Antimalware Extension for Windows
virtual-machines Expand Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/expand-disks.md
# Expand virtual hard disks on a Linux VM
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
- **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
-This article describes how to expand managed disks for a Linux virtual machine (VM). You can [add data disks](add-disk.md) to provide for additional storage space, and you can also expand an existing data disk. The default virtual hard disk size for the operating system (OS) is typically 30 GB on a Linux VM in Azure. This article covers expanding either OS disks or data disks. You can't expand the size of striped volumes.
+This article describes how to expand managed disks for a Linux virtual machine (VM). You can [add data disks](add-disk.md) to provide more storage space, and you can also expand an existing data disk. The default virtual hard disk size for the operating system (OS) is typically 30 GB on a Linux VM in Azure. This article covers expanding either OS disks or data disks. You can't expand the size of striped volumes.
-An OS disk has a maximum capacity of 4,095 GiB. However, many operating systems are partitioned with [master boot record (MBR)](https://wikipedia.org/wiki/Master_boot_record) by default. MBR limits the usable size to 2 TiB. If you need more than 2 TiB, create and attach data disks and use them for data storage. If you need to store data on the OS disk and require the additional space, convert it to GUID Partition Table (GPT).
+An OS disk has a maximum capacity of 4,095 GiB. However, many operating systems are partitioned with [master boot record (MBR)](https://wikipedia.org/wiki/Master_boot_record) by default. MBR limits the usable size to 2 TiB. If you need more than 2 TiB, consider attaching data disks for data storage. If you do need to store data on the OS disk and require extra space, convert it to GUID Partition Table (GPT).
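To see which partition table type a disk currently uses before you decide whether the 2-TiB MBR limit applies, a quick check like the following can help. This assumes `lsblk` and `parted` are installed, and `/dev/sda` is a placeholder for the disk you're examining.

```bash
# Show the partition table type (gpt or dos) for each disk and partition
lsblk -o NAME,SIZE,TYPE,PTTYPE

# Inspect a single disk in more detail, including its partition table type
sudo parted /dev/sda print
```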
> [!WARNING] > Always make sure that your filesystem is in a healthy state, your disk partition table type (GPT or MBR) will support the new size, and ensure your data is backed up before you perform disk expansion operations. For more information, see the [Azure Backup quickstart](../../backup/quick-backup-vm-portal.md). ## <a id="identifyDisk"></a>Identify Azure data disk object within the operating system ##
-In the case of expanding a data disk when there are several data disks present on the VM, it may be difficult to relate the Azure LUNs to the Linux devices. If the OS disk needs expansion, it is clearly labeled in the Azure portal as the OS disk.
+When you expand a data disk on a VM that has several data disks present, it can be difficult to relate the Azure LUNs to the Linux devices. If the OS disk needs expansion, it is clearly labeled in the Azure portal as the OS disk.
Start by identifying the relationship between disk utilization, mount point, and device, with the ```df``` command.
Filesystem Type Size Used Avail Use% Mounted on
/dev/sde1 ext4 32G 49M 30G 1% /opt/db/log ```
-Here we can see, for example, the `/opt/db/data` filesystem is nearly full, and is located on the `/dev/sdd1` partition. The output of `df` shows the device path regardless of whether the disk is mounted by device path or the (preferred) UUID in the fstab. Also take note of the Type column, indicating the format of the filesystem. This is important later.
+Here we can see, for example, the `/opt/db/data` filesystem is nearly full, and is located on the `/dev/sdd1` partition. The output of `df` shows the device path whether the disk is mounted using the device path or the (preferred) UUID in the fstab. Also take note of the Type column, indicating the format of the filesystem. The format is important later.
-Now locate the LUN that correlates to `/dev/sdd` by examining the contents of `/dev/disk/azure/scsi1`. The output of the following `ls` command shows that the device known as `/dev/sdd` within the Linux OS is located at LUN1 when looking in the Azure portal.
+Now locate the LUN that correlates to `/dev/sdd` by examining the contents of `/dev/disk/azure/scsi1`. The output of the following `ls` command shows that the device known as `/dev/sdd` within the Linux OS is located at LUN1 when looking in the Azure portal.
```bash sudo ls -alF /dev/disk/azure/scsi1/
In the following samples, replace example parameter names such as *myResourceGro
## Expand a disk partition and filesystem > [!NOTE]
-> While there are many tools that may be used for performing the partition resizing, the tools detailed in the remainder of this document are the same tools used by certain automated processes such as cloud-init. As described here, the `growpart` tool with the `gdisk` package provides universal compatibility with GUID Partition Table (GPT) disks, as older versions of some tools such as `fdisk` did not support GPT.
+> While there are many tools that may be used for performing the partition resizing, the tools detailed in the remainder of this document are the same tools used by certain automated processes such as cloud-init. As described here, the `growpart` tool with the `gdisk` package provides universal compatibility with GUID Partition Table (GPT) disks, as older versions of some tools such as `fdisk` did not support GPT.
### Detecting a changed disk size
-If a data disk was expanded without downtime using the procedure mentioned previously, the disk size won't be changed until the device is rescanned, which normally only happens during the boot process. This rescan can be called on-demand with the following procedure. In this example we have detected using the methods in this document that the data disk is currently `/dev/sda` and has been resized from 256 GiB to 512 GiB.
+If a data disk was expanded without downtime using the procedure mentioned previously, the reported disk size doesn't change until the device is rescanned, which normally only happens during the boot process. This rescan can be called on-demand with the following procedure. In this example, we find using the methods in this document that the data disk is currently `/dev/sda` and was resized from 256 GiB to 512 GiB.
1. Identify the currently recognized size on the first line of output from `fdisk -l /dev/sda`
If a data disk was expanded without downtime using the procedure mentioned previ
/dev/sda1 2048 536870878 536868831 256G 83 Linux ```
-1. Insert a `1` character into the rescan file for this device. Note the reference to sda, this would change if a different disk device was resized.
+1. Insert a `1` character into the rescan file for this device. Note the reference to sda in the example. The disk identifier would change if a different disk device was resized.
```bash echo 1 | sudo tee /sys/class/block/sda/device/rescan ```
-1. Verify that the new disk size has been recognized
+1. Verify that the new disk size is now recognized
```bash sudo fdisk -l /dev/sda
If a data disk was expanded without downtime using the procedure mentioned previ
/dev/sda1 2048 536870878 536868831 256G 83 Linux ```
-The remainder of this article uses the OS disk for the examples of the procedure for increasing the size of a volume at the OS level. If the expanded disk is a data disk, use the [previous guidance for identifying the data disk device](#identifyDisk), and follow these instructions as a guideline, substituting the data disk device (for example `/dev/sda`), partition numbers, volume names, mount points, and filesystem formats, as necessary.
+The remainder of this article uses the OS disk for the examples of the procedure for increasing the size of a volume at the OS level. If the expanded disk is a data disk, use the [previous guidance for identifying the data disk device](#identifyDisk), and follow these instructions as a guideline, substituting the data disk device (for example `/dev/sda`), partition numbers, volume names, mount points, and filesystem formats, as necessary.
-All Linux OS guidance should be viewed as generic and may apply on any distribution, but generally matches the conventions of the named marketplace publisher. Reference the Red Hat documents for the package requirements on any distribution claiming Red Hat compatibility, such as CentOS and Oracle.
+All Linux OS guidance should be viewed as generic and may apply to any distribution, but it generally matches the conventions of the named marketplace publisher. Reference the Red Hat documents for the package requirements on any distribution based on Red Hat or claiming Red Hat compatibility.
### Increase the size of the OS disk
The following instructions apply to endorsed Linux distributions.
# [Ubuntu](#tab/ubuntu)
-On Ubuntu 16.x and newer, the root partition of the OS disk and filesystems will be automatically expanded to utilize all free contiguous space on the root disk by cloud-init, provided there's a small bit of free space for the resize operation. For this circumstance the sequence is simply
+On Ubuntu 16.x and newer, the root partition of the OS disk and filesystems are automatically expanded to utilize all free contiguous space on the root disk by cloud-init, provided there's a small bit of free space for the resize operation. In this case, the sequence is simply
1. Increase the size of the OS disk as detailed previously 1. Restart the VM, and then access the VM using the **root** user account. 1. Verify that the OS disk now displays an increased file system size.
-As shown in the following example, the OS disk has been resized from the portal to 100 GB. The **/dev/sda1** file system mounted on **/** now displays 97 GB.
+As shown in the following example, the OS disk was resized from the portal to 100 GB. The **/dev/sda1** file system mounted on **/** now displays 97 GB.
```bash df -Th
user@ubuntu:~#
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15, and SUSE SLES 15 for SAP:
-1. Follow the procedure above to expand the disk in the Azure infrastructure.
+1. Follow the procedure previously described to expand the disk in the Azure infrastructure.
1. Access your VM as the **root** user by using the ```sudo``` command after logging in as another user:
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
sudo -i ```
-1. Use the following command to install the **growpart** package, which will be used to resize the partition, if it isn't already present:
+1. Use the following command to install the **growpart** package, which is used to resize the partition, if it isn't already present:
```bash zypper install growpart
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
CHANGED: partition=4 start=3151872 old: size=59762655 end=62914527 new: size=97511391 end=100663263 ```
-1. Run the `lsblk` command again to check whether the partition has been increased.
+1. Run the `lsblk` command again to check whether the partition was increased.
- The following output shows that the **/dev/sda4** partition has been resized to 46.5 GB:
+ The following output shows that the **/dev/sda4** partition was resized to 46.5 GB:
```bash lsblk
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
tmpfs tmpfs 92M 0 92M 0% /run/user/490 ```
- In the preceding example, we can see that the file system size for the OS disk has been increased.
+ In the preceding example, we can see that the file system size for the OS disk was increased.
-# [Red Hat/CentOS with LVM](#tab/rhellvm)
+# [Red Hat with LVM](#tab/rhellvm)
-1. Follow the procedure above to expand the disk in the Azure infrastructure.
+1. Follow the procedure previously described to expand the disk in the Azure infrastructure.
1. Access your VM as the **root** user by using the ```sudo``` command after logging in as another user:
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
ΓööΓöÇrootvg-rootlv xfs 4f3e6f40-61bf-4866-a7ae-5c6a94675193 / ```
-1. Check whether there's free space in the LVM volume group (VG) containing the root partition. If there's free space, skip to step 12.
+1. Check whether there's free space in the LVM volume group (VG) containing the root partition. If there's free space, skip to step 12.
```bash vgdisplay rootvg
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
In this example, the line **Free PE / Size** shows that there's 38.02 GB free in the volume group, as the disk has already been resized.
-1. Install the **cloud-utils-growpart** package to provide the **growpart** command, which is required to increase the size of the OS disk and the gdisk handler for GPT disk layouts This package is preinstalled on most marketplace images
+1. Install the **cloud-utils-growpart** package to provide the **growpart** command, which is required to increase the size of the OS disk, and the **gdisk** handler for GPT disk layouts. These packages are preinstalled on most marketplace images.
```bash
- yum install cloud-utils-growpart gdisk
+ dnf install cloud-utils-growpart gdisk
```
- In RHEL/CentOS 8.x VMs you can use `dnf` command instead of `yum`.
+ In Red Hat version 7 and earlier, you can use the `yum` command instead of `dnf`.
1. Determine which disk and partition holds the LVM physical volume (PV) or volumes in the volume group named **rootvg** by using the **pvscan** command. Note the size and free space listed between the brackets (**[** and **]**).
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
CHANGED: partition=4 start=2054144 old: size=132161536 end=134215680 new: size=199272414 end=201326558 ```
-1. Verify that the partition has resized to the expected size by using the `lsblk` command again. Notice that in the example **sda4** has changed from 63G to 95G.
+1. Verify that the partition was resized to the expected size by using the `lsblk` command again. Notice that in the example **sda4** changed from 63G to 95G.
```bash lsblk /dev/sda4
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
PV /dev/sda4 VG rootvg lvm2 [<95.02 GiB / <70.02 GiB free] ```
-1. Expand the LV by the required amount, which doesn't need to be all the free space in the volume group. In the following example, **/dev/mapper/rootvg-rootlv** is resized from 2 GB to 12 GB (an increase of 10 GB) through the following command. This command will also resize the file system on the LV.
+1. Expand the LV by the required amount, which doesn't need to be all the free space in the volume group. In the following example, **/dev/mapper/rootvg-rootlv** is resized from 2 GB to 12 GB (an increase of 10 GB) through the following command. This command also resizes the file system on the LV.
```bash lvresize -r -L +10G /dev/mapper/rootvg-rootlv
> [!NOTE] > To use the same procedure to resize any other logical volume, change the **lv** name in step **12**.
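   For example, a hypothetical logical volume named **rootvg-varlv** (not part of the walkthrough above) could be extended with the same command; only the LV path changes:

   ```bash
   # Hypothetical example: extend a different LV by 5 GB and grow its file system
   # in one step (-r). Adjust the LV path to match your own system.
   lvresize -r -L +5G /dev/mapper/rootvg-varlv
   ```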
-# [Red Hat/CentOS without LVM](#tab/rhelraw)
+# [Red Hat without LVM](#tab/rhelraw)
-1. Follow the procedure above to expand the disk in the Azure infrastructure.
+1. Follow the procedure previously described to expand the disk in the Azure infrastructure.
1. Access your VM as the **root** user by using the ```sudo``` command after logging in as another user:
sudo -i ```
-1. When the VM has restarted, perform the following steps:
+1. After the VM has finished restarting, perform the following steps:
1. Install the **cloud-utils-growpart** package to provide the **growpart** command, which is required to increase the size of the OS disk, and the gdisk handler for GPT disk layouts. This package is preinstalled on most marketplace images. ```bash
- yum install cloud-utils-growpart gdisk
+ dnf install cloud-utils-growpart gdisk
```
- In RHEL/CentOS 8.x VMs you can use `dnf` command instead of `yum`.
-
+ In Red Hat version 7 and earlier, you can use the `yum` command instead of `dnf`.
+
1. Use the **lsblk -f** command to verify the partition and filesystem type holding the root (**/**) partition. ```bash
└─sdb1 ext4 923f51ff-acbd-4b91-b01b-c56140920098 /mnt/resource ```
-1. For verification, start by listing the partition table of the sda disk with **gdisk**. In this example, we see a 48.0 GiB disk with partition #2 sized 29.0 GiB. The disk was expanded from 30 GB to 48 GB in the Azure portal.
+1. For verification, start by listing the partition table of the sda disk with **gdisk**. In this example, we see a 48.0 GiB disk with partition #2 sized 29.0 GiB. The disk was expanded from 30 GB to 48 GB in the Azure portal.
```bash gdisk -l /dev/sda
15 10240 1024000 495.0 MiB EF00 EFI System Partition ```
-1. Expand the partition for root, in this case sda2 by using the **growpart** command. Using this command expands the partition to use all of the contiguous space on the disk.
+1. Expand the partition for root, in this case sda2, by using the **growpart** command. This command expands the partition to use all of the contiguous space on the disk.
```bash growpart /dev/sda 2
CHANGED: partition=2 start=2050048 old: size=60862464 end=62912512 new: size=98613214 end=100663262 ```
-1. Now print the new partition table with **gdisk** again. Notice that partition 2 has is now sized 47.0 GiB
+1. Now print the new partition table with **gdisk** again. Notice that partition 2 is now sized 47.0 GiB.
```bash gdisk -l /dev/sda
virtual-network Configure Public Ip Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/configure-public-ip-firewall.md
In this section, you add a public IP configuration to Azure Firewall. For more i
This example is a simple deployment of Azure Firewall. For advanced configuration and setup, see [Tutorial: Deploy and configure Azure Firewall and policy by using the Azure portal](../../firewall/tutorial-firewall-deploy-portal-policy.md). You can associate an Azure firewall with a network address translation (NAT) gateway to extend the scalability of source network address translation (SNAT). A NAT gateway can be used to provide outbound connectivity associated with the firewall. With this configuration, all outbound traffic uses the public IP address or addresses of the NAT gateway. For more information, see [Scale SNAT ports with Azure Virtual Network NAT](../../firewall/integrate-with-nat-gateway.md). > [!NOTE]
-> Azure firewall uses the Standard SKU load balancer. Protocols other than Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) in network filter rules are unsupported for SNAT to the public IP of the firewall.
+> Azure Firewall randomly selects one of its associated public IP addresses for outbound connectivity, and it uses the next available public IP address only after no more connections can be made from the current one because of SNAT port exhaustion. We recommend using a NAT gateway instead to provide dynamic scalability for your outbound connectivity.
+> Protocols other than Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) in network filter rules are unsupported for SNAT to the public IP of the firewall.
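To follow the NAT gateway recommendation in the preceding note, associate an existing NAT gateway with the firewall's subnet. The following Azure CLI sketch is illustrative only; the resource group, virtual network, and NAT gateway names are assumed, while **AzureFirewallSubnet** is the required subnet name for Azure Firewall.

```bash
# Sketch only: attach an existing NAT gateway to the AzureFirewallSubnet so that
# outbound traffic is SNATed through the NAT gateway's public IP addresses.
# Replace myResourceGroup, myVNet, and myNATgateway with your own names.
az network vnet subnet update \
    --resource-group myResourceGroup \
    --vnet-name myVNet \
    --name AzureFirewallSubnet \
    --nat-gateway myNATgateway
```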
> You can integrate an Azure firewall with the Standard SKU load balancer to protect backend pool resources. If you associate the firewall with a public load balancer, configure ingress traffic to be directed to the firewall public IP address. Configure egress via a user-defined route to the firewall public IP address. For more information and setup instructions, see [Integrate Azure Firewall with Azure Standard Load Balancer](../../firewall/integrate-lb.md). ## Next steps