Updates from: 01/19/2024 02:21:36
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Configure A Sample Node Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-a-sample-node-web-app.md
Title: Configure authentication in a sample Node.js web application by using Azure Active Directory B2C (Azure AD B2C)
-description: This article discusses how to use Azure Active Directory B2C to sign in and sign up users in a Node.js web application.
+description: This article discusses how to use Azure Active Directory B2C to sign in and sign up users in a Node.js web application.
-+ Last updated 01/11/2024
# Configure authentication in a sample Node.js web application by using Azure Active Directory B2C
-This article uses a sample Node.js application to show how to add Azure Active Directory B2C (Azure AD B2C) authentication to a Node.js web application. The sample application enables users to sign in, sign out, update their profile, and reset their password by using Azure AD B2C user flows. The sample web application uses [Microsoft Authentication Library (MSAL) for Node](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node) to handle authentication and authorization.
+This article uses a sample Node.js application to show how to add Azure Active Directory B2C (Azure AD B2C) authentication to a Node.js web application. The sample application enables users to sign in, sign out, update their profile, and reset their password by using Azure AD B2C user flows. The sample web application uses [Microsoft Authentication Library (MSAL) for Node](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node) to handle authentication and authorization.
In this article, you'll do the following tasks:

- Register a web application in the Azure portal.
- Create combined **Sign in and sign up**, **Profile editing**, and **Password reset** user flows for the app in the Azure portal.
- Update a sample Node application to use your own Azure AD B2C application and user flows.
- Test the sample application.
## Prerequisites
In this article, you'll do the following tasks:
## Step 1: Configure your user flows

## Step 2: Register a web application
-To enable your application to sign in with Azure AD B2C, register your app in the Azure AD B2C directory. The app registration establishes a trust relationship between the app and Azure AD B2C.
+To enable your application to sign in with Azure AD B2C, register your app in the Azure AD B2C directory. The app registration establishes a trust relationship between the app and Azure AD B2C.
-During app registration, you'll specify the *Redirect URI*. The redirect URI is the endpoint to which the user is redirected by Azure AD B2C after they authenticate with Azure AD B2C. The app registration process generates an *Application ID*, also known as the *client ID*, that uniquely identifies your app. After your app is registered, Azure AD B2C uses both the application ID and the redirect URI to create authentication requests.
+During app registration, you'll specify the *Redirect URI*. The redirect URI is the endpoint to which the user is redirected by Azure AD B2C after they authenticate with Azure AD B2C. The app registration process generates an *Application ID*, also known as the *client ID*, that uniquely identifies your app. After your app is registered, Azure AD B2C uses both the application ID and the redirect URI to create authentication requests.
-### Step 2.1: Register the app
+### Step 2.1: Register the app
To register the web app, follow these steps:
1. In the Azure portal, search for and select **Azure AD B2C**.
1. Select **App registrations**, and then select **New registration**.
1. Under **Name**, enter a name for the application (for example, *webapp1*).
-1. Under **Supported account types**, select **Accounts in any identity provider or organizational directory (for authenticating users with user flows)**.
+1. Under **Supported account types**, select **Accounts in any identity provider or organizational directory (for authenticating users with user flows)**.
1. Under **Redirect URI**, select **Web** and then, in the URL box, enter `http://localhost:3000/redirect`.
1. Under **Permissions**, select the **Grant admin consent to openid and offline_access permissions** checkbox.
1. Select **Register**.
The `views` folder contains Handlebars files for the application's user interfac
## Step 5: Configure the sample web app
-Open your web app in a code editor such as Visual Studio Code. Under the project root folder, open the *.env* file. This file contains information about your Azure AD B2C identity provider. Update the following app settings properties:
+Open your web app in a code editor such as Visual Studio Code. Under the project root folder, open the *.env* file. This file contains information about your Azure AD B2C identity provider. Update the following app settings properties:
|Key |Value |
|||
Your final configuration file should look like the following sample:
You can now test the sample app. You need to start the Node server and access it through your browser on `http://localhost:3000`.

1. In your terminal, run the following code to start the Node.js web server:
-
+ ```bash
+ node index.js
+ ```
You can now test the sample app. You need to start the Node server and access it
### Test profile editing
-1. After you sign in, select **Edit profile**.
-1. Enter new changes as required, and then select **Continue**. You should see the page with sign-in status with the new changes, such as **Given Name**.
+1. After you sign in, select **Edit profile**.
+1. Enter new changes as required, and then select **Continue**. You should see the page with sign-in status with the new changes, such as **Given Name**.
### Test password reset
-1. After you sign in, select **Reset password**.
+1. After you sign in, select **Reset password**.
1. In the next dialog that appears, you can cancel the operation by selecting **Cancel**. Alternatively, enter your email address, and then select **Send verification code**. You'll receive a verification code to your email account. Copy the verification code in your email, enter it into the password reset dialog, and then select **Verify code**.
1. Select **Continue**.
1. Enter your new password, confirm it, and then select **Continue**. You should see the page that shows sign-in status.

### Test sign-out
-After you sign in, select **Sign out**. You should see the page that has a **Sign in** button.
+After you sign in, select **Sign out**. You should see the page that has a **Sign in** button.
## Next steps
active-directory-b2c Enable Authentication Python Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-python-web-app.md
Title: Enable authentication in your own Python web application using Azure Active Directory B2C
-description: This article explains how to enable authentication in your own Python web application using Azure AD B2C
+description: This article explains how to enable authentication in your own Python web application using Azure AD B2C
-+ Last updated 01/11/2024
This article uses [Python 3.9+](https://www.python.org/) and [Flask 2.1](https:/
1. On your file system, create a project folder for this tutorial, such as `my-python-web-app`.
1. In your terminal, change directory into your Python app folder, such as `cd my-python-web-app`.
1. Run the following command to create and activate a virtual environment named `.venv` based on your current interpreter.
-
- # [Linux](#tab/linux)
-
+
+ # [Linux](#tab/linux)
+ ```bash
+ sudo apt-get install python3-venv # If needed
+ python3 -m venv .venv
This article uses [Python 3.9+](https://www.python.org/) and [Flask 2.1](https:/
```

# [macOS](#tab/macos)
-
+ ```zsh
+ python3 -m venv .venv
+ source .venv/bin/activate
+ ```
-
+ # [Windows](#tab/windows)
-
+ ```cmd
+ py -3 -m venv .venv
+ .venv\scripts\activate
This article uses [Python 3.9+](https://www.python.org/) and [Flask 2.1](https:/
```
python -m pip install --upgrade pip
- ```
+ ```
1. To enable the Flask debug features, switch Flask to the `development` environment. For more information about debugging Flask apps, check out the [Flask documentation](https://flask.palletsprojects.com/en/2.1.x/config/#environment-and-debug-features).
- # [Linux](#tab/linux)
-
+ # [Linux](#tab/linux)
+ ```bash
+ export FLASK_ENV=development
+ ```
+
+ # [macOS](#tab/macos)
-
+ ```zsh
+ export FLASK_ENV=development
+ ```
-
+ # [Windows](#tab/windows)
-
+ ```cmd
+ set FLASK_ENV=development
+ ```
msal>=1.7,<2
In your terminal, install the dependencies by running the following commands:
-# [Linux](#tab/linux)
+# [Linux](#tab/linux)
```bash
python -m pip install -r requirements.txt
py -m pip install -r requirements.txt
-## Step 3: Build app UI components
+## Step 3: Build app UI components
-Flask is a lightweight Python framework for web applications that provides the basics for URL routing and page rendering. It leverages Jinja2 as its template engine to render the content of your app. For more information, check out the [template designer documentation](https://jinja.palletsprojects.com/en/3.1.x/templates/). In this section, you add the required templates that provide the basic functionality of your web app.
+Flask is a lightweight Python framework for web applications that provides the basics for URL routing and page rendering. It leverages Jinja2 as its template engine to render the content of your app. For more information, check out the [template designer documentation](https://jinja.palletsprojects.com/en/3.1.x/templates/). In this section, you add the required templates that provide the basic functionality of your web app.
### Step 3.1 Create a base template
Add the following templates under the templates folder. These templates extend t
{% extends "base.html" %}
{% block title %}Home{% endblock %}
{% block content %}
-
+ <h1>Microsoft Identity Python Web App</h1>
-
+ {% if user %}
+ <h2>Claims:</h2>
+ <pre>{{ user |tojson(indent=4) }}</pre>
-
-
+
+ {% if config.get("ENDPOINT") %}
+ <li><a href='/graphcall'>Call Microsoft Graph API</a></li>
+ {% endif %}
-
+ {% if config.get("B2C_PROFILE_AUTHORITY") %}
+ <li><a href='{{_build_auth_code_flow(authority=config["B2C_PROFILE_AUTHORITY"])["auth_uri"]}}'>Edit Profile</a></li>
+ {% endif %}
-
+ <li><a href="/logout">Logout</a></li>
-
+ {% else %}
+ <li><a href='{{ auth_url }}'>Sign In</a></li>
+ {% endif %}
-
+ {% endblock %}
+ ```
Add the following templates under the templates folder. These templates extend t
```html
{% extends "base.html" %}
{% block title%}Error{% endblock%}
-
+ {% block metadata %}
+ {% if config.get("B2C_RESET_PASSWORD_AUTHORITY") and "AADB2C90118" in result.get("error_description") %}
+ <!-- See also https://learn.microsoft.com/azure/active-directory-b2c/active-directory-b2c-reference-policies#linking-user-flows -->
Add the following templates under the templates folder. These templates extend t
content='0;{{_build_auth_code_flow(authority=config["B2C_RESET_PASSWORD_AUTHORITY"])["auth_uri"]}}'>
{% endif %}
{% endblock %}
-
+ {% block content %}
+ <h2>Login Failure</h2>
+ <dl>
+ <dt>{{ result.get("error") }}</dt>
+ <dd>{{ result.get("error_description") }}</dd>
+ </dl>
-
+ <a href="{{ url_for('index') }}">Homepage</a>
+ {% endblock %}
+ ```
B2C_PROFILE_AUTHORITY = authority_template.format(
B2C_RESET_PASSWORD_AUTHORITY = authority_template.format(
    tenant=b2c_tenant, user_flow=resetpassword_user_flow)
-REDIRECT_PATH = "/getAToken"
+REDIRECT_PATH = "/getAToken"
# This is the API resource endpoint
ENDPOINT = '' # Application ID URI of app registration in Azure portal
if __name__ == "__main__":
In the Terminal, run the app by entering the following command, which runs the Flask development server. The development server looks for `app.py` by default. Then, open your browser and navigate to the web app URL: `http://localhost:5000`.
-# [Linux](#tab/linux)
+# [Linux](#tab/linux)
```bash
python -m flask run --host localhost --port 5000
active-directory-b2c Partner Eid Me https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-eid-me.md
To configure your tenant application as an eID-ME relying party in eID-Me, suppl
| Application privacy policy URL| Appears to the end user|

>[!NOTE]
->When the relying party is configurede, ID-Me provides a Client ID and a Client Secret. Note the Client ID and Client Secret to configure the identity provider (IdP) in Azure AD B2C.
+>When the relying party is configured, eID-Me provides a Client ID and a Client Secret. Note the Client ID and Client Secret to configure the identity provider (IdP) in Azure AD B2C.
## Add a new Identity provider in Azure AD B2C
ai-services Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/manage-costs.md
Enabling capabilities such as sending data to Azure Monitor Logs and alerting in
You can pay for Azure OpenAI Service charges with your Azure Prepayment credit. However, you can't use Azure Prepayment credit to pay for charges for third party products and services including those products and services found in the Azure Marketplace.
+### HTTP Error response code and billing status in Azure OpenAI Service
+
+If the service performs processing, you might be charged even if the status code isn't successful (not 200).
+For example, a 400 error due to a content filter or input limit, or a 408 error due to a timeout.
+
+If the service doesn't perform processing, you aren't charged.
+For example, a 401 error due to authentication or a 429 error due to exceeding the rate limit.
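To make the billing behavior concrete, here's a minimal Python sketch, assuming the `requests` package and placeholder resource, deployment, key, and API version values, that separates the billable and non-billable error cases described above:

```python
import requests

# Placeholder resource, deployment, key, and API version values -- replace with your own.
endpoint = "https://YOUR-RESOURCE.openai.azure.com/openai/deployments/YOUR-DEPLOYMENT/chat/completions"
params = {"api-version": "2024-02-01"}
headers = {"api-key": "YOUR-KEY", "Content-Type": "application/json"}
body = {"messages": [{"role": "user", "content": "Hello"}]}

response = requests.post(endpoint, params=params, headers=headers, json=body)

if response.status_code == 200:
    print("Success: the tokens processed in this call are billable.")
elif response.status_code in (400, 408):
    # The service started processing (for example, content filtering or a timeout),
    # so charges can still apply.
    print(f"Failed after processing: {response.status_code} {response.text}")
elif response.status_code in (401, 429):
    # The request was rejected before any processing (authentication or rate limit),
    # so no charge applies.
    print(f"Rejected before processing: {response.status_code}")
```

Treat the status-code groupings as illustrative; your Azure bill remains the authoritative record of charges.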
+
## Monitor costs

As you use Azure resources with Azure OpenAI, you incur costs. Azure resource usage unit costs vary by time intervals, such as seconds, minutes, hours, and days, or by unit usage, such as bytes and megabytes. As soon as Azure OpenAI use starts, costs can be incurred and you can see the costs in the [cost analysis](../../../cost-management/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
ai-services Audio Processing Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/audio-processing-overview.md
Previously updated : 09/07/2022 Last updated : 1/18/2024
The Microsoft Audio Stack is a set of enhancements optimized for speech processi
* **Beamforming** - Localize the origin of sound and optimize the audio signal using multiple microphones.
* **Dereverberation** - Reduce the reflections of sound from surfaces in the environment.
* **Acoustic echo cancellation** - Suppress audio being played out of the device while microphone input is active.
-* **Automatic gain control** - Dynamically adjust the person's voice level to account for soft speakers, long distances, or non-calibrated microphones.
+* **Automatic gain control** - Dynamically adjust the person's voice level to account for soft speakers, long distances, or noncalibrated microphones.
[ ![Block diagram of Microsoft Audio Stack's enhancements.](media/audio-processing/mas-block-diagram.png) ](media/audio-processing/mas-block-diagram.png#lightbox)
-Different scenarios and use-cases can require different optimizations that influence the behavior of the audio processing stack. For example, in telecommunications scenarios such as telephone calls, it is acceptable to have minor distortions in the audio signal after processing has been applied. This is because humans can continue to understand the speech with high accuracy. However, it is unacceptable and disruptive for a person to hear their own voice in an echo. This contrasts with speech processing scenarios, where distorted audio can adversely impact a machine-learned speech recognition modelΓÇÖs accuracy, but it is acceptable to have minor levels of echo residual.
+Different scenarios and use-cases can require different optimizations that influence the behavior of the audio processing stack. For example, in telecommunications scenarios such as telephone calls, it's acceptable to have minor distortions in the audio signal after processing has been applied. This is because humans can continue to understand the speech with high accuracy. However, it's unacceptable and disruptive for a person to hear their own voice in an echo. This contrasts with speech processing scenarios, where distorted audio can adversely affect a machine-learned speech recognition model's accuracy, but it's acceptable to have minor levels of echo residual.
Processing is performed fully locally where the Speech SDK is being used. No audio data is streamed to Microsoft's cloud services for processing by the Microsoft Audio Stack. The only exception to this is for the Conversation Transcription Service, where raw audio is sent to Microsoft's cloud services for processing.
The Microsoft Audio Stack also powers a wide range of Microsoft products:
The Speech SDK integrates Microsoft Audio Stack (MAS), allowing any application or product to use its audio processing capabilities on input audio. Some of the key Microsoft Audio Stack features available via the Speech SDK include:

* **Real-time microphone input & file input** - Microsoft Audio Stack processing can be applied to real-time microphone input, streams, and file-based input.
-* **Selection of enhancements** - To allow for full control of your scenario, the SDK allows you to disable individual enhancements like dereverberation, noise suppression, automatic gain control, and acoustic echo cancellation. For example, if your scenario does not include rendering output audio that needs to be suppressed from the input audio, you have the option to disable acoustic echo cancellation.
+* **Selection of enhancements** - To allow for full control of your scenario, the SDK allows you to disable individual enhancements like dereverberation, noise suppression, automatic gain control, and acoustic echo cancellation. For example, if your scenario doesn't include rendering output audio that needs to be suppressed from the input audio, you have the option to disable acoustic echo cancellation.
* **Custom microphone geometries** - The SDK allows you to provide your own custom microphone geometry information, in addition to supporting preset geometries like linear two-mic, linear four-mic, and circular 7-mic arrays (see more information on supported preset geometries at [Microphone array recommendations](speech-sdk-microphone.md#microphone-geometry)).
* **Beamforming angles** - Specific beamforming angles can be provided to optimize audio input originating from a predetermined location, relative to the microphones.
ai-services Audio Processing Speech Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/audio-processing-speech-sdk.md
Previously updated : 09/16/2022 Last updated : 1/18/2024 ms.devlang: cpp
-# ms.devlang: cpp, csharp, java
SpeechRecognizer recognizer = new SpeechRecognizer(speechConfig, audioInput);
## Custom microphone geometry

This sample shows how to use MAS with a custom microphone geometry on a specified audio input device. In this example:
-* **Enhancement options** - The default enhancements will be applied on the input audio stream.
+* **Enhancement options** - The default enhancements are applied on the input audio stream.
* **Custom geometry** - A custom microphone geometry for a 7-microphone array is provided via the microphone coordinates. The units for coordinates are millimeters.
* **Audio input** - The audio input is from a file, where the audio within the file is expected from an audio input device corresponding to the custom geometry specified.
SpeechRecognizer recognizer = new SpeechRecognizer(speechConfig, audioInput);
This sample shows how to use MAS with a custom set of enhancements on the input audio. By default, all enhancements are enabled but there are options to disable dereverberation, noise suppression, automatic gain control, and echo cancellation individually by using `AudioProcessingOptions`. In this example:
-* **Enhancement options** - Echo cancellation and noise suppression will be disabled, while all other enhancements remain enabled.
+* **Enhancement options** - Echo cancellation and noise suppression are disabled, while all other enhancements remain enabled.
* **Audio input device** - The audio input device is the default microphone of the device.

### [C#](#tab/csharp)
SpeechRecognizer recognizer = new SpeechRecognizer(speechConfig, audioInput);
## Specify beamforming angles

This sample shows how to use MAS with a custom microphone geometry and beamforming angles on a specified audio input device. In this example:
-* **Enhancement options** - The default enhancements will be applied on the input audio stream.
+* **Enhancement options** - The default enhancements are applied on the input audio stream.
* **Custom geometry** - A custom microphone geometry for a 4-microphone array is provided by specifying the microphone coordinates. The units for coordinates are millimeters.
-* **Beamforming angles** - Beamforming angles are specified to optimize for audio originating in that range. The units for angles are degrees. In the sample code below, the start angle is set to 70 degrees and the end angle is set to 110 degrees.
+* **Beamforming angles** - Beamforming angles are specified to optimize for audio originating in that range. The units for angles are degrees.
* **Audio input** - The audio input is from a push stream, where the audio within the stream is expected from an audio input device corresponding to the custom geometry specified.
+In the following code example, the start angle is set to 70 degrees and the end angle is set to 110 degrees.
+
### [C#](#tab/csharp)

```csharp
ai-services Batch Synthesis Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-synthesis-properties.md
Previously updated : 11/16/2022 Last updated : 1/18/2024
Batch synthesis properties are described in the following table.
|`description`|The description of the batch synthesis.<br/><br/>This property is optional.|
|`displayName`|The name of the batch synthesis. Choose a name that you can refer to later. The display name doesn't have to be unique.<br/><br/>This property is required.|
|`id`|The batch synthesis job ID.<br/><br/>This property is read-only.|
-|`inputs`|The plain text or SSML to be synthesized.<br/><br/>When the `textType` is set to `"PlainText"`, provide plain text as shown here: `"inputs": [{"text": "The rainbow has seven colors."}]`. When the `textType` is set to `"SSML"`, provide text in the [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md) as shown here: `"inputs": [{"text": "<speak version='\''1.0'\'' xml:lang='\''en-US'\''><voice xml:lang='\''en-US'\'' xml:gender='\''Female'\'' name='\''en-US-JennyNeural'\''>The rainbow has seven colors.</voice></speak>"}]`.<br/><br/>Include up to 1,000 text objects if you want multiple audio output files. Here's example input text that should be synthesized to two audio output files: `"inputs": [{"text": "synthesize this to a file"},{"text": "synthesize this to another file"}]`. However, if the `properties.concatenateResult` property is set to `true`, then each synthesized result will be written to the same audio output file.<br/><br/>You don't need separate text inputs for new paragraphs. Within any of the (up to 1,000) text inputs, you can specify new paragraphs using the "\r\n" (newline) string. Here's example input text with two paragraphs that should be synthesized to the same audio output file: `"inputs": [{"text": "synthesize this to a file\r\nsynthesize this to another paragraph in the same file"}]`<br/><br/>There are no paragraph limits, but keep in mind that the maximum JSON payload size (including all text inputs and other properties) that will be accepted is 500 kilobytes.<br/><br/>This property is required when you create a new batch synthesis job. This property isn't included in the response when you get the synthesis job.|
+|`inputs`|The plain text or SSML to be synthesized.<br/><br/>When the `textType` is set to `"PlainText"`, provide plain text as shown here: `"inputs": [{"text": "The rainbow has seven colors."}]`. When the `textType` is set to `"SSML"`, provide text in the [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md) as shown here: `"inputs": [{"text": "<speak version='\''1.0'\'' xml:lang='\''en-US'\''><voice xml:lang='\''en-US'\'' xml:gender='\''Female'\'' name='\''en-US-JennyNeural'\''>The rainbow has seven colors.</voice></speak>"}]`.<br/><br/>Include up to 1,000 text objects if you want multiple audio output files. Here's example input text that should be synthesized to two audio output files: `"inputs": [{"text": "synthesize this to a file"},{"text": "synthesize this to another file"}]`. However, if the `properties.concatenateResult` property is set to `true`, then each synthesized result is written to the same audio output file.<br/><br/>You don't need separate text inputs for new paragraphs. Within any of the (up to 1,000) text inputs, you can specify new paragraphs using the "\r\n" (newline) string. Here's example input text with two paragraphs that should be synthesized to the same audio output file: `"inputs": [{"text": "synthesize this to a file\r\nsynthesize this to another paragraph in the same file"}]`<br/><br/>There are no paragraph limits, but the maximum JSON payload size (including all text inputs and other properties) is 500 kilobytes.<br/><br/>This property is required when you create a new batch synthesis job. This property isn't included in the response when you get the synthesis job.|
|`lastActionDateTime`|The most recent date and time when the `status` property value changed.<br/><br/>This property is read-only.|
|`outputs.result`|The location of the batch synthesis result files with audio output and logs.<br/><br/>This property is read-only.|
|`properties`|A defined set of optional batch synthesis configuration settings.|
Batch synthesis properties are described in the following table.
|`properties.durationInTicks`|The audio output duration in ticks.<br/><br/>This property is read-only.|
|`properties.failedAudioCount`|The count of batch synthesis inputs for which audio output failed.<br/><br/>This property is read-only.|
|`properties.outputFormat`|The audio output format.<br/><br/>For information about the accepted values, see [audio output formats](rest-text-to-speech.md#audio-outputs). The default output format is `riff-24khz-16bit-mono-pcm`.|
-|`properties.sentenceBoundaryEnabled`|Determines whether to generate sentence boundary data. This optional `bool` value ("true" or "false") is "false" by default.<br/><br/>If sentence boundary data is requested, then a corresponding `[nnnn].sentence.json` file will be included in the results data ZIP file.|
+|`properties.sentenceBoundaryEnabled`|Determines whether to generate sentence boundary data. This optional `bool` value ("true" or "false") is "false" by default.<br/><br/>If sentence boundary data is requested, then a corresponding `[nnnn].sentence.json` file is included in the results data ZIP file.|
|`properties.succeededAudioCount`|The count of batch synthesis inputs for which audio output succeeded.<br/><br/>This property is read-only.|
|`properties.timeToLive`|A duration after the synthesis job is created, when the synthesis results will be automatically deleted. The value is an ISO 8601 encoded duration. For example, specify `PT12H` for 12 hours. This optional setting is `P31D` (31 days) by default. The maximum time to live is 31 days. The date and time of automatic deletion (for synthesis jobs with a status of "Succeeded" or "Failed") is equal to the `lastActionDateTime` + `timeToLive` properties.<br/><br/>Otherwise, you can call the [delete](./batch-synthesis.md#delete-batch-synthesis) synthesis method to remove the job sooner.|
-|`properties.wordBoundaryEnabled`|Determines whether to generate word boundary data. This optional `bool` value ("true" or "false") is "false" by default.<br/><br/>If word boundary data is requested, then a corresponding `[nnnn].word.json` file will be included in the results data ZIP file.|
+|`properties.wordBoundaryEnabled`|Determines whether to generate word boundary data. This optional `bool` value ("true" or "false") is "false" by default.<br/><br/>If word boundary data is requested, then a corresponding `[nnnn].word.json` file is included in the results data ZIP file.|
|`status`|The batch synthesis processing status.<br/><br/>The status should progress from "NotStarted" to "Running", and finally to either "Succeeded" or "Failed".<br/><br/>This property is read-only.|
|`synthesisConfig`|The configuration settings to use for batch synthesis of plain text.<br/><br/>This property is only applicable when `textType` is set to `"PlainText"`.|
|`synthesisConfig.pitch`|The pitch of the audio output.<br/><br/>For information about the accepted values, see the [adjust prosody](speech-synthesis-markup-voice.md#adjust-prosody) table in the Speech Synthesis Markup Language (SSML) documentation. Invalid values are ignored.<br/><br/>This optional property is only applicable when `textType` is set to `"PlainText"`.|
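As an illustration of how these properties fit together, here's a minimal Python sketch, assuming the `requests` package, placeholder region and key values, and the `3.1-preview1` endpoint shown in the curl examples later in this article; the `synthesisConfig.voice` value is an assumed example, so treat the payload as a sketch rather than a definitive request:

```python
import requests

# Placeholder region and key -- replace with your Speech resource values.
region = "YourSpeechRegion"
speech_key = "YourSpeechKey"
url = f"https://{region}.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis"

payload = {
    "displayName": "batch synthesis sample",
    "description": "two plain text inputs with word boundary data requested",
    "textType": "PlainText",
    "inputs": [
        {"text": "synthesize this to a file"},
        {"text": "synthesize this to another file"},
    ],
    # The voice name here is an assumed example value.
    "synthesisConfig": {"voice": "en-US-JennyNeural"},
    "properties": {
        "outputFormat": "riff-24khz-16bit-mono-pcm",
        "wordBoundaryEnabled": True,
    },
}

response = requests.post(
    url,
    headers={"Ocp-Apim-Subscription-Key": speech_key, "Content-Type": "application/json"},
    json=payload,
)
response.raise_for_status()
print(response.json()["id"])  # the read-only batch synthesis job ID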
The latency for batch synthesis is as follows (approximately):
### Best practices
-When considering batch synthesis for your application, it's recommended to assess whether the latency meets your requirements. If the latency aligns with your desired performance, batch synthesis can be a suitable choice. However, if the latency does not meet your needs, you might consider using real-time API.
+When considering batch synthesis for your application, assess whether the latency meets your requirements. If the latency aligns with your desired performance, batch synthesis can be a suitable choice. However, if the latency doesn't meet your needs, consider using the real-time API instead.
## HTTP status codes
Here are examples that can result in the 400 error:
- The number of requested text inputs exceeded the limit of 1,000.
- The `top` query parameter exceeded the limit of 100.
- You tried to use an invalid deployment ID or a custom voice that isn't successfully deployed. Make sure the Speech resource has access to the custom voice, and the custom voice is successfully deployed. You must also ensure that the mapping of `{"your-custom-voice-name": "your-deployment-ID"}` is correct in your batch synthesis request.
-- You tried to delete a batch synthesis job that hasn't started or hasn't completed running. You can only delete batch synthesis jobs that have a status of "Succeeded" or "Failed".
+- You tried to delete a batch synthesis job that isn't started or hasn't completed running. You can only delete batch synthesis jobs that have a status of "Succeeded" or "Failed".
- You tried to use an *F0* Speech resource, but the region only supports the *Standard* Speech resource pricing tier.
- You tried to create a new batch synthesis job that would exceed the limit of 200 active jobs. Each Speech resource can have up to 200 batch synthesis jobs that don't have a status of "Succeeded" or "Failed".
Here's an example request that results in an HTTP 400 error, because the `top` q
curl -v -X GET "https://YourSpeechRegion.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis?skip=0&top=200" -H "Ocp-Apim-Subscription-Key: YourSpeechKey" ```
-In this case, the response headers will include `HTTP/1.1 400 Bad Request`.
+In this case, the response headers include `HTTP/1.1 400 Bad Request`.
-The response body will resemble the following JSON example:
+The response body resembles the following JSON example:
```json {
ai-services Batch Synthesis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-synthesis.md
Previously updated : 11/16/2022 Last updated : 1/18/2024
The `values` property in the json response lists your synthesis requests. The li
## Delete batch synthesis
-Delete the batch synthesis job history after you retrieved the audio output results. The Speech service will keep each synthesis history for up to 31 days, or the duration of the request `timeToLive` property, whichever comes sooner. The date and time of automatic deletion (for synthesis jobs with a status of "Succeeded" or "Failed") is equal to the `lastActionDateTime` + `timeToLive` properties.
+Delete the batch synthesis job history after you retrieve the audio output results. The Speech service keeps batch synthesis history for up to 31 days, or the duration of the request `timeToLive` property, whichever comes sooner. The date and time of automatic deletion (for synthesis jobs with a status of "Succeeded" or "Failed") is equal to the `lastActionDateTime` + `timeToLive` properties.
To delete a batch synthesis job, make an HTTP DELETE request using the URI as shown in the following example. Replace `YourSynthesisId` with your batch synthesis ID, replace `YourSpeechKey` with your Speech resource key, and replace `YourSpeechRegion` with your Speech resource region.
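For illustration, here's a minimal Python sketch of that DELETE request, assuming the `requests` package and the same placeholder values:

```python
import requests

# Placeholder values -- replace with your own job ID, key, and region.
synthesis_id = "YourSynthesisId"
speech_key = "YourSpeechKey"
region = "YourSpeechRegion"

url = (
    f"https://{region}.customvoice.api.speech.microsoft.com"
    f"/api/texttospeech/3.1-preview1/batchsynthesis/{synthesis_id}"
)
response = requests.delete(url, headers={"Ocp-Apim-Subscription-Key": speech_key})

# A successful delete is expected to return a 2xx status code with no body.
print(response.status_code)
```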
The summary file contains the synthesis results for each text input. Here's an e
} ```
-If sentence boundary data was requested (`"sentenceBoundaryEnabled": true`), then a corresponding `[nnnn].sentence.json` file will be included in the results. Likewise, if word boundary data was requested (`"wordBoundaryEnabled": true`), then a corresponding `[nnnn].word.json` file will be included in the results.
+If sentence boundary data was requested (`"sentenceBoundaryEnabled": true`), then a corresponding `[nnnn].sentence.json` file is included in the results. Likewise, if word boundary data was requested (`"wordBoundaryEnabled": true`), then a corresponding `[nnnn].word.json` file is included in the results.
Here's an example word data file with both audio offset and duration in milliseconds:
The latency for batch synthesis is as follows (approximately):
### Best practices
-When considering batch synthesis for your application, it's recommended to assess whether the latency meets your requirements. If the latency aligns with your desired performance, batch synthesis can be a suitable choice. However, if the latency does not meet your needs, you might consider using real-time API.
+When considering batch synthesis for your application, assess whether the latency meets your requirements. If the latency aligns with your desired performance, batch synthesis can be a suitable choice. However, if the latency doesn't meet your needs, consider using the real-time API instead.
## HTTP status codes
Here are examples that can result in the 400 error:
- The number of requested text inputs exceeded the limit of 1,000.
- The `top` query parameter exceeded the limit of 100.
- You tried to use an invalid deployment ID or a custom voice that isn't successfully deployed. Make sure the Speech resource has access to the custom voice, and the custom voice is successfully deployed. You must also ensure that the mapping of `{"your-custom-voice-name": "your-deployment-ID"}` is correct in your batch synthesis request.
-- You tried to delete a batch synthesis job that hasn't started or hasn't completed running. You can only delete batch synthesis jobs that have a status of "Succeeded" or "Failed".
+- You tried to delete a batch synthesis job that isn't started or hasn't completed running. You can only delete batch synthesis jobs that have a status of "Succeeded" or "Failed".
- You tried to use an *F0* Speech resource, but the region only supports the *Standard* Speech resource pricing tier.
- You tried to create a new batch synthesis job that would exceed the limit of 200 active jobs. Each Speech resource can have up to 200 batch synthesis jobs that don't have a status of "Succeeded" or "Failed".
Here's an example request that results in an HTTP 400 error, because the `top` q
curl -v -X GET "https://YourSpeechRegion.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis?skip=0&top=200" -H "Ocp-Apim-Subscription-Key: YourSpeechKey" ```
-In this case, the response headers will include `HTTP/1.1 400 Bad Request`.
+In this case, the response headers include `HTTP/1.1 400 Bad Request`.
-The response body will resemble the following JSON example:
+The response body resembles the following JSON example:
```json {
ai-services Batch Transcription Audio Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription-audio-data.md
Previously updated : 10/21/2022 Last updated : 1/18/2024 ms.devlang: csharp
You can specify one or multiple audio files when creating a transcription. We re
## Supported audio formats and codecs
-The batch transcription API supports a number of different formats and codecs, such as:
+The batch transcription API supports many different formats and codecs, such as:
- WAV
- MP3
Follow these steps to create a storage account and upload wav files from your lo
Follow these steps to create a storage account and upload wav files from your local directory to a new container.
-1. Set the `RESOURCE_GROUP` environment variable to the name of an existing resource group where the new storage account will be created. Use the same subscription and resource group as your Speech resource.
+1. Set the `RESOURCE_GROUP` environment variable to the name of an existing resource group where the new storage account is created. Use the same subscription and resource group as your Speech resource.
```azurecli-interactive
set RESOURCE_GROUP=<your existing resource group name>
This section explains how to set up and limit access to your batch transcription
> [!NOTE]
> With the trusted Azure services security mechanism, you need to use [Azure Blob storage](../../storage/blobs/storage-blobs-overview.md) to store audio files. Usage of [Azure Files](../../storage/files/storage-files-introduction.md) is not supported.
-If you perform all actions in this section, your Storage account will be in the following configuration:
+If you perform all actions in this section, your Storage account is configured as follows:
- Access to all external network traffic is prohibited.
- Access to Storage account using Storage account key is prohibited.
- Access to Storage account blob storage using [shared access signatures (SAS)](../../storage/common/storage-sas-overview.md) is prohibited.
If you perform all actions in this section, your Storage account will be in the
So in effect your Storage account becomes completely "locked" and can't be used in any scenario apart from transcribing audio files that were already present by the time the new configuration was applied. You should consider this configuration as a model as far as the security of your audio data is concerned and customize it according to your needs.
-For example, you may allow traffic from selected public IP addresses and Azure Virtual networks. You may also set up access to your Storage account using [private endpoints](../../storage/common/storage-private-endpoints.md) (see as well [this tutorial](../../private-link/tutorial-private-endpoint-storage-portal.md)), re-enable access using Storage account key, allow access to other Azure trusted services, etc.
+For example, you can allow traffic from selected public IP addresses and Azure Virtual networks. You can also set up access to your Storage account using [private endpoints](../../storage/common/storage-private-endpoints.md) (see as well [this tutorial](../../private-link/tutorial-private-endpoint-storage-portal.md)), re-enable access using Storage account key, allow access to other Azure trusted services, etc.
> [!NOTE]
> Using [private endpoints for Speech](speech-services-private-link.md) isn't required to secure the storage account. You can use a private endpoint for batch transcription API requests, while separately accessing the source audio files from a secure storage account, or the other way around.
-By following the steps below, you'll severely restrict access to the storage account. Then you'll assign the minimum required permissions for Speech resource managed identity to access the Storage account.
+By following the steps below, you severely restrict access to the storage account. Then you assign the minimum required permissions for the Speech resource managed identity to access the Storage account.
### Enable system assigned managed identity for the Speech resource
-Follow these steps to enable system assigned managed identity for the Speech resource that you will use for batch transcription.
+Follow these steps to enable system assigned managed identity for the Speech resource that you use for batch transcription.
1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
1. Select the Speech resource.
Follow these steps to assign the **Storage Blob Data Reader** role to the manage
Now the Speech resource managed identity has access to the Storage account and can access the audio files for batch transcription.
-With system assigned managed identity, you'll use a plain Storage Account URL (no SAS or other additions) when you [create a batch transcription](batch-transcription-create.md) request. For example:
+With system assigned managed identity, you use a plain Storage Account URL (no SAS or other additions) when you [create a batch transcription](batch-transcription-create.md) request. For example:
```json {
The previous command returns a SAS token. Append the SAS token to your container
-You will use the SAS URL when you [create a batch transcription](batch-transcription-create.md) request. For example:
+You use the SAS URL when you [create a batch transcription](batch-transcription-create.md) request. For example:
```json {
ai-services Batch Transcription Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription-create.md
Previously updated : 11/7/2023 Last updated : 1/18/2024 zone_pivot_groups: speech-cli-rest
Here are some property options that you can use to configure a transcription whe
|`contentContainerUrl`| You can submit individual audio files, or a whole storage container.<br/><br/>You must specify the audio data location via either the `contentContainerUrl` or `contentUrls` property. For more information about Azure blob storage for batch transcription, see [Locate audio files for batch transcription](batch-transcription-audio-data.md).<br/><br/>This property won't be returned in the response.|
|`contentUrls`| You can submit individual audio files, or a whole storage container.<br/><br/>You must specify the audio data location via either the `contentContainerUrl` or `contentUrls` property. For more information, see [Locate audio files for batch transcription](batch-transcription-audio-data.md).<br/><br/>This property won't be returned in the response.|
|`destinationContainerUrl`|The result can be stored in an Azure container. If you don't specify a container, the Speech service stores the results in a container managed by Microsoft. When the transcription job is deleted, the transcription result data is also deleted. For more information such as the supported security scenarios, see [Destination container URL](#destination-container-url).|
-|`diarization`|Indicates that diarization analysis should be carried out on the input, which is expected to be a mono channel that contains multiple voices. Specify the minimum and maximum number of people who might be speaking. You must also set the `diarizationEnabled` property to `true`. The [transcription file](batch-transcription-get.md#transcription-result-file) will contain a `speaker` entry for each transcribed phrase.<br/><br/>You need to use this property when you expect three or more speakers. For two speakers setting `diarizationEnabled` property to `true` is enough. See an example of the property usage in [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) operation description.<br/><br/>Diarization is the process of separating speakers in audio data. The batch pipeline can recognize and separate multiple speakers on mono channel recordings. The maximum number of speakers for diarization must be less than 36 and more or equal to the `minSpeakers` property (see [example](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create)). The feature isn't available with stereo recordings.<br/><br/>When this property is selected, source audio length can't exceed 240 minutes per file.<br/><br/>**Note**: This property is only available with Speech to text REST API version 3.1 and later. If you set this property with any previous version (such as version 3.0) then it will be ignored and only 2 speakers will be identified.|
+|`diarization`|Indicates that diarization analysis should be carried out on the input, which is expected to be a mono channel that contains multiple voices. Specify the minimum and maximum number of people who might be speaking. You must also set the `diarizationEnabled` property to `true`. The [transcription file](batch-transcription-get.md#transcription-result-file) contains a `speaker` entry for each transcribed phrase.<br/><br/>You need to use this property when you expect three or more speakers. For two speakers, setting the `diarizationEnabled` property to `true` is enough. See an example of the property usage in the [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) operation description.<br/><br/>Diarization is the process of separating speakers in audio data. The batch pipeline can recognize and separate multiple speakers on mono channel recordings. The maximum number of speakers for diarization must be less than 36 and greater than or equal to the `minSpeakers` property (see [example](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create)). The feature isn't available with stereo recordings.<br/><br/>When this property is selected, source audio length can't exceed 240 minutes per file.<br/><br/>**Note**: This property is only available with Speech to text REST API version 3.1 and later. If you set this property with any previous version (such as version 3.0), then it's ignored and only 2 speakers are identified.|
|`diarizationEnabled`|Specifies that diarization analysis should be carried out on the input, which is expected to be a mono channel that contains two voices. The default value is `false`.<br/><br/>For three or more voices you also need to use property `diarization` (only with Speech to text REST API version 3.1 and later).<br/><br/>When this property is selected, source audio length can't exceed 240 minutes per file.|
|`displayName`|The name of the batch transcription. Choose a name that you can refer to later. The display name doesn't have to be unique.<br/><br/>This property is required.|
|`displayFormWordLevelTimestampsEnabled`|Specifies whether to include word-level timestamps on the display form of the transcription results. The results are returned in the displayWords property of the transcription file. The default value is `false`.<br/><br/>**Note**: This property is only available with Speech to text REST API version 3.1 and later.|
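As a sketch of how several of these properties are used together, here's a hedged Python example, assuming the `requests` package, placeholder region, key, and audio URL values, and the Speech to text REST API version 3.1 `transcriptions` endpoint; confirm the exact shape of the `diarization` object against the Transcriptions_Create reference linked above:

```python
import requests

# Placeholder region, key, and audio URL -- replace with your own values.
region = "YourServiceRegion"
speech_key = "YourSpeechKey"
url = f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions"

payload = {
    "displayName": "batch transcription sample",
    "locale": "en-US",
    "contentUrls": ["https://contoso.example.com/audio/sample.wav"],  # hypothetical audio URL
    "properties": {
        "wordLevelTimestampsEnabled": True,
        "diarizationEnabled": True,
        # Assumed payload shape for the minimum and maximum speaker counts.
        "diarization": {"speakers": {"minCount": 1, "maxCount": 5}},
    },
}

response = requests.post(
    url,
    headers={"Ocp-Apim-Subscription-Key": speech_key, "Content-Type": "application/json"},
    json=payload,
)
response.raise_for_status()
print(response.json()["self"])  # URL you can poll later for the transcription status
```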
To use a Custom Speech model for batch transcription, you need the model's URI.
> [!TIP]
> A [hosted deployment endpoint](how-to-custom-speech-deploy-model.md) isn't required to use custom speech with the batch transcription service. You can conserve resources if the [custom speech model](how-to-custom-speech-train-model.md) is only used for batch transcription.
-Batch transcription requests for expired models will fail with a 4xx error. You'll want to set the `model` property to a base model or custom model that hasn't yet expired. Otherwise don't include the `model` property to always use the latest base model. For more information, see [Choose a model](how-to-custom-speech-create-project.md#choose-your-model) and [Custom Speech model lifecycle](how-to-custom-speech-model-and-endpoint-lifecycle.md).
+Batch transcription requests for expired models fail with a 4xx error. Set the `model` property to a base model or custom model that hasn't yet expired. Otherwise don't include the `model` property to always use the latest base model. For more information, see [Choose a model](how-to-custom-speech-create-project.md#choose-your-model) and [Custom Speech model lifecycle](how-to-custom-speech-model-and-endpoint-lifecycle.md).
## Using Whisper models
spx csr list --base --api-version v3.2-preview.1
``` ::: zone-end
-The `displayName` property of a Whisper model will contain "Whisper Preview" as shown in this example. Whisper is a display-only model, so the lexical field isn't populated in the transcription.
+The `displayName` property of a Whisper model contains "Whisper Preview" as shown in this example. Whisper is a display-only model, so the lexical field isn't populated in the transcription.
```json {
ai-services Batch Transcription Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription-get.md
Previously updated : 11/29/2022 Last updated : 1/18/2024 zone_pivot_groups: speech-cli-rest
You should receive a response body in the following format:
} ```
-The `status` property indicates the current status of the transcriptions. The transcriptions and transcription report will be available when the transcription status is `Succeeded`.
+The `status` property indicates the current status of the transcriptions. The transcriptions and transcription report are available when the transcription status is `Succeeded`.
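For illustration, a minimal Python polling sketch, assuming the `requests` package and a placeholder transcription URL and key:

```python
import time

import requests

# Placeholder transcription URL and key -- replace with the "self" URL of your transcription.
transcription_url = (
    "https://YourServiceRegion.api.cognitive.microsoft.com"
    "/speechtotext/v3.1/transcriptions/YourTranscriptionId"
)
headers = {"Ocp-Apim-Subscription-Key": "YourSpeechKey"}

while True:
    status = requests.get(transcription_url, headers=headers).json()["status"]
    if status in ("Succeeded", "Failed"):
        break
    time.sleep(30)  # batch jobs can take a while, so poll at a modest interval

print(f"Transcription finished with status: {status}")
```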
::: zone-end
You should receive a response body in the following format:
} ```
-The `status` property indicates the current status of the transcriptions. The transcriptions and transcription report will be available when the transcription status is `Succeeded`.
+The `status` property indicates the current status of the transcriptions. The transcriptions and transcription report are available when the transcription status is `Succeeded`.
For Speech CLI help with transcriptions, run the following command:
Depending in part on the request parameters set when you created the transcripti
|`combinedRecognizedPhrases`|The concatenated results of all phrases for the channel.|
|`confidence`|The confidence value for the recognition.|
|`display`|The display form of the recognized text. Added punctuation and capitalization are included.|
-|`displayWords`|The timestamps for each word of the transcription. The `displayFormWordLevelTimestampsEnabled` request property must be set to `true`, otherwise this property is not present.<br/><br/>**Note**: This property is only available with Speech to text REST API version 3.1.|
+|`displayWords`|The timestamps for each word of the transcription. The `displayFormWordLevelTimestampsEnabled` request property must be set to `true`, otherwise this property isn't present.<br/><br/>**Note**: This property is only available with Speech to text REST API version 3.1.|
|`duration`|The audio duration. The value is an ISO 8601 encoded duration.|
-|`durationInTicks`|The audio duration in ticks (1 tick is 100 nanoseconds).|
+|`durationInTicks`|The audio duration in ticks (one tick is 100 nanoseconds).|
|`itn`|The inverse text normalized (ITN) form of the recognized text. Abbreviations such as "Doctor Smith" to "Dr Smith", phone numbers, and other transformations are applied.|
|`lexical`|The actual words recognized.|
-|`locale`|The locale identified from the input the audio. The `languageIdentification` request property must be set, otherwise this property is not present.<br/><br/>**Note**: This property is only available with Speech to text REST API version 3.1.|
+|`locale`|The locale identified from the input the audio. The `languageIdentification` request property must be set, otherwise this property isn't present.<br/><br/>**Note**: This property is only available with Speech to text REST API version 3.1.|
|`maskedITN`|The ITN form with profanity masking applied.|
|`nBest`|A list of possible transcriptions for the current phrase with confidences.|
|`offset`|The offset in audio of this phrase. The value is an ISO 8601 encoded duration.|
-|`offsetInTicks`|The offset in audio of this phrase in ticks (1 tick is 100 nanoseconds).|
+|`offsetInTicks`|The offset in audio of this phrase in ticks (one tick is 100 nanoseconds).|
|`recognitionStatus`|The recognition state. For example: "Success" or "Failure".|
|`recognizedPhrases`|The list of results for each phrase.|
|`source`|The URL that was provided as the input audio source. The source corresponds to the `contentUrls` or `contentContainerUrl` request property. The `source` property is the only way to confirm the audio input for a transcription.|
-|`speaker`|The identified speaker. The `diarization` and `diarizationEnabled` request properties must be set, otherwise this property is not present.|
+|`speaker`|The identified speaker. The `diarization` and `diarizationEnabled` request properties must be set, otherwise this property isn't present.|
|`timestamp`|The creation date and time of the transcription. The value is an ISO 8601 encoded timestamp.|
-|`words`|A list of results with lexical text for each word of the phrase. The `wordLevelTimestampsEnabled` request property must be set to `true`, otherwise this property is not present.|
+|`words`|A list of results with lexical text for each word of the phrase. The `wordLevelTimestampsEnabled` request property must be set to `true`, otherwise this property isn't present.|
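As a sketch of reading these fields, here's a minimal Python example that assumes you already downloaded a transcription result file to a hypothetical local path:

```python
import json

# Hypothetical local path to a downloaded transcription result file.
with open("transcription_result.json", encoding="utf-8") as f:
    result = json.load(f)

for phrase in result.get("recognizedPhrases", []):
    best = phrase["nBest"][0]  # the highest-confidence alternative
    speaker = phrase.get("speaker")  # present only when diarization was requested
    print(phrase["offset"], f"speaker={speaker}", best["display"])
```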
## Next steps
ai-services Batch Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription.md
Previously updated : 09/15/2023 Last updated : 1/18/2024 ms.devlang: csharp
ai-services Bring Your Own Storage Speech Resource Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/bring-your-own-storage-speech-resource-speech-to-text.md
Previously updated : 03/28/2023 Last updated : 1/18/2024
-# Use the Bring your own storage (BYOS) Speech resource for Speech to text
+# Use the Bring your own storage (BYOS) Speech resource for speech to text
-Bring your own storage (BYOS) can be used in the following Speech to text scenarios:
+Bring your own storage (BYOS) can be used in the following speech to text scenarios:
- Batch transcription
-- Real-time transcription with audio and transcription result logging enabled
-- Custom Speech
+- Real-time transcription with audio and transcription results logging enabled
+- Custom speech
-One Speech resource to Storage account pairing can be used for all scenarios simultaneously.
+One Speech resource to storage account pairing can be used for all scenarios simultaneously.
-This article explains in depth how to use a BYOS-enabled Speech resource in all Speech to text scenarios. The article implies, that you have [a fully configured BYOS-enabled Speech resource and associated Storage account](bring-your-own-storage-speech-resource.md).
+This article explains in depth how to use a BYOS-enabled Speech resource in all speech to text scenarios. The article assumes that you have [a fully configured BYOS-enabled Speech resource and associated Storage account](bring-your-own-storage-speech-resource.md).
## Data storage
-When using BYOS, the Speech service doesn't keep any customer artifacts after the data processing (transcription, model training, model testing) is complete. However, some metadata that isn't derived from the user content is stored within Speech service premises. For example, in Custom Speech scenario, the Service keeps certain information about the custom endpoints, like which models they use.
+When using BYOS, the Speech service doesn't keep any customer artifacts after the data processing (transcription, model training, model testing) is complete. However, some metadata that isn't derived from the user content is stored within Speech service premises. For example, in the Custom speech scenario, the Service keeps certain information about the custom endpoints, like which models they use.
BYOS-associated Storage account stores the following data:
**Real-time transcription with audio and transcription result logging enabled**

- Audio and transcription result logs
-**Custom Speech**
+**Custom speech**
- Source files of datasets for model training and testing (optional)
- All data and metadata related to Custom models hosted by the BYOS-enabled Speech resource (including copies of datasets for model training and testing)
URL of this format ensures that only Microsoft Entra identities (users, service
> [!WARNING]
> If the `sasValidityInSeconds` parameter is omitted in the [Get Base Model Logs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_ListBaseModelLogs) request or similar ones, then a [User delegation SAS](../../storage/common/storage-sas-overview.md) with a validity of 30 days is generated for each data file URL returned. This SAS is signed by the system-assigned managed identity of your BYOS-enabled Speech resource. Because of this, the SAS allows access to the data even if storage account key access is disabled. See details [here](../../storage/common/shared-key-authorization-prevent.md#understand-how-disallowing-shared-key-affects-sas-tokens).
-## Custom Speech
+## Custom speech
-With Custom Speech, you can evaluate and improve the accuracy of speech recognition for your applications and products. A custom speech model can be used for real-time speech to text, speech translation, and batch transcription. For more information, see the [Custom Speech overview](custom-speech-overview.md).
+With Custom speech, you can evaluate and improve the accuracy of speech recognition for your applications and products. A custom speech model can be used for real-time speech to text, speech translation, and batch transcription. For more information, see the [Custom speech overview](custom-speech-overview.md).
-There's nothing specific about how you use Custom Speech with BYOS-enabled Speech resource. The only difference is where all custom model related data, which Speech service collects and produces for you, is stored. The data is stored in the following Blob containers of BYOS-associated Storage account:
+There's nothing specific about how you use Custom speech with a BYOS-enabled Speech resource. The only difference is where all custom model related data, which the Speech service collects and produces for you, is stored. The data is stored in the following Blob containers of the BYOS-associated Storage account:
-- `customspeech-models` - Location of Custom Speech models
-- `customspeech-artifacts` - Location of all other Custom Speech related data
+- `customspeech-models` - Location of Custom speech models
+- `customspeech-artifacts` - Location of all other Custom speech related data
-Note that the Blob container structure is provided for your information only and subject to change without a notice.
+The Blob container structure is provided for your information only and is subject to change without notice.
> [!CAUTION]
-> Speech service relies on pre-defined Blob container paths and file names for Custom Speech module to correctly function. Don't move, rename or in any way alter the contents of `customspeech-models` container and Custom Speech related folders of `customspeech-artifacts` container.
+> Speech service relies on pre-defined Blob container paths and file names for the Custom speech module to function correctly. Don't move, rename, or in any way alter the contents of the `customspeech-models` container and the Custom speech related folders of the `customspeech-artifacts` container.
>
> Failure to do so will very likely result in errors that are hard to debug and might make it necessary to retrain your custom models.
>
-> Use standard tools, like REST API and Speech Studio to interact with the Custom Speech related data. See details in [Custom Speech section](custom-speech-overview.md).
+> Use standard tools, such as the REST API and Speech Studio, to interact with the Custom speech related data. See details in the [Custom speech section](custom-speech-overview.md).
-### Use of REST API with Custom Speech
+### Use of REST API with Custom speech
[Speech to text REST API](rest-speech-to-text.md) fully supports BYOS-enabled Speech resources. However, because the data is now stored within the BYOS-enabled Storage account, requests like [Get Dataset Files](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_ListFiles) interact with the BYOS-associated Storage account Blob storage, instead of Speech service internal resources. This allows you to use the same REST API-based code for both "regular" and BYOS-enabled Speech resources.
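As an illustration, a minimal sketch that lists the files of a dataset; with a BYOS-enabled resource, the returned URLs point into your own Storage account. The region, key, and dataset ID are placeholders, and the `v3.1` path and `sasValidityInSeconds` query parameter are assumptions to verify against the REST API reference.

```bash
# Minimal sketch: list dataset files; with BYOS, the returned content URLs point to your own storage account.
# <region>, <your-key>, and <dataset-id> are placeholders; the v3.1 path and query parameter are assumptions.
curl -X GET "https://<region>.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/<dataset-id>/files?sasValidityInSeconds=0" \
  -H "Ocp-Apim-Subscription-Key: <your-key>"
```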
URL of this format ensures that only Microsoft Entra identities (users, service
- [Set up the Bring your own storage (BYOS) Speech resource](bring-your-own-storage-speech-resource.md)
- [Batch transcription overview](batch-transcription.md)
- [How to log audio and transcriptions for speech recognition](logging-audio-transcription.md)
-- [Custom Speech overview](custom-speech-overview.md)
+- [Custom speech overview](custom-speech-overview.md)
ai-services Bring Your Own Storage Speech Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/bring-your-own-storage-speech-resource.md
Previously updated : 03/28/2023 Last updated : 1/18/2024
Azure portal option has tighter requirements:
If any of these extra requirements don't fit your scenario, use Cognitive Services API option (PowerShell, Azure CLI, REST request).
-To use any of the methods above you need an Azure account that is assigned a role allowing to create resources in your subscription, like *Subscription Contributor*.
+To use any of the methods above, you need an Azure account that is assigned a role that allows creating resources in your subscription, such as *Subscription Contributor*.
# [Azure portal](#tab/portal)
If you used Azure portal for creating a BYOS-enabled Speech resource, it's fully
### (Optional) Verify Speech resource BYOS configuration
-You may always check, whether any given Speech resource is BYOS enabled, and what is the associated Storage account. You can do it either via Azure portal, or via Cognitive Services API.
+You can always check whether any given Speech resource is BYOS enabled, and what the associated Storage account is. You can do it either via the Azure portal or via the Cognitive Services API.
# [Azure portal](#tab/portal)
Use the [Accounts - Get](/rest/api/cognitiveservices/accountmanagement/accounts/
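If you prefer the command line, a minimal Azure CLI sketch follows; the resource names are placeholders, and the `userOwnedStorage` property path is an assumption about the Cognitive Services account schema.

```azurecli-interactive
# Minimal sketch: inspect the BYOS configuration of a Speech resource.
# Resource names are placeholders; the userOwnedStorage property path is an assumption.
az cognitiveservices account show \
  --name mySpeechResource \
  --resource-group myResourceGroup \
  --query "properties.userOwnedStorage"
```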
## Configure BYOS-associated Storage account
-To achieve high security and privacy of your data you need to properly configure the settings of the BYOS-associated Storage account. In case you didn't use Azure portal to create your BYOS-enabled Speech resource, you also need to perform a mandatory step of role assignment.
+To achieve high security and privacy of your data, you need to properly configure the settings of the BYOS-associated Storage account. If you didn't use the Azure portal to create your BYOS-enabled Speech resource, you also need to perform a mandatory role assignment step.
### Assign resource access role
This step is **mandatory** if you didn't use Azure portal to create your BYOS-en
BYOS uses the Blob storage of a Storage account. Because of this, BYOS-enabled Speech resource managed identity needs *Storage Blob Data Contributor* role assignment within the scope of BYOS-associated Storage account.
-If you used Azure portal to create your BYOS-enabled Speech resource, you may skip the rest of this subsection. Your role assignment is already done. Otherwise, follow these steps.
+If you used Azure portal to create your BYOS-enabled Speech resource, you can skip the rest of this subsection. Your role assignment is already done. Otherwise, follow these steps.
> [!IMPORTANT]
> You need to be assigned the *Owner* role of the Storage account or higher scope (like Subscription) to perform the operation in the next steps. This is because only the *Owner* role can assign roles to others. See details [here](../../role-based-access-control/built-in-roles.md).
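A minimal Azure CLI sketch of this role assignment, assuming placeholder resource names; it grants the Speech resource's system-assigned managed identity the *Storage Blob Data Contributor* role on the Storage account.

```azurecli-interactive
# Minimal sketch: grant the Speech resource's system-assigned identity blob access on the BYOS-associated storage account.
# All resource names are placeholders.
principalId=$(az cognitiveservices account show --name mySpeechResource --resource-group myResourceGroup --query "identity.principalId" -o tsv)
storageId=$(az storage account show --name mystorageaccount --resource-group myResourceGroup --query "id" -o tsv)
az role assignment create --role "Storage Blob Data Contributor" --assignee-object-id $principalId --assignee-principal-type ServicePrincipal --scope $storageId
```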
If you used Azure portal to create your BYOS-enabled Speech resource, you may sk
This section describes how to set up Storage account security settings if you intend to use the BYOS-associated Storage account only for Speech to text scenarios. If you use the BYOS-associated Storage account for Text to speech or a combination of both Speech to text and Text to speech, use [this section](#configure-storage-account-security-settings-for-text-to-speech).
-For Speech to text BYOS is using the [trusted Azure services security mechanism](../../storage/common/storage-network-security.md#trusted-access-based-on-a-managed-identity) to communicate with Storage account. The mechanism allows setting very restricted Storage account data access rules.
+For Speech to text, BYOS uses the [trusted Azure services security mechanism](../../storage/common/storage-network-security.md#trusted-access-based-on-a-managed-identity) to communicate with the Storage account. The mechanism allows setting restricted storage account data access rules.
-If you perform all actions in the section, your Storage account will be in the following configuration:
+If you perform all actions in the section, your Storage account is in the following configuration:
- Access to all external network traffic is prohibited.
- Access to Storage account using Storage account key is prohibited.
- Access to Storage account blob storage using [shared access signatures (SAS)](../../storage/common/storage-sas-overview.md) is prohibited. (Except for [User delegation SAS](../../storage/common/shared-key-authorization-prevent.md#understand-how-disallowing-shared-key-affects-sas-tokens))
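A minimal Azure CLI sketch of the configuration just listed, assuming placeholder resource names; it denies public network traffic while keeping trusted Azure services allowed, and disables shared key access.

```azurecli-interactive
# Minimal sketch: restrict the BYOS-associated storage account as described in the list above.
# Resource names are placeholders.
az storage account update \
  --name mystorageaccount \
  --resource-group myResourceGroup \
  --default-action Deny \
  --bypass AzureServices \
  --allow-shared-key-access false
```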
So in effect your Storage account becomes completely "locked" and can only be ac
You should consider this configuration as a model as far as the security of your data is concerned and customize it according to your needs.
-For example, you may allow traffic from selected public IP addresses and Azure Virtual networks. You may also set up access to your Storage account using [private endpoints](../../storage/common/storage-private-endpoints.md) (see as well [this tutorial](../../private-link/tutorial-private-endpoint-storage-portal.md)), re-enable access using Storage account key, allow access to other Azure trusted services, etc.
+For example, you can allow traffic from selected public IP addresses and Azure Virtual networks. You can also set up access to your Storage account using [private endpoints](../../storage/common/storage-private-endpoints.md) (see as well [this tutorial](../../private-link/tutorial-private-endpoint-storage-portal.md)), re-enable access using Storage account key, allow access to other Azure trusted services, etc.
> [!NOTE]
> Using [private endpoints for Speech](speech-services-private-link.md) isn't required to secure the Storage account. Private endpoints for Speech secure the channels for Speech API requests, and can be used as an extra component in your solution.
Having restricted access to the Storage account, you need to grant networking ac
This section describes how to set up Storage account security settings if you intend to use the BYOS-associated Storage account for Text to speech or a combination of both Speech to text and Text to speech. If you use the BYOS-associated Storage account for Speech to text only, use [this section](#configure-storage-account-security-settings-for-speech-to-text).

> [!NOTE]
-> Text to speech requires more relaxed settings of Storage account firewall, compared to Speech to text. If you use both Speech to text and Text to speech, and need maximally restricted Storage account security settings to protect your data, you may consider using different Storage accounts and the corresponding Speech resources for Speech to Text and Text to speech tasks.
+> Text to speech requires more relaxed settings of Storage account firewall, compared to Speech to text. If you use both Speech to text and Text to speech, and need maximally restricted Storage account security settings to protect your data, you can consider using different Storage accounts and the corresponding Speech resources for Speech to Text and Text to speech tasks.
-If you perform all actions in the section, your Storage account will be in the following configuration:
+If you perform all actions in the section, your Storage account is in the following configuration:
- External network traffic is allowed.
- Access to Storage account using Storage account key is prohibited.
- Access to Storage account blob storage using [shared access signatures (SAS)](../../storage/common/storage-sas-overview.md) is prohibited. (Except for [User delegation SAS](../../storage/common/shared-key-authorization-prevent.md#understand-how-disallowing-shared-key-affects-sas-tokens))
- Access to the BYOS-enabled Speech resource is allowed using the resource [system assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) and [User delegation SAS](../../storage/common/storage-sas-overview.md#user-delegation-sas).
-These are the most restricted security settings possible for Text to speech scenario. You may further customize them according to your needs.
+These are the most restricted security settings possible for the text to speech scenario. You can further customize them according to your needs.
**Restrict access to the Storage account**
Custom neural voice uses [User delegation SAS](../../storage/common/storage-sas-
## Configure BYOS-associated Storage account for use with Speech Studio
-Many [Speech Studio](https://speech.microsoft.com/) operations like dataset upload, or custom model training and testing don't require any special configuration in the case of BYOS-enabled Speech resource.
+Many [Speech Studio](https://speech.microsoft.com/) operations like dataset upload, or custom model training and testing don't require any special configuration of a BYOS-enabled Speech resource.
-However, if you need to read data stored withing BYOS-associated Storage account through Speech Studio Web interface, you need to configure additional settings of your BYOS-associated Storage account. For example, it's required to view the contents of a dataset.
+However, if you need to read data stored within the BYOS-associated Storage account through the Speech Studio web interface, you need to configure more settings of your BYOS-associated Storage account. For example, this is required to view the contents of a dataset.
### Configure Cross-Origin Resource Sharing (CORS)
Speech Studio needs permission to make requests to the Blob storage of the BYOS-
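A minimal Azure CLI sketch of such a CORS rule; the account name is a placeholder, and the origin, methods, and max age shown here are assumptions to adjust for your scenario.

```azurecli-interactive
# Minimal sketch: allow the Speech Studio origin to call the Blob service of the BYOS-associated storage account.
# The account name is a placeholder; the origin, methods, and max age are assumptions to adjust as needed.
az storage cors add \
  --account-name mystorageaccount \
  --services b \
  --methods GET POST PUT OPTIONS \
  --origins "https://speech.microsoft.com" \
  --allowed-headers "*" \
  --exposed-headers "*" \
  --max-age 200
```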
### Configure Azure Storage firewall
-You need to allow access for the machine, where you run the browser using Speech Studio. If your Storage account firewall settings allow public access from all networks, you may skip this subsection. Otherwise, follow these steps.
+You need to allow access for the machine where you run the browser that uses Speech Studio. If your Storage account firewall settings allow public access from all networks, you can skip this subsection. Otherwise, follow these steps.
1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
1. Select the Storage account.
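As an alternative to the portal steps, a minimal Azure CLI sketch that adds a firewall rule for a single public IP address; the IP address and resource names are placeholders.

```azurecli-interactive
# Minimal sketch: allow your workstation's public IP address through the storage account firewall.
# The IP address and resource names are placeholders.
az storage account network-rule add \
  --account-name mystorageaccount \
  --resource-group myResourceGroup \
  --ip-address 203.0.113.10
```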
ai-services Call Center Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/call-center-overview.md
Previously updated : 09/18/2022 Last updated : 1/18/2024 # Call Center Overview
-Azure AI services for Language and Speech can help you realize partial or full automation of telephony-based customer interactions, and provide accessibility across multiple channels. With the Language and Speech services, you can further analyze call center transcriptions, extract and redact conversation personally identifiable information (PII), summarize the transcription, and detect the sentiment.
+Azure AI Language and Azure AI Speech can help you realize partial or full automation of telephony-based customer interactions, and provide accessibility across multiple channels. With the Language and Speech services, you can further analyze call center transcriptions, extract and redact conversation personally identifiable information (PII), summarize the transcription, and detect the sentiment.
Some example scenarios for the implementation of Azure AI services in call and contact centers are:
-- Virtual agents: Conversational AI-based telephony-integrated voicebots and voice-enabled chatbots
+- Virtual agents: Conversational AI-based telephony-integrated voice bots and voice-enabled chatbots
- Agent-assist: Real-time transcription and analysis of a call to improve the customer experience by providing insights and suggesting actions to agents
- Post-call analytics: Post-call analysis to create insights into customer conversations to improve understanding and support continuous improvement of call handling, optimization of quality assurance and compliance control, as well as other insight-driven optimizations.
Some example scenarios for the implementation of Azure AI services in call and c
A holistic call center implementation typically incorporates technologies from the Language and Speech services.
-Audio data typically used in call centers generated through landlines, mobile phones, and radios is often narrowband, in the range of 8 KHz, which can create challenges when you're converting speech to text. The Speech service recognition models are trained to ensure that you can get high-quality transcriptions, however you choose to capture the audio.
+Audio data typically used in call centers, generated through landlines, mobile phones, and radios, is often narrowband, in the range of 8 KHz, which can create challenges when you're converting speech to text. The Speech service recognition models are trained to ensure that you can get high-quality transcriptions, however you choose to capture the audio.
-Once you've transcribed your audio with the Speech service, you can use the Language service to perform analytics on your call center data such as: sentiment analysis, summarizing the reason for customer calls, how they were resolved, extracting and redacting conversation PII, and more.
+Once you transcribe your audio with the Speech service, you can use the Language service to perform analytics on your call center data such as: sentiment analysis, summarizing the reason for customer calls, how they were resolved, extracting and redacting conversation PII, and more.
### Speech service
The Speech service offers the following features that can be used for call cente
- [Real-time speech to text](./how-to-recognize-speech.md): Recognize and transcribe audio in real-time from multiple inputs. For example, with virtual agents or agent-assist, you can continuously recognize audio input and control how to process results based on multiple events.
- [Batch speech to text](./batch-transcription.md): Transcribe large volumes of audio files asynchronously, including speaker diarization; it's typically used in post-call analytics scenarios. Diarization is the process of recognizing and separating speakers in mono channel audio data.
-- [Text to speech](./text-to-speech.md): Text to speech enables your applications, tools, or devices to convert text into humanlike synthesized speech.
+- [Text to speech](./text-to-speech.md): Text to speech enables your applications, tools, or devices to convert text into human like synthesized speech.
- [Speaker identification](./speaker-recognition-overview.md): Helps you determine an unknown speaker's identity within a group of enrolled speakers and is typically used for call center customer verification scenarios or fraud detection.
- [Language Identification](./language-identification.md): Identify languages spoken in audio and can be used in real-time and post-call analysis for insights or to control the environment (such as output language of a virtual agent).
The Speech service works well with prebuilt models. However, you might want to f
| Speech customization | Description |
| -- | -- |
-| [Custom Speech](./custom-speech-overview.md) | A speech to text feature used evaluate and improve the speech recognition accuracy of use-case specific entities (such as alpha-numeric customer, case, and contract IDs, license plates, and names). You can also train a custom model with your own product names and industry terminology. |
+| [Custom Speech](./custom-speech-overview.md) | A speech to text feature used to evaluate and improve the speech recognition accuracy of use-case specific entities (such as alpha-numeric customer, case, and contract IDs, license plates, and names). You can also train a custom model with your own product names and industry terminology. |
| [Custom neural voice](./custom-neural-voice.md) | A text to speech feature that lets you create a one-of-a-kind, customized, synthetic voice for your applications. |

### Language service
ai-services Call Center Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/call-center-quickstart.md
Previously updated : 09/20/2022 Last updated : 1/18/2024 ms.devlang: csharp
ai-services Call Center Telephony Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/call-center-telephony-integration.md
Previously updated : 08/10/2022 Last updated : 1/18/2024
ai-services Captioning Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/captioning-concepts.md
Previously updated : 06/02/2022 Last updated : 1/18/2024 zone_pivot_groups: programming-languages-speech-sdk-cli
The following are aspects to consider when using captioning:
> > Try the [Azure AI Video Indexer](/azure/azure-video-indexer/video-indexer-overview) as a demonstration of how you can get captions for videos that you upload.
-Captioning can accompany real-time or pre-recorded speech. Whether you're showing captions in real-time or with a recording, you can use the [Speech SDK](speech-sdk.md) or [Speech CLI](spx-overview.md) to recognize speech and get transcriptions. You can also use the [Batch transcription API](batch-transcription.md) for pre-recorded video.
+Captioning can accompany real-time or prerecorded speech. Whether you're showing captions in real-time or with a recording, you can use the [Speech SDK](speech-sdk.md) or [Speech CLI](spx-overview.md) to recognize speech and get transcriptions. You can also use the [Batch transcription API](batch-transcription.md) for prerecorded video.
## Caption output format
For captioning of a prerecording, send file input to the Speech service. For mor
## Caption and speech synchronization
-You'll want to synchronize captions with the audio track, whether it's done in real-time or with a prerecording.
+You want to synchronize captions with the audio track, whether it's in real-time or with a prerecording.
The Speech service returns the offset and duration of the recognized speech.
For captioning of prerecorded speech or wherever latency isn't a concern, you co
Real-time captioning presents tradeoffs with respect to latency versus accuracy. You could show the text from each `Recognizing` event as soon as possible. However, if you can accept some latency, you can improve the accuracy of the caption by displaying the text from the `Recognized` event. There's also some middle ground, which is referred to as "stable partial results".
-You can request that the Speech service return fewer `Recognizing` events that are more accurate. This is done by setting the `SpeechServiceResponse_StablePartialResultThreshold` property to a value between `0` and `2147483647`. The value that you set is the number of times a word has to be recognized before the Speech service returns a `Recognizing` event. For example, if you set the `SpeechServiceResponse_StablePartialResultThreshold` property value to `5`, the Speech service will affirm recognition of a word at least five times before returning the partial results to you with a `Recognizing` event.
+You can request that the Speech service return fewer `Recognizing` events that are more accurate. This is done by setting the `SpeechServiceResponse_StablePartialResultThreshold` property to a value between `0` and `2147483647`. The value that you set is the number of times a word has to be recognized before the Speech service returns a `Recognizing` event. For example, if you set the `SpeechServiceResponse_StablePartialResultThreshold` property value to `5`, the Speech service affirms recognition of a word at least five times before returning the partial results to you with a `Recognizing` event.
::: zone pivot="programming-language-csharp" ```csharp
spx recognize --file caption.this.mp4 --format any --property SpeechServiceRespo
``` ::: zone-end
-Requesting more stable partial results will reduce the "flickering" or changing text, but it can increase latency as you wait for higher confidence results.
+Requesting more stable partial results reduces the "flickering" or changing text, but it can increase latency as you wait for higher confidence results.
### Stable partial threshold example In the following recognition sequence without setting a stable partial threshold, "math" is recognized as a word, but the final text is "mathematics". At another point, "course 2" is recognized, but the final text is "course 201".
RECOGNIZED: Text=Welcome to applied Mathematics course 201.
## Language identification
-If the language in the audio could change, use continuous [language identification](language-identification.md). Language identification is used to identify languages spoken in audio when compared against a list of [supported languages](language-support.md?tabs=language-identification). You provide up to 10 candidate languages, at least one of which is expected be in the audio. The Speech service returns the most likely language in the audio.
+If the language in the audio could change, use continuous [language identification](language-identification.md). Language identification is used to identify languages spoken in audio when compared against a list of [supported languages](language-support.md?tabs=language-identification). You provide up to 10 candidate languages, at least one of which is expected to be in the audio. The Speech service returns the most likely language in the audio.
## Customizations to improve accuracy
Examples of phrases include:
* Homonyms
* Words or acronyms unique to your industry or organization
-There are some situations where [training a custom model](custom-speech-overview.md) is likely the best option to improve accuracy. For example, if you're captioning orthodontics lectures, you might want to train a custom model with the corresponding domain data.
+There are some situations where [training a custom model](custom-speech-overview.md) is likely the best option to improve accuracy. For example, if you're captioning orthodontic lectures, you might want to train a custom model with the corresponding domain data.
## Next steps
ai-services Custom Commands Encryption Of Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/custom-commands-encryption-of-data-at-rest.md
Previously updated : 07/05/2020 Last updated : 1/18/2024
[!INCLUDE [deprecation notice](./includes/custom-commands-retire.md)]
-Custom Commands automatically encrypts your data when it is persisted to the cloud. The Custom Commands service encryption protects your data and to help you to meet your organizational security and compliance commitments.
+Custom Commands automatically encrypts your data when it's persisted to the cloud. The Custom Commands service encryption protects your data and helps you meet your organizational security and compliance commitments.
> [!NOTE] > Custom Commands service doesn't automatically enable encryption for the LUIS resources associated with your application. If needed, you must enable encryption for your LUIS resource from [here](../luis/encrypt-data-at-rest.md).
Data is encrypted and decrypted using [FIPS 140-2](https://en.wikipedia.org/wiki
## About encryption key management
-When you use Custom Commands, speech service will store following data in the cloud:
+When you use Custom Commands, the Speech service stores the following data in the cloud:
* Configuration JSON behind the Custom Commands application
* LUIS authoring and prediction key
By default, your subscription uses Microsoft-managed encryption keys. However, y
> [!IMPORTANT]
> Customer-managed keys are only available for resources created after June 27, 2020. To use CMK with the Speech service, you will need to create a new Speech resource. Once the resource is created, you can use Azure Key Vault to set up your managed identity.
-To request the ability to use customer-managed keys, fill out and submit Customer-Managed Key Request Form. It will take approximately 3-5 business days to hear back on the status of your request. Depending on demand, you may be placed in a queue and approved as space becomes available. Once approved for using CMK with the Speech service, you'll need to create a new Speech resource from the Azure portal.
+To request the ability to use customer-managed keys, fill out and submit the Customer-Managed Key Request Form. It takes approximately 3-5 business days to hear back on the status of your request. Depending on demand, you might be placed in a queue and approved as space becomes available. Once you're approved for using CMK with the Speech service, you need to create a new Speech resource from the Azure portal.
> [!NOTE] > **Customer-managed keys (CMK) are supported only for custom commands.** >
To request the ability to use customer-managed keys, fill out and submit Custome
You must use Azure Key Vault to store customer-managed keys. You can either create your own keys and store them in a key vault, or you can use the Azure Key Vault APIs to generate keys. The Speech resource and the key vault must be in the same region and in the same Microsoft Entra tenant, but they can be in different subscriptions. For more information about Azure Key Vault, see [What is Azure Key Vault?](../../key-vault/general/overview.md).
-When a new Speech resource is created and used to provision Custom Commands application - data is always encrypted using Microsoft-managed keys. It's not possible to enable customer-managed keys at the time that the resource is created. Customer-managed keys are stored in Azure Key Vault, and the key vault must be provisioned with access policies that grant key permissions to the managed identity that is associated with the Azure AI services resource. The managed identity is available only after the resource is created using the Pricing Tier required for CMK.
+When a new Speech resource is created and used to provision Custom Commands applications, data is always encrypted using Microsoft-managed keys. It's not possible to enable customer-managed keys at the time that the resource is created. Customer-managed keys are stored in Azure Key Vault, and the key vault must be provisioned with access policies that grant key permissions to the managed identity that is associated with the Azure AI services resource. The managed identity is available only after the resource is created using the Pricing Tier required for CMK.
-Enabling customer managed keys will also enable a system assigned [managed identity](../../active-directory/managed-identities-azure-resources/overview.md), a feature of Microsoft Entra ID. Once the system assigned managed identity is enabled, this resource will be registered with Microsoft Entra ID. After being registered, the managed identity will be given access to the Key Vault selected during customer managed key setup.
+Enabling customer managed keys also enables a system assigned [managed identity](../../active-directory/managed-identities-azure-resources/overview.md), a feature of Microsoft Entra ID. Once the system assigned managed identity is enabled, this resource is registered with Microsoft Entra ID. After being registered, the managed identity is given access to the Key Vault selected during customer managed key setup.
> [!IMPORTANT]
> If you disable system assigned managed identities, access to the key vault will be removed and any data encrypted with the customer keys will no longer be accessible. Any features that depend on this data will stop working.
Enabling customer managed keys will also enable a system assigned [managed ident
## Configure Azure Key Vault
-Using customer-managed keys requires that two properties be set in the key vault, **Soft Delete** and **Do Not Purge**. These properties are not enabled by default, but can be enabled using either PowerShell or Azure CLI on a new or existing key vault.
+Using customer-managed keys requires that two properties be set in the key vault, **Soft Delete** and **Do Not Purge**. These properties aren't enabled by default, but can be enabled using either PowerShell or Azure CLI on a new or existing key vault.
> [!IMPORTANT] > If you do not have the **Soft Delete** and **Do Not Purge** properties enabled and you delete your key, you won't be able to recover the data in your Azure AI services resource.
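For newer key vaults, soft delete is enabled by default; a minimal Azure CLI sketch for turning on purge protection follows, with placeholder names.

```azurecli-interactive
# Minimal sketch: enable purge protection on the key vault (soft delete is on by default for new vaults).
# The vault and resource group names are placeholders.
az keyvault update --name myKeyVault --resource-group myResourceGroup --enable-purge-protection true
```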
Only RSA keys of size 2048 are supported with Azure Storage encryption. For more
To enable customer-managed keys in the Azure portal, follow these steps:

1. Navigate to your Speech resource.
-1. On the **Settings** blade for your Speech resource, select **Encryption**. Select the **Customer Managed Keys** option, as shown in the following figure.
+1. On the **Settings** page for your Speech resource, select **Encryption**. Select the **Customer Managed Keys** option, as shown in the following figure.
![Screenshot showing how to select Customer Managed Keys](media/custom-commands/select-cmk.png)
After you enable customer-managed keys, you'll have the opportunity to specify a
To specify a key as a URI, follow these steps:
-1. To locate the key URI in the Azure portal, navigate to your key vault, and select the **Keys** setting. Select the desired key, then click the key to view its versions. Select a key version to view the settings for that version.
+1. To locate the key URI in the Azure portal, navigate to your key vault, and select the **Keys** setting. Select the desired key, then select the key to view its versions. Select a key version to view the settings for that version.
1. Copy the value of the **Key Identifier** field, which provides the URI. ![Screenshot showing key vault key URI](../media/cognitive-services-encryption/key-uri-portal.png)
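If you prefer the command line, a minimal Azure CLI sketch that returns the same key URI; the vault and key names are placeholders.

```azurecli-interactive
# Minimal sketch: retrieve the key URI (Key Identifier) without using the portal.
# The vault and key names are placeholders.
az keyvault key show --vault-name myKeyVault --name myKey --query "key.kid" -o tsv
```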
To change the key used for encryption, follow these steps:
You can rotate a customer-managed key in Azure Key Vault according to your compliance policies. When the key is rotated, you must update the Speech resource to use the new key URI. To learn how to update the resource to use a new version of the key in the Azure portal, see [Update the key version](#update-the-key-version).
-Rotating the key does not trigger re-encryption of data in the resource. There is no further action required from the user.
+Rotating the key doesn't trigger re-encryption of data in the resource. There's no further action required from the user.
## Revoke access to customer-managed keys
ai-services Custom Commands References https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/custom-commands-references.md
Previously updated : 06/18/2020 Last updated : 1/18/2024
Parameters are information required by the commands to complete a task. In compl
Completion rules are a series of rules to be executed after the command is ready to be fulfilled, for example, when all the conditions of the rules are satisfied.

### Interaction rules
-Interaction rules are additional rules to handle more specific or complex situations. You can add additional validations or configure advanced features such as confirmations or a one-step correction. You can also build your own custom interaction rules.
+Interaction rules are extra rules to handle more specific or complex situations. You can add more validations or configure advanced features such as confirmations or a one-step correction. You can also build your own custom interaction rules.
## Parameters configuration
A parameter is identified by the name property. You should always give a descrip
### Required

This check box indicates whether a value for this parameter is required for command fulfillment or completion. You must configure responses to prompt the user to provide a value if a parameter is marked as required.
-Note that, if you configured a **required parameter** to have a **Default value**, the system will still explicitly prompt for the parameter's value.
+If you configured a **required parameter** to have a **Default value**, the system still prompts for the parameter's value.
### Type

Custom Commands supports the following parameter types:
A rule in Custom Commands is defined by a set of *conditions* that, when met, ex
Custom Commands supports the following rule categories:

* **Completion rules**: These rules must be executed upon command fulfillment. All the rules configured in this section for which the conditions are true will be executed.
-* **Interaction rules**: These rules can be used to configure additional custom validations, confirmations, and a one-step correction, or to accomplish any other custom dialog logic. Interaction rules are evaluated at each turn in the processing and can be used to trigger completion rules.
+* **Interaction rules**: These rules can be used to configure extra custom validations, confirmations, and a one-step correction, or to accomplish any other custom dialog logic. Interaction rules are evaluated at each turn in the processing and can be used to trigger completion rules.
The different actions configured as part of a rule are executed in the order in which they appear in the authoring portal.
Conditions are the requirements that must be met for a rule to execute. Rules co
* **All required parameters**: All the parameters that were marked as required have a value.
* **Updated parameters**: One or more parameter values were updated as a result of processing the current input (utterance or activity).
* **Confirmation was successful**: The input utterance or activity was a successful confirmation (yes).
-* **Confirmation was denied**: The input utterance or activity was not a successful confirmation (no).
+* **Confirmation was denied**: The input utterance or activity wasn't a successful confirmation (no).
* **Previous command needs to be updated**: This condition is used in instances when you want to catch a negated confirmation along with an update. Behind the scenes, this condition is configured for when the dialog engine detects a negative confirmation where the intent is the same as the previous turn, and the user has responded with an update.

### Actions
Expectations are used to configure hints for the processing of the next user inp
The post-execution state is the dialog state after processing the current input (utterance or activity). It's one of the following types:

* **Keep current state**: Keep current state only.
-* **Complete the command**: Complete the command and no additional rules of the command will be processed.
+* **Complete the command**: Complete the command and no more rules of the command are processed.
* **Execute completion rules**: Execute all the valid completion rules.
* **Wait for user's input**: Wait for the next user input.
ai-services Custom Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/custom-commands.md
Previously updated : 03/11/2020 Last updated : 1/18/2024
Applications such as [Voice assistants](voice-assistants.md) listen to users and take an action in response, often speaking back. They use [speech to text](speech-to-text.md) to transcribe the user's speech, then take action on the natural language understanding of the text. This action frequently includes spoken output from the assistant generated with [text to speech](text-to-speech.md). Devices connect to assistants with the Speech SDK's `DialogServiceConnector` object.
-Custom Commands makes it easy to build rich voice commanding apps optimized for voice-first interaction experiences. It provides a unified authoring experience, an automatic hosting model, and relatively lower complexity, helping you focus on building the best solution for your voice commanding scenarios.
+Custom Commands makes it easy to build rich voice commanding apps optimized for voice-first interaction experiences. It provides a unified authoring experience, an automatic hosting model, and relatively lower complexity. Custom Commands helps you focus on building the best solution for your voice commanding scenarios.
Custom Commands is best suited for task completion or command-and-control scenarios such as "Turn on the overhead light" or "Make it 5 degrees warmer". Custom Commands is well suited for Internet of Things (IoT) devices, ambient and headless devices. Examples include solutions for Hospitality, Retail and Automotive industries, where you want voice-controlled experiences for your guests, in-store inventory management or in-car functionality.
ai-services How To Custom Speech Test And Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-speech-test-and-train.md
Use <a href="http://sox.sourceforge.net" target="_blank" rel="noopener">SoX</a>
### Custom display text formatting data for training
-Learn more about [display text formatting with speech to text](./display-text-format.md).
+Learn more about [preparing display text formatting data](./how-to-custom-speech-display-text-format.md) and [display text formatting with speech to text](./display-text-format.md).
Automatic Speech Recognition output display format is critical to downstream tasks and one-size doesn't fit all. Adding Custom Display Format rules allows users to define their own lexical-to-display format rules to improve the speech recognition service quality on top of Microsoft Azure Custom Speech Service.
aks App Routing Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/app-routing-migration.md
description: Learn how to migrate from the HTTP application routing feature to t
-+ Last updated 11/03/2023
In this article, you learn how to migrate your Azure Kubernetes Service (AKS) cl
- path: / pathType: Prefix backend:
- service:
+ service:
name: aks-helloworld
- port:
+ port:
number: 80 ```
In this article, you learn how to migrate your Azure Kubernetes Service (AKS) cl
- path: / pathType: Prefix backend:
- service:
+ service:
name: aks-helloworld
- port:
+ port:
number: 80 ```
aks Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/availability-zones.md
Title: Use availability zones in Azure Kubernetes Service (AKS) description: Learn how to create a cluster that distributes nodes across availability zones in Azure Kubernetes Service (AKS)-+ Last updated 12/06/2023
aks Azure Blob Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-blob-csi.md
Title: Use Container Storage Interface (CSI) driver for Azure Blob storage on Azure Kubernetes Service (AKS) description: Learn how to use the Container Storage Interface (CSI) driver for Azure Blob storage in an Azure Kubernetes Service (AKS) cluster. -+ Last updated 11/24/2023
aks Azure Csi Blob Storage Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-blob-storage-provision.md
Title: Create a persistent volume with Azure Blob storage in Azure Kubernetes Se
description: Learn how to create a static or dynamic persistent volume with Azure Blob storage for use with multiple concurrent pods in Azure Kubernetes Service (AKS) -+ Last updated 11/28/2023
aks Azure Csi Disk Storage Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-disk-storage-provision.md
Title: Create a persistent volume with Azure Disks in Azure Kubernetes Service (
description: Learn how to create a static or dynamic persistent volume with Azure Disks for use with multiple concurrent pods in Azure Kubernetes Service (AKS) -+ Last updated 11/28/2023
When you create an Azure disk for use with AKS, you can create the disk resource
```azurecli-interactive az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv
-
+ # Output MC_myResourceGroup_myAKSCluster_eastus ```
aks Azure Csi Files Storage Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-files-storage-provision.md
Title: Create a persistent volume with Azure Files in Azure Kubernetes Service (
description: Learn how to create a static or dynamic persistent volume with Azure Files for use with multiple concurrent pods in Azure Kubernetes Service (AKS) -+ Last updated 11/28/2023
Kubernetes needs credentials to access the file share created in the previous st
```bash kubectl delete pod mypod
-
+ kubectl apply -f azure-files-pod.yaml ```
spec:
readOnly: false volumes: - name: azure
- csi:
+ csi:
driver: file.csi.azure.com volumeAttributes: secretName: azure-secret # required
aks Azure Disk Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-csi.md
Title: Use Container Storage Interface (CSI) driver for Azure Disk on Azure Kubernetes Service (AKS) description: Learn how to use the Container Storage Interface (CSI) driver for Azure Disk in an Azure Kubernetes Service (AKS) cluster. -+ Last updated 04/19/2023
metadata:
name: azuredisk-csi-waitforfirstconsumer provisioner: disk.csi.azure.com parameters:
- skuname: StandardSSD_LRS
+ skuname: StandardSSD_LRS
allowVolumeExpansion: true reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer
aks Azure Disk Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-customer-managed-keys.md
Title: Use a customer-managed key to encrypt Azure disks in Azure Kubernetes Service (AKS) description: Bring your own keys (BYOK) to encrypt AKS OS and Data disks. -+ Last updated 11/24/2023
keyVaultId=$(az keyvault show --name myKeyVaultName --query "[id]" -o tsv)
keyVaultKeyUrl=$(az keyvault key show --vault-name myKeyVaultName --name myKeyName --query "[key.kid]" -o tsv) # Create a DiskEncryptionSet
-az disk-encryption-set create -n myDiskEncryptionSetName -l myAzureRegionName -g myResourceGroup --source-vault $keyVaultId --key-url $keyVaultKeyUrl
+az disk-encryption-set create -n myDiskEncryptionSetName -l myAzureRegionName -g myResourceGroup --source-vault $keyVaultId --key-url $keyVaultKeyUrl
``` > [!IMPORTANT]
az keyvault set-policy -n myKeyVaultName -g myResourceGroup --object-id $desIden
## Create a new AKS cluster and encrypt the OS disk
-Either create a new resource group, or select an existing resource group hosting other AKS clusters, then use your key to encrypt the either using network-attached OS disks or ephemeral OS disk. By default, a cluster uses ephemeral OS disk when possible in conjunction with VM size and OS disk size.
+Either create a new resource group, or select an existing resource group hosting other AKS clusters, then use your key to encrypt the OS disk, whether it's a network-attached OS disk or an ephemeral OS disk. By default, a cluster uses an ephemeral OS disk when possible, in conjunction with the VM size and OS disk size.
Run the following command to retrieve the DiskEncryptionSet value and set a variable:
aksIdentity=$(az aks show -g $RG_NAME -n $CLUSTER_NAME --query "identity.princip
az role assignment create --role "Contributor" --assignee $aksIdentity --scope $diskEncryptionSetId ```
-Create a file called **byok-azure-disk.yaml** that contains the following information. Replace *myAzureSubscriptionId*, *myResourceGroup*, and *myDiskEncrptionSetName* with your values, and apply the yaml. Make sure to use the resource group where your DiskEncryptionSet is deployed.
+Create a file called **byok-azure-disk.yaml** that contains the following information. Replace *myAzureSubscriptionId*, *myResourceGroup*, and *myDiskEncrptionSetName* with your values, and apply the yaml. Make sure to use the resource group where your DiskEncryptionSet is deployed.
```yaml kind: StorageClass
-apiVersion: storage.k8s.io/v1
+apiVersion: storage.k8s.io/v1
metadata: name: byok provisioner: disk.csi.azure.com # replace with "kubernetes.io/azure-disk" if aks version is less than 1.21
aks Azure Files Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-files-csi.md
Title: Use Container Storage Interface (CSI) driver for Azure Files on Azure Kubernetes Service (AKS) description: Learn how to use the Container Storage Interface (CSI) driver for Azure Files in an Azure Kubernetes Service (AKS) cluster. -+ Last updated 01/11/2024
allowVolumeExpansion: true
parameters: resourceGroup: <resourceGroup> storageAccount: <storageAccountName>
- server: <storageAccountName>.file.core.windows.net
+ server: <storageAccountName>.file.core.windows.net
reclaimPolicy: Delete volumeBindingMode: Immediate mountOptions:
The output of the command resembles the following example:
```output storageclass.storage.k8s.io/private-azurefile-csi created ```
-
+ Create a file named `private-pvc.yaml`, and then paste the following example manifest in the file:
-
+ ```yaml apiVersion: v1 kind: PersistentVolumeClaim
spec:
requests: storage: 100Gi ```
-
+ Create the PVC by using the [kubectl apply][kubectl-apply] command:
-
+ ```bash kubectl apply -f private-pvc.yaml ```
aks Azure Netapp Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-netapp-files.md
Title: Configure Azure NetApp Files for Azure Kubernetes Service description: Learn how to configure Azure NetApp Files for an Azure Kubernetes Service cluster. -+ Last updated 05/08/2023
The following considerations apply when you use Azure NetApp Files:
* The Azure CLI version 2.0.59 or higher installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
* After the initial deployment of an AKS cluster, you can choose to provision Azure NetApp Files volumes statically or dynamically.
* To use dynamic provisioning with Azure NetApp Files with Network File System (NFS), install and configure [Astra Trident][astra-trident] version 19.07 or higher. To use dynamic provisioning with Azure NetApp Files with Server Message Block (SMB), install and configure Astra Trident version 22.10 or higher. Dynamic provisioning for SMB shares is only supported on Windows worker nodes.
-* Before you deploy Azure NetApp Files SMB volumes, you must identify the AD DS integration requirements for Azure NetApp Files to ensure that Azure NetApp Files is well connected to AD DS. For more information, see [Understand guidelines for Active Directory Domain Services site design and planning](../azure-netapp-files/understand-guidelines-active-directory-domain-service-site.md). Both the AKS cluster and Azure NetApp Files must have connectivity to the same AD.
+* Before you deploy Azure NetApp Files SMB volumes, you must identify the AD DS integration requirements for Azure NetApp Files to ensure that Azure NetApp Files is well connected to AD DS. For more information, see [Understand guidelines for Active Directory Domain Services site design and planning](../azure-netapp-files/understand-guidelines-active-directory-domain-service-site.md). Both the AKS cluster and Azure NetApp Files must have connectivity to the same AD.
## Configure Azure NetApp Files for AKS workloads
-This section describes how to set up Azure NetApp Files for AKS workloads. It's applicable for all scenarios within this article.
+This section describes how to set up Azure NetApp Files for AKS workloads. It's applicable for all scenarios within this article.
1. Define variables for later usage. Replace *myresourcegroup*, *mylocation*, *myaccountname*, *mypool1*, *poolsize*, *premium*, *myvnet*, *myANFSubnet*, and *myprefix* with appropriate values for your environment.
This section describes how to set up Azure NetApp Files for AKS workloads. It's
SUBNET_NAME="myANFSubnet" ADDRESS_PREFIX="myprefix" ```
-
+ 2. Register the *Microsoft.NetApp* resource provider by running the following command: ```azurecli-interactive
This section describes how to set up Azure NetApp Files for AKS workloads. It's
--service-level $SERVICE_LEVEL ```
-5. Create a subnet to [delegate to Azure NetApp Files][anf-delegate-subnet] using the command [`az network vnet subnet create`][az-network-vnet-subnet-create]. Specify the resource group hosting the existing virtual network for your AKS cluster. Replace the variables shown in the command with your Azure NetApp Files information.
+5. Create a subnet to [delegate to Azure NetApp Files][anf-delegate-subnet] using the command [`az network vnet subnet create`][az-network-vnet-subnet-create]. Specify the resource group hosting the existing virtual network for your AKS cluster. Replace the variables shown in the command with your Azure NetApp Files information.
> [!NOTE] > This subnet must be in the same virtual network as your AKS cluster.
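A minimal sketch of that command, reusing the variables defined earlier in this section; the `$RESOURCE_GROUP` and `$VNET_NAME` variable names are assumptions, and `Microsoft.Netapp/volumes` is the standard Azure NetApp Files delegation.

```azurecli-interactive
# Minimal sketch: create the delegated subnet in the cluster's virtual network.
# $RESOURCE_GROUP and $VNET_NAME are assumed to be defined alongside the variables shown earlier.
az network vnet subnet create \
    --resource-group $RESOURCE_GROUP \
    --vnet-name $VNET_NAME \
    --name $SUBNET_NAME \
    --address-prefixes $ADDRESS_PREFIX \
    --delegations "Microsoft.Netapp/volumes"
```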
This section describes how to set up Azure NetApp Files for AKS workloads. It's
## Statically or dynamically provision Azure NetApp Files volumes for NFS or SMB

After you [configure Azure NetApp Files for AKS workloads](#configure-azure-netapp-files-for-aks-workloads), you can statically or dynamically provision Azure NetApp Files using NFS, SMB, or dual-protocol volumes within the capacity pool. Follow instructions in:
-* [Provision Azure NetApp Files NFS volumes for Azure Kubernetes Service](azure-netapp-files-nfs.md)
+* [Provision Azure NetApp Files NFS volumes for Azure Kubernetes Service](azure-netapp-files-nfs.md)
* [Provision Azure NetApp Files SMB volumes for Azure Kubernetes Service](azure-netapp-files-smb.md) * [Provision Azure NetApp Files dual-protocol volumes for Azure Kubernetes Service](azure-netapp-files-dual-protocol.md)
aks Best Practices Performance Scale Large https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/best-practices-performance-scale-large.md
Title: Performance and scaling best practices for large workloads in Azure Kuber
description: Learn the best practices for performance and scaling for large workloads in Azure Kubernetes Service (AKS). Previously updated : 11/03/2023 Last updated : 01/18/2024 # Best practices for performance and scaling for large workloads in Azure Kubernetes Service (AKS)
Kubernetes has a multi-dimensional scale envelope with each resource type repres
The control plane manages all the resource scaling in the cluster, so the more you scale the cluster within a given dimension, the less you can scale within other dimensions. For example, running hundreds of thousands of pods in an AKS cluster impacts how much pod churn rate (pod mutations per second) the control plane can support.
-The size of the envelope is proportional to the size of the Kubernetes control plane. AKS supports two control plane tiers as part of the Base SKU: the Free tier and the Standard tier. For more information, see [Free and Standard pricing tiers for AKS cluster management][free-standard-tier].
+The size of the envelope is proportional to the size of the Kubernetes control plane. AKS supports three control plane tiers as part of the Base SKU: Free, Standard, and Premium tier. For more information, see [Free, Standard, and Premium pricing tiers for AKS cluster management][pricing-tiers].
> [!IMPORTANT]
-> We highly recommend using the Standard tier for production or at-scale workloads. AKS automatically scales up the Kubernetes control plane to support the following scale limits:
+> We highly recommend using the Standard or Premium tier for production or at-scale workloads. AKS automatically scales up the Kubernetes control plane to support the following scale limits:
>
> * Up to 5,000 nodes per AKS cluster
> * 200,000 pods per AKS cluster (with Azure CNI Overlay)
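A minimal Azure CLI sketch for selecting the control plane tier; the resource group and cluster names are placeholders.

```azurecli-interactive
# Create a new cluster on the Standard tier (names are placeholders)
az aks create --resource-group myResourceGroup --name myAKSCluster --tier standard
# Or move an existing cluster to the Standard tier
az aks update --resource-group myResourceGroup --name myAKSCluster --tier standard
```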
As you scale your AKS clusters to larger scale points, keep the following node p
[managed-nat-gateway]: ./nat-gateway.md [azure-cni-dynamic-ip]: ./configure-azure-cni-dynamic-ip-allocation.md [azure-cni-overlay]: ./azure-cni-overlay.md
-[free-standard-tier]: ./free-standard-pricing-tiers.md
+[pricing-tiers]: ./free-standard-pricing-tiers.md
[cluster-autoscaler]: cluster-autoscaler.md [azure-npm]: ../virtual-network/kubernetes-network-policies.md
aks Configure Kube Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kube-proxy.md
Title: Configure kube-proxy (iptables/IPVS) (Preview)
description: Learn how to configure kube-proxy to utilize different load balancing configurations with Azure Kubernetes Service (AKS). -+ Last updated 09/25/2023
You can view the full `kube-proxy` configuration structure in the [AKS Cluster S
```azurecli-interactive # Create a new cluster az aks create -g <resourceGroup> -n <clusterName> --kube-proxy-config kube-proxy.json
-
+ # Update an existing cluster az aks update -g <resourceGroup> -n <clusterName> --kube-proxy-config kube-proxy.json ```
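The `kube-proxy.json` file referenced by these commands holds the configuration itself. The following is a sketch only; the field names are an assumption based on the configuration structure linked above, so verify them against the AKS cluster schema before use.

```bash
# Sketch of a kube-proxy configuration file that switches the cluster to IPVS mode.
# Field names are an assumption taken from the kube-proxy configuration structure
# referenced above; verify them against the AKS cluster schema before applying.
cat > kube-proxy.json <<'EOF'
{
  "enabled": true,
  "mode": "IPVS",
  "ipvsConfig": {
    "scheduler": "LeastConnection",
    "TCPTimeoutSeconds": 900,
    "TCPFINTimeoutSeconds": 120,
    "UDPTimeoutSeconds": 300
  }
}
EOF
```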
aks Configure Kubenet Dual Stack https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kubenet-dual-stack.md
Last updated 12/07/2023-+ # Use dual-stack kubenet networking in Azure Kubernetes Service (AKS)
AKS configures the required supporting services for dual-stack networking. This
* Load balancer setup for IPv4 and IPv6 services. > [!NOTE]
-> When using Dualstack with an [outbound type][outbound-type] of user-defined routing, you can choose to have a default route for IPv6 depending on if you need your IPv6 traffic to reach the internet or not. If you don't have a default route for IPv6, a warning will surface when creating a cluster but will not prevent cluster creation.
+> When using Dualstack with an [outbound type][outbound-type] of user-defined routing, you can choose to have a default route for IPv6 depending on if you need your IPv6 traffic to reach the internet or not. If you don't have a default route for IPv6, a warning will surface when creating a cluster but will not prevent cluster creation.
## Deploying a dual-stack cluster
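As a hedged sketch of what the deployment can look like with the Azure CLI (resource group and cluster names are placeholders), a dual-stack kubenet cluster is created by requesting both IP families:

```bash
# Sketch: create a dual-stack kubenet cluster by requesting both IPv4 and IPv6.
# Resource group and cluster names are placeholders.
az aks create \
    --resource-group myResourceGroup \
    --name myDualStackCluster \
    --network-plugin kubenet \
    --ip-families ipv4,ipv6 \
    --generate-ssh-keys
```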
aks Csi Secrets Store Configuration Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-configuration-options.md
Title: Azure Key Vault provider for Secrets Store CSI Driver for Azure Kubernetes Service (AKS) configuration and troubleshooting options description: Learn configuration and troubleshooting options for the Azure Key Vault provider for Secrets Store CSI Driver in Azure Kubernetes Service (AKS).-+ -+ Last updated 10/19/2023-+ # Azure Key Vault provider for Secrets Store CSI Driver for Azure Kubernetes Service (AKS) configuration and troubleshooting options
You might want to create a Kubernetes secret to mirror your mounted secrets cont
metadata: name: azure-sync spec:
- provider: azure
+ provider: azure
secretObjects: # [OPTIONAL] SecretObjects defines the desired state of synced Kubernetes secret objects - data: - key: username # data field to populate
You might want to create a Kubernetes secret to mirror your mounted secrets cont
spec: containers: - name: busybox
- image: registry.k8s.io/e2e-test-images/busybox:1.29-1
+ image: registry.k8s.io/e2e-test-images/busybox:1.29-1
command: - "/bin/sleep" - "10000"
aks Csi Secrets Store Driver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-driver.md
Title: Use the Azure Key Vault provider for Secrets Store CSI Driver for Azure Kubernetes Service (AKS) secrets description: Learn how to use the Azure Key Vault provider for Secrets Store CSI Driver to integrate secrets stores with Azure Kubernetes Service (AKS).-+ -+ Last updated 12/06/2023-+ # Use the Azure Key Vault provider for Secrets Store CSI Driver in an Azure Kubernetes Service (AKS) cluster
aks Csi Secrets Store Identity Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-identity-access.md
Title: Access Azure Key Vault with the CSI Driver Identity Provider description: Learn how to integrate the Azure Key Vault Provider for Secrets Store CSI Driver with your Azure credentials and user identities.-+ Last updated 12/19/2023-+ # Connect your Azure identity provider to the Azure Key Vault Secrets Store CSI Driver in Azure Kubernetes Service (AKS)
You can use one of the following access methods:
A [Microsoft Entra Workload ID][workload-identity] is an identity that an application running on a pod uses to authenticate itself against other Azure services, such as workloads in software. The Secret Store CSI Driver integrates with native Kubernetes capabilities to federate with external identity providers.
-In this security model, the AKS cluster acts as token issuer. Microsoft Entra ID then uses OIDC to discover public signing keys and verify the authenticity of the service account token before exchanging it for a Microsoft Entra token. For your workload to exchange a service account token projected to its volume for a Microsoft Entra token, you need the Azure Identity client library in the Azure SDK or the Microsoft Authentication Library (MSAL)
+In this security model, the AKS cluster acts as token issuer. Microsoft Entra ID then uses OIDC to discover public signing keys and verify the authenticity of the service account token before exchanging it for a Microsoft Entra token. For your workload to exchange a service account token projected to its volume for a Microsoft Entra token, you need the Azure Identity client library in the Azure SDK or the Microsoft Authentication Library (MSAL).
> [!NOTE] >
In this security model, the AKS cluster acts as token issuer. Microsoft Entra ID
export UAMI=<name for user assigned identity> export KEYVAULT_NAME=<existing keyvault name> export CLUSTER_NAME=<aks cluster name>
-
+ az account set --subscription $SUBSCRIPTION_ID ```
In this security model, the AKS cluster acts as token issuer. Microsoft Entra ID
```bash export SERVICE_ACCOUNT_NAME="workload-identity-sa" # sample name; can be changed export SERVICE_ACCOUNT_NAMESPACE="default" # can be changed to namespace of your workload
-
+ cat <<EOF | kubectl apply -f - apiVersion: v1 kind: ServiceAccount
In this security model, the AKS cluster acts as token issuer. Microsoft Entra ID
spec: provider: azure parameters:
- usePodIdentity: "false"
+ usePodIdentity: "false"
clientID: "${USER_ASSIGNED_CLIENT_ID}" # Setting this to use workload identity keyvaultName: ${KEYVAULT_NAME} # Set to the name of your key vault cloudName: "" # [OPTIONAL for Azure] if not provided, the Azure environment defaults to AzurePublicCloud
In this security model, the AKS cluster acts as token issuer. Microsoft Entra ID
apiVersion: v1 metadata: name: busybox-secrets-store-inline-wi
- labels:
+ labels:
azure.workload.identity/use: "true" spec: serviceAccountName: "workload-identity-sa"
In this security model, the AKS cluster acts as token issuer. Microsoft Entra ID
readOnly: true volumeAttributes: secretProviderClass: "azure-kvname-wi"
- EOF
+ EOF
``` <a name='access-with-a-user-assigned-managed-identity'></a>
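Before leaving the workload identity flow above, note the federation step that ties the user-assigned identity to the Kubernetes service account. The following is a minimal sketch; it assumes the variables exported earlier in this section (`$UAMI`, `$CLUSTER_NAME`, `$SERVICE_ACCOUNT_NAMESPACE`, `$SERVICE_ACCOUNT_NAME`), `<resource-group>` is a placeholder, and the credential name is arbitrary.

```bash
# Sketch: federate the user-assigned identity with the Kubernetes service account so the
# Secrets Store CSI Driver can exchange the projected token for a Microsoft Entra token.
export AKS_OIDC_ISSUER="$(az aks show --resource-group <resource-group> --name $CLUSTER_NAME --query oidcIssuerProfile.issuerUrl -o tsv)"

az identity federated-credential create \
    --name "kubernetes-federated-credential" \
    --identity-name $UAMI \
    --resource-group <resource-group> \
    --issuer $AKS_OIDC_ISSUER \
    --subject "system:serviceaccount:${SERVICE_ACCOUNT_NAMESPACE}:${SERVICE_ACCOUNT_NAME}"
```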
-## Access with managed identity
+## Access with managed identity
-A [Microsoft Entra Managed ID][managed-identity] is an identity that an administrator uses to authenticate themselves against other Azure services. The managed identity uses RBAC to federate with external identity providers.
+A [Microsoft Entra Managed ID][managed-identity] is an identity that an administrator uses to authenticate themselves against other Azure services. The managed identity uses RBAC to federate with external identity providers.
In this security model, you can grant access to your cluster's resources to team members or tenants sharing a managed role. The role is checked for scope to access the keyvault and other credentials. When you [enabled the Azure Key Vault provider for Secrets Store CSI Driver on your AKS Cluster](./csi-secrets-store-driver.md#create-an-aks-cluster-with-azure-key-vault-provider-for-secrets-store-csi-driver-support), it created a user identity.
In this security model, you can grant access to your cluster's resources to team
Alternatively, you can create a new managed identity and assign it to your virtual machine (VM) scale set or to each VM instance in your availability set using the following commands. ```azurecli-interactive
- az identity create -g <resource-group> -n <identity-name>
+ az identity create -g <resource-group> -n <identity-name>
az vmss identity assign -g <resource-group> -n <agent-pool-vmss> --identities <identity-resource-id> az vm identity assign -g <resource-group> -n <agent-pool-vm> --identities <identity-resource-id> ```
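The identity also needs permission to read secrets from the key vault. A hedged sketch using Azure RBAC follows; it assumes an RBAC-enabled vault, and the placeholders match the command above.

```bash
# Sketch: grant the identity read access to secrets in the key vault (RBAC-enabled vault assumed).
KEYVAULT_SCOPE=$(az keyvault show --name <keyvault-name> --query id -o tsv)
IDENTITY_CLIENT_ID=$(az identity show -g <resource-group> -n <identity-name> --query clientId -o tsv)

az role assignment create \
    --role "Key Vault Secrets User" \
    --assignee $IDENTITY_CLIENT_ID \
    --scope $KEYVAULT_SCOPE
```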
aks Deploy Confidential Containers Default Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/deploy-confidential-containers-default-policy.md
Title: Deploy an AKS cluster with Confidential Containers (preview)
description: Learn how to create an Azure Kubernetes Service (AKS) cluster with Confidential Containers (preview) and a default security policy by using the Azure CLI. Last updated 01/10/2024-+ # Deploy an AKS cluster with Confidential Containers and a default policy
To configure the workload identity, perform the following steps described in the
The following steps configure end-to-end encryption for Kafka messages using encryption keys managed by [Azure Managed Hardware Security Modules][azure-managed-hsm] (mHSM). The key is only released when the Kafka consumer runs within a Confidential Container with an Azure attestation secret provisioning container injected in to the pod.
-This configuration is basedon the following four components:
+This configuration is based on the following four components:
* Kafka Cluster: A simple Kafka cluster deployed in the Kafka namespace on the cluster. * Kafka Producer: A Kafka producer running as a vanilla Kubernetes pod that sends encrypted user-configured messages using a public key to a Kafka topic.
For this preview release, we recommend for test and evaluation purposes to eithe
>The managed identity is the value you assigned to the `USER_ASSIGNED_IDENTITY_NAME` variable. >[!NOTE]
- >To add role assignments, you must have `Microsoft.Authorization/roleAssignments/write` and `Microsoft.Authorization/roleAssignments/delete` permissions, such as [Key Vault Data Access Administrator][key-vault-data-access-admin-rbac], [User Access Administrator][user-access-admin-rbac],or [Owner][owner-rbac].
+ >To add role assignments, you must have `Microsoft.Authorization/roleAssignments/write` and `Microsoft.Authorization/roleAssignments/delete` permissions, such as [Key Vault Data Access Administrator][key-vault-data-access-admin-rbac], [User Access Administrator][user-access-admin-rbac], or [Owner][owner-rbac].
Run the following command to set the scope: ```azurecli-interactive
- AKV_SCOPE=$(az keyvault show --name <AZURE_AKV_RESOURCE_NAME> --query id --output tsv)
+ AKV_SCOPE=$(az keyvault show --name <AZURE_AKV_RESOURCE_NAME> --query id --output tsv)
``` Run the following command to assign the **Key Vault Crypto Officer** role.
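As a sketch of that assignment (the resource group is a placeholder, and the object ID lookup assumes the managed identity created earlier with the `USER_ASSIGNED_IDENTITY_NAME` value):

```bash
# Sketch: assign the Key Vault Crypto Officer role to the managed identity at the vault scope.
# <resource-group> is a placeholder; AKV_SCOPE comes from the previous command.
IDENTITY_OBJECT_ID=$(az identity show --name $USER_ASSIGNED_IDENTITY_NAME --resource-group <resource-group> --query principalId -o tsv)

az role assignment create \
    --role "Key Vault Crypto Officer" \
    --assignee-object-id $IDENTITY_OBJECT_ID \
    --assignee-principal-type ServicePrincipal \
    --scope $AKV_SCOPE
```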
For this preview release, we recommend for test and evaluation purposes to eithe
targetPort: kafka-consumer ```
-1. Create a Kafka namespace by running the following command:
+1. Create a kafka namespace by running the following command:
```bash kubectl create namespace kafka ```
-1. Install the Kafka cluster in the Kafka namespace by running the following command::
+1. Install the Kafka cluster in the kafka namespace by running the following command:
```bash kubectl create -f 'https://strimzi.io/install/latest?namespace=kafka' -n kafka ```
-1. Run the following command to apply the `Kafka` cluster CR file.
+1. Run the following command to apply the `kafka` cluster CR file.
```bash kubectl apply -f https://strimzi.io/examples/latest/kafka/kafka-persistent-single.yaml -n kafka
For this preview release, we recommend for test and evaluation purposes to eithe
```
-1. Prepare the RSA Encryption/Decryption key by [https://github.com/microsoft/confidential-container-demos/blob/main/kafka/setup-key.sh] the Bash script for the workload from GitHub. Save the file as `setup-key.sh`.
+1. Prepare the RSA Encryption/Decryption key by the [bash script](https://github.com/microsoft/confidential-container-demos/raw/main/kafka/setup-key.sh) for the workload from GitHub. Save the file as `setup-key.sh`.
1. Set the `MAA_ENDPOINT` environmental variable to match the value for the `SkrClientMAAEndpoint` from the `consumer.yaml` manifest file by running the following command.
For this preview release, we recommend for test and evaluation purposes to eithe
1. Get the IP address of the web service using the following command: ```bash
- kubectl get svc consumer -n kafka
+ kubectl get svc consumer -n kafka
```
-Copy and paste the external IP address of the consumer service into your browser and observe the decrypted message.
+1. Copy and paste the external IP address of the consumer service into your browser and observe the decrypted message.
-The following resemblers the output of the command:
+ The following resembles the output of the command:
-```output
-Welcome to Confidential Containers on AKS!
-Encrypted Kafka Message:
-Msg 1: Azure Confidential Computing
-```
+ ```output
+ Welcome to Confidential Containers on AKS!
+ Encrypted Kafka Message:
+ Msg 1: Azure Confidential Computing
+ ```
-You should also attempt to run the consumer as a regular Kubernetes pod by removing the `skr container` and `kata-cc runtime class` spec. Since you aren't running the consumer with kata-cc runtime class, you no longer need the policy.
+1. You should also attempt to run the consumer as a regular Kubernetes pod by removing the `skr container` and `kata-cc runtime class` spec. Since you aren't running the consumer with kata-cc runtime class, you no longer need the policy.
-Remove the entire policy and observe the messages again in the browser after redeploying the workload. Messages appear as base64-encoded ciphertext because the private encryption key can't be retrieved. The key can't be retrieved because the consumer is no longer running in a confidential environment, and the `skr container` is missing, preventing decryption of messages.
+1. Remove the entire policy and observe the messages again in the browser after redeploying the workload. Messages appear as base64-encoded ciphertext because the private encryption key can't be retrieved. The key can't be retrieved because the consumer is no longer running in a confidential environment, and the `skr container` is missing, preventing decryption of messages.
## Cleanup When you're finished evaluating this feature, to avoid Azure charges, clean up your unnecessary resources. If you deployed a new cluster as part of your evaluation or testing, you can delete the cluster using the [az aks delete][az-aks-delete] command. ```azurecli-interactive
-az aks delete --resource-group myResourceGroup --name myAKSCluster
+az aks delete --resource-group myResourceGroup --name myAKSCluster
``` If you enabled Confidential Containers (preview) on an existing cluster, you can remove the pod(s) using the [kubectl delete pod][kubectl-delete-pod] command.
aks Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/faq.md
Title: Frequently asked questions for Azure Kubernetes Service (AKS)
description: Find answers to some of the common questions about Azure Kubernetes Service (AKS). Last updated 11/06/2023-+ # Frequently asked questions about Azure Kubernetes Service (AKS)
Any patch, including a security patch, is automatically applied to the AKS clust
## What is the purpose of the AKS Linux Extension I see installed on my Linux Virtual Machine Scale Sets instances?
-The AKS Linux Extension is an Azure VM extension that installs and configures monitoring tools on Kubernetes worker nodes. The extension is installed on all new and existing Linux nodes. It configures the following monitoring tools:
+The AKS Linux Extension is an Azure VM extension that installs and configures monitoring tools on Kubernetes worker nodes. The extension is installed on all new and existing Linux nodes. It configures the following monitoring tools:
- [Node-exporter](https://github.com/prometheus/node_exporter): Collects hardware telemetry from the virtual machine and makes it available using a metrics endpoint. Then, a monitoring tool, such as Prometheus, is able to scrape these metrics. - [Node-problem-detector](https://github.com/kubernetes/node-problem-detector): Aims to make various node problems visible to upstream layers in the cluster management stack. It's a systemd unit that runs on each node, detects node problems, and reports them to the cluster's API server using Events and NodeConditions.
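If you want to see what node-problem-detector reports, the standard Kubernetes API already surfaces it. A quick sketch, with a placeholder node name:

```bash
# View the NodeConditions that node-problem-detector reports to the API server.
# The node name is a placeholder; list your nodes first with `kubectl get nodes`.
kubectl get node aks-nodepool1-12345678-vmss000000 \
    -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.message}{"\n"}{end}'

# Recent node events, including those emitted by node-problem-detector.
kubectl describe node aks-nodepool1-12345678-vmss000000 | grep -A 20 "Events:"
```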
aks Howto Deploy Java Liberty App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-liberty-app.md
description: Deploy a Java application with Open Liberty/WebSphere Liberty on an
Last updated 12/21/2022 keywords: java, jakartaee, javaee, microprofile, open-liberty, websphere-liberty, aks, kubernetes-+ # Deploy a Java application with Open Liberty or WebSphere Liberty on an Azure Kubernetes Service (AKS) cluster
The following steps guide you to create a Liberty runtime on AKS. After completi
1. Visit the [Azure portal](https://portal.azure.com/). In the search box at the top of the page, type *IBM WebSphere Liberty and Open Liberty on Azure Kubernetes Service*. When the suggestions start appearing, select the one and only match that appears in the **Marketplace** section. If you prefer, you can go directly to the offer with this shortcut link: [https://aka.ms/liberty-aks](https://aka.ms/liberty-aks). 1. Select **Create**.
-1. In the **Basics** pane, create a new resource group. Because resource groups must be unique within a subscription, pick a unique name. An easy way to have unique names is to use a combination of your initials, today's date, and some identifier. For example, `ejb0913-java-liberty-project-rg`. Select *East US* as **Region**. Select **Next** to **AKS** pane.
+1. In the **Basics** pane, create a new resource group. Because resource groups must be unique within a subscription, pick a unique name. An easy way to have unique names is to use a combination of your initials, today's date, and some identifier. For example, `ejb0913-java-liberty-project-rg`. Select *East US* as **Region**. Select **Next** to **AKS** pane.
1. This pane allows you to select an existing AKS cluster and Azure Container Registry (ACR), instead of causing the deployment to create a new one, if desired. This capability enables you to use the sidecar pattern, as shown in the [Azure architecture center](/azure/architecture/patterns/sidecar). You can also adjust the settings for the size and number of the virtual machines in the AKS node pool. Leave all other values at the defaults and select **Next** to **Load balancing** pane. 1. Next to **Connect to Azure Application Gateway?** select **Yes**. This section lets you customize the following deployment options. 1. You can customize the virtual network and subnet into which the deployment will place the resources. Leave these values at their defaults.
You can now run and test the project locally before deploying to Azure. For conv
### Build image for AKS deployment
-You can now run the `docker build` command to build the image.
+You can now run the `docker build` command to build the image.
```bash cd <path-to-your-repo>/java-app/target
The following steps deploy and test the application.
``` Copy the value of **ADDRESS** from the output; this is the frontend public IP address of the deployed Azure Application Gateway.
-
+ 1. Go to `https://<ADDRESS>` to test the application. For your convenience, this shell command will create an environment variable whose value you can paste straight into the browser.
-
+ ```bash export APP_URL=https://$(kubectl get ingress | grep javaee-cafe-cluster-agic-ingress | cut -d " " -f14)/ echo $APP_URL
aks Http Application Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/http-application-routing.md
Title: HTTP application routing add-on for Azure Kubernetes Service (AKS) (retired) description: Use the HTTP application routing add-on to access applications deployed on Azure Kubernetes Service (AKS) (retired). -+ Last updated 04/05/2023
The add-on deploys two components: a [Kubernetes ingress controller][ingress] an
apiVersion: apps/v1 kind: Deployment metadata:
- name: aks-helloworld
+ name: aks-helloworld
spec: replicas: 1 selector:
The add-on deploys two components: a [Kubernetes ingress controller][ingress] an
apiVersion: v1 kind: Service metadata:
- name: aks-helloworld
+ name: aks-helloworld
spec: type: ClusterIP ports:
The add-on deploys two components: a [Kubernetes ingress controller][ingress] an
- path: / pathType: Prefix backend:
- service:
+ service:
name: aks-helloworld
- port:
+ port:
number: 80 ```
aks Ingress Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-tls.md
Title: Use TLS with an ingress controller on Azure Kubernetes Service (AKS)
description: Learn how to install and configure an ingress controller that uses TLS in an Azure Kubernetes Service (AKS) cluster. -+
In the following example, traffic is routed as such:
backend: service: name: aks-helloworld-one
- port:
+ port:
number: 80 ```
Alternatively, you can delete the resource individually.
$ helm list --namespace ingress-basic NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
- cert-manager ingress-basic 1 2020-01-15 10:23:36.515514 -0600 CST deployed cert-manager-v0.13.0 v0.13.0
- nginx ingress-basic 1 2020-01-15 10:09:45.982693 -0600 CST deployed nginx-ingress-1.29.1 0.27.0
+ cert-manager ingress-basic 1 2020-01-15 10:23:36.515514 -0600 CST deployed cert-manager-v0.13.0 v0.13.0
+ nginx ingress-basic 1 2020-01-15 10:09:45.982693 -0600 CST deployed nginx-ingress-1.29.1 0.27.0
``` 3. Uninstall the releases using the `helm uninstall` command. The following example uninstalls the NGINX ingress and cert-manager deployments.
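As a sketch based on the release names and namespace in the sample output above:

```bash
# Uninstall the cert-manager and NGINX ingress releases shown in the sample output above.
helm uninstall cert-manager nginx --namespace ingress-basic
```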
aks Istio Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/istio-upgrade.md
This article addresses upgrade experiences for Istio-based service mesh add-on f
## How Istio components are upgraded
-**Minor version:** Currently the Istio add-on only has minor version 1.17 available. Minor version upgrade experiences are planned for when newer versions of Istio (1.18) are introduced.
+### Minor version upgrade
-**Patch version:**
+The Istio add-on allows upgrading the minor version using the [canary upgrade process][istio-canary-upstream]. When an upgrade is initiated, the control plane of the new (canary) revision is deployed alongside the old (stable) revision's control plane. You can then manually roll over data plane workloads while using monitoring tools to track the health of workloads during this process. If you don't observe any issues with the health of your workloads, you can complete the upgrade so that only the new revision remains on the cluster. Otherwise, you can roll back to the previous revision of Istio.
+
+If the cluster is currently using a supported minor version of Istio, upgrades are only allowed one minor version at a time. If the cluster is using an unsupported version of Istio, you must upgrade to the lowest supported minor version of Istio for that Kubernetes version. After that, upgrades can again be done one minor version at a time.
+
+The following example illustrates how to upgrade from revision `asm-1-17` to `asm-1-18`. The steps are the same for all minor upgrades.
+
+1. Use the [az aks mesh get-upgrades](/cli/azure/aks/mesh#az-aks-mesh-get-upgrades) command to check which revisions are available for the cluster as upgrade targets:
+
+ ```bash
+ az aks mesh get-upgrades --resource-group $RESOURCE_GROUP --name $CLUSTER
+ ```
+
+ If you expect to see a newer revision not returned by this command, you may need to upgrade your AKS cluster first so that it's compatible with the newest revision.
+
+1. Initiate a canary upgrade from revision `asm-1-17` to `asm-1-18` using [az aks mesh upgrade start](/cli/azure/aks/mesh#az-aks-mesh-upgrade-start):
+
+ ```bash
+ az aks mesh upgrade start --resource-group $RESOURCE_GROUP --name $CLUSTER --revision asm-1-18
+ ```
+
+ A canary upgrade means the 1.18 control plane is deployed alongside the 1.17 control plane. They continue to coexist until you either complete or roll back the upgrade.
+
+1. Verify control plane pods corresponding to both `asm-1-17` and `asm-1-18` exist:
+
+ * Verify `istiod` pods:
+
+ ```bash
+ kubectl get pods -n aks-istio-system
+ ```
+
+ Example output:
+
+ ```
+ NAME READY STATUS RESTARTS AGE
+ istiod-asm-1-17-55fccf84c8-dbzlt 1/1 Running 0 58m
+ istiod-asm-1-17-55fccf84c8-fg8zh 1/1 Running 0 58m
+ istiod-asm-1-18-f85f46bf5-7rwg4 1/1 Running 0 51m
+ istiod-asm-1-18-f85f46bf5-8p9qx 1/1 Running 0 51m
+ ```
+
+ * If ingress is enabled, verify ingress pods:
+
+ ```bash
+ kubectl get pods -n aks-istio-ingress
+ ```
+
+ Example output:
+
+ ```
+ NAME READY STATUS RESTARTS AGE
+ aks-istio-ingressgateway-external-asm-1-17-58f889f99d-qkvq2 1/1 Running 0 59m
+ aks-istio-ingressgateway-external-asm-1-17-58f889f99d-vhtd5 1/1 Running 0 58m
+ aks-istio-ingressgateway-external-asm-1-18-7466f77bb9-ft9c8 1/1 Running 0 51m
+ aks-istio-ingressgateway-external-asm-1-18-7466f77bb9-wcb6s 1/1 Running 0 51m
+ aks-istio-ingressgateway-internal-asm-1-17-579c5d8d4b-4cc2l 1/1 Running 0 58m
+ aks-istio-ingressgateway-internal-asm-1-17-579c5d8d4b-jjc7m 1/1 Running 0 59m
+ aks-istio-ingressgateway-internal-asm-1-18-757d9b5545-g89s4 1/1 Running 0 51m
+ aks-istio-ingressgateway-internal-asm-1-18-757d9b5545-krq9w 1/1 Running 0 51m
+ ```
+
+ Observe that ingress gateway pods of both revisions are deployed side-by-side. However, the service and its IP remain immutable.
+
+1. Relabel the namespace so that any new pods get the Istio sidecar associated with the new revision and its control plane:
+
+ ```bash
+ kubectl label namespace default istio.io/rev=asm-1-18 --overwrite
+ ```
+
+ Relabeling doesn't affect your workloads until they're restarted.
+
+1. Individually roll over each of your application workloads by restarting them. For example:
+
+ ```bash
+ kubectl rollout restart deployment <deployment name> -n <deployment namespace>
+ ```
+
+1. Check your monitoring tools and dashboards to determine whether your workloads are all running in a healthy state after the restart. Based on the outcome, you have two options:
+
+ * **Complete the canary upgrade**: If you're satisfied that the workloads are all running in a healthy state as expected, you can complete the canary upgrade. This will remove the previous revision's control plane and leave behind the new revision's control plane on the cluster. Run the following command to complete the canary upgrade:
+
+ ```bash
+ az aks mesh upgrade complete --resource-group $RESOURCE_GROUP --name $CLUSTER
+ ```
+
+ * **Rollback the canary upgrade**: In case you observe any issues with the health of your workloads, you can roll back to the previous revision of Istio:
+
+ * Relabel the namespace to the previous revision
+
+ ```bash
+ kubectl label namespace default istio.io/rev=asm-1-17 --overwrite
+ ```
+
+ * Roll back the workloads to use the sidecar corresponding to the previous Istio revision by restarting these workloads again:
+
+ ```bash
+ kubectl rollout restart deployment <deployment name> -n <deployment namespace>
+ ```
+
+ * Roll back the control plane to the previous revision:
+
+ ```
+ az aks mesh upgrade rollback --resource-group $RESOURCE_GROUP --name $CLUSTER
+ ```
+
+> [!NOTE]
+> Manually relabeling namespaces when moving them to a new revision can be tedious and error-prone. [Revision tags](https://istio.io/latest/docs/setup/upgrade/canary/#stable-revision-labels) solve this problem. Revision tags are stable identifiers that point to revisions and can be used to avoid relabeling namespaces. Rather than relabeling the namespace, a mesh operator can simply change the tag to point to a new revision. All namespaces labeled with that tag are updated at the same time. However, you still need to restart the workloads to make sure the correct version of the `istio-proxy` sidecar is injected.
+
+### Patch version upgrade
* Istio add-on patch version availability information is published in [AKS weekly release notes][aks-release-notes].
-* Patches are rolled out automatically for istiod and ingress pods as part of these AKS weekly releases.
+* Patches are rolled out automatically for istiod and ingress pods as part of these AKS weekly releases, which respect the `default` [planned maintenance window](./planned-maintenance.md) set up for the cluster.
* User needs to initiate patches to Istio proxy in their workloads by restarting the pods for reinjection: * Check the version of the Istio proxy intended for new or restarted pods. This version is the same as the version of the istiod and Istio ingress pods after they were patched:
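One way to check, shown here as a sketch that assumes your workloads run in the `default` namespace, is to list the container images of each pod, which include the injected `istio-proxy` sidecar:

```bash
# List each pod with the images of its containers, including the injected istio-proxy sidecar,
# to confirm which proxy version new or restarted pods receive. The namespace is an assumption.
kubectl get pods -n default \
    -o jsonpath='{range .items[*]}{.metadata.name}{": "}{range .spec.containers[*]}{.image}{", "}{end}{"\n"}{end}'
```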
This article addresses upgrade experiences for Istio-based service mesh add-on f
productpage-v1-979d4d9fc-p4764: docker.io/istio/examples-bookinfo-productpage-v1:1.17.0, mcr.microsoft.com/oss/istio/proxyv2:1.17.2-distroless ```
-[aks-release-notes]: https://github.com/Azure/AKS/releases
+[aks-release-notes]: https://github.com/Azure/AKS/releases
+[istio-canary-upstream]: https://istio.io/latest/docs/setup/upgrade/canary/
aks Quick Kubernetes Deploy Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-bicep.md
Title: 'Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using Bicep
description: Learn how to quickly deploy a Kubernetes cluster using a Bicep file and deploy an application in Azure Kubernetes Service (AKS). Last updated 12/27/2023-+ #Customer intent: As a developer or cluster operator, I want to quickly deploy an AKS cluster and deploy an application so that I can see how to run applications using the managed Kubernetes service in Azure.
To deploy the application, you use a manifest file to create all the objects req
[rabbitmq_management,rabbitmq_prometheus,rabbitmq_amqp1_0]. kind: ConfigMap metadata:
- name: rabbitmq-enabled-plugins
+ name: rabbitmq-enabled-plugins
apiVersion: v1 kind: Service
To deploy the application, you use a manifest file to create all the objects req
memory: 50Mi limits: cpu: 75m
- memory: 128Mi
+ memory: 128Mi
apiVersion: v1 kind: Service
To deploy the application, you use a manifest file to create all the objects req
ports: - containerPort: 8080 name: store-front
- env:
+ env:
- name: VUE_APP_ORDER_SERVICE_URL value: "http://order-service:3000/" - name: VUE_APP_PRODUCT_SERVICE_URL
aks Quick Kubernetes Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-cli.md
Title: 'Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using Azure
description: Learn how to quickly deploy a Kubernetes cluster and deploy an application in Azure Kubernetes Service (AKS) using Azure CLI. Last updated 01/10/2024-+ #Customer intent: As a developer or cluster operator, I want to deploy an AKS cluster and deploy an application so I can see how to run applications using the managed Kubernetes service in Azure.
To deploy the application, you use a manifest file to create all the objects req
[rabbitmq_management,rabbitmq_prometheus,rabbitmq_amqp1_0]. kind: ConfigMap metadata:
- name: rabbitmq-enabled-plugins
+ name: rabbitmq-enabled-plugins
apiVersion: v1 kind: Service
To deploy the application, you use a manifest file to create all the objects req
memory: 50Mi limits: cpu: 75m
- memory: 128Mi
+ memory: 128Mi
apiVersion: v1 kind: Service
To deploy the application, you use a manifest file to create all the objects req
ports: - containerPort: 8080 name: store-front
- env:
+ env:
- name: VUE_APP_ORDER_SERVICE_URL value: "http://order-service:3000/" - name: VUE_APP_PRODUCT_SERVICE_URL
aks Quick Kubernetes Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-portal.md
description: Learn how to quickly deploy a Kubernetes cluster and deploy an application in Azure Kubernetes Service (AKS) using the Azure portal. Last updated 01/11/2024-+ #Customer intent: As a developer or cluster operator, I want to quickly deploy an AKS cluster and deploy an application so that I can see how to run and monitor applications using the managed Kubernetes service in Azure.
To deploy the application, you use a manifest file to create all the objects req
[rabbitmq_management,rabbitmq_prometheus,rabbitmq_amqp1_0]. kind: ConfigMap metadata:
- name: rabbitmq-enabled-plugins
+ name: rabbitmq-enabled-plugins
apiVersion: v1 kind: Service
To deploy the application, you use a manifest file to create all the objects req
memory: 50Mi limits: cpu: 75m
- memory: 128Mi
+ memory: 128Mi
apiVersion: v1 kind: Service
To deploy the application, you use a manifest file to create all the objects req
ports: - containerPort: 8080 name: store-front
- env:
+ env:
- name: VUE_APP_ORDER_SERVICE_URL value: "http://order-service:3000/" - name: VUE_APP_PRODUCT_SERVICE_URL
aks Quick Kubernetes Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-powershell.md
Title: 'Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using Azure
description: Learn how to quickly deploy a Kubernetes cluster and deploy an application in Azure Kubernetes Service (AKS) using PowerShell. Last updated 01/11/2024-+ #Customer intent: As a developer or cluster operator, I want to quickly deploy an AKS cluster and deploy an application so that I can see how to run applications using the managed Kubernetes service in Azure.
To deploy the application, you use a manifest file to create all the objects req
[rabbitmq_management,rabbitmq_prometheus,rabbitmq_amqp1_0]. kind: ConfigMap metadata:
- name: rabbitmq-enabled-plugins
+ name: rabbitmq-enabled-plugins
apiVersion: v1 kind: Service
To deploy the application, you use a manifest file to create all the objects req
memory: 50Mi limits: cpu: 75m
- memory: 128Mi
+ memory: 128Mi
apiVersion: v1 kind: Service
To deploy the application, you use a manifest file to create all the objects req
ports: - containerPort: 8080 name: store-front
- env:
+ env:
- name: VUE_APP_ORDER_SERVICE_URL value: "http://order-service:3000/" - name: VUE_APP_PRODUCT_SERVICE_URL
aks Tutorial Kubernetes Workload Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/tutorial-kubernetes-workload-identity.md
Title: Tutorial - Use a workload identity with an application on Azure Kubernetes Service (AKS) description: In this Azure Kubernetes Service (AKS) tutorial, you deploy an Azure Kubernetes Service cluster and configure an application to use a workload identity. -+ Last updated 05/24/2023
To help simplify steps to configure the identities required, the steps below def
2. Add a secret to the vault using the [az keyvault secret set][az-keyvault-secret-set] command. The password is the value you specified for the environment variable `KEYVAULT_SECRET_NAME` and stores the value of **Hello!** in it. ```azurecli-interactive
- az keyvault secret set --vault-name "${KEYVAULT_NAME}" --name "${KEYVAULT_SECRET_NAME}" --value 'Hello!'
+ az keyvault secret set --vault-name "${KEYVAULT_NAME}" --name "${KEYVAULT_SECRET_NAME}" --value 'Hello!'
``` 3. Add the Key Vault URL to the environment variable `KEYVAULT_URL` using the [az keyvault show][az-keyvault-show] command.
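As a sketch of that step, reusing the `KEYVAULT_NAME` variable defined earlier:

```bash
# Capture the vault URI in the KEYVAULT_URL environment variable.
export KEYVAULT_URL="$(az keyvault show --name "${KEYVAULT_NAME}" --query properties.vaultUri --output tsv)"
```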
aks Limit Egress Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/limit-egress-traffic.md
Title: Limit Network Traffic with Azure Firewall in Azure Kubernetes Service (AKS) description: Learn how to control egress traffic with Azure Firewall to set restrictions for outbound network connections in AKS clusters. -+ Last updated 12/05/2023
#Customer intent: As a cluster operator, I want to restrict egress traffic for nodes to only access defined ports and addresses and improve cluster security.
-# Limit network traffic with Azure Firewall in Azure Kubernetes Service (AKS)
+# Limit network traffic with Azure Firewall in Azure Kubernetes Service (AKS)
Learn how to use the [Outbound network and FQDN rules for AKS clusters][outbound-fqdn-rules] to control egress traffic using the Azure Firewall in AKS. To simplify this configuration, Azure Firewall provides an Azure Kubernetes Service (`AzureKubernetesService`) Fully Qualified Domain Name (FQDN) tag that restricts outbound traffic from the AKS cluster. This article shows how you can configure your AKS Cluster traffic rules through Azure firewall.
If you don't have user-assigned identities, follow the steps in this section. If
The output should resemble the following example output: ```output
- {
+ {
"clientId": "<client-id>", "clientSecretUrl": "<clientSecretUrl>",
- "id": "/subscriptions/<subscriptionid>/resourcegroups/aks-egress-rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myIdentity",
+ "id": "/subscriptions/<subscriptionid>/resourcegroups/aks-egress-rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myIdentity",
"location": "eastus", "name": "myIdentity", "principalId": "<principal-id>",
- "resourceGroup": "aks-egress-rg",
+ "resourceGroup": "aks-egress-rg",
"tags": {}, "tenantId": "<tenant-id>", "type": "Microsoft.ManagedIdentity/userAssignedIdentities"
If you don't have user-assigned identities, follow the steps in this section. If
{ "clientId": "<client-id>", "clientSecretUrl": "<clientSecretUrl>",
- "id": "/subscriptions/<subscriptionid>/resourcegroups/aks-egress-rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myKubeletIdentity",
+ "id": "/subscriptions/<subscriptionid>/resourcegroups/aks-egress-rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myKubeletIdentity",
"location": "westus2", "name": "myKubeletIdentity", "principalId": "<principal-id>",
- "resourceGroup": "aks-egress-rg",
+ "resourceGroup": "aks-egress-rg",
"tags": {}, "tenantId": "<tenant-id>", "type": "Microsoft.ManagedIdentity/userAssignedIdentities"
aks Manage Ssh Node Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/manage-ssh-node-access.md
Use the [az aks create][az-aks-create] command to deploy an AKS cluster with an
|SSH parameter |Description |Default value | |--|--|--| |--generate-ssh-key |If you don't have your own SSH key, specify `--generate-ssh-key`. The Azure CLI first looks for the key in the `~/.ssh/` directory. If the key exists, it's used. If the key doesn't exist, the Azure CLI automatically generates a set of SSH keys and saves them in the specified or default directory.||
-|--ssh-key-vaule |Public key path or key contents to install on node VMs for SSH access. For example, `ssh-rsa AAAAB...snip...UcyupgH azureuser@linuxvm`.|`~.ssh\id_rsa.pub` |
+|--ssh-key-value |Public key path or key contents to install on node VMs for SSH access. For example, `ssh-rsa AAAAB...snip...UcyupgH azureuser@linuxvm`.|`~/.ssh/id_rsa.pub` |
|--no-ssh-key | If you don't require an SSH key, specify this argument. However, AKS automatically generates a set of SSH keys because the Azure Virtual Machine resource dependency doesn't support an empty SSH key file. As a result, the keys aren't returned and can't be used to SSH into the node VMs. || >[!NOTE]
aks Node Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-access.md
description: Learn how to connect to Azure Kubernetes Service (AKS) cluster node
Last updated 01/08/2024 -+ #Customer intent: As a cluster operator, I want to learn how to connect to virtual machines in an AKS cluster to perform maintenance or troubleshoot a problem. # Connect to Azure Kubernetes Service (AKS) cluster nodes for maintenance or troubleshooting
-Throughout the lifecycle of your Azure Kubernetes Service (AKS) cluster, you eventually need to directly access an AKS node. This access could be for maintenance, log collection, or troubleshooting operations.
+Throughout the lifecycle of your Azure Kubernetes Service (AKS) cluster, you eventually need to directly access an AKS node. This access could be for maintenance, log collection, or troubleshooting operations.
-You access a node through authentication, which methods vary depending on your Node OS and method of connection. You securely authenticate against AKS Linux and Windows nodes using SSH. Alternatively, for Windows Servers you can also connect to Windows Server nodes using the [remote desktop protocol (RDP)][aks-windows-rdp].
+You access a node through authentication; the available methods vary depending on your node OS and method of connection. You securely authenticate against AKS Linux and Windows nodes using SSH. Alternatively, you can connect to Windows Server nodes using the [remote desktop protocol (RDP)][aks-windows-rdp].
For security reasons, AKS nodes aren't exposed to the internet. Instead, to connect directly to any AKS nodes, you need to use either `kubectl debug` or the host's private IP address.
This guide shows you how to create a connection to an AKS node and update the SS
To follow along the steps, you need to use Azure CLI that supports version 2.0.64 or later. Run `az --version` to check the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
-Complete these steps if you don't have an SSH key. Create an SSH key depending on your Node OS Image, for [macOS and Linux][ssh-nix], or [Windows][ssh-windows]. Make sure you save the key pair in the OpenSSH format, avoid unsupported formats such as `.ppk`. Next, refer to [Manage SSH configuration][manage-ssh-node-access] to add the key to your cluster.
+Complete these steps if you don't have an SSH key. Create an SSH key depending on your node OS image, for [macOS and Linux][ssh-nix] or [Windows][ssh-windows]. Make sure you save the key pair in the OpenSSH format and avoid unsupported formats such as `.ppk`. Next, refer to [Manage SSH configuration][manage-ssh-node-access] to add the key to your cluster.
## Linux and macOS
To create an interactive shell connection, use the `kubectl debug` command to ru
Sample output: ```output
- NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE
- aks-nodepool1-37663765-vmss000000 Ready agent 166m v1.25.6 10.224.0.33 <none> Ubuntu 22.04.2 LTS
- aks-nodepool1-37663765-vmss000001 Ready agent 166m v1.25.6 10.224.0.4 <none> Ubuntu 22.04.2 LTS
- aksnpwin000000 Ready agent 160m v1.25.6 10.224.0.62 <none> Windows Server 2022 Datacenter
+ NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE
+ aks-nodepool1-37663765-vmss000000 Ready agent 166m v1.25.6 10.224.0.33 <none> Ubuntu 22.04.2 LTS
+ aks-nodepool1-37663765-vmss000001 Ready agent 166m v1.25.6 10.224.0.4 <none> Ubuntu 22.04.2 LTS
+ aksnpwin000000 Ready agent 160m v1.25.6 10.224.0.62 <none> Windows Server 2022 Datacenter
``` 2. Use the `kubectl debug` command to start a privileged container on your node and connect to it.
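A hedged sketch of that step follows; the node name comes from the sample output above and the container image is an assumption (any minimal Linux image with a shell works):

```bash
# Start a privileged debugging pod on the node and open an interactive shell.
# The node name is taken from the sample output above; the image is an assumption.
kubectl debug node/aks-nodepool1-37663765-vmss000000 -it --image=mcr.microsoft.com/cbl-mariner/busybox:2.0

# Inside the debugging pod, the node's filesystem is mounted at /host.
chroot /host
```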
kubectl delete pod node-debugger-aks-nodepool1-37663765-vmss000000-bkmmx
## Private IP Method
-If you don't have access to the Kubernetes API, you can get access to properties such as ```Node IP``` and ```Node Name``` through the [AKS Agent Pool Preview API][agent-pool-rest-api] (preview version 07-02-2023 or above) to troubleshoot node-specific issues in your AKS node pools.
+If you don't have access to the Kubernetes API, you can get access to properties such as `Node IP` and `Node Name` through the [AKS Agent Pool Preview API][agent-pool-rest-api] (preview version 07-02-2023 or above) to troubleshoot node-specific issues in your AKS node pools.
### Create an interactive shell connection to a node using the IP address
For convenience, the nodepools are exposed when the node has a public IP assigne
Sample output: ```output
- Name Ip
+ Name Ip
-- aks-nodepool1-33555069-vmss000000 10.224.0.5,family:IPv4; aks-nodepool1-33555069-vmss000001 10.224.0.6,family:IPv4;
- aks-nodepool1-33555069-vmss000002 10.224.0.4,family:IPv4;
+ aks-nodepool1-33555069-vmss000002 10.224.0.4,family:IPv4;
``` To target a specific node inside the nodepool, add a `--machine-name` flag:
For convenience, the nodepools are exposed when the node has a public IP assigne
Sample output: ```output
- Name Ip
+ Name Ip
-- aks-nodepool1-33555069-vmss000000 10.224.0.5,family:IPv4; ```
To connect to another node in the cluster, use the `kubectl debug` command. For
> [!IMPORTANT] >
-> The following steps for creating the SSH connection to the Windows Server node from another node can only be used if you created your AKS cluster using the Azure CLI and the `--generate-ssh-keys` parameter. The AKS Update command can also be used to manage, create SSH keys on an existing AKS cluster. For more information, see [manage SSH node access][manage-ssh-node-access].
+> The following steps for creating the SSH connection to the Windows Server node from another node can only be used if you created your AKS cluster using the Azure CLI and the `--generate-ssh-keys` parameter. You can also use the AKS update command to manage or create SSH keys on an existing AKS cluster. For more information, see [manage SSH node access][manage-ssh-node-access].
Finish the prior steps to use kubectl debug, then return to this section, as you need to run the `kubectl debug` in your proxy.
Finish the prior steps to use kubectl debug, then return to this section, as you
Sample output: ```output
- NAME INTERNAL_IP
- aks-nodepool1-19409214-vmss000003 10.224.0.8
+ NAME INTERNAL_IP
+ aks-nodepool1-19409214-vmss000003 10.224.0.8
``` In the previous example, *10.224.0.62* is the internal IP address of the Windows Server node.
To learn about managing your SSH keys, see [Manage SSH configuration][manage-ssh
[view-control-plane-logs]: monitor-aks-reference.md#resource-logs [install-azure-cli]: /cli/azure/install-azure-cli [aks-windows-rdp]: rdp.md
-[azure-bastion]: ../bastion/bastion-overview.md
+[azure-bastion]: ../bastion/bastion-overview.md
[ssh-nix]: ../virtual-machines/linux/mac-create-ssh-keys.md [ssh-windows]: ../virtual-machines/linux/ssh-from-windows.md [agent-pool-rest-api]: /rest/api/aks/agent-pools/get#agentpool
aks Node Image Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-image-upgrade.md
Title: Upgrade Azure Kubernetes Service (AKS) node images description: Learn how to upgrade the images on AKS cluster nodes and node pools. -+ Last updated 03/28/2023
This article shows you how to upgrade AKS cluster node images and how to update
> [!NOTE] > The AKS cluster must use virtual machine scale sets for the nodes.
->
+>
> It's not possible to downgrade a node image version (for example *AKSUbuntu-2204 to AKSUbuntu-1804*, or *AKSUbuntu-2204-202308.01.0 to AKSUbuntu-2204-202307.27.0*). ## Check for available node image upgrades
aks Node Updates Kured https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-updates-kured.md
Title: Handle Linux node reboots with kured
description: Learn how to update Linux nodes and automatically reboot them with kured in Azure Kubernetes Service (AKS) -+ Last updated 04/19/2023 #Customer intent: As a cluster administrator, I want to know how to automatically apply Linux updates and reboot nodes in AKS for security and/or compliance
aks Outbound Rules Control Egress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/outbound-rules-control-egress.md
There are two options to provide access to Azure Monitor for containers:
| **`*.ods.opinsights.azure.com`** | **`HTTPS:443`** | This endpoint is used by Azure Monitor for ingesting log analytics data. | | **`*.oms.opinsights.azure.com`** | **`HTTPS:443`** | This endpoint is used by omsagent, which is used to authenticate the log analytics service. | | **`*.monitoring.azure.com`** | **`HTTPS:443`** | This endpoint is used to send metrics data to Azure Monitor. |
+| **`<cluster-region-name>.ingest.monitor.azure.com`** | **`HTTPS:443`** | This endpoint is used by Azure Monitor managed service for Prometheus metrics ingestion.|
+| **`<cluster-region-name>.handler.control.monitor.azure.com`** | **`HTTPS:443`** | This endpoint is used to fetch data collection rules for a specific cluster. |
+
+#### Microsoft Azure operated by 21Vianet required FQDN / application rules
+
+| FQDN | Port | Use |
+|--|--|-|
+| **`dc.services.visualstudio.com`** | **`HTTPS:443`** | This endpoint is used for metrics and monitoring telemetry using Azure Monitor. |
+| **`*.ods.opinsights.azure.cn`** | **`HTTPS:443`** | This endpoint is used by Azure Monitor for ingesting log analytics data. |
+| **`*.oms.opinsights.azure.cn`** | **`HTTPS:443`** | This endpoint is used by omsagent, which is used to authenticate the log analytics service. |
+| **`global.handler.control.monitor.azure.cn`** | **`HTTPS:443`** | This endpoint is used by Azure Monitor for accessing the control service. |
+| **`<cluster-region-name>.handler.control.monitor.azure.cn`** | **`HTTPS:443`** | This endpoint is used to fetch data collection rules for a specific cluster. |
+
+#### Azure US Government required FQDN / application rules
+
+| FQDN | Port | Use |
+|--|--|-|
+| **`dc.services.visualstudio.com`** | **`HTTPS:443`** | This endpoint is used for metrics and monitoring telemetry using Azure Monitor. |
+| **`*.ods.opinsights.azure.us`** | **`HTTPS:443`** | This endpoint is used by Azure Monitor for ingesting log analytics data. |
+| **`*.oms.opinsights.azure.us`** | **`HTTPS:443`** | This endpoint is used by omsagent, which is used to authenticate the log analytics service. |
+| **`global.handler.control.monitor.azure.us`** | **`HTTPS:443`** | This endpoint is used by Azure Monitor for accessing the control service. |
+| **`<cluster-region-name>.handler.control.monitor.azure.us`** | **`HTTPS:443`** | This endpoint is used to fetch data collection rules for a specific cluster. |
### Azure Policy
aks Quickstart Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/quickstart-dapr.md
Last updated 12/27/2023-+ # Quickstart: Deploy an application using the Dapr cluster extension for Azure Kubernetes Service (AKS) or Arc-enabled Kubernetes
aks Rdp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/rdp.md
Title: RDP to AKS Windows Server nodes
description: Learn how to create an RDP connection with Azure Kubernetes Service (AKS) cluster Windows Server nodes for troubleshooting and maintenance tasks. -+ Last updated 04/26/2023 #Customer intent: As a cluster operator, I want to learn how to use RDP to connect to nodes in an AKS cluster to perform maintenance or troubleshoot a problem.
Last updated 04/26/2023
Throughout the lifecycle of your Azure Kubernetes Service (AKS) cluster, you may need to access an AKS Windows Server node. This access could be for maintenance, log collection, or other troubleshooting operations. You can access the AKS Windows Server nodes using RDP. For security purposes, the AKS nodes aren't exposed to the internet.
-Alternatively, if you want to SSH to your AKS Windows Server nodes, you need access to the same key-pair that was used during cluster creation. Follow the steps in [SSH into Azure Kubernetes Service (AKS) cluster nodes][ssh-steps].
+Alternatively, if you want to SSH to your AKS Windows Server nodes, you need access to the same key-pair that was used during cluster creation. Follow the steps in [SSH into Azure Kubernetes Service (AKS) cluster nodes][ssh-steps].
This article shows you how to create an RDP connection with an AKS node using their private IP addresses.
You'll need to get the subnet ID used by your Windows Server node pool and query
* The subnet ID ```azurepowershell-interactive
-$CLUSTER_RG = (Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster).nodeResourceGroup
+$CLUSTER_RG = (Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster).nodeResourceGroup
$VNET_NAME = (Get-AzVirtualNetwork -ResourceGroupName $CLUSTER_RG).Name $ADDRESS_PREFIX = (Get-AzVirtualNetwork -ResourceGroupName $CLUSTER_RG).AddressSpace | Select-Object -ExpandProperty AddressPrefixes $SUBNET_NAME = (Get-AzVirtualNetwork -ResourceGroupName $CLUSTER_RG).Subnets[0].Name
First, get the resource group and name of the NSG to add the rule to:
```azurepowershell-interactive $CLUSTER_RG = (Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster).nodeResourceGroup
-$NSG_NAME = (Get-AzNetworkSecurityGroup -ResourceGroupName $CLUSTER_RG).Name
+$NSG_NAME = (Get-AzNetworkSecurityGroup -ResourceGroupName $CLUSTER_RG).Name
``` Then, create the NSG rule:
Get-AzNetworkSecurityGroup -Name $NSG_NAME -ResourceGroupName $CLUSTER_RG | Add-
### [Azure CLI](#tab/azure-cli) To manage a Kubernetes cluster, you use [kubectl][kubectl], the Kubernetes command-line client. If you use Azure Cloud Shell, `kubectl` is already installed. To install `kubectl` locally, use the [az aks install-cli][az-aks-install-cli] command:
-
+ ```azurecli az aks install-cli ```
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
### [Azure PowerShell](#tab/azure-powershell) To manage a Kubernetes cluster, you use [kubectl][kubectl], the Kubernetes command-line client. If you use Azure Cloud Shell, `kubectl` is already installed. To install `kubectl` locally, use the [Install-AzAksKubectl][install-azakskubectl] cmdlet:
-
+ ```azurepowershell Install-AzAksKubectl ```
Alternatively, you can use [Azure Bastion][azure-bastion] to connect to your Win
### Deploy Azure Bastion
-To deploy Azure Bastion, you'll need to find the virtual network your AKS cluster is connected to.
+To deploy Azure Bastion, you'll need to find the virtual network your AKS cluster is connected to.
1. In the Azure portal, go to **Virtual networks**. Select the virtual network your AKS cluster is connected to. 1. Under **Settings**, select **Bastion**, then select **Deploy Bastion**. Wait until the process is finished before going to the next step.
az aks show -n myAKSCluster -g myResourceGroup --query 'nodeResourceGroup' -o ts
#### [Azure PowerShell](#tab/azure-powershell) ```azurepowershell-interactive
-(Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster).nodeResourceGroup
+(Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster).nodeResourceGroup
```
When you're finished, exit the Bastion session and remove the Bastion resource.
1. In the Azure portal, go to **Bastion** and select the Bastion resource you created. 1. At the top of the page, select **Delete**. Wait until the process is complete before proceeding to the next step.
-1. In the Azure portal, go to **Virtual networks**. Select the virtual network that your AKS cluster is connected to.
+1. In the Azure portal, go to **Virtual networks**. Select the virtual network that your AKS cluster is connected to.
1. Under **Settings**, select **Subnet**, and delete the **AzureBastionSubnet** subnet that was created for the Bastion resource. ## Next steps
If you need more troubleshooting data, you can [view the Kubernetes primary node
[install-azure-powershell]: /powershell/azure/install-az-ps [ssh-steps]: ssh.md [view-primary-logs]: monitor-aks.md#aks-control-planeresource-logs
-[azure-bastion]: ../bastion/bastion-overview.md
+[azure-bastion]: ../bastion/bastion-overview.md
aks Resize Node Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/resize-node-pool.md
Title: Resize node pools in Azure Kubernetes Service (AKS) description: Learn how to resize node pools for a cluster in Azure Kubernetes Service (AKS) by cordoning and draining. -+ Last updated 02/08/2023 #Customer intent: As a cluster operator, I want to resize my node pools so that I can run more or larger workloads.
kube-system metrics-server-774f99dbf4-h52hn 1/1 Running 1
Use the [az aks nodepool add][az-aks-nodepool-add] command to create a new node pool called `mynodepool` with three nodes using the `Standard_DS3_v2` VM SKU: ```azurecli-interactive
-az aks nodepool add \
- --resource-group myResourceGroup \
- --cluster-name myAKSCluster \
- --name mynodepool \
- --node-count 3 \
- --node-vm-size Standard_DS3_v2 \
- --mode System \
- --no-wait
+az aks nodepool add \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name mynodepool \
+ --node-count 3 \
+ --node-vm-size Standard_DS3_v2 \
+ --mode System \
+ --no-wait
``` > [!NOTE]
aks Upgrade Windows 2019 2022 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade-windows-2019-2022.md
Title: Upgrade Azure Kubernetes Service (AKS) workloads from Windows Server 2019 to 2022 description: Learn how to upgrade the OS version for Windows workloads on Azure Kubernetes Service (AKS). -+ Last updated 09/12/2023
aks Use Pod Sandboxing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-pod-sandboxing.md
Title: Pod Sandboxing (preview) with Azure Kubernetes Service (AKS) description: Learn about and deploy Pod Sandboxing (preview), also referred to as Kernel Isolation, on an Azure Kubernetes Service (AKS) cluster. -+ Last updated 06/07/2023
Learn more about [Azure Dedicated hosts][azure-dedicated-hosts] for nodes with y
[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply [kata-network-limitations]: https://github.com/kata-containers/kata-containers/blob/main/docs/Limitations.md#host-network [cloud-hypervisor]: https://www.cloudhypervisor.org
-[kata-container]: https://katacontainers.io
+[kata-container]: https://katacontainers.io
<!-- INTERNAL LINKS --> [install-azure-cli]: /cli/azure
aks Use Wasi Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-wasi-node-pools.md
Title: Create WebAssembly System Interface (WASI) node pools in Azure Kubernetes Service (AKS) to run your WebAssembly (WASM) workload (preview) description: Learn how to create a WebAssembly System Interface (WASI) node pool in Azure Kubernetes Service (AKS) to run your WebAssembly (WASM) workload on Kubernetes. -+ Last updated 05/17/2023
az provider register --namespace Microsoft.ContainerService
## Limitations
-* Currently, there are only containerd shims available for [spin][spin] and [slight][slight] applications, which use the [wasmtime][wasmtime] runtime. In addition to wasmtime runtime applications, you can also run containers on WASM/WASI node pools.
+* Currently, there are only containerd shims available for [spin][spin] and [slight][slight] applications, which use the [wasmtime][wasmtime] runtime. In addition to wasmtime runtime applications, you can also run containers on WASM/WASI node pools.
* You can run containers and wasm modules on the same node, but you can't run containers and wasm modules on the same pod. * The WASM/WASI node pools can't be used for system node pool. * The *os-type* for WASM/WASI node pools must be Linux.
az aks nodepool add \
--cluster-name myAKSCluster \ --name mywasipool \ --node-count 1 \
- --workload-runtime WasmWasi
+ --workload-runtime WasmWasi
``` > [!NOTE]
az aks nodepool show -g myResourceGroup --cluster-name myAKSCluster -n mywasipoo
"WasmWasi" ```
-Configure `kubectl` to connect to your Kubernetes cluster using the [az aks get-credentials][az-aks-get-credentials] command. The following command:
+Configure `kubectl` to connect to your Kubernetes cluster using the [az aks get-credentials][az-aks-get-credentials] command. The following command gets credentials for the AKS cluster named *myakscluster* in the resource group *myresourcegroup*:
```azurecli-interactive az aks get-credentials -n myakscluster -g myresourcegroup
aks Workload Identity Deploy Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-deploy-cluster.md
Title: Deploy and configure an Azure Kubernetes Service (AKS) cluster with workload identity description: In this Azure Kubernetes Service (AKS) article, you deploy an Azure Kubernetes Service cluster and configure it with a Microsoft Entra Workload ID. -+ Last updated 09/27/2023
This article assumes you have a basic understanding of Kubernetes concepts. For
To help simplify steps to configure the identities required, the steps below define environmental variables for reference on the cluster.
-Run the following commands to create these variables. Replace the default values for `RESOURCE_GROUP`, `LOCATION`, `SERVICE_ACCOUNT_NAME`, `SUBSCRIPTION`, `USER_ASSIGNED_IDENTITY_NAME`, and `FEDERATED_IDENTITY_CREDENTIAL_NAME`.
+Run the following commands to create these variables. Replace the default values for `RESOURCE_GROUP`, `LOCATION`, `SERVICE_ACCOUNT_NAME`, `SUBSCRIPTION`, `USER_ASSIGNED_IDENTITY_NAME`, and `FEDERATED_IDENTITY_CREDENTIAL_NAME`.
```bash export RESOURCE_GROUP="myResourceGroup"
export SERVICE_ACCOUNT_NAMESPACE="default"
export SERVICE_ACCOUNT_NAME="workload-identity-sa" export SUBSCRIPTION="$(az account show --query id --output tsv)" export USER_ASSIGNED_IDENTITY_NAME="myIdentity"
-export FEDERATED_IDENTITY_CREDENTIAL_NAME="myFedIdentity"
+export FEDERATED_IDENTITY_CREDENTIAL_NAME="myFedIdentity"
``` ## Create AKS cluster
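As a rough sketch of this step (the cluster name *myAKSCluster* and the `--generate-ssh-keys` option are assumptions here; the other values come from the variables exported above), a cluster with the OIDC issuer and workload identity enabled can be created like this:

```azurecli-interactive
# Sketch: create an AKS cluster with the OIDC issuer and Microsoft Entra Workload ID enabled.
az aks create \
    --resource-group "${RESOURCE_GROUP}" \
    --name myAKSCluster \
    --location "${LOCATION}" \
    --enable-oidc-issuer \
    --enable-workload-identity \
    --generate-ssh-keys
```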
To check whether all properties are injected properly by the webhook, use the [k
kubectl describe pod quick-start | grep "SECRET_NAME:" ```
-If successful, the output should be similar to the following:
+If successful, the output should be similar to the following:
```bash SECRET_NAME: ${KEYVAULT_SECRET_NAME} ```
To verify that pod is able to get a token and access the resource, use the kubec
kubectl logs quick-start ```
-If successful, the output should be similar to the following:
+If successful, the output should be similar to the following:
```bash I0114 10:35:09.795900 1 main.go:63] "successfully got secret" secret="Hello\\!" ```
aks Workload Identity Migrate From Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-migrate-from-pod-identity.md
Title: Migrate your Azure Kubernetes Service (AKS) pod to use workload identity description: In this Azure Kubernetes Service (AKS) article, you learn how to configure your Azure Kubernetes Service pod to authenticate with workload identity. -+ Last updated 07/31/2023
kubectl logs podName
The following log output resembles successful communication through the proxy sidecar. Verify that the logs show a token is successfully acquired and the GET operation is successful. ```output
-I0926 00:29:29.968723 1 proxy.go:97] proxy "msg"="starting the proxy server" "port"=8080 "userAgent"="azure-workload-identity/proxy/v0.13.0-12-gc8527f3 (linux/amd64) c8527f3/2022-09-26-00:19"
-I0926 00:29:29.972496 1 proxy.go:173] proxy "msg"="received readyz request" "method"="GET" "uri"="/readyz"
-I0926 00:29:30.936769 1 proxy.go:107] proxy "msg"="received token request" "method"="GET" "uri"="/metadata/identity/oauth2/token?resource=https://management.core.windows.net/api-version=2018-02-01&client_id=<client_id>"
+I0926 00:29:29.968723 1 proxy.go:97] proxy "msg"="starting the proxy server" "port"=8080 "userAgent"="azure-workload-identity/proxy/v0.13.0-12-gc8527f3 (linux/amd64) c8527f3/2022-09-26-00:19"
+I0926 00:29:29.972496 1 proxy.go:173] proxy "msg"="received readyz request" "method"="GET" "uri"="/readyz"
+I0926 00:29:30.936769 1 proxy.go:107] proxy "msg"="received token request" "method"="GET" "uri"="/metadata/identity/oauth2/token?resource=https://management.core.windows.net/api-version=2018-02-01&client_id=<client_id>"
I0926 00:29:31.101998 1 proxy.go:129] proxy "msg"="successfully acquired token" "method"="GET" "uri"="/metadata/identity/oauth2/token?resource=https://management.core.windows.net/api-version=2018-02-01&client_id=<client_id>" ```
api-management Api Management Gateways Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-gateways-overview.md
The following table compares features available in the managed gateway versus th
| [Synthetic GraphQL](graphql-apis-overview.md)| ✔️ | ✔️<sup>1</sup> | ✔️<sup>1</sup> |
| [Pass-through WebSocket](websocket-api.md) | ✔️ | ❌ | ✔️ |
| [Pass-through gRPC](grpc-api.md) | ❌ | ❌ | ✔️ |
-| [Circuit Breaker](backends.md#circuit-breaker-preview) | ✔️ | ✔️ | ✔️ |
+| [Circuit breaker in backend](backends.md#circuit-breaker-preview) | ✔️ | ❌ | ✔️ |
+| [Load-balanced backend pool](backends.md#load-balanced-pool-preview) | ✔️ | ✔️ | ✔️ |
<sup>1</sup> Synthetic GraphQL subscriptions (preview) aren't supported.
api-management Authentication Authorization Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authentication-authorization-overview.md
Examples:
* [Configure credential manager - Microsoft Graph API](credentials-how-to-azure-ad.md) * [Configure credential manager - GitHub API](credentials-how-to-github.md)
-* [Configure credential manager - user delegated access to backend APIs](credentials-how-to-github.md)
+* [Configure credential manager - user delegated access to backend APIs](credentials-how-to-user-delegated.md)
## Other options to secure APIs
api-management Backends https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/backends.md
Starting in API version 2023-03-01 preview, API Management exposes a [circuit br
The backend circuit breaker is an implementation of the [circuit breaker pattern](/azure/architecture/patterns/circuit-breaker) to allow the backend to recover from overload situations. It augments general [rate-limiting](rate-limit-policy.md) and [concurrency-limiting](limit-concurrency-policy.md) policies that you can implement to protect the API Management gateway and your backend services.
+> [!NOTE]
+> * Currently, the backend circuit breaker isn't supported in the **Consumption** tier of API Management.
+> * Because of the distributed nature of the API Management architecture, circuit breaker tripping rules are approximate. Different instances of the gateway don't synchronize, so each instance applies the circuit breaker rules based only on the information available on that instance.
+ ### Example Use the API Management [REST API](/rest/api/apimanagement/backend) or a Bicep or ARM template to configure a circuit breaker in a backend. In the following example, the circuit breaker in *myBackend* in the API Management instance *myAPIM* trips when there are three or more `5xx` status codes indicating server errors in a day. The circuit breaker resets after one hour.
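One way to sketch that example from the CLI is an `az rest` call against the preview API version mentioned above. Treat the property names (`circuitBreaker`, `failureCondition`, `tripDuration`), the backend URL, and the rule name as assumptions to adapt to your environment rather than the article's exact payload:

```azurecli-interactive
# Hedged sketch: configure backend "myBackend" in instance "myAPIM" to trip on 3 or more
# 5xx responses within a day (P1D) and stay tripped for one hour (PT1H).
az rest --method put \
  --uri "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/myAPIM/backends/myBackend?api-version=2023-03-01-preview" \
  --body '{
    "properties": {
      "url": "https://mybackend.contoso.com",
      "protocol": "http",
      "circuitBreaker": {
        "rules": [
          {
            "name": "myBreakerRule",
            "failureCondition": {
              "count": 3,
              "interval": "P1D",
              "statusCodeRanges": [ { "min": 500, "max": 599 } ]
            },
            "tripDuration": "PT1H"
          }
        ]
      }
    }
  }'
```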
Use a backend pool for scenarios such as the following:
To create a backend pool, set the `type` property of the backend to `pool` and specify a list of backends that make up the pool. > [!NOTE]
-> Currently, you can only include single backends in a backend pool. You can't add a backend of type `pool` to another backend pool.
+> * Currently, you can only include single backends in a backend pool. You can't add a backend of type `pool` to another backend pool.
+> * Because of the distributed nature of the API Management architecture, backend load balancing is approximate. Different instances of the gateway don't synchronize, so each instance load balances based only on the information available on that instance.
+ ### Example
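A comparable hedged sketch for a load-balanced pool follows. The article only states that `type` is set to `pool` with a list of member backends; the `pool.services` shape, the property casing, and the preview API version used here are assumptions, and *backend-1*/*backend-2* stand in for existing single backends:

```azurecli-interactive
# Hedged sketch: create a backend of type "pool" that load balances across two existing backends.
az rest --method put \
  --uri "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/myAPIM/backends/myBackendPool?api-version=2023-05-01-preview" \
  --body '{
    "properties": {
      "description": "Load-balanced pool of backends",
      "type": "pool",
      "pool": {
        "services": [
          { "id": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/myAPIM/backends/backend-1" },
          { "id": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/myAPIM/backends/backend-2" }
        ]
      }
    }
  }'
```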
api-management Migrate Stv1 To Stv2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/migrate-stv1-to-stv2.md
Previously updated : 10/18/2023 Last updated : 01/11/2024
API Management platform migration from `stv1` to `stv2` involves updating the un
For an API Management instance that's not deployed in a VNet, migrate your instance using the **Platform migration** blade in the Azure portal, or invoke the Migrate to `stv2` REST API.
-You can choose whether the virtual IP address of API Management will change, or whether the original VIP address is preserved.
+During the migration, the VIP address of your API Management instance will be preserved.
-* **New virtual IP address (recommended)** - If you choose this mode, API requests remain responsive during migration. Infrastructure configuration (such as custom domains, locations, and CA certificates) will be locked for 30 minutes. After migration, you'll need to update any network dependencies including DNS, firewall rules, and VNets to use the new VIP address.
-
-* **Preserve IP address** - If you preserve the VIP address, API requests will be unresponsive for approximately 15 minutes while the IP address is migrated to the new infrastructure. Infrastructure configuration (such as custom domains, locations, and CA certificates) will be locked for 45 minutes. No further configuration is required after migration.
+* API requests will be unresponsive for approximately 15 minutes while the IP address is migrated to the new infrastructure.
+* Infrastructure configuration (such as custom domains, locations, and CA certificates) will be locked for 45 minutes.
+* No further configuration is required after migration.
#### [Portal](#tab/portal) 1. In the [Azure portal](https://portal.azure.com), navigate to your API Management instance. 1. In the left menu, under **Settings**, select **Platform migration**.
-1. On the **Platform migration** page, select one of the two migration options:
-
- * **New virtual IP address (recommended)**. The VIP address of your API Management instance will change automatically. Your service will have no downtime, but after migration you'll need to update any network dependencies including DNS, firewall rules, and VNets to use the new VIP address.
-
- * **Preserve IP address** - The VIP address of your API Management instance won't change. Your instance will have downtime for up to 15 minutes.
-
- :::image type="content" source="media/migrate-stv1-to-stv2/platform-migration-portal.png" alt-text="Screenshot of API Management platform migration in the portal.":::
-
-1. Review guidance for the migration process, and prepare your environment.
-
+1. On the **Platform migration** page, review guidance for the migration process, and prepare your environment.
1. After you've completed preparation steps, select **I have read and understand the impact of the migration process.** Select **Migrate**. #### [Azure CLI](#tab/cli)
RG_NAME={name of your resource group}
# Get resource ID of API Management instance APIM_RESOURCE_ID=$(az apim show --name $APIM_NAME --resource-group $RG_NAME --query id --output tsv)
-# Call REST API to migrate to stv2 and change VIP address
-az rest --method post --uri "$APIM_RESOURCE_ID/migrateToStv2?api-version=2023-03-01-preview" --body '{"mode": "NewIp"}'
-
-# Alternate call to migrate to stv2 and preserve VIP address
-# az rest --method post --uri "$APIM_RESOURCE_ID/migrateToStv2?api-version=2023-03-01-preview" --body '{"mode": "PreserveIp"}'
+# Call REST API to migrate to stv2 and preserve VIP address
+az rest --method post --uri "$APIM_RESOURCE_ID/migrateToStv2?api-version=2023-03-01-preview" --body '{"mode": "PreserveIp"}'
```
az rest --method post --uri "$APIM_RESOURCE_ID/migrateToStv2?api-version=2023-03
To verify that the migration was successful, when the status changes to `Online`, check the [platform version](compute-infrastructure.md#how-do-i-know-which-platform-hosts-my-api-management-instance) of your API Management instance. After successful migration, the value is `stv2`.
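From the CLI, and assuming the `platformVersion` property is surfaced by `az apim show` (an assumption here, not something this article states), the check can look like this:

```azurecli-interactive
# Sketch: confirm the platform version after migration; expect "stv2".
az apim show --name $APIM_NAME --resource-group $RG_NAME --query platformVersion --output tsv
```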
-### Update network dependencies
-
-On successful migration, update any network dependencies including DNS, firewall rules, and VNets to use the new VIP address.
- ## Scenario 2: Migrate a network-injected API Management instance Trigger migration of a network-injected API Management instance to the `stv2` platform by updating the existing network configuration to use new network settings (see the following section). After that update completes, as an optional step, you can migrate back to the original VNet and subnet you used.
app-service Tutorial Multi Container App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-multi-container-app.md
Last updated 11/18/2022 -+ # Tutorial: Create a multi-container (preview) app in Web App for Containers
redis: image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
- environment:
+ environment:
- ALLOW_EMPTY_PASSWORD=yes restart: always ```
application-gateway Ingress Controller Autoscale Pods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-autoscale-pods.md
description: This article provides instructions on how to scale your AKS backend
-+ Last updated 10/26/2023
In the following tutorial, we explain how you can use Application Gateway's `Avg
Use following two components:
-* [`Azure Kubernetes Metric Adapter`](https://github.com/Azure/azure-k8s-metrics-adapter) - We use the metric adapter to expose Application Gateway metrics through the metric server. The Azure Kubernetes Metric Adapter is an open source project under Azure, similar to the Application Gateway Ingress Controller.
+* [`Azure Kubernetes Metric Adapter`](https://github.com/Azure/azure-k8s-metrics-adapter) - We use the metric adapter to expose Application Gateway metrics through the metric server. The Azure Kubernetes Metric Adapter is an open source project under Azure, similar to the Application Gateway Ingress Controller.
* [`Horizontal Pod Autoscaler`](../aks/concepts-scale.md#horizontal-pod-autoscaler) - We use HPA to use Application Gateway metrics and target a deployment for scaling. > [!NOTE]
Use following two components:
## Setting up Azure Kubernetes Metric Adapter
-1. First, create a Microsoft Entra service principal and assign it `Monitoring Reader` access over Application Gateway's resource group.
+1. First, create a Microsoft Entra service principal and assign it `Monitoring Reader` access over Application Gateway's resource group.
```azurecli applicationGatewayGroupName="<application-gateway-group-id>"
application-gateway Ingress Controller Expose Service Over Http Https https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-expose-service-over-http-https.md
Title: Expose an AKS service over HTTP or HTTPS using Application Gateway
-description: This article provides information on how to expose an AKS service over HTTP or HTTPS using Application Gateway.
+description: This article provides information on how to expose an AKS service over HTTP or HTTPS using Application Gateway.
-+ Last updated 07/23/2023
-# Expose an AKS service over HTTP or HTTPS using Application Gateway
+# Expose an AKS service over HTTP or HTTPS using Application Gateway
These tutorials help illustrate the usage of [Kubernetes Ingress Resources](https://kubernetes.io/docs/concepts/services-networking/ingress/) to expose an example Kubernetes service through the [Azure Application Gateway](https://azure.microsoft.com/services/application-gateway/) over HTTP or HTTPS.
Without specifying hostname, the guestbook service is available on all the host-
servicePort: 80 ```
- > [!NOTE]
+ > [!NOTE]
> Replace `<guestbook-secret-name>` in the above Ingress Resource with the name of your secret. Store the above Ingress Resource in a file name `ing-guestbook-tls.yaml`. 1. Deploy ing-guestbook-tls.yaml by running
application-gateway Ingress Controller Install Existing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-install-existing.md
Title: Create an ingress controller with an existing Application Gateway
-description: This article provides information on how to deploy an Application Gateway Ingress Controller with an existing Application Gateway.
+ Title: Create an ingress controller with an existing Application Gateway
+description: This article provides information on how to deploy an Application Gateway Ingress Controller with an existing Application Gateway.
-+ Last updated 07/28/2023
In the first few steps, we install Helm's Tiller on your Kubernetes cluster. Use
# Verbosity level of the App Gateway Ingress Controller verbosityLevel: 3
-
+ ################################################################################ # Specify which application gateway the ingress controller must manage #
In the first few steps, we install Helm's Tiller on your Kubernetes cluster. Use
subscriptionId: <subscriptionId> resourceGroup: <resourceGroupName> name: <applicationGatewayName>
-
+ # Setting appgw.shared to "true" creates an AzureIngressProhibitedTarget CRD. # This prohibits AGIC from applying config for any host/path. # Use "kubectl get AzureIngressProhibitedTargets" to view and change this. shared: false
-
+ ################################################################################ # Specify which kubernetes namespace the ingress controller must watch # Default value is "default"
In the first few steps, we install Helm's Tiller on your Kubernetes cluster. Use
# # kubernetes: # watchNamespace: <namespace>
-
+ ################################################################################ # Specify the authentication with Azure Resource Manager # # Two authentication methods are available:
- # - Option 1: Azure-AD-workload-identity
+ # - Option 1: Azure-AD-workload-identity
armAuth: type: workloadIdentity identityClientID: <identityClientId>
-
+ ## Alternatively you can use Service Principal credentials # armAuth: # type: servicePrincipal # secretJSON: <<Generate this value with: "az ad sp create-for-rbac --role Contributor --sdk-auth | base64 -w0" >>
-
+ ################################################################################ # Specify if the cluster is Kubernetes RBAC enabled or not rbac: enabled: false # true/false
-
+ # Specify aks cluster related information. THIS IS BEING DEPRECATED. aksClusterConfiguration: apiServerAddress: <aks-api-server-address> ``` 1. Edit helm-config.yaml and fill in the values for `appgw` and `armAuth`.
-
+ > [!NOTE] > The `<identity-client-id>` is a property of the Microsoft Entra Workload ID you setup in the previous section. You can retrieve this information by running the following command: `az identity show -g <resourcegroup> -n <identity-name>`, where `<resourcegroup>` is the resource group hosting the infrastructure resources related to the AKS cluster, Application Gateway and managed identity.
application-gateway Ingress Controller Install New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-install-new.md
Title: Creating an ingress controller with a new Application Gateway
-description: This article provides information on how to deploy an Application Gateway Ingress Controller with a new Application Gateway.
+ Title: Creating an ingress controller with a new Application Gateway
+description: This article provides information on how to deploy an Application Gateway Ingress Controller with a new Application Gateway.
-+ Last updated 07/28/2023
To install Microsoft Entra Pod Identity to your cluster:
```bash wget https://raw.githubusercontent.com/Azure/application-gateway-kubernetes-ingress/master/docs/examples/sample-helm-config.yaml -O helm-config.yaml ```
- Or copy the YAML file below:
-
+ Or copy the YAML file below:
+ ```yaml # This file contains the essential configs for the ingress controller helm chart # Verbosity level of the App Gateway Ingress Controller verbosityLevel: 3
-
+ ################################################################################ # Specify which application gateway the ingress controller will manage #
To install Microsoft Entra Pod Identity to your cluster:
subscriptionId: <subscriptionId> resourceGroup: <resourceGroupName> name: <applicationGatewayName>
-
+ # Setting appgw.shared to "true" will create an AzureIngressProhibitedTarget CRD. # This prohibits AGIC from applying config for any host/path. # Use "kubectl get AzureIngressProhibitedTargets" to view and change this. shared: false
-
+ ################################################################################ # Specify which kubernetes namespace the ingress controller will watch # Default value is "default"
To install Microsoft Entra Pod Identity to your cluster:
# # kubernetes: # watchNamespace: <namespace>
-
+ ################################################################################ # Specify the authentication with Azure Resource Manager #
To install Microsoft Entra Pod Identity to your cluster:
type: aadPodIdentity identityResourceID: <identityResourceId> identityClientID: <identityClientId>
-
+ ## Alternatively you can use Service Principal credentials # armAuth: # type: servicePrincipal # secretJSON: <<Generate this value with: "az ad sp create-for-rbac --subscription <subscription-uuid> --role Contributor --sdk-auth | base64 -w0" >>
-
+ ################################################################################ # Specify if the cluster is Kubernetes RBAC enabled or not rbac: enabled: false # true/false
-
+ # Specify aks cluster related information. THIS IS BEING DEPRECATED. aksClusterConfiguration: apiServerAddress: <aks-api-server-address>
To install Microsoft Entra Pod Identity to your cluster:
sed -i "s|<identityResourceId>|${identityResourceId}|g" helm-config.yaml sed -i "s|<identityClientId>|${identityClientId}|g" helm-config.yaml ```
-
+ > [!NOTE] > **For deploying to Sovereign Clouds (e.g., Azure Government)**, the `appgw.environment` configuration parameter must be added and set to the appropriate value as documented below.
To install Microsoft Entra Pod Identity to your cluster:
- `kubernetes.watchNamespace`: Specify the namespace that AGIC should watch. The namespace value can be a single string value, or a comma-separated list of namespaces. - `armAuth.type`: could be `aadPodIdentity` or `servicePrincipal` - `armAuth.identityResourceID`: Resource ID of the Azure Managed Identity
- - `armAuth.identityClientID`: The Client ID of the Identity. More information about **identityClientID** is provided below.
- - `armAuth.secretJSON`: Only needed when Service Principal Secret type is chosen (when `armAuth.type` has been set to `servicePrincipal`)
+ - `armAuth.identityClientID`: The Client ID of the Identity. More information about **identityClientID** is provided below.
+ - `armAuth.secretJSON`: Only needed when Service Principal Secret type is chosen (when `armAuth.type` has been set to `servicePrincipal`)
> [!NOTE]
application-gateway Ingress Controller Letsencrypt Certificate Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-letsencrypt-certificate-application-gateway.md
Title: Use LetsEncrypt.org certificates with Application Gateway
-description: This article provides information on how to obtain a certificate from LetsEncrypt.org and use it on your Application Gateway for AKS clusters.
+description: This article provides information on how to obtain a certificate from LetsEncrypt.org and use it on your Application Gateway for AKS clusters.
-+ Last updated 08/01/2023
Use the following steps to install [cert-manager](https://docs.cert-manager.io)
--namespace cert-manager \ --version v1.10.1 \ # --set installCRDs=true
-
- # To automatically install and manage the CRDs as part of your Helm release,
+
+ # To automatically install and manage the CRDs as part of your Helm release,
# you must add the --set installCRDs=true flag to your Helm installation command. ```
Use the following steps to install [cert-manager](https://docs.cert-manager.io)
The default challenge type in the following YAML is `http01`. Other challenges are documented on [letsencrypt.org - Challenge Types](https://letsencrypt.org/docs/challenge-types/)
- > [!IMPORTANT]
+ > [!IMPORTANT]
> Update `<YOUR.EMAIL@ADDRESS>` in the following YAML. ```bash
Use the following steps to install [cert-manager](https://docs.cert-manager.io)
Ensure your Application Gateway has a public Frontend IP configuration with a DNS name (either using the default `azure.com` domain, or provision an `Azure DNS Zone` service and assign your own custom domain). Add the annotation `certmanager.k8s.io/cluster-issuer: letsencrypt-staging` to the Ingress resource, which tells cert-manager to process the tagged Ingress resource.
- > [!IMPORTANT]
+ > [!IMPORTANT]
> Update `<PLACEHOLDERS.COM>` in the following YAML with your own domain (or the Application Gateway one, for example 'kh-aks-ingress.westeurope.cloudapp.azure.com') ```bash
application-gateway Ingress Controller Private Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-private-ip.md
Title: Use private IP address for internal routing for an ingress endpoint
-description: This article provides information on how to use private IPs for internal routing and thus exposing the Ingress endpoint within a cluster to the rest of the VNet.
+ Title: Use private IP address for internal routing for an ingress endpoint
+description: This article provides information on how to use private IPs for internal routing and thus exposing the Ingress endpoint within a cluster to the rest of the VNet.
-+ Last updated 07/23/2023
-# Use private IP for internal routing for an Ingress endpoint
+# Use private IP for internal routing for an Ingress endpoint
This feature exposes the ingress endpoint within the `Virtual Network` using a private IP. > [!TIP] > Also see [What is Application Gateway for Containers?](for-containers/overview.md) currently in public preview.
-## Prerequisites
+## Prerequisites
Application Gateway with a [Private IP configuration](./configure-application-gateway-with-private-frontend-ip.md) There are two ways to configure the controller to use Private IP for ingress,
For Application Gateways without a Private IP, Ingresses annotated with `appgw.i
Events: Type Reason Age From Message - - - -
- Warning NoPrivateIP 2m (x17 over 2m) azure/application-gateway, prod-ingress-azure-5c9b6fcd4-bctcb Ingress default/hello-world-ingress requires Application Gateway
+ Warning NoPrivateIP 2m (x17 over 2m) azure/application-gateway, prod-ingress-azure-5c9b6fcd4-bctcb Ingress default/hello-world-ingress requires Application Gateway
applicationgateway3026 has a private IP address ```
application-gateway Ingress Controller Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-troubleshoot.md
Title: Application Gateway Ingress Controller troubleshooting
-description: This article provides documentation on how to troubleshoot common questions and issues with the Application Gateway Ingress Controller.
+description: This article provides documentation on how to troubleshoot common questions and issues with the Application Gateway Ingress Controller.
-+ Last updated 08/01/2023
The following conditions must be in place for AGIC to function as expected:
delyan@Azure:~$ kubectl get services -o wide --show-labels NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR LABELS
- aspnetapp ClusterIP 10.2.63.254 <none> 80/TCP 17h app=aspnetapp <none>
+ aspnetapp ClusterIP 10.2.63.254 <none> 80/TCP 17h app=aspnetapp <none>
```
- 3. **Ingress**, annotated with `kubernetes.io/ingress.class: azure/application-gateway`, referencing the previous service.
+ 3. **Ingress**, annotated with `kubernetes.io/ingress.class: azure/application-gateway`, referencing the previous service.
Verify this configuration from [Cloud Shell](https://shell.azure.com/) with `kubectl get ingress -o wide --show-labels` ```output delyan@Azure:~$ kubectl get ingress -o wide --show-labels
The following conditions must be in place for AGIC to function as expected:
``` The ingress resource must be annotated with `kubernetes.io/ingress.class: azure/application-gateway`.
-
+ ### Verify Observed Namespace
The following conditions must be in place for AGIC to function as expected:
```bash # What namespaces exist on your cluster kubectl get namespaces
-
+ # What pods are currently running kubectl get pods --all-namespaces -o wide ```
The following conditions must be in place for AGIC to function as expected:
* Do you have a Kubernetes [Service](https://kubernetes.io/docs/concepts/services-networking/service/) and [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) resources?
-
+ ```bash # Get all services across all namespaces kubectl get service --all-namespaces -o wide
-
+ # Get all ingress resources across all namespaces kubectl get ingress --all-namespaces -o wide ``` * Is your [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) annotated with: `kubernetes.io/ingress.class: azure/application-gateway`? AGIC only watches for Kubernetes Ingress resources that have this annotation.
-
+ ```bash # Get the YAML definition of a particular ingress resource kubectl get ingress --namespace <which-namespace?> <which-ingress?> -o yaml
application-gateway Ingress Controller Update Ingress Controller https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-update-ingress-controller.md
Title: Upgrade ingress controller with Helm
-description: This article provides information on how to upgrade an Application Gateway Ingress using Helm.
+description: This article provides information on how to upgrade an Application Gateway Ingress using Helm.
-+ Last updated 07/23/2023
-# How to upgrade Application Gateway Ingress Controller using Helm
+# How to upgrade Application Gateway Ingress Controller using Helm
The Azure Application Gateway Ingress Controller for Kubernetes (AGIC) can be upgraded using a Helm repository hosted on Azure Storage.
application-gateway Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-cli.md
Last updated 11/06/2023 -+ # Quickstart: Direct web traffic with Azure Application Gateway - Azure CLI
-In this quickstart, you use Azure CLI to create an application gateway. Then you test it to make sure it works correctly.
+In this quickstart, you use Azure CLI to create an application gateway. Then you test it to make sure it works correctly.
The application gateway directs application web traffic to specific resources in a backend pool. You assign listeners to ports, create rules, and add resources to a backend pool. For the sake of simplicity, this article uses a simple setup with a public frontend IP address, a basic listener to host a single site on the application gateway, a basic request routing rule, and two virtual machines in the backend pool.
You can also complete this quickstart using [Azure PowerShell](quick-create-powe
## Create resource group
-In Azure, you allocate related resources to a resource group. Create a resource group by using `az group create`.
+In Azure, you allocate related resources to a resource group. Create a resource group by using `az group create`.
The following example creates a resource group named *myResourceGroupAG* in the *eastus* location.
-```azurecli-interactive
+```azurecli-interactive
az group create --name myResourceGroupAG --location eastus ```
-## Create network resources
+## Create network resources
For Azure to communicate between the resources that you create, it needs a virtual network. The application gateway subnet can contain only application gateways. No other resources are allowed. You can either create a new subnet for Application Gateway or use an existing one. In this example, you create two subnets: one for the application gateway, and another for the backend servers. You can configure the Frontend IP of the Application Gateway to be Public or Private as per your use case. In this example, you'll choose a Public Frontend IP address.
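A condensed sketch of those steps follows. The resource names (*myVNet*, *myAGSubnet*, *myBackendSubnet*, *myAGPublicIPAddress*) follow the naming used in the related Application Gateway CLI tutorials, and the address prefixes are illustrative assumptions:

```azurecli-interactive
# Sketch: virtual network with an Application Gateway subnet, a backend subnet,
# and a Standard static public IP for the frontend. Address prefixes are assumptions.
az network vnet create \
  --name myVNet \
  --resource-group myResourceGroupAG \
  --location eastus \
  --address-prefix 10.21.0.0/16 \
  --subnet-name myAGSubnet \
  --subnet-prefix 10.21.0.0/24

az network vnet subnet create \
  --name myBackendSubnet \
  --resource-group myResourceGroupAG \
  --vnet-name myVNet \
  --address-prefix 10.21.1.0/24

az network public-ip create \
  --resource-group myResourceGroupAG \
  --name myAGPublicIPAddress \
  --allocation-method Static \
  --sku Standard
```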
It can take up to 30 minutes for Azure to create the application gateway. After
## Test the application gateway
-Although Azure doesn't require an NGINX web server to create the application gateway, you installed it in this quickstart to verify whether Azure successfully created the application gateway. To get the public IP address of the new application gateway, use `az network public-ip show`.
+Although Azure doesn't require an NGINX web server to create the application gateway, you installed it in this quickstart to verify whether Azure successfully created the application gateway. To get the public IP address of the new application gateway, use `az network public-ip show`.
```azurecli-interactive az network public-ip show \
az network public-ip show \
``` Copy and paste the public IP address into the address bar of your browser.
![Test application gateway](./media/quick-create-cli/application-gateway-nginxtest.png) When you refresh the browser, you should see the name of the second VM. This indicates the application gateway was successfully created and can connect with the backend.
When you refresh the browser, you should see the name of the second VM. This ind
When you no longer need the resources that you created with the application gateway, use the `az group delete` command to delete the resource group. When you delete the resource group, you also delete the application gateway and all its related resources.
-```azurecli-interactive
+```azurecli-interactive
az group delete --name myResourceGroupAG ```
application-gateway Redirect Http To Https Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/redirect-http-to-https-cli.md
description: Learn how to create an HTTP to HTTPS redirection and add a certific
-+ Last updated 04/27/2023
A resource group is a logical container into which Azure resources are deployed
The following example creates a resource group named *myResourceGroupAG* in the *eastus* location.
-```azurecli-interactive
+```azurecli-interactive
az group create --name myResourceGroupAG --location eastus ```
az network public-ip create \
## Create the application gateway
-You can use [az network application-gateway create](/cli/azure/network/application-gateway#az-network-application-gateway-create) to create the application gateway named *myAppGateway*. When you create an application gateway using the Azure CLI, you specify configuration information, such as capacity, sku, and HTTP settings.
+You can use [az network application-gateway create](/cli/azure/network/application-gateway#az-network-application-gateway-create) to create the application gateway named *myAppGateway*. When you create an application gateway using the Azure CLI, you specify configuration information, such as capacity, sku, and HTTP settings.
-The application gateway is assigned to *myAGSubnet* and *myAGPublicIPAddress* that you previously created. In this example, you associate the certificate that you created and its password when you create the application gateway.
+The application gateway is assigned to *myAGSubnet* and *myAGPublicIPAddress* that you previously created. In this example, you associate the certificate that you created and its password when you create the application gateway.
```azurecli-interactive az network application-gateway create \
application-gateway Redirect Internal Site Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/redirect-internal-site-cli.md
description: Learn how to create an application gateway that redirects internal
-+ Last updated 04/27/2023
A resource group is a logical container into which Azure resources are deployed
The following example creates a resource group named *myResourceGroupAG* in the *eastus* location.
-```azurecli-interactive
+```azurecli-interactive
az group create --name myResourceGroupAG --location eastus ```
-## Create network resources
+## Create network resources
Create the virtual network named *myVNet* and the subnet named *myAGSubnet* using [az network vnet create](/cli/azure/network/vnet). You can then add the subnet named *myBackendSubnet* that's needed by the backend pool of servers using [az network vnet subnet create](/cli/azure/network/vnet/subnet). Create the public IP address named *myAGPublicIPAddress* using [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create).
az network public-ip create \
## Create an application gateway
-You can use [az network application-gateway create](/cli/azure/network/application-gateway) to create the application gateway named *myAppGateway*. When you create an application gateway using the Azure CLI, you specify configuration information, such as capacity, sku, and HTTP settings. The application gateway is assigned to *myAGSubnet* and *myAGPublicIPAddress* that you previously created.
+You can use [az network application-gateway create](/cli/azure/network/application-gateway) to create the application gateway named *myAppGateway*. When you create an application gateway using the Azure CLI, you specify configuration information, such as capacity, sku, and HTTP settings. The application gateway is assigned to *myAGSubnet* and *myAGPublicIPAddress* that you previously created.
```azurecli-interactive az network application-gateway create \
It may take several minutes for the application gateway to be created. After the
- *rule1* - The default routing rule that is associated with *appGatewayHttpListener*.
-## Add listeners and rules
+## Add listeners and rules
A listener is required to enable the application gateway to route traffic appropriately to the backend pool. In this tutorial, you create two listeners for your two domains. In this example, listeners are created for the domains of *www\.contoso.com* and *www\.contoso.org*.
az network application-gateway http-listener create \
--frontend-port appGatewayFrontendPort \ --resource-group myResourceGroupAG \ --gateway-name myAppGateway \
- --host-name www.contoso.org
+ --host-name www.contoso.org
``` ### Add the redirection configuration
az network application-gateway redirect-config create \
### Add routing rules
-Rules are processed in the order in which they are created, and traffic is directed using the first rule that matches the URL sent to the application gateway. For example, if you have a rule using a basic listener and a rule using a multi-site listener both on the same port, the rule with the multi-site listener must be listed before the rule with the basic listener in order for the multi-site rule to function as expected.
+Rules are processed in the order in which they are created, and traffic is directed using the first rule that matches the URL sent to the application gateway. For example, if you have a rule using a basic listener and a rule using a multi-site listener both on the same port, the rule with the multi-site listener must be listed before the rule with the basic listener in order for the multi-site rule to function as expected.
In this example, you create two new rules and delete the default rule that was created. You can add the rule using [az network application-gateway rule create](/cli/azure/network/application-gateway/rule#az-network-application-gateway-rule-create).
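For illustration only, a single rule that points a multi-site listener at a backend pool might look like the following; the listener, pool, and priority values are assumptions to replace with your own:

```azurecli-interactive
# Sketch: basic routing rule that sends traffic from a listener to a backend pool.
az network application-gateway rule create \
  --gateway-name myAppGateway \
  --resource-group myResourceGroupAG \
  --name contosoComRule \
  --http-listener contosoComListener \
  --rule-type Basic \
  --address-pool appGatewayBackendPool \
  --priority 100
```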
application-gateway Self Signed Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/self-signed-certificates.md
Last updated 01/17/2024--++ # Generate an Azure Application Gateway self-signed certificate with a custom root CA The Application Gateway v2 SKU introduces the use of Trusted Root Certificates to allow TLS connections with the backend servers. This provision removes the use of authentication certificates (individual Leaf certificates) that were required in the v1 SKU. The *root certificate* is a Base-64 encoded X.509(.CER) format root certificate from the backend certificate server. It identifies the root certificate authority (CA) that issued the server certificate and the server certificate is then used for the TLS/SSL communication.
-Application Gateway trusts your website's certificate by default if it's signed by a well-known CA (for example, GoDaddy or DigiCert). You don't need to explicitly upload the root certificate in that case. For more information, see [Overview of TLS termination and end to end TLS with Application Gateway](ssl-overview.md). However, if you have a dev/test environment and don't want to purchase a verified CA signed certificate, you can create your own custom Root CA and a leaf certificate signed by that Root CA.
+Application Gateway trusts your website's certificate by default if it's signed by a well-known CA (for example, GoDaddy or DigiCert). You don't need to explicitly upload the root certificate in that case. For more information, see [Overview of TLS termination and end to end TLS with Application Gateway](ssl-overview.md). However, if you have a dev/test environment and don't want to purchase a verified CA signed certificate, you can create your own custom Root CA and a leaf certificate signed by that Root CA.
> [!NOTE] > Self-generated certificates are not trusted by default, and can be difficult to maintain. Also, they may use outdated hash and cipher suites that may not be strong. For better security, purchase a certificate signed by a well-known certificate authority.
In this article, you will learn how to:
## Prerequisites -- **[OpenSSL](https://www.openssl.org/) on a computer running Windows or Linux**
+- **[OpenSSL](https://www.openssl.org/) on a computer running Windows or Linux**
While there could be other tools available for certificate management, this tutorial uses OpenSSL. You can find OpenSSL bundled with many Linux distributions, such as Ubuntu. - **A web server**
In this article, you will learn how to:
For example, Apache, IIS, or NGINX to test the certificates. - **An Application Gateway v2 SKU**
-
+ If you don't have an existing application gateway, see [Quickstart: Direct web traffic with Azure Application Gateway - Azure portal](quick-create-portal.md). ## Create a root CA certificate
Create your root CA certificate using OpenSSL.
``` openssl ecparam -out contoso.key -name prime256v1 -genkey ```
-
+ ### Create a Root Certificate and self-sign it 1. Use the following command to generate the Certificate Signing Request (CSR).
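A minimal sketch of that flow, reusing the *contoso.key* generated above (the subject name and validity period are assumptions, and the exact flags in the full article may differ):

```bash
# Generate a CSR from the private key, then self-sign it to produce the root CA certificate.
openssl req -new -sha256 -key contoso.key -out contoso.csr -subj "/CN=contoso-root-ca"
openssl x509 -req -sha256 -days 365 -in contoso.csr -signkey contoso.key -out contoso.crt
```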
openssl s_client -connect localhost:443 -servername www.fabrikam.com -showcerts
## Upload the root certificate to Application Gateway's HTTP Settings
-To upload the certificate in Application Gateway, you must export the .crt certificate into a .cer format Base-64 encoded. Since .crt already contains the public key in the base-64 encoded format, just rename the file extension from .crt to .cer.
+To upload the certificate to Application Gateway, you must export the .crt certificate to a Base-64 encoded .cer format. Because the .crt file already contains the public key in Base-64 encoded format, you can simply rename the file extension from .crt to .cer.
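In other words, a copy with the new extension is enough; *contoso.crt* here is an assumed file name carried over from the earlier certificate steps:

```bash
# The .crt file is already Base-64 encoded, so copying (or renaming) it produces the .cer file.
cp contoso.crt contoso.cer
```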
### Azure portal
Add-AzApplicationGatewayRequestRoutingRule `
-HttpListener $listener ` -BackendAddressPool $bepool
-Set-AzApplicationGateway -ApplicationGateway $gw
+Set-AzApplicationGateway -ApplicationGateway $gw
``` ### Verify the application gateway backend health
application-gateway Tutorial Manage Web Traffic Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-manage-web-traffic-cli.md
Last updated 04/27/2023--++ # Manage web traffic with an application gateway using the Azure CLI
A resource group is a logical container into which Azure resources are deployed
The following example creates a resource group named *myResourceGroupAG* in the *eastus* location.
- ```azurecli-interactive
+ ```azurecli-interactive
az group create --name myResourceGroupAG --location eastus ```
-## Create network resources
+## Create network resources
Create the virtual network named *myVNet* and the subnet named *myAGSubnet* using [az network vnet create](/cli/azure/network/vnet). You can then add the subnet named *myBackendSubnet* needed by the backend servers using [az network vnet subnet create](/cli/azure/network/vnet/subnet). Create the public IP address named *myAGPublicIPAddress* using [az network public-ip create](/cli/azure/network/public-ip).
Create the virtual network named *myVNet* and the subnet named *myAGSubnet* usin
## Create an application gateway
-Use [az network application-gateway create](/cli/azure/network/application-gateway) to create the application gateway named *myAppGateway*. When you create an application gateway using the Azure CLI, you specify configuration information, such as capacity, sku, and HTTP settings. The application gateway is assigned to *myAGSubnet* and *myPublicIPAddress* that you previously created.
+Use [az network application-gateway create](/cli/azure/network/application-gateway) to create the application gateway named *myAppGateway*. When you create an application gateway using the Azure CLI, you specify configuration information, such as capacity, sku, and HTTP settings. The application gateway is assigned to *myAGSubnet* and *myPublicIPAddress* that you previously created.
```azurecli-interactive az network application-gateway create \
application-gateway Tutorial Multiple Sites Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-multiple-sites-cli.md
Last updated 04/27/2023 -+ #Customer intent: As an IT administrator, I want to use Azure CLI to configure Application Gateway to host multiple web sites , so I can ensure my customers can access the web information they need.
az network public-ip create \
## Create the application gateway
-You can use [az network application-gateway create](/cli/azure/network/application-gateway#az-network-application-gateway-create) to create the application gateway. When you create an application gateway using the Azure CLI, you specify configuration information, such as capacity, sku, and HTTP settings. The application gateway is assigned to *myAGSubnet* and *myAGPublicIPAddress* that you previously created.
+You can use [az network application-gateway create](/cli/azure/network/application-gateway#az-network-application-gateway-create) to create the application gateway. When you create an application gateway using the Azure CLI, you specify configuration information, such as capacity, sku, and HTTP settings. The application gateway is assigned to *myAGSubnet* and *myAGPublicIPAddress* that you previously created.
```azurecli-interactive az network application-gateway create \
az network application-gateway http-listener create \
--frontend-port appGatewayFrontendPort \ --resource-group myResourceGroupAG \ --gateway-name myAppGateway \
- --host-name www.fabrikam.com
+ --host-name www.fabrikam.com
``` ### Add routing rules
done
## Create a CNAME record in your domain
-After the application gateway is created with its public IP address, you can get the DNS address and use it to create a CNAME record in your domain. You can use [az network public-ip show](/cli/azure/network/public-ip#az-network-public-ip-show) to get the DNS address of the application gateway. Copy the *fqdn* value of the DNSSettings and use it as the value of the CNAME record that you create.
+After the application gateway is created with its public IP address, you can get the DNS address and use it to create a CNAME record in your domain. You can use [az network public-ip show](/cli/azure/network/public-ip#az-network-public-ip-show) to get the DNS address of the application gateway. Copy the *fqdn* value of the DNSSettings and use it as the value of the CNAME record that you create.
```azurecli-interactive az network public-ip show \
application-gateway Tutorial Ssl Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-ssl-cli.md
Last updated 04/27/2023 -+ # Create an application gateway with TLS termination using the Azure CLI
A resource group is a logical container into which Azure resources are deployed
The following example creates a resource group named *myResourceGroupAG* in the *eastus* location.
-```azurecli-interactive
+```azurecli-interactive
az group create --name myResourceGroupAG --location eastus ```
az network public-ip create \
## Create the application gateway
-You can use [az network application-gateway create](/cli/azure/network/application-gateway) to create the application gateway. When you create an application gateway using the Azure CLI, you specify configuration information, such as capacity, sku, and HTTP settings.
+You can use [az network application-gateway create](/cli/azure/network/application-gateway) to create the application gateway. When you create an application gateway using the Azure CLI, you specify configuration information, such as capacity, sku, and HTTP settings.
-The application gateway is assigned to *myAGSubnet* and *myAGPublicIPAddress* that you previously created. In this example, you associate the certificate that you created and its password when you create the application gateway.
+The application gateway is assigned to *myAGSubnet* and *myAGPublicIPAddress* that you previously created. In this example, you associate the certificate that you created and its password when you create the application gateway.
```azurecli-interactive az network application-gateway create \
application-gateway Tutorial Url Redirect Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-url-redirect-cli.md
Last updated 04/27/2023 -+ #Customer intent: As an IT administrator, I want to use Azure CLI to set up URL path redirection of web traffic to specific pools of servers so I can ensure my customers have access to the information they need.
A resource group is a logical container into which Azure resources are deployed
The following example creates a resource group named *myResourceGroupAG* in the *eastus* location.
-```azurecli-interactive
+```azurecli-interactive
az group create --name myResourceGroupAG --location eastus ```
-## Create network resources
+## Create network resources
Create the virtual network named *myVNet* and the subnet named *myAGSubnet* using [az network vnet create](/cli/azure/network/vnet). You can then add the subnet named *myBackendSubnet* that's needed by the backend servers using [az network vnet subnet create](/cli/azure/network/vnet/subnet). Create the public IP address named *myAGPublicIPAddress* using [az network public-ip create](/cli/azure/network/public-ip).
az network application-gateway create \
### Add backend pools and ports
-You can add backend address pools named *imagesBackendPool* and *videoBackendPool* to your application gateway by using [az network application-gateway address-pool create](/cli/azure/network/application-gateway/address-pool). You add the frontend ports for the pools using [az network application-gateway frontend-port create](/cli/azure/network/application-gateway/frontend-port).
+You can add backend address pools named *imagesBackendPool* and *videoBackendPool* to your application gateway by using [az network application-gateway address-pool create](/cli/azure/network/application-gateway/address-pool). You add the frontend ports for the pools using [az network application-gateway frontend-port create](/cli/azure/network/application-gateway/frontend-port).
```azurecli-interactive az network application-gateway address-pool create \
Replace \<azure-user> and \<password> with a user name and password of your choi
for i in `seq 1 3`; do if [ $i -eq 1 ] then
- poolName="appGatewayBackendPool"
+ poolName="appGatewayBackendPool"
fi if [ $i -eq 2 ] then
application-gateway Tutorial Url Route Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-url-route-cli.md
Last updated 04/27/2023 -+ #Customer intent: As an IT administrator, I want to use Azure CLI to set up routing of web traffic to specific pools of servers based on the URL that the customer uses, so I can ensure my customers have the most efficient route to the information they need.
for i in `seq 1 3`; do
if [ $i -eq 1 ] then
- poolName="appGatewayBackendPool"
+ poolName="appGatewayBackendPool"
fi if [ $i -eq 2 ]
automation Automation Hrw Run Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-hrw-run-runbooks.md
description: This article describes how to run runbooks on machines in your loca
Last updated 11/21/2023--++ # Run Automation runbooks on a Hybrid Runbook Worker
Jobs for Hybrid Runbook Workers run under the local **System** account.
> [!NOTE] > To create environment variable in Windows systems, follow these steps:
-> 1. Go to **Control Panel** > **System** > **Advanced System Settings**.
-> 1. In **System Properties** select **Environment variables**.
+> 1. Go to **Control Panel** > **System** > **Advanced System Settings**.
+> 1. In **System Properties** select **Environment variables**.
> 1. In **System variables**, select **New**.
-> 1. Provide **Variable name** and **Variable value**, and then select **OK**.
+> 1. Provide **Variable name** and **Variable value**, and then select **OK**.
> 1. Restart the VM or logout from the current user and login to implement the environment variable changes. **PowerShell 7.2** To run PowerShell 7.2 runbooks on a Windows Hybrid Worker, install *PowerShell* on the Hybrid Worker. See [Installing PowerShell on Windows](/powershell/scripting/install/installing-powershell-on-windows).
-After PowerShell 7.2 installation is complete, create an environment variable with Variable name as powershell_7_2_path and Variable value as location of the executable *PowerShell*. Restart the Hybrid Runbook Worker after environment variable is created successfully.
+After the PowerShell 7.2 installation is complete, create an environment variable with the variable name *powershell_7_2_path* and the variable value set to the location of the *PowerShell* executable. Restart the Hybrid Runbook Worker after the environment variable is created successfully.
**PowerShell 7.1**
If the *Python* executable file is at the default location *C:\Python27\python.e
> [!NOTE] > To create environment variable in Windows systems, follow these steps:
-> 1. Go to **Control Panel** > **System** > **Advanced System Settings**.
-> 1. In **System Properties** select **Environment variables**.
+> 1. Go to **Control Panel** > **System** > **Advanced System Settings**.
+> 1. In **System Properties** select **Environment variables**.
> 1. In **System variables**, select **New**.
-> 1. Provide **Variable name** and **Variable value**, and then select **OK**.
+> 1. Provide **Variable name** and **Variable value**, and then select **OK**.
> 1. Restart the VM or logout from the current user and login to implement the environment variable changes. **PowerShell 7.1**
If the *Python* executable file is at the default location *C:\Python27\python.e
#### [Extension-based Hybrid Workers](#tab/Lin-extn-hrw) > [!NOTE]
-> To create environment variable in Linux systems, follow these steps:
-> 1. Open /etc/environment.
+> To create environment variable in Linux systems, follow these steps:
+> 1. Open /etc/environment.
> 1. Create a new environment variable by adding VARIABLE_NAME="variable_value" on a new line in /etc/environment (VARIABLE_NAME is the name of the new environment variable and variable_value is the value it is assigned). > 1. Restart the VM or log out of the current user session and log back in after saving the changes to /etc/environment to apply the environment variable changes.
After Python 3.10 installation is complete, create an environment variable with
**Python 3.8**
-To run Python 3.8 runbooks on a Linux Hybrid Worker, install *Python* on the Hybrid Worker.
+To run Python 3.8 runbooks on a Linux Hybrid Worker, install *Python* on the Hybrid Worker.
Be sure to add the *Python* executable to the PATH environment variable, and restart the Hybrid Runbook Worker after the installation. **Python 2.7**
Be sure to add the *Python* executable to the PATH environment variable and
#### [Agent-based Hybrid Workers](#tab/Lin-agt-hrw)
-Create Service accounts **nxautomation** and **omsagent** for agent-based Hybrid Workers. The creation and permission assignment script can be viewed at [linux data](https://github.com/microsoft/OMS-Agent-for-Linux/blob/master/installer/datafiles/linux.data). The accounts, with the corresponding sudo permissions, must be present during [installation of a Linux Hybrid Runbook worker](automation-linux-hrw-install.md).
+Create Service accounts **nxautomation** and **omsagent** for agent-based Hybrid Workers. The creation and permission assignment script can be viewed at [linux data](https://github.com/microsoft/OMS-Agent-for-Linux/blob/master/installer/datafiles/linux.data). The accounts, with the corresponding sudo permissions, must be present during [installation of a Linux Hybrid Runbook worker](automation-linux-hrw-install.md).
If you try to install the worker, and the account is not present or doesn't have the appropriate permissions, the installation fails. Do not change the permissions of the `sudoers.d` folder or its ownership. Sudo permission is required for the accounts and the permissions shouldn't be removed. Restricting this to certain folders or commands may result in a breaking change. The **nxautomation** user enabled as part of Update Management executes only signed runbooks.
There are two ways to use the Managed Identities in Hybrid Runbook Worker script
```powershell # Ensures you do not inherit an AzContext in your runbook Disable-AzContextAutosave -Scope Process
-
+ # Connect to Azure with system-assigned managed identity $AzureContext = (Connect-AzAccount -Identity).context
-
+ # set and store context
- $AzureContext = Set-AzContext -SubscriptionName $AzureContext.Subscription -DefaultProfile
+ $AzureContext = Set-AzContext -SubscriptionName $AzureContext.Subscription -DefaultProfile
$AzureContext # Get all VM names from the subscription
There are two ways to use the Managed Identities in Hybrid Runbook Worker script
> This will **NOT** work in an Automation account that has been configured with an Automation account Managed Identity. As soon as the Automation account Managed Identity is enabled, the VM Managed Identity can no longer be used; it is only possible to use the Automation Account System-Assigned Managed Identity, as mentioned in option 1 above. Use any **one** of the following managed identities:
-
+ # [VM's system-assigned managed identity](#tab/sa-mi)
-
+ 1. [Configure](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vmss.md) a system-assigned managed identity for the VM. 1. Grant this identity the [required permissions](../active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-arm.md#grant-your-vm-access-to-a-resource-group-in-resource-manager) within the subscription to perform its tasks. 1. Update the runbook to use the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet with the `Identity` parameter to authenticate to Azure resources. This configuration reduces the need to use a Run As Account and perform the associated account management.
There are two ways to use the Managed Identities in Hybrid Runbook Worker script
```powershell # Ensures you do not inherit an AzContext in your runbook Disable-AzContextAutosave -Scope Process
-
+ # Connect to Azure with system-assigned managed identity $AzureContext = (Connect-AzAccount -Identity).context
-
+ # set and store context
- $AzureContext = Set-AzContext -SubscriptionName $AzureContext.Subscription -DefaultProfile
+ $AzureContext = Set-AzContext -SubscriptionName $AzureContext.Subscription -DefaultProfile
$AzureContext # Get all VM names from the subscription
- Get-AzVM -DefaultProfile $AzureContext | Select Name
+ Get-AzVM -DefaultProfile $AzureContext | Select Name
```
-
+ # [VM's user-assigned managed identity](#tab/ua-mi) 1. [Configure](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vmss.md#user-assigned-managed-identity) a User Managed Identity for the VM.
There are two ways to use the Managed Identities in Hybrid Runbook Worker script
```powershell # Ensures you do not inherit an AzContext in your runbook Disable-AzContextAutosave -Scope Process
-
+ # Connect to Azure with the user-assigned managed identity. Replace <ClientId> below with the Client Id of the User Managed Identity $AzureContext = (Connect-AzAccount -Identity -AccountId <ClientId>).context
-
+ # set and store context
- $AzureContext = Set-AzContext -SubscriptionName $AzureContext.Subscription -DefaultProfile
+ $AzureContext = Set-AzContext -SubscriptionName $AzureContext.Subscription -DefaultProfile
$AzureContext # Get all VM names from the subscription
- Get-AzVM -DefaultProfile $AzureContext | Select Name
+ Get-AzVM -DefaultProfile $AzureContext | Select Name
```
-
+ > [!NOTE] > You can find the client Id of the user-assigned managed identity in the Azure portal.
- > :::image type="content" source="./media/automation-hrw-run-runbooks/managed-identities-client-id-inline.png" alt-text="Screenshot of client id in Managed Identities." lightbox="./media/automation-hrw-run-runbooks/managed-identities-client-id-expanded.png":::
+ > :::image type="content" source="./media/automation-hrw-run-runbooks/managed-identities-client-id-inline.png" alt-text="Screenshot of client id in Managed Identities." lightbox="./media/automation-hrw-run-runbooks/managed-identities-client-id-expanded.png":::
There are two ways to use the Managed Identities in Hybrid Runbook Worker script
1. You can grant this Managed Identity access to resources in your subscription in the Access control (IAM) blade for the resource by adding the appropriate role assignment. :::image type="content" source="./media/automation-hrw-run-runbooks/access-control-add-role-assignment.png" alt-text="Screenshot of how to select managed identities.":::
-
+ 2. Add the Azure Arc Managed Identity to your chosen role as required. :::image type="content" source="./media/automation-hrw-run-runbooks/select-managed-identities-inline.png" alt-text="Screenshot of how to add role assignment in the Access control blade." lightbox="./media/automation-hrw-run-runbooks/select-managed-identities-expanded.png":::
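As a hedged sketch of the equivalent role assignment from PowerShell (the principal ID, role name, and scope below are placeholders, not values from the article):

```powershell
# Assign a role to the managed identity so runbooks can reach the target resources.
# Replace the placeholders with your managed identity's principal (object) ID and the intended scope.
New-AzRoleAssignment `
    -ObjectId '<principal-id-of-the-managed-identity>' `
    -RoleDefinitionName 'Reader' `
    -Scope '/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>'
```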
-
+ > [!NOTE] > This will **NOT** work in an Automation account that has been configured with an Automation account Managed Identity. As soon as the Automation account Managed Identity is enabled, the Arc Managed Identity can no longer be used; it is **only** possible to use the Automation Account System-Assigned Managed Identity, as mentioned in option 1 above. >[!NOTE]
->By default, the Azure contexts are saved for use between PowerShell sessions. It is possible that when a previous runbook on the Hybrid Runbook Worker has been authenticated with Azure, that context persists to the disk in the System PowerShell profile, as per [Azure contexts and sign-in credentials | Microsoft Docs](/powershell/azure/context-persistence).
+>By default, the Azure contexts are saved for use between PowerShell sessions. It is possible that when a previous runbook on the Hybrid Runbook Worker has been authenticated with Azure, that context persists to the disk in the System PowerShell profile, as per [Azure contexts and sign-in credentials | Microsoft Docs](/powershell/azure/context-persistence).
For instance, a runbook with `Get-AzVM` can return all the VMs in the subscription with no call to `Connect-AzAccount`, and the user would be able to access Azure resources without having to authenticate within that runbook. You can disable context autosave in Azure PowerShell, as detailed [here](/powershell/azure/context-persistence#save-azure-contexts-across-powershell-sessions).
-
+ ### Use runbook authentication with Hybrid Worker Credentials Instead of having your runbook provide its own authentication to local resources, you can specify Hybrid Worker Credentials for a Hybrid Runbook Worker group. To specify Hybrid Worker Credentials, you must define a [credential asset](./shared-resources/credentials.md) that has access to local resources, such as certificate stores. All runbooks run under these credentials on a Hybrid Runbook Worker in the group.
By default, the Hybrid jobs run under the context of System account. However, to
1. Select **Settings**. 1. Change the value of **Hybrid Worker credentials** from **Default** to **Custom**. 1. Select the credential and click **Save**.
-1. If the following permissions are not assigned for Custom users, jobs might get suspended.
+1. If the following permissions are not assigned for Custom users, jobs might get suspended.
| **Resource type** | **Folder permissions** |
| --- | --- |
|Azure VM | C:\Packages\Plugins\Microsoft.Azure.Automation.HybridWorker.HybridWorkerForWindows (read and execute) |
|Arc-enabled Server | C:\ProgramData\AzureConnectedMachineAgent\Tokens (read)</br> C:\Packages\Plugins\Microsoft.Azure.Automation.HybridWorker.HybridWorkerForWindows (read and execute) |
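As a hedged example, the folder permissions above could be granted with `icacls`; the account name and the inheritance flags below are illustrative assumptions:

```powershell
# Grant the custom Hybrid Worker credential account the folder access listed in the table.
# 'CONTOSO\runbookuser' is a placeholder account; (OI)(CI) inheritance flags are illustrative.
$user = 'CONTOSO\runbookuser'

icacls 'C:\Packages\Plugins\Microsoft.Azure.Automation.HybridWorker.HybridWorkerForWindows' /grant "${user}:(OI)(CI)(RX)"

# On an Arc-enabled server, also grant read access to the token folder.
icacls 'C:\ProgramData\AzureConnectedMachineAgent\Tokens' /grant "${user}:(OI)(CI)(R)"
```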
-
+ >[!NOTE] >Linux Hybrid Worker doesn't support Hybrid Worker credentials.
-
+ ## Start a runbook on a Hybrid Runbook Worker [Start a runbook in Azure Automation](start-runbooks.md) describes different methods for starting a runbook. Starting a runbook on a Hybrid Runbook Worker uses a **Run on** option that allows you to specify the name of a Hybrid Runbook Worker group. When a group is specified, one of the workers in that group retrieves and runs the runbook. If your runbook does not specify this option, Azure Automation runs the runbook as usual.
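For example, a minimal sketch of starting a runbook on a Hybrid Runbook Worker group with Az PowerShell; the runbook name is a placeholder, and the account, resource group, and group names reuse the sample values from this article:

```powershell
# Start a runbook and target a Hybrid Runbook Worker group with the -RunOn parameter.
Start-AzAutomationRunbook `
    -AutomationAccountName 'Contoso17' `
    -ResourceGroupName 'ResourceGroup01' `
    -Name 'MyRunbook' `
    -RunOn 'RunbookWorkerGroupName'
```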
You can configure a Windows Hybrid Runbook Worker to run only signed runbooks.
> Once you've configured a Hybrid Runbook Worker to run only signed runbooks, unsigned runbooks fail to execute on the worker. > [!NOTE]
-> PowerShell 7.x does not support signed runbooks for Windows and Linux Hybrid Runbook Worker.
+> PowerShell 7.x does not support signed runbooks for Windows and Linux Hybrid Runbook Worker.
### Create signing certificate
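A minimal sketch of the kind of certificate creation this section covers, assuming a self-signed code-signing certificate is acceptable for your environment; the subject name and runbook path are placeholders, and this is not necessarily the article's exact procedure:

```powershell
# Create a self-signed code-signing certificate in the local machine store.
$cert = New-SelfSignedCertificate `
    -Subject 'CN=Contoso Runbook Signing' `
    -Type CodeSigningCert `
    -CertStoreLocation 'Cert:\LocalMachine\My'

# Sign a runbook file with the certificate so it can run on a worker that requires signed runbooks.
Set-AuthenticodeSignature -FilePath 'C:\Runbooks\MyRunbook.ps1' -Certificate $cert
```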
automation Automation Linux Hrw Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-linux-hrw-install.md
Title: Deploy an agent-based Linux Hybrid Runbook Worker in Automation
description: This article tells how to install an agent-based Hybrid Runbook Worker to run runbooks on Linux-based machines in your local datacenter or cloud environment. -+ Last updated 09/17/2023-+ # Deploy an agent-based Linux Hybrid Runbook Worker in Automation
The Hybrid Runbook Worker feature supports the following distributions. All oper
* Oracle Linux 6, 7, and 8 * Red Hat Enterprise Linux Server 5, 6, 7, and 8 * Debian GNU/Linux 6, 7, and 8
-* SUSE Linux Enterprise Server 12, 15, and 15.1 (SUSE didn't release versions numbered 13 or 14)
+* SUSE Linux Enterprise Server 12, 15, and 15.1 (SUSE didn't release versions numbered 13 or 14)
* Ubuntu **Linux OS** | **Name** | | |
- 20.04 LTS | Focal Fossa
- 18.04 LTS | Bionic Beaver
- 16.04 LTS | Xenial Xerus
- 14.04 LTS | Trusty Tahr
+ 20.04 LTS | Focal Fossa
+ 18.04 LTS | Bionic Beaver
+ 16.04 LTS | Xenial Xerus
+ 14.04 LTS | Trusty Tahr
> [!IMPORTANT] > Before enabling the Update Management feature, which depends on the system Hybrid Runbook Worker role, confirm the distributions it supports [here](update-management/operating-system-requirements.md).
Run the following commands as root on the agent-based Linux Hybrid Worker:
> [!NOTE]
- > - This script doesn't remove the Log Analytics agent for Linux from the machine. It only removes the functionality and configuration of the Hybrid Runbook Worker role.
+ > - This script doesn't remove the Log Analytics agent for Linux from the machine. It only removes the functionality and configuration of the Hybrid Runbook Worker role.
> - After you disable the Private Link in your Automation account, it might take up to 60 minutes to remove the Hybrid Runbook worker. > - After you remove the Hybrid Worker, the Hybrid Worker authentication certificate on the machine is valid for 45 minutes.
To check the version of agent-based Linux Hybrid Runbook Worker, go to the follo
```bash sudo cat /opt/microsoft/omsconfig/modules/nxOMSAutomationWorker/VERSION ```
-The file *VERSION* has the version number of Hybrid Runbook Worker.
+The file *VERSION* has the version number of Hybrid Runbook Worker.
## Next steps
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/overview.md
description: This article describes the Change Tracking and Inventory feature, w
Last updated 12/13/2023-+
Change Tracking and Inventory doesn't support or has the following limitations:
- Different installation methods - ***.exe** files stored on Windows - The **Max File Size** column and values are unused in the current implementation.-- If you are tracking file changes, it is limited to a file size of 5 MB or less.
+- If you are tracking file changes, it is limited to a file size of 5 MB or less.
- If the file size appears >1.25MB, then FileContentChecksum is incorrect due to memory constraints in the checksum calculation. - If you try to collect more than 2500 files in a 30-minute collection cycle, Change Tracking and Inventory performance might be degraded. - If network traffic is high, change records can take up to six hours to display.
Change Tracking and Inventory now support Python 2 and Python 3. If your machine
> [!NOTE] > To use the OMS agent compatible with Python 3, ensure that you first uninstall Python 2; otherwise, the OMS agent will continue to run with python 2 by default.
-#### [Python 2](#tab/python-2)
-- Red Hat, CentOS, Oracle:
+#### [Python 2](#tab/python-2)
+- Red Hat, CentOS, Oracle:
```bash sudo yum install -y python2 ``` - Ubuntu, Debian:
-
+ ```bash sudo apt-get update sudo apt-get install -y python2 ``` - SUSE:
-
+ ```bash sudo zypper install -y python2 ```
Change Tracking and Inventory now support Python 2 and Python 3. If your machine
```bash sudo yum install -y python3 ```-- Ubuntu, Debian:
+- Ubuntu, Debian:
```bash sudo apt-get update sudo apt-get install -y python3 ```-- SUSE:
-
+- SUSE:
+ ```bash sudo zypper install -y python3 ```
-
+ ## Network requirements
A key capability of Change Tracking and Inventory is alerting on changes to the
|ConfigurationChange <br>&#124; where RegistryKey contains @"HKEY_LOCAL_MACHINE\\SYSTEM\\CurrentControlSet\\Services\\SharedAccess\\Parameters\\FirewallPolicy"| Useful for tracking changes to firewall settings.|
-## Update Log Analytics agent to latest version
+## Update Log Analytics agent to latest version
-For Change Tracking & Inventory, machines use the [Log Analytics agent](../../azure-monitor/agents/log-analytics-agent.md) to collect data about changes to installed software, Windows services, Windows registry and files, and Linux daemons on monitored servers. Soon, Azure will no longer accept connections from older versions of the Windows Log Analytics (LA) agent, also known as the Windows Microsoft Monitoring Agent (MMA), that uses an older method for certificate handling. We recommend upgrading your agent to the latest version as soon as possible.
+For Change Tracking & Inventory, machines use the [Log Analytics agent](../../azure-monitor/agents/log-analytics-agent.md) to collect data about changes to installed software, Windows services, Windows registry and files, and Linux daemons on monitored servers. Soon, Azure will no longer accept connections from older versions of the Windows Log Analytics (LA) agent, also known as the Windows Microsoft Monitoring Agent (MMA), that uses an older method for certificate handling. We recommend upgrading your agent to the latest version as soon as possible.
-[Agents that are on version - 10.20.18053 (bundle) and 1.0.18053.0 (extension)](../../virtual-machines/extensions/oms-windows.md#agent-and-vm-extension-version) or newer aren't affected by this change. If you're on an agent version prior to that, your agent will be unable to connect, and the Change Tracking & Inventory pipeline and downstream activities can stop. You can check the current LA agent version in the Heartbeat table within your LA workspace.
+[Agents that are on version - 10.20.18053 (bundle) and 1.0.18053.0 (extension)](../../virtual-machines/extensions/oms-windows.md#agent-and-vm-extension-version) or newer aren't affected by this change. If you're on an agent version prior to that, your agent will be unable to connect, and the Change Tracking & Inventory pipeline and downstream activities can stop. You can check the current LA agent version in the Heartbeat table within your LA workspace.
-Be sure to upgrade to the latest version of the Windows Log Analytics agent (MMA) by following these [guidelines](../../azure-monitor/agents/agent-manage.md).
+Be sure to upgrade to the latest version of the Windows Log Analytics agent (MMA) by following these [guidelines](../../azure-monitor/agents/agent-manage.md).
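To check which agent version a machine currently reports, a hedged sketch that queries the Heartbeat table with Az PowerShell; the workspace ID is a placeholder:

```powershell
# Return the most recently reported agent version per computer from the Heartbeat table.
$query = 'Heartbeat | summarize arg_max(TimeGenerated, Version) by Computer'

Invoke-AzOperationalInsightsQuery -WorkspaceId '<log-analytics-workspace-id>' -Query $query
```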
## Next steps
automation Migrate Existing Agent Based Hybrid Worker To Extension Based Workers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md
description: This article provides information on how to migrate an existing age
Last updated 12/10/2023-+ #Customer intent: As a developer, I want to learn about extension so that I can efficiently migrate agent based hybrid workers to extension based workers.
> [!IMPORTANT] > Azure Automation Agent-based User Hybrid Runbook Worker (Windows and Linux) will retire on **31 August 2024** and won't be supported after that date. You must complete migrating existing Agent-based User Hybrid Runbook Workers to Extension-based Workers before 31 August 2024. Moreover, starting **1 November 2023**, creating new Agent-based Hybrid Workers is no longer possible. [Learn more](migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md).
-This article describes the benefits of Extension-based User Hybrid Runbook Worker and how to migrate existing Agent-based User Hybrid Runbook Workers to Extension-based Hybrid Workers.
+This article describes the benefits of Extension-based User Hybrid Runbook Worker and how to migrate existing Agent-based User Hybrid Runbook Workers to Extension-based Hybrid Workers.
There are two Hybrid Runbook Worker installation platforms supported by Azure Automation: - **Agent based hybrid runbook worker** (V1) - The Agent-based hybrid runbook worker depends on the [Log Analytics Agent](../azure-monitor/agents/log-analytics-agent.md).
The process of executing runbooks on Hybrid Runbook Workers remains the same for
The purpose of the Extension-based approach is to simplify the installation and management of the Hybrid Worker and remove the complexity working with the Agent-based version. Here are some key benefits: -- **Seamless onboarding** – The Agent-based approach for onboarding Hybrid Runbook worker is dependent on the Log Analytics Agent, which is a multi-step, time-consuming, and error-prone process. The Extension-based approach offers more security and is no longer dependent on the Log Analytics Agent.
+- **Seamless onboarding** – The Agent-based approach for onboarding Hybrid Runbook worker is dependent on the Log Analytics Agent, which is a multi-step, time-consuming, and error-prone process. The Extension-based approach offers more security and is no longer dependent on the Log Analytics Agent.
-- **Ease of Manageability** – It offers native integration with Azure Resource Manager (ARM) identity for Hybrid Runbook Worker and provides the flexibility for governance at scale through policies and templates.
+- **Ease of Manageability** – It offers native integration with Azure Resource Manager (ARM) identity for Hybrid Runbook Worker and provides the flexibility for governance at scale through policies and templates.
-- **Microsoft Entra ID based authentication** – It uses VM system-assigned managed identities provided by Microsoft Entra ID. This centralizes control and management of identities and resource credentials.
+- **Microsoft Entra ID based authentication** – It uses VM system-assigned managed identities provided by Microsoft Entra ID. This centralizes control and management of identities and resource credentials.
-- **Unified experience** – It offers an identical experience for managing Azure and off-Azure Arc-enabled machines.
+- **Unified experience** – It offers an identical experience for managing Azure and off-Azure Arc-enabled machines.
-- **Multiple onboarding channels** – You can choose to onboard and manage Extension-based workers through the Azure portal, PowerShell cmdlets, Bicep, ARM templates, REST API and Azure CLI.
+- **Multiple onboarding channels** – You can choose to onboard and manage Extension-based workers through the Azure portal, PowerShell cmdlets, Bicep, ARM templates, REST API and Azure CLI.
- **Default Automatic upgrade** – It offers Automatic upgrade of minor versions by default, significantly reducing the manageability of staying updated on the latest version. We recommend enabling Automatic upgrades to take advantage of any security or feature updates without the manual overhead. You can also opt out of automatic upgrades at any time. Any major version upgrades are currently not supported and should be managed manually.
The purpose of the Extension-based approach is to simplify the installation and
- 4 GB of RAM - **Non-Azure machines** must have the [Azure Connected Machine agent](../azure-arc/servers/agent-overview.md) installed. To install the `AzureConnectedMachineAgent`, see [Connect hybrid machines to Azure from the Azure portal](../azure-arc/servers/onboard-portal.md) for Arc-enabled servers or see [Manage VMware virtual machines Azure Arc](../azure-arc/vmware-vsphere/manage-vmware-vms-in-azure.md#enable-guest-management) to enable guest management for Arc-enabled VMware vSphere VMs. - The system-assigned managed identity must be enabled on the Azure virtual machine, Arc-enabled server or Arc-enabled VMware vSphere VM. If the system-assigned managed identity isn't enabled, it will be enabled as part of the installation process through the Azure portal.
-
+ ### Supported operating systems | Windows (x64) | Linux (x64) |
To install Hybrid worker extension on an existing agent based hybrid worker, fol
1. Select **Add** to append the machine to the group.
- The **Platform** column shows the same Hybrid worker as both **Agent based (V1)** and **Extension based (V2)**. After you're confident of the extension based Hybrid Worker experience and use, you can [remove](#remove-agent-based-hybrid-worker) the agent based Worker.
+ The **Platform** column shows the same Hybrid worker as both **Agent based (V1)** and **Extension based (V2)**. After you're confident of the extension based Hybrid Worker experience and use, you can [remove](#remove-agent-based-hybrid-worker) the agent based Worker.
:::image type="content" source="./media/migrate-existing-agent-based-hybrid-worker-extension-based-hybrid-worker/hybrid-workers-group-platform-inline.png" alt-text="Screenshot of platform field showing agent or extension based hybrid worker." lightbox="./media/migrate-existing-agent-based-hybrid-worker-extension-based-hybrid-worker/hybrid-workers-group-platform-expanded.png":::
Follow the steps mentioned below as an example:
1. Create a Hybrid Worker Group. 1. Create either an Azure VM or Arc-enabled server. Alternatively, you can also use an existing Azure VM or Arc-enabled server.
-1. Connect the Azure VM or Arc-enabled server to the above created Hybrid Worker Group.
+1. Connect the Azure VM or Arc-enabled server to the above created Hybrid Worker Group.
1. Generate a new GUID and pass it as the name of the Hybrid Worker. 1. Enable System-assigned managed identity on the VM. 1. Install Hybrid Worker Extension on the VM.
Follow the steps mentioned below as an example:
1. Create a Hybrid Worker Group. 1. Create either an Azure VM or Arc-enabled server. Alternatively, you can also use an existing Azure VM or Arc-enabled server.
-1. Connect the Azure VM or Arc-enabled server to the above created Hybrid Worker Group.
+1. Connect the Azure VM or Arc-enabled server to the above created Hybrid Worker Group.
1. Generate a new GUID and pass it as the name of the Hybrid Worker. 1. Enable System-assigned managed identity on the VM. 1. Install Hybrid Worker Extension on the VM.
Review the parameters used in this template.
| osVersion | The OS for the new Windows VM. The default value is `2019-Datacenter`. | | dnsNameForPublicIP | The DNS name for the public IP. |
-
+ #### [REST API](#tab/rest-api) **Prerequisites**
To install and use Hybrid Worker extension using REST API, follow these steps. T
GET https://westcentralus.management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Automation/automationAccounts/{automationAccountName}/hybridRunbookWorkerGroups/{hybridRunbookWorkerGroupName}/hybridRunbookWorkers/{hybridRunbookWorkerId}?api-version=2021-06-22 ```
-
+ 1. Follow the steps [here](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#enable-system-assigned-managed-identity-on-an-existing-vm) to enable the System-assigned managed identity on the VM. 1. Get the automation account details using this API call.
To install and use Hybrid Worker extension using REST API, follow these steps. T
The API call will provide the value with the key: `AutomationHybridServiceUrl`. Use the URL in the next step to enable extension on the VM.
-1. Install the Hybrid Worker Extension on Azure VM by using the following API call.
-
+1. Install the Hybrid Worker Extension on Azure VM by using the following API call.
+ ```http PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}/extensions/HybridWorkerExtension?api-version=2021-11-01 ```
-
+ The request body should contain the following information: ```json
To install and use Hybrid Worker extension using REST API, follow these steps. T
} ```
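As a hedged alternative to a raw HTTP client, the same PUT call can be issued from Az PowerShell with `Invoke-AzRestMethod`; the local file name is hypothetical, the path segments are placeholders, and the payload is the request body described above:

```powershell
# Read the request body described above from a local file (hypothetical file name).
$requestBody = Get-Content -Raw -Path '.\hybrid-worker-extension-body.json'

# Issue the PUT call against the virtual machine extension endpoint.
Invoke-AzRestMethod -Method PUT `
    -Path '/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Compute/virtualMachines/<vmName>/extensions/HybridWorkerExtension?api-version=2021-11-01' `
    -Payload $requestBody
```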
-
+ For ARC VMs, use the below API call for enabling the extension: ```http PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.HybridCompute/machines/{machineName}/extensions/{extensionName}?api-version=2021-05-20 ```
-
+ The request body should contain the following information: ```json
Follow the steps mentioned below as an example:
1. Install Hybrid Worker Extension on the VM ```azurecli-interactive
- az vm extension set --name HybridWorkerExtension --publisher Microsoft.Azure.Automation.HybridWorker --version 1.1 --vm-name <vmname> -g <resourceGroupName> \
- --settings '{"AutomationAccountURL": "<registration-url>"}' --enable-auto-upgrade true
+ az vm extension set --name HybridWorkerExtension --publisher Microsoft.Azure.Automation.HybridWorker --version 1.1 --vm-name <vmname> -g <resourceGroupName> \
+ --settings '{"AutomationAccountURL": "<registration-url>"}' --enable-auto-upgrade true
``` 1. To confirm if the extension has been successfully installed on the VM, in **Azure portal**, go to the VM > **Extensions** tab and check the status of the Hybrid Worker extension installed on the VM.
Follow the steps mentioned below as an example:
1. Create a Hybrid Worker Group. ```powershell-interactive
- New-AzAutomationHybridRunbookWorkerGroup -AutomationAccountName "Contoso17" -Name "RunbookWorkerGroupName" -ResourceGroupName "ResourceGroup01"
+ New-AzAutomationHybridRunbookWorkerGroup -AutomationAccountName "Contoso17" -Name "RunbookWorkerGroupName" -ResourceGroupName "ResourceGroup01"
1. Create an Azure VM or Arc-enabled server and add it to the above created Hybrid Worker Group. Use the below command to add an existing Azure VM or Arc-enabled Server to the Hybrid Worker Group. Generate a new GUID and pass it as the name of the Hybrid Worker. To fetch `vmResourceId`, go to the **Properties** tab of the VM on Azure portal. ```azurepowershell
- New-AzAutomationHybridRunbookWorker -AutomationAccountName "Contoso17" -Name "RunbookWorkerName" -HybridRunbookWorkerGroupName "RunbookWorkerGroupName" -VmResourceId "VmResourceId" -ResourceGroupName "ResourceGroup01"
+ New-AzAutomationHybridRunbookWorker -AutomationAccountName "Contoso17" -Name "RunbookWorkerName" -HybridRunbookWorkerGroupName "RunbookWorkerGroupName" -VmResourceId "VmResourceId" -ResourceGroupName "ResourceGroup01"
``` 1. Follow the steps [here](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#enable-system-assigned-managed-identity-on-an-existing-vm) to enable the System-assigned managed identity on the VM. 1. Install Hybrid Worker Extension on the VM.
-
+ **Hybrid Worker extension settings** ```powershell-interactive
Follow the steps mentioned below as an example:
"AutomationAccountURL" = "<registrationurl>"; }; ```
-
+ **Azure VMs** ```powershell
automation Remove Node And Configuration Package https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/state-configuration/remove-node-and-configuration-package.md
description: This article explains how to remove an Azure Automation State Confi
-+ Last updated 04/16/2021
To find the package names and other relevant details, see the [PowerShell Desire
```bash rpm -e <package name>
-```
+```
### dpkg-based systems
automation Hybrid Runbook Worker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/hybrid-runbook-worker.md
Title: Troubleshoot agent-based Hybrid Runbook Worker issues in Azure Automation
description: This article tells how to troubleshoot and resolve issues that arise with Azure Automation agent-based Hybrid Runbook Workers. Last updated 09/17/2023--++ # Troubleshoot agent-based Hybrid Runbook Worker issues in Automation
This error can occur due to the following reasons:
- The Hybrid Runbook Worker extension has been uninstalled from the machine. #### Resolution-- Ensure that the machine exists, and Hybrid Runbook Worker extension is installed on it. The Hybrid Worker should be healthy and should give a heartbeat. Troubleshoot any network issues by checking the Microsoft-SMA event logs on the Workers in the Hybrid Runbook Worker Group that tried to run this job. -- You can also monitor [HybridWorkerPing](../../azure-monitor/essentials/metrics-supported.md#microsoftautomationautomationaccounts) metric that provides the number of pings from a Hybrid Worker and can help to check ping-related issues.
+- Ensure that the machine exists, and Hybrid Runbook Worker extension is installed on it. The Hybrid Worker should be healthy and should give a heartbeat. Troubleshoot any network issues by checking the Microsoft-SMA event logs on the Workers in the Hybrid Runbook Worker Group that tried to run this job.
+- You can also monitor [HybridWorkerPing](../../azure-monitor/essentials/metrics-supported.md#microsoftautomationautomationaccounts) metric that provides the number of pings from a Hybrid Worker and can help to check ping-related issues.
### Scenario: Job was suspended as it exceeded the job limit for a Hybrid Worker
Job gets suspended with the following error message:
#### Cause Jobs might get suspended due to any of the following reasons:-- Each active Hybrid Worker in the group will poll for jobs every 30 seconds to see if any jobs are available. The Worker picks jobs on a first-come, first-serve basis. Depending on when a job was pushed, whichever Hybrid Worker within the Hybrid Worker Group pings the Automation service first picks up the job. A single hybrid worker can generally pick up four jobs per ping (that is, every 30 seconds). If your rate of pushing jobs is higher than four per 30 seconds and no other Worker picks up the job, the job might get suspended. -- Hybrid Worker might not be polling as expected every 30 seconds. This could happen if the Worker is not healthy or there are network issues.
+- Each active Hybrid Worker in the group will poll for jobs every 30 seconds to see if any jobs are available. The Worker picks jobs on a first-come, first-serve basis. Depending on when a job was pushed, whichever Hybrid Worker within the Hybrid Worker Group pings the Automation service first picks up the job. A single hybrid worker can generally pick up four jobs per ping (that is, every 30 seconds). If your rate of pushing jobs is higher than four per 30 seconds and no other Worker picks up the job, the job might get suspended.
+- Hybrid Worker might not be polling as expected every 30 seconds. This could happen if the Worker is not healthy or there are network issues.
#### Resolution-- If the job limit for a Hybrid Worker exceeds four jobs per 30 seconds, you can add more Hybrid Workers to the Hybrid Worker group for high availability and load balancing. You can also schedule jobs so they do not exceed the limit of four jobs per 30 seconds. The processing time of the jobs queue depends on the Hybrid worker hardware profile and load. Ensure that the Hybrid Worker is healthy and gives a heartbeat. -- Troubleshoot any network issues by checking the Microsoft-SMA event logs on the Workers in the Hybrid Runbook Worker Group that tried to run this job. -- You can also monitor the [HybridWorkerPing](../../azure-monitor/essentials/metrics-supported.md#microsoftautomationautomationaccounts) metric that provides the number of pings from a Hybrid Worker and can help to check ping-related issues.
+- If the job limit for a Hybrid Worker exceeds four jobs per 30 seconds, you can add more Hybrid Workers to the Hybrid Worker group for high availability and load balancing. You can also schedule jobs so they do not exceed the limit of four jobs per 30 seconds. The processing time of the jobs queue depends on the Hybrid worker hardware profile and load. Ensure that the Hybrid Worker is healthy and gives a heartbeat.
+- Troubleshoot any network issues by checking the Microsoft-SMA event logs on the Workers in the Hybrid Runbook Worker Group that tried to run this job.
+- You can also monitor the [HybridWorkerPing](../../azure-monitor/essentials/metrics-supported.md#microsoftautomationautomationaccounts) metric that provides the number of pings from a Hybrid Worker and can help to check ping-related issues.
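As a hedged example, the HybridWorkerPing metric can be retrieved with Az PowerShell; the Automation account resource ID below is a placeholder:

```powershell
# Retrieve the HybridWorkerPing metric for an Automation account to check ping-related issues.
Get-AzMetric `
    -ResourceId '/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Automation/automationAccounts/<automationAccountName>' `
    -MetricName 'HybridWorkerPing'
```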
You can't see the Hybrid Runbook Worker or VMs when the worker machine has been
#### Cause
-The Hybrid Runbook Worker machine hasn't pinged Azure Automation for more than 30 days. As a result, Automation has purged the Hybrid Runbook Worker group or the System Worker group.
+The Hybrid Runbook Worker machine hasn't pinged Azure Automation for more than 30 days. As a result, Automation has purged the Hybrid Runbook Worker group or the System Worker group.
#### Resolution
Start the worker machine, and then re-register it with Azure Automation. For ins
A runbook running on a Hybrid Runbook Worker fails with the following error message:
-`Connect-AzAccount : No certificate was found in the certificate store with thumbprint 0000000000000000000000000000000000000000`
-`At line:3 char:1`
-`+ Connect-AzAccount -ServicePrincipal -Tenant $Conn.TenantID -Appl ...`
-`+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~`
-` + CategoryInfo : CloseError: (:) [Connect-AzAccount],ArgumentException`
+`Connect-AzAccount : No certificate was found in the certificate store with thumbprint 0000000000000000000000000000000000000000`
+`At line:3 char:1`
+`+ Connect-AzAccount -ServicePrincipal -Tenant $Conn.TenantID -Appl ...`
+`+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~`
+` + CategoryInfo : CloseError: (:) [Connect-AzAccount],ArgumentException`
` + FullyQualifiedErrorId : Microsoft.Azure.Commands.Profile.ConnectAzAccountCommand` #### Cause
The worker's initial registration phase fails, and you receive the following err
The following issues are possible causes:
-* There's a mistyped workspace ID or workspace key (primary) in the agent's settings.
+* There's a mistyped workspace ID or workspace key (primary) in the agent's settings.
* The Hybrid Runbook Worker can't download the configuration, which causes an account linking error. When Azure enables features on machines, it supports only certain regions for linking a Log Analytics workspace and an Automation account. It's also possible that an incorrect date or time is set on the computer. If the time is +/- 15 minutes from the current time, feature deployment fails. * Log Analytics Gateway is not configured to support Hybrid Runbook Worker.
You might also need to update the date or time zone of your computer. If you sel
Follow the steps mentioned [here](../../azure-monitor/agents/gateway.md#configure-for-automation-hybrid-runbook-workers) to add Hybrid Runbook Worker endpoints to the Log Analytics Gateway.
-### <a name="set-azstorageblobcontent-execution-fails"></a>Scenario: Set-AzStorageBlobContent fails on a Hybrid Runbook Worker
+### <a name="set-azstorageblobcontent-execution-fails"></a>Scenario: Set-AzStorageBlobContent fails on a Hybrid Runbook Worker
#### Issue
Hybrid workers send [Runbook output and messages](../automation-runbook-output-a
#### Issue
-A script running on a Windows Hybrid Runbook Worker can't connect as expected to Microsoft 365 on an Orchestrator sandbox. The script is using [Connect-MsolService](/powershell/module/msonline/connect-msolservice) for connection.
+A script running on a Windows Hybrid Runbook Worker can't connect as expected to Microsoft 365 on an Orchestrator sandbox. The script is using [Connect-MsolService](/powershell/module/msonline/connect-msolservice) for connection.
If you adjust **Orchestrator.Sandbox.exe.config** to set the proxy and the bypass list, the sandbox still doesn't connect properly. A **Powershell_ise.exe.config** file with the same proxy and bypass list settings seems to work as you expect. Service Management Automation (SMA) logs and PowerShell logs don't provide any information about the proxy. #### Cause
-The connection to Active Directory Federation Services (AD FS) on the server can't bypass the proxy. Remember that a PowerShell sandbox runs as the logged user. However, an Orchestrator sandbox is heavily customized and might ignore the **Orchestrator.Sandbox.exe.config** file settings. It has special code for handling machine or Log Analytics agent proxy settings, but not for handling other custom proxy settings.
+The connection to Active Directory Federation Services (AD FS) on the server can't bypass the proxy. Remember that a PowerShell sandbox runs as the logged user. However, an Orchestrator sandbox is heavily customized and might ignore the **Orchestrator.Sandbox.exe.config** file settings. It has special code for handling machine or Log Analytics agent proxy settings, but not for handling other custom proxy settings.
#### Resolution You can resolve the issue for the Orchestrator sandbox by migrating your script to use the Microsoft Entra modules instead of the MSOnline module for PowerShell cmdlets. For more information, see [Migrating from Orchestrator to Azure Automation (Beta)](../automation-orchestrator-migration.md).
-If you want to continue to use the MSOnline module cmdlets, change your script to use [Invoke-Command](/powershell/module/microsoft.powershell.core/invoke-command). Specify values for the `ComputerName` and `Credential` parameters.
+If you want to continue to use the MSOnline module cmdlets, change your script to use [Invoke-Command](/powershell/module/microsoft.powershell.core/invoke-command). Specify values for the `ComputerName` and `Credential` parameters.
```powershell $Credential = Get-AutomationPSCredential -Name MyProxyAccessibleCredential
-Invoke-Command -ComputerName $env:COMPUTERNAME -Credential $Credential
+Invoke-Command -ComputerName $env:COMPUTERNAME -Credential $Credential
{ Connect-MsolService … } ```
automation Update Agent Issues Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/update-agent-issues-linux.md
Last updated 11/01/2021 -+ # Troubleshoot Linux update agent issues
To verify if a VM is an Azure VM, check for Asset tag value using the below comm
sudo dmidecode ```
-If the asset tag is different from 7783-7084-3265-9085-8269-3286-77, reboot the VM to initiate re-registration.
+If the asset tag is different from 7783-7084-3265-9085-8269-3286-77, reboot the VM to initiate re-registration.
## Monitoring agent service health checks
If the asset tag is different than 7783-7084-3265-9085-8269-3286-77, then reboot
To fix this, install Azure Log Analytics Linux agent and ensure it communicates the required endpoints. For more information, see [Install Log Analytics agent on Linux computers](../../azure-monitor/agents/agent-linux.md).
-This task checks if the folder is present -
+This task checks if the folder is present -
*/etc/opt/microsoft/omsagent/conf/omsadmin.conf* ### Monitoring Agent status
-
-To fix this issue, you must start the OMS Agent service by using the following command:
+
+To fix this issue, you must start the OMS Agent service by using the following command:
```bash sudo /opt/microsoft/omsagent/bin/service_control restart ```
-To validate, you can perform a process check using the below command:
+To validate, you can perform a process check using the below command:
```bash
-process_name="omsagent"
-ps aux | grep "$process_name" | grep -v grep
+process_name="omsagent"
+ps aux | grep "$process_name" | grep -v grep
``` For more information, see [Troubleshoot issues with the Log Analytics agent for Linux](../../azure-monitor/agents/agent-linux-troubleshoot.md)
To fix this issue, purge the OMS Agent completely and reinstall it with the [wor
Validate that there are no more multihoming by checking the directories under this path:
- */var/opt/microsoft/omsagent*.
+ */var/opt/microsoft/omsagent*.
Because these are workspace directories, the number of directories equals the number of workspaces onboarded to the OMS Agent. ### Hybrid Runbook Worker
-To fix the issue, run the following command:
+To fix the issue, run the following command:
```bash sudo su omsagent -c 'python /opt/microsoft/omsconfig/Scripts/PerformRequiredConfigurationChecks.py' ```
-This command forces the omsconfig agent to talk to Azure Monitor and retrieve the latest configuration.
+This command forces the omsconfig agent to talk to Azure Monitor and retrieve the latest configuration.
Validate to check if the following two paths exists:
To fix this issue, run the following command:
sudo su omsagent -c 'python /opt/microsoft/omsconfig/Scripts/PerformRequiredConfigurationChecks.py' ```
-This command forces the omsconfig agent to talk to Azure Monitor and retrieve the latest configuration.
+This command forces the omsconfig agent to talk to Azure Monitor and retrieve the latest configuration.
If the issue still persists, run the [omsagent Log Collector tool](https://github.com/Microsoft/OMS-Agent-for-Linux/blob/master/tools/LogCollector/OMS_Linux_Agent_Log_Collector.md)
HTTP_PROXY
To fix this issue, allow access to IP **169.254.169.254**. For more information, see [Access Azure Instance Metadata Service](../../virtual-machines/windows/instance-metadata-service.md#azure-instance-metadata-service-windows)
-After the network changes, you can either rerun the Troubleshooter or run the below commands to validate:
+After the network changes, you can either rerun the Troubleshooter or run the below commands to validate:
```bash curl -H \"Metadata: true\" http://169.254.169.254/metadata/instance?api-version=2018-02-01
After the network changes, you can either rerun the Troubleshooter or run the be
### General internet connectivity
-This check makes sure that the machine has access to the internet and can be ignored if you have blocked internet and allowed only specific URLs.
+This check makes sure that the machine has access to the internet and can be ignored if you have blocked internet and allowed only specific URLs.
CURL on any http url.
Fix this issue by allowing the prerequisite Repo URL. For RHEL, see [here](../..
After making network changes, you can either rerun the Troubleshooter or
-Curl the software repositories configured in the package manager.
+Curl the software repositories configured in the package manager.
-Refreshing the repos helps confirm the communication.
+Refreshing the repos helps confirm the communication.
```bash sudo apt-get check
azure-arc Private Link Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/private-link-security.md
Title: Use Azure Private Link to securely connect servers to Azure Arc description: Learn how to use Azure Private Link to securely connect networks to Azure Arc. -+ Last updated 06/20/2023
See the visual diagram under the section [How it works](#how-it-works) for the n
1. Enter a **Name** for the endpoint. 1. Choose **Yes** for **Integrate with private DNS zone**, and let it automatically create a new Private DNS Zone.
-
+ > [!NOTE] > If you choose **No** and prefer to manage DNS records manually, first complete setting up your Private Link - including this Private Endpoint and the Private Scope configuration. Then, configure your DNS according to the instructions in [Azure Private Endpoint DNS configuration](../../private-link/private-endpoint-dns.md). Make sure not to create empty records as preparation for your Private Link setup. The DNS records you create can override existing settings and impact your connectivity with Azure Arc-enabled servers.
Once your Azure Arc Private Link Scope is created, you need to connect it with o
1. On the **Configuration** page,
- a. Choose the **virtual network** and **subnet** that you want to connect to your Azure Arc-enabled server.
+ a. Choose the **virtual network** and **subnet** that you want to connect to your Azure Arc-enabled server.
b. Choose **Yes** for **Integrate with private DNS zone**, and let it automatically create a new Private DNS Zone. The actual DNS zones might be different from what is shown in the screenshot below.
If you're only planning to use Private Links to support a few machines or server
### Configure a new Azure Arc-enabled server to use Private link
-When connecting a machine or server with Azure Arc-enabled servers for the first time, you can optionally connect it to a Private Link Scope. The following steps are
+When connecting a machine or server with Azure Arc-enabled servers for the first time, you can optionally connect it to a Private Link Scope. The following steps are
1. From your browser, go to the [Azure portal](https://portal.azure.com).
When connecting a machine or server with Azure Arc-enabled servers for the first
1. On the **Download and run script** page, review the summary information, and then select **Download**. If you still need to make changes, select **Previous**.
-After downloading the script, you have to run it on your machine or server using a privileged (administrator or root) account. Depending on your network configuration, you might need to download the agent from a computer with internet access and transfer it to your machine or server, and then modify the script with the path to the agent.
+After downloading the script, you have to run it on your machine or server using a privileged (administrator or root) account. Depending on your network configuration, you might need to download the agent from a computer with internet access and transfer it to your machine or server, and then modify the script with the path to the agent.
The Windows agent can be downloaded from [https://aka.ms/AzureConnectedMachineAgent](https://aka.ms/AzureConnectedMachineAgent) and the Linux agent can be downloaded from [https://packages.microsoft.com](https://packages.microsoft.com). Look for the latest version of the **azcmagent** under your OS distribution directory and install it with your local package manager.
azure-cache-for-redis Cache High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-high-availability.md
Zone-redundant Enterprise and Enterprise Flash tier caches are available in the
| Canada Central* | North Europe | | | Australia East | | Central US* | UK South | | | Central India | | East US | West Europe | | | Southeast Asia |
-| East US 2 | | | | |
-| South Central US | | | | |
+| East US 2 | | | | Japan East* |
+| South Central US | | | | East Asia* |
| West US 2 | | | | |
+| West US 3 | | | | |
+| Brazil South | | | | |
\* Enterprise Flash tier not available in this region.
azure-functions Configure Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/configure-monitoring.md
The following chart describes the main categories of logs that the runtime creat
| Category | Table | Description | | -- | -- | -- |
+| **`Function`** | **traces**| Includes function started and completed logs for all function runs. For successful runs, these logs are at the `Information` level. Exceptions are logged at the `Error` level. The runtime also creates `Warning` level logs, such as when queue messages are sent to the [poison queue](functions-bindings-storage-queue-trigger.md#poison-messages).|
| **`Function.<YOUR_FUNCTION_NAME>`** | **dependencies**| Dependency data is automatically collected for some services. For successful runs, these logs are at the `Information` level. For more information, see [Dependencies](functions-monitoring.md#dependencies). Exceptions are logged at the `Error` level. The runtime also creates `Warning` level logs, such as when queue messages are sent to the [poison queue](functions-bindings-storage-queue-trigger.md#poison-messages). | | **`Function.<YOUR_FUNCTION_NAME>`** | **customMetrics**<br/>**customEvents** | C# and JavaScript SDKs lets you collect custom metrics and log custom events. For more information, see [Custom telemetry data](functions-monitoring.md#custom-telemetry-data).| | **`Function.<YOUR_FUNCTION_NAME>`** | **traces**| Includes function started and completed logs for specific function runs. For successful runs, these logs are at the `Information` level. Exceptions are logged at the `Error` level. The runtime also creates `Warning` level logs, such as when queue messages are sent to the [poison queue](functions-bindings-storage-queue-trigger.md#poison-messages). |
With scale controller logging enabled, you're now able to [query your scale cont
## Enable Application Insights integration
-For a function app to send data to Application Insights, it needs to know the instrumentation key of an Application Insights resource. The key must be in an app setting named **APPINSIGHTS_INSTRUMENTATIONKEY**.
+For a function app to send data to Application Insights, it needs to connect to the Application Insights resource using **only one** of these application settings:
+
+| Setting name | Description |
+| - | - |
+| **[APPLICATIONINSIGHTS_CONNECTION_STRING](functions-app-settings.md#applicationinsights_connection_string)** | This is the recommended setting, which is required when your Application Insights instance runs in a sovereign cloud. The connection string supports other [new capabilities](../azure-monitor/app/migrate-from-instrumentation-keys-to-connection-strings.md#new-capabilities). |
+| **[APPINSIGHTS_INSTRUMENTATIONKEY](functions-app-settings.md#appinsights_instrumentationkey)** | Legacy setting, which is deprecated by Application Insights in favor of the connection string setting. |
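As a hedged example, the connection string setting can be added from Az PowerShell; the app name, resource group, and connection string value are placeholders:

```powershell
# Add (or update) the Application Insights connection string on an existing function app.
Update-AzFunctionAppSetting `
    -Name 'my-function-app' `
    -ResourceGroupName 'my-resource-group' `
    -AppSetting @{ APPLICATIONINSIGHTS_CONNECTION_STRING = '<your-application-insights-connection-string>' }
```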
When you create your function app in the [Azure portal](./functions-get-started.md) from the command line by using [Azure Functions Core Tools](./create-first-function-cli-csharp.md) or [Visual Studio Code](./create-first-function-vs-code-csharp.md), Application Insights integration is enabled by default. The Application Insights resource has the same name as your function app, and it's created either in the same region or in the nearest region.
To review the Application Insights resource being created, select it to expand t
:::image type="content" source="media/functions-monitoring/enable-ai-new-function-app.png" alt-text="Screenshot of enabling Application Insights while creating a function app.":::
-When you select **Create**, an Application Insights resource is created with your function app, which has the `APPINSIGHTS_INSTRUMENTATIONKEY` set in application settings. Everything is ready to go.
+When you select **Create**, an Application Insights resource is created with your function app, which has the `APPLICATIONINSIGHTS_CONNECTION_STRING` set in application settings. Everything is ready to go.
<a id="manually-connect-an-app-insights-resource"></a> ### Add to an existing function app
-If an Application Insights resource wasn't created with your function app, use the following steps to create the resource. You can then add the instrumentation key from that resource as an [application setting](functions-how-to-use-azure-function-app-settings.md#settings) in your function app.
+If an Application Insights resource wasn't created with your function app, use the following steps to create the resource. You can then add the connection string from that resource as an [application setting](functions-how-to-use-azure-function-app-settings.md#settings) in your function app.
1. In the [Azure portal](https://portal.azure.com), search for and select **function app**, and then select your function app.
If an Application Insights resource wasn't created with your function app, use t
The Application Insights resource is created in the same resource group and subscription as your function app. After the resource is created, close the **Application Insights** window.
-1. In your function app, select **Configuration** under **Settings**, and then select **Application settings**. If you see a setting named `APPINSIGHTS_INSTRUMENTATIONKEY`, Application Insights integration is enabled for your function app running in Azure. If for some reason this setting doesn't exist, add it using your Application Insights instrumentation key as the value.
+1. In your function app, select **Configuration** under **Settings**, and then select **Application settings**. If you see a setting named `APPLICATIONINSIGHTS_CONNECTION_STRING`, Application Insights integration is enabled for your function app running in Azure. If for some reason this setting doesn't exist, add it using your Application Insights connection string as the value.
> [!NOTE]
-> Early versions of Functions used built-in monitoring, which is no longer recommended. When you're enabling Application Insights integration for such a function app, you must also [disable built-in logging](#disable-built-in-logging).
+> Older function apps might be using `APPINSIGHTS_INSTRUMENTATIONKEY` instead of `APPLICATIONINSIGHTS_CONNECTION_STRING`. When possible, you should update your app to use the connection string instead of the instrumentation key.
## Disable built-in logging
-When you enable Application Insights, disable the built-in logging that uses Azure Storage. The built-in logging is useful for testing with light workloads, but isn't intended for high-load production use. For production monitoring, we recommend Application Insights. If built-in logging is used in production, the logging record might be incomplete because of throttling on Azure Storage.
+Early versions of Functions used built-in monitoring, which is no longer recommended. When you enable Application Insights, disable the built-in logging that uses Azure Storage. The built-in logging is useful for testing with light workloads, but isn't intended for high-load production use. For production monitoring, we recommend Application Insights. If built-in logging is used in production, the logging record might be incomplete because of throttling on Azure Storage.
To disable built-in logging, delete the `AzureWebJobsDashboard` app setting. For more information about how to delete app settings in the Azure portal, see the **Application settings** section of [How to manage a function app](functions-how-to-use-azure-function-app-settings.md#settings). Before you delete the app setting, ensure that no existing functions in the same function app use the setting for Azure Storage triggers or bindings.
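For example, you can remove the setting with the Azure CLI; the app name and resource group shown here are placeholders:

```azurecli
az functionapp config appsettings delete \
    --name <APP_NAME> \
    --resource-group <RESOURCE_GROUP> \
    --setting-names AzureWebJobsDashboard
```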
To configure these values at App settings level (and avoid redeployment on just
| Host.json path | App setting |
|-|-|
| logging.logLevel.default | AzureFunctionsJobHost__logging__logLevel__default |
-| logging.logLevel.Host.Aggregator | AzureFunctionsJobHost__logging__logLevel__Host.Aggregator |
+| logging.logLevel.Host.Aggregator | AzureFunctionsJobHost__logging__logLevel__Host__Aggregator |
| logging.logLevel.Function | AzureFunctionsJobHost__logging__logLevel__Function |
-| logging.logLevel.Function.Function1 | AzureFunctionsJobHost__logging__logLevel__Function.Function1 |
-| logging.logLevel.Function.Function1.User | AzureFunctionsJobHost__logging__logLevel__Function.Function1.User |
+| logging.logLevel.Function.Function1 | AzureFunctionsJobHost__logging__logLevel__Function__Function1 |
+| logging.logLevel.Function.Function1.User | AzureFunctionsJobHost__logging__logLevel__Function__Function1__User |
You can override the settings directly at the Azure portal Function App Configuration blade or by using an Azure CLI or PowerShell script.
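For reference, the app settings in the preceding table map to a *host.json* logging section like the following sketch. The log levels shown are illustrative, not recommendations:

```json
{
  "version": "2.0",
  "logging": {
    "logLevel": {
      "default": "Information",
      "Host.Aggregator": "Trace",
      "Function": "Warning",
      "Function.Function1": "Information",
      "Function.Function1.User": "Debug"
    }
  }
}
```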
azure-functions Functions Node Upgrade V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-node-upgrade-v4.md
ms.devlang: javascript
# ms.devlang: javascript, typescript
+zone_pivot_groups: programming-languages-set-functions-nodejs
# Migrate to version 4 of the Node.js programming model for Azure Functions
Version 4 is designed to provide Node.js developers with the following benefits:
Version 4 of the Node.js programming model requires the following minimum versions:
+- [`@azure/functions`](https://www.npmjs.com/package/@azure/functions) npm package v4.0.0
+- [Node.js](https://nodejs.org/en/download/releases/) v18+
+- [Azure Functions Runtime](./functions-versions.md) v4.25+
+- [Azure Functions Core Tools](./functions-run-local.md) v4.0.5382+ (if running locally)
- [`@azure/functions`](https://www.npmjs.com/package/@azure/functions) npm package v4.0.0 - [Node.js](https://nodejs.org/en/download/releases/) v18+ - [TypeScript](https://www.typescriptlang.org/) v4+ - [Azure Functions Runtime](./functions-versions.md) v4.25+ - [Azure Functions Core Tools](./functions-run-local.md) v4.0.5382+ (if running locally) ## Include the npm package
In v4, the [`@azure/functions`](https://www.npmjs.com/package/@azure/functions)
In v4 of the programming model, you can structure your code however you want. The only files that you need at the root of your app are *host.json* and *package.json*.
-Otherwise, you define the file structure by setting the `main` field in your *package.json* file. You can set the `main` field to a single file or multiple files by using a [glob pattern](https://wikipedia.org/wiki/Glob_(programming)). Common values for the `main` field might be:
+Otherwise, you define the file structure by setting the `main` field in your *package.json* file. You can set the `main` field to a single file or multiple files by using a [glob pattern](https://wikipedia.org/wiki/Glob_(programming)). The following table shows example values for the `main` field:
++
+| Example | Description |
+| | |
+| **`src/index.js`** | Register functions from a single root file. |
+| **`src/functions/*.js`** | Register each function from its own file. |
+| **`src/{index.js,functions/*.js}`** | A combination where you register each function from its own file, but you still have a root file for general app-level code. |
+ -- TypeScript:
- - `dist/src/index.js`
- - `dist/src/functions/*.js`
-- JavaScript:
- - `src/index.js`
- - `src/functions/*.js`
+
+| Example | Description |
+| | |
+| **`dist/src/index.js`** | Register functions from a single root file. |
+| **`dist/src/functions/*.js`** | Register each function from its own file. |
+| **`dist/src/{index.js,functions/*.js}`** | A combination where you register each function from its own file, but you still have a root file for general app-level code. |
+ > [!TIP] > Make sure you define a `main` field in your *package.json* file.
The trigger input, instead of the invocation context, is now the first argument
You no longer have to create and maintain those separate *function.json* configuration files. You can now fully define your functions directly in your TypeScript or JavaScript files. In addition, many properties now have defaults so that you don't have to specify them every time. + # [v4](#tab/v4) +
+# [v3](#tab/v3)
+ ```javascript
-const { app } = require("@azure/functions");
+module.exports = async function (context, req) {
+    context.log(`Http function processed request for url "${req.url}"`);
-app.http('helloWorld1', {
- methods: ['GET', 'POST'],
- handler: async (request, context) => {
- context.log('Http function processed request');
+ const name = req.query.name || req.body || 'world';
- const name = request.query.get('name')
- || await request.text()
- || 'world';
+ context.res = {
+ body: `Hello, ${name}!`
+ };
+};
+```
- return { body: `Hello, ${name}!` };
- }
-});
+```json
+{
+ "bindings": [
+ {
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "methods": [
+ "get",
+ "post"
+ ]
+ },
+ {
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+ }
+ ]
+}
``` ++++
+# [v4](#tab/v4)
++ # [v3](#tab/v3)
-```javascript
-module.exports = async function (context, req) {
- context.log('HTTP function processed a request');
+```typescript
+import { AzureFunction, Context, HttpRequest } from "@azure/functions"
- const name = req.query.name
- || req.body
- || 'world';
+const httpTrigger: AzureFunction = async function (context: Context, req: HttpRequest): Promise<void> {
+    context.log(`Http function processed request for url "${req.url}"`);
- context.res = {
- body: `Hello, ${name}!`
- };
+ const name = req.query.name || req.body || 'world';
+
+ context.res = {
+ body: `Hello, ${name}!`
+ };
};+
+export default httpTrigger;
``` ```json
module.exports = async function (context, req) {
"direction": "out", "name": "res" }
- ]
+ ],
+ "scriptFile": "../dist/HttpTrigger1/index.js"
} ``` + > [!TIP] > Move the configuration from your *function.json* file to your code. The type of the trigger corresponds to a method on the `app` object in the new model. For example, if you use an `httpTrigger` type in *function.json*, call `app.http()` in your code to register the function. If you use `timerTrigger`, call `app.timer()`.
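As a sketch of that mapping, a timer trigger that was previously described by a `timerTrigger` binding in *function.json* can be registered in v4 roughly like this. The function name and schedule are illustrative:

```javascript
const { app } = require('@azure/functions');

// Registers a timer-triggered function that runs every five minutes (NCRONTAB expression).
app.timer('cleanupTimer', {
    schedule: '0 */5 * * * *',
    handler: async (myTimer, context) => {
        context.log(`Timer fired. Past due: ${myTimer.isPastDue}`);
    }
});
```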
The primary input is also called the *trigger* and is the only required input or
Version 4 supports only one way of getting the trigger input, as the first argument: + ```javascript
-async function helloWorld1(request, context) {
+async function httpTrigger1(request, context) {
const onlyOption = request; ``` ++
+```typescript
+async function httpTrigger1(request: HttpRequest, context: InvocationContext): Promise<HttpResponseInit> {
+ const onlyOption = request;
+```
++ # [v3](#tab/v3) Version 3 supports several ways of getting the trigger input: + ```javascript
-async function helloWorld1(context, request) {
+async function httpTrigger1(context, request) {
+ const option1 = request;
+ const option2 = context.req;
+ const option3 = context.bindings.req;
+```
+++
+```typescript
+async function httpTrigger1(context: Context, request: HttpRequest): Promise<void> {
const option1 = request; const option2 = context.req; const option3 = context.bindings.req; ``` + > [!TIP]
async function helloWorld1(context, request) {
Version 4 supports only one way of setting the primary output, through the return value: + ```javascript return { body: `Hello, ${name}!` }; ``` +
+```typescript
+async function httpTrigger1(request: HttpRequest, context: InvocationContext): Promise<HttpResponseInit> {
+ // ...
+ return {
+ body: `Hello, ${name}!`
+ };
+}
+```
++ # [v3](#tab/v3) Version 3 supports several ways of setting the primary output:
return {
> [!TIP] > Make sure you always return the output in your function handler, instead of setting it with the `context` object.
+### Context logging
+
+In v4, logging methods were moved to the root `context` object as shown in the following example. For more information about logging, see the [Node.js developer guide](./functions-reference-node.md#logging).
+
+# [v4](#tab/v4)
+
+```javascript
+context.log('This is an info log');
+context.error('This is an error');
+context.warn('This is a warning');
+```
+
+# [v3](#tab/v3)
+
+```javascript
+context.log('This is an info log');
+context.log.error('This is an error');
+context.log.warn('This is a warning');
+```
+++ ### Create a test context Version 3 doesn't support creating an invocation context outside the Azure Functions runtime, so authoring unit tests can be difficult. Version 4 allows you to create an instance of the invocation context, although the information during tests isn't detailed unless you add it yourself.
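The following is a minimal sketch of that idea. It assumes the `InvocationContext` constructor accepts an options object with properties such as `functionName` and `invocationId` (verify against your installed `@azure/functions` version); the handler and the simplified request object are hypothetical test stand-ins:

```javascript
const { InvocationContext } = require('@azure/functions');

// Hypothetical handler under test - in a real app you'd export it from your function file.
async function handler(request, context) {
    const name = request.query.get('name') || (await request.text()) || 'world';
    context.log(`Handling request for ${name}`);
    return { body: `Hello, ${name}!` };
}

// Construct a test context; only the details you provide are available during the test.
const context = new InvocationContext({ functionName: 'helloWorld', invocationId: 'test-invocation' });

// Simplified stand-in for an HttpRequest - just enough surface for this handler.
const request = { query: new Map([['name', 'Tester']]), text: async () => '' };

handler(request, context).then((response) => {
    console.log(response.body); // "Hello, Tester!"
});
```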
The types use the [`undici`](https://undici.nodejs.org/) package in Node.js. Thi
- *Body*. You can access the body by using a method specific to the type that you want to receive:
- ```javascript
+ ```javascript
const body = await request.text(); const body = await request.json(); const body = await request.formData();
The types use the [`undici`](https://undici.nodejs.org/) package in Node.js. Thi
- *Body*:
+ Use the `body` property to return most types like a `string` or `Buffer`:
+ ```javascript return { body: "Hello, world!" }; ```
+ Use the `jsonBody` property for the easiest way to return a JSON response:
+
+ ```javascript
+ return { jsonBody: { hello: "world" } };
+ ```
+ - *Header*. You can set the header in two ways, depending on whether you're using the `HttpResponse` class or the `HttpResponseInit` interface: ```javascript
The types use the [`undici`](https://undici.nodejs.org/) package in Node.js. Thi
return { statusCode: 200 }; ``` -- *Body*. You can set a body in several ways:
+- *Body*. You can set a body in several ways, and the same approaches apply regardless of the body type (`string`, `Buffer`, JSON object, and so on):
```javascript context.res.send("Hello, world!"); context.res.end("Hello, world!");
- context.res = { body: "Hello, world!" }
+ context.res = { body: "Hello, world!" };
return { body: "Hello, world!" }; ```
The types use the [`undici`](https://undici.nodejs.org/) package in Node.js. Thi
+
+> [!TIP]
+> Update any logic that uses the HTTP request or response types to match the new methods.
++ > [!TIP]
-> Update any logic by using the HTTP request or response types to match the new methods. If you're using TypeScript, you'll get build errors if you use old methods.
+> Update any logic that uses the HTTP request or response types to match the new methods. TypeScript build errors can help you identify any places where old methods are still used.
+ ## Troubleshoot
azure-functions Functions Reference Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-node.md
export default httpTrigger;
::: zone pivot="nodejs-model-v4"
-The programming model loads your functions based on the `main` field in your `package.json`. This field can be set to a single file like `src/index.js` or a [glob pattern](https://wikipedia.org/wiki/Glob_(programming)) specifying multiple files like `src/functions/*.js`.
+The programming model loads your functions based on the `main` field in your `package.json`. You can set the `main` field to a single file or multiple files by using a [glob pattern](https://wikipedia.org/wiki/Glob_(programming)). The following table shows example values for the `main` field:
+
+# [JavaScript](#tab/javascript)
+
+| Example | Description |
+| | |
+| **`src/index.js`** | Register functions from a single root file. |
+| **`src/functions/*.js`** | Register each function from its own file. |
+| **`src/{index.js,functions/*.js}`** | A combination where you register each function from its own file, but you still have a root file for general app-level code. |
+
+# [TypeScript](#tab/typescript)
+
+| Example | Description |
+| | |
+| **`dist/src/index.js`** | Register functions from a single root file. |
+| **`dist/src/functions/*.js`** | Register each function from its own file. |
+| **`dist/src/{index.js,functions/*.js}`** | A combination where you register each function from its own file, but you still have a root file for general app-level code. |
++ In order to register a function, you must import the `app` object from the `@azure/functions` npm module and call the method specific to your trigger type. The first argument when registering a function is the function name. The second argument is an `options` object specifying configuration for your trigger, your handler, and any other inputs or outputs. In some cases where trigger configuration isn't necessary, you can pass the handler directly as the second argument instead of an `options` object.
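For example, a minimal HTTP registration in the v4 model might look like the following sketch; the function name and options shown are illustrative:

```javascript
const { app } = require('@azure/functions');

// The first argument is the function name; the options object holds the trigger
// configuration and the handler.
app.http('helloWorld', {
    methods: ['GET', 'POST'],
    authLevel: 'anonymous',
    handler: async (request, context) => {
        context.log(`Processing request for url "${request.url}"`);
        return { body: 'Hello, world!' };
    }
});
```

When no trigger configuration is needed, the handler can be passed directly as the second argument instead of the options object, as noted above.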
azure-government Azure Secure Isolation Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/azure-secure-isolation-guidance.md
From the networking isolation standpoint, key benefits of Private Link include:
> > *Extra resources:* > - **[How to manage private endpoint connections on Azure PaaS resources](../private-link/manage-private-endpoint.md#manage-private-endpoint-connections-on-azure-paas-resources)**
-> - **[How to manage private endpoint connections on customer/partner owned Private Link service](../private-link/manage-private-endpoint.md#manage-private-endpoint-connections-on-a-customerpartner-owned-private-link-service)**
+> - **[How to manage private endpoint connections on a customer- or partner-owned Private Link service](../private-link/manage-private-endpoint.md#manage-private-endpoint-connections-on-a-customer--or-partner-owned-private-link-service)**
### Data encryption in transit Azure provides many options for [encrypting data in transit](../security/fundamentals/encryption-overview.md#encryption-of-data-in-transit). **Data encryption in transit isolates your network traffic from other traffic and helps protect data from interception**. Data in transit applies to scenarios involving data traveling between:
azure-government Compare Azure Government Global Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compare-azure-government-global-azure.md
Last updated 06/08/2023
Microsoft Azure Government uses the same underlying technologies as global Azure, which includes the core components of [Infrastructure-as-a-Service (IaaS)](https://azure.microsoft.com/overview/what-is-iaas/), [Platform-as-a-Service (PaaS)](https://azure.microsoft.com/overview/what-is-paas/), and [Software-as-a-Service (SaaS)](https://azure.microsoft.com/overview/what-is-saas/). Both Azure and Azure Government have the same comprehensive security controls in place and the same Microsoft commitment on the safeguarding of customer data. Whereas both cloud environments are assessed and authorized at the FedRAMP High impact level, Azure Government provides an extra layer of protection to customers through contractual commitments regarding storage of customer data in the United States and limiting potential access to systems processing customer data to [screened US persons](./documentation-government-plan-security.md#screening). These commitments may be of interest to customers using the cloud to store or process data subject to US export control regulations.
+> [!NOTE]
+> These lists and tables do not include feature or bundle availability in the Azure Government Secret or Azure Government Top Secret clouds.
+> For more information about specific availability for air-gapped clouds, please contact your account team.
++ ## Export control implications You're responsible for designing and deploying your applications to meet [US export control requirements](./documentation-government-overview-itar.md) such as the requirements prescribed in the EAR, ITAR, and DoE 10 CFR Part 810. In doing so, you shouldn't include sensitive or restricted information in Azure resource names, as explained in [Considerations for naming Azure resources](./documentation-government-concept-naming-resources.md).
azure-monitor Agent Linux Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-linux-troubleshoot.md
Title: Troubleshoot Azure Log Analytics Linux Agent | Microsoft Docs description: Describe the symptoms, causes, and resolution for the most common issues with the Log Analytics agent for Linux in Azure Monitor. -+ Last updated 04/25/2023
For more information, see the [Troubleshooting Tool documentation on GitHub](htt
A clean reinstall of the agent fixes most issues. This task might be the first suggestion from our support team to get the agent into an uncorrupted state. Running the Troubleshooting Tool and Log Collector tool and attempting a clean reinstall helps to solve issues more quickly. 1. Download the purge script:
-
+ `$ wget https://raw.githubusercontent.com/microsoft/OMS-Agent-for-Linux/master/tools/purge_omsagent.sh` 1. Run the purge script (with sudo permissions):
-
+ `$ sudo sh purge_omsagent.sh` ## Important log locations and the Log Collector tool
This error indicates that the Linux diagnostic extension (LAD) is installed side
1. If you're using a proxy, check the preceding proxy troubleshooting steps. 1. In some Azure distribution systems, the omid OMI server daemon doesn't start after the virtual machine is rebooted. If this is the case, you won't see Audit, ChangeTracking, or UpdateManagement solution-related data. The workaround is to manually start the OMI server by running `sudo /opt/omi/bin/service_control restart`. 1. After the OMI package is manually upgraded to a newer version, it must be manually restarted for the Log Analytics agent to continue functioning. This step is required for some distros where the OMI server doesn't automatically start after it's upgraded. Run `sudo /opt/omi/bin/service_control restart` to restart the OMI.
-
+ In some situations, the OMI can become frozen. The OMS agent might enter a blocked state waiting for the OMI, which blocks all data collection. The OMS agent process will be running but there will be no activity, which is evidenced by no new log lines (such as sent heartbeats) present in `omsagent.log`. Restart the OMI with `sudo /opt/omi/bin/service_control restart` to recover the agent. 1. If you see a DSC resource *class not found* error in omsconfig.log, run `sudo /opt/omi/bin/service_control restart`. 1. In some cases, when the Log Analytics agent for Linux can't talk to Azure Monitor, data on the agent is backed up to the full buffer size of 50 MB. The agent should be restarted by running the following command: `/opt/microsoft/omsagent/bin/service_control restart`.
This error indicates that the Linux diagnostic extension (LAD) is installed side
mkdir -p /etc/cron.d/ echo "*/15 * * * * omsagent /opt/omi/bin/OMSConsistencyInvoker > 2>&1" | sudo tee /etc/cron.d/OMSConsistencyInvoker ```
-
+ * Also, make sure the cron service is running. You can use `service cron status` with Debian, Ubuntu, and SUSE or `service crond status` with RHEL, CentOS, and Oracle Linux to check the status of this service. If the service doesn't exist, you can install the binaries and start the service by using the following instructions: **Ubuntu/Debian**
-
+ ``` # To Install the service binaries sudo apt-get install -y cron # To start the service sudo service cron start ```
-
+ **SUSE**
-
+ ``` # To Install the service binaries sudo zypper in cron -y
This error indicates that the Linux diagnostic extension (LAD) is installed side
sudo systemctl enable cron sudo systemctl start cron ```
-
+ **RHEL/CentOS**
-
+ ``` # To Install the service binaries sudo yum install -y crond # To start the service sudo service crond start ```
-
+ **Oracle Linux**
-
+ ``` # To Install the service binaries sudo yum install -y cronie
azure-monitor Agent Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-linux.md
Title: Install Log Analytics agent on Linux computers description: This article describes how to connect Linux computers hosted in other clouds or on-premises to Azure Monitor with the Log Analytics agent for Linux. -+ Last updated 06/01/2023
OpenSSL 1.1.0 is only supported on x86_x64 platforms (64-bit). OpenSSL earlier t
>[!NOTE] >The Log Analytics Linux agent doesn't run in containers. To monitor containers, use the [Container Monitoring solution](/previous-versions/azure/azure-monitor/containers/containers) for Docker hosts or [Container insights](../containers/container-insights-overview.md) for Kubernetes.
-Starting with versions released after August 2018, we're making the following changes to our support model:
+Starting with versions released after August 2018, we're making the following changes to our support model:
-* Only the server versions are supported, not the client versions.
+* Only the server versions are supported, not the client versions.
* Focus support on any of the [Azure Linux Endorsed distros](../../virtual-machines/linux/endorsed-distros.md). There might be some delay between a new distro/version being Azure Linux Endorsed and it being supported for the Log Analytics Linux agent. * All minor releases are supported for each major version listed. * Versions that have passed their manufacturer's end-of-support date aren't supported. * Only support VM images. Containers aren't supported, even those derived from official distro publishers' images.
-* New versions of AMI aren't supported.
+* New versions of AMI aren't supported.
* Only versions that run OpenSSL 1.x by default are supported. >[!NOTE]
Starting from agent version 1.13.27, the Linux agent will support both Python 2
If you're using an older version of the agent, you must have the virtual machine use Python 2 by default. If your virtual machine is using a distro that doesn't include Python 2 by default, then you must install it. The following sample commands will install Python 2 on different distros: -- **Red Hat, CentOS, Oracle**:
-
+- **Red Hat, CentOS, Oracle**:
+ ```bash sudo yum install -y python2 ```
-
+ - **Ubuntu, Debian**:
+ ```bash sudo apt-get update sudo apt-get install -y python2 ```
+ - **SUSE**:
```bash sudo zypper install -y python2
If you're using an older version of the agent, you must have the virtual machine
Again, only if you're using an older version of the agent, the python2 executable must be aliased to *python*. Use the following method to set this alias: 1. Run the following command to remove any existing aliases:
-
+ ```bash sudo update-alternatives --remove-all python ```
The following table highlights the packages required for [supported Linux distro
|Required package |Description |Minimum version | |--||-|
-|Glibc | GNU C library | 2.5-12
+|Glibc | GNU C library | 2.5-12
|Openssl | OpenSSL libraries | 1.0.x or 1.1.x | |Curl | cURL web client | 7.15.5 | |Python | | 2.7 or 3.6+
-|Python-ctypes | |
-|PAM | Pluggable authentication modules | |
+|Python-ctypes | |
+|PAM | Pluggable authentication modules | |
>[!NOTE] >Either rsyslog or syslog-ng is required to collect syslog messages. The default syslog daemon on version 5 of Red Hat Enterprise Linux, CentOS, and Oracle Linux version (sysklog) isn't supported for syslog event collection. To collect syslog data from this version of these distributions, the rsyslog daemon should be installed and configured to replace sysklog.
On a monitored Linux computer, the agent is listed as `omsagent`. `omsconfig` is
### [Wrapper script](#tab/wrapper-script)
-The following steps configure setup of the agent for Log Analytics in Azure and Azure Government cloud. A wrapper script is used for Linux computers that can communicate directly or through a proxy server to download the agent hosted on GitHub and install the agent.
+The following steps configure setup of the agent for Log Analytics in Azure and Azure Government cloud. A wrapper script is used for Linux computers that can communicate directly or through a proxy server to download the agent hosted on GitHub and install the agent.
If your Linux computer needs to communicate through a proxy server to Log Analytics, this configuration can be specified on the command line by including `-p [protocol://][user:password@]proxyhost[:port]`. The `protocol` property accepts `http` or `https`. The `proxyhost` property accepts a fully qualified domain name or IP address of the proxy server.
For example: `https://proxy01.contoso.com:30443`
If authentication is required in either case, specify the username and password. For example: `https://user01:password@proxy01.contoso.com:30443` 1. To configure the Linux computer to connect to a Log Analytics workspace, run the following command that provides the workspace ID and primary key. The following command downloads the agent, validates its checksum, and installs it.
-
+ ``` wget https://raw.githubusercontent.com/Microsoft/OMS-Agent-for-Linux/master/installer/scripts/onboard_agent.sh && sh onboard_agent.sh -w <YOUR WORKSPACE ID> -s <YOUR WORKSPACE PRIMARY KEY> ```
If authentication is required in either case, specify the username and password.
``` wget https://raw.githubusercontent.com/Microsoft/OMS-Agent-for-Linux/master/installer/scripts/onboard_agent.sh && sh onboard_agent.sh -w <YOUR WORKSPACE ID> -s <YOUR WORKSPACE PRIMARY KEY> -d opinsights.azure.us
- ```
+ ```
The following command includes the `-p` proxy parameter and example syntax when authentication is required by your proxy server:
If authentication is required in either case, specify the username and password.
``` sudo /opt/microsoft/omsagent/bin/service_control restart [<workspace id>]
- ```
+ ```
### [Shell](#tab/shell)
The Log Analytics agent for Linux is provided in a self-extracting and installab
>[!NOTE] > Use the `--upgrade` argument if any dependent packages, such as omi, scx, omsconfig, or their older versions, are installed. This would be the case if the System Center Operations Manager agent for Linux is already installed.
-
+ ``` sudo sh ./omsagent-*.universal.x64.sh --install -w <workspace id> -s <shared key> --skip-docker-provider-install ```
The Log Analytics agent for Linux is provided in a self-extracting and installab
> [!NOTE] > The preceding command uses the optional `--skip-docker-provider-install` flag to disable the Container Monitoring data collection because the [Container Monitoring solution](/previous-versions/azure/azure-monitor/containers/containers) is being retired.
-1. To configure the Linux agent to install and connect to a Log Analytics workspace through a Log Analytics gateway, run the following command. It provides the proxy, workspace ID, and workspace key parameters. This configuration can be specified on the command line by including `-p [protocol://][user:password@]proxyhost[:port]`. The `proxyhost` property accepts a fully qualified domain name or IP address of the Log Analytics gateway server.
+1. To configure the Linux agent to install and connect to a Log Analytics workspace through a Log Analytics gateway, run the following command. It provides the proxy, workspace ID, and workspace key parameters. This configuration can be specified on the command line by including `-p [protocol://][user:password@]proxyhost[:port]`. The `proxyhost` property accepts a fully qualified domain name or IP address of the Log Analytics gateway server.
``` sudo sh ./omsagent-*.universal.x64.sh --upgrade -p https://<proxy address>:<proxy port> -w <workspace id> -s <shared key> ``` If authentication is required, specify the username and password. For example:
-
+ ``` sudo sh ./omsagent-*.universal.x64.sh --upgrade -p https://<proxy user>:<proxy password>@<proxy address>:<proxy port> -w <workspace id> -s <shared key> ```
azure-monitor Azure Monitor Agent Troubleshoot Linux Vm Rsyslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-troubleshoot-linux-vm-rsyslog.md
Title: Syslog troubleshooting on Azure Monitor Agent for Linux
+ Title: Syslog troubleshooting on Azure Monitor Agent for Linux
description: Guidance for troubleshooting rsyslog issues on Linux virtual machines, scale sets with Azure Monitor Agent, and data collection rules. Last updated 5/31/2023-+ # Syslog troubleshooting guide for Azure Monitor Agent for Linux
In some cases, `du` might not report any large files or directories. It might be
```bash sudo lsof +L1
-```
+```
```output COMMAND PID USER FD TYPE DEVICE SIZE/OFF NLINK NODE NAME
azure-monitor Tutorial Log Alert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/tutorial-log-alert.md
Last updated 11/07/2023 + # Tutorial: Create a log query alert for an Azure resource Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. Log query alert rules create an alert when a log query returns a particular result. For example, receive an alert when a particular event is created on a virtual machine, or send a warning when excessive anonymous requests are made to a storage account.
Once you verify your query, you can create the alert rule. Select **New alert ru
:::image type="content" source="media/tutorial-log-alert/create-alert-rule.png" lightbox="media/tutorial-log-alert/create-alert-rule.png"alt-text="Create alert rule"::: ## Configure condition
-On the **Condition** tab, the **Log query** will already be filled in. The **Measurement** section defines how the records from the log query will be measured. If the query doesn't perform a summary, then the only option will be to **Count** the number of **Table rows**. If the query includes one or more summarized columns, then you'll have the option to use number of **Table rows** or a calculation based on any of the summarized columns. **Aggregation granularity** defines the time interval over which the collected values are aggregated.
+On the **Condition** tab, the **Log query** will already be filled in. The **Measurement** section defines how the records from the log query will be measured. If the query doesn't perform a summary, then the only option will be to **Count** the number of **Table rows**. If the query includes one or more summarized columns, then you'll have the option to use the number of **Table rows** or a calculation based on any of the summarized columns. **Aggregation granularity** defines the time interval over which the collected values are aggregated. For example, with an aggregation granularity of 5 minutes, the alert rule evaluates the data aggregated over the last 5 minutes; with 15 minutes, it evaluates the last 15 minutes of aggregated data. Choosing the right aggregation granularity is important because it affects the accuracy of the alert.
:::image type="content" source="media/tutorial-log-alert/alert-rule-condition.png" lightbox="media/tutorial-log-alert/alert-rule-condition.png"alt-text="Alert rule condition":::
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
Use the following script to identify your Application Insights resources by inge
#### Example
-```azurecli
+```powershell
Get-AzApplicationInsights -SubscriptionId 'Your Subscription ID' | Format-Table -Property Name, IngestionMode, Id, @{label='Type';expression={ if ([string]::IsNullOrEmpty($_.IngestionMode)) { 'Unknown'
azure-monitor Container Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-overview.md
Title: Overview of Container insights in Azure Monitor
-description: This article describes Container insights, which monitors the AKS Container insights solution, and the value it delivers by monitoring the health of your AKS clusters and Container Instances in Azure.
+ Title: Azure Monitor features for Kubernetes monitoring
+description: Describes Container insights and Managed Prometheus in Azure Monitor, which work together to monitor your Kubernetes clusters.
Last updated 12/20/2023
-# Overview of Container insights in Azure Monitor
+# Azure Monitor features for Kubernetes monitoring
-Container insights is a feature of Azure Monitor that collects and analyzes container logs from [Azure Kubernetes clusters](../../aks/intro-kubernetes.md) or [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md) clusters and their components. You can analyze the collected data for the different components in your cluster with a collection of [views](container-insights-analyze.md) and prebuilt [workbooks](container-insights-reports.md).
-
-Container insights works with [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md) for complete monitoring of your Kubernetes environment. It identifies all clusters across your subscriptions and allows you to quickly enable monitoring by both services.
+[Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md) and Container insights work together for complete monitoring of your Kubernetes environment. This article describes both features and the data they collect.
+- [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md) is a fully managed service based on the [Prometheus](https://aka.ms/azureprometheus-promio) project from the Cloud Native Computing Foundation. It allows you to collect metrics from your Kubernetes cluster at scale and analyze them by using prebuilt dashboards in [Grafana](../../managed-grafan).
+- Container insights is a feature of Azure Monitor that collects and analyzes container logs from [Azure Kubernetes clusters](../../aks/intro-kubernetes.md) or [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md) clusters and their components. You can analyze the collected data for the different components in your cluster with a collection of [views](container-insights-analyze.md) and prebuilt [workbooks](container-insights-reports.md).
> [!IMPORTANT] > Container insights collects metric data from your cluster in addition to logs. This functionality has been replaced by [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md). You can analyze that data using built-in dashboards in [Managed Grafana](../../managed-grafan). > > You can continue to have Container insights collect metric data so you can use the Container insights monitoring experience. Or you can save cost by disabling this collection and using Grafana for metric analysis. See [Configure data collection in Container insights using data collection rule](container-insights-data-collection-dcr.md) for configuration options.--
-## Access Container insights
-
-Access Container insights in the Azure portal from **Containers** in the **Monitor** menu or directly from the selected AKS cluster by selecting **Insights**. The Azure Monitor menu gives you the global perspective of all the containers that are deployed and monitored. This information allows you to search and filter across your subscriptions and resource groups. You can then drill into Container insights from the selected container. Access Container insights for a particular cluster from its page in the Azure portal.
--
+>
## Data collected
-Container insights sends data to a [Log Analytics workspace](../logs/data-platform-logs.md) where you can analyze it using different features of Azure Monitor. This workspace is different than the [Azure Monitor workspace](../essentials/azure-monitor-workspace-overview.md) used by Managed Prometheus. For more information on these other services, see [Monitoring data](../../aks/monitor-aks.md#monitoring-data).
+Container insights sends data to a [Log Analytics workspace](../logs/data-platform-logs.md) where you can analyze it using different features of Azure Monitor. Managed Prometheus sends data to an [Azure Monitor workspace](../essentials/azure-monitor-workspace-overview.md) where it can be accessed by Managed Grafana. See [Monitoring data](../../aks/monitor-aks.md#monitoring-data) for further details on this data.
+
-## Supported configurations
+### Supported configurations
Container insights supports the following environments: - [Azure Kubernetes Service (AKS)](../../aks/index.yml)
Container insights supports the following environments:
> [!NOTE] > Container insights supports ARM64 nodes on AKS. See [Cluster requirements](../../azure-arc/kubernetes/system-requirements.md#cluster-requirements) for the details of Azure Arc-enabled clusters that support ARM64 nodes.-
->[!NOTE]
+>
> Container insights support for Windows Server 2022 operating system is in public preview.
+## Access Container insights
+
+Access Container insights in the Azure portal from **Containers** in the **Monitor** menu or directly from the selected AKS cluster by selecting **Insights**. The Azure Monitor menu gives you the global perspective of all the containers that are deployed and monitored. This information allows you to search and filter across your subscriptions and resource groups. You can then drill into Container insights from the selected container. Access Container insights for a particular cluster from its page in the Azure portal.
++ ## Agent
Yes, Container Insights supports pod sandboxing through support for Kata Contain
## Next steps
-To begin monitoring your Kubernetes cluster, review [Enable Container insights](container-insights-onboard.md) to understand the requirements and available methods to enable monitoring.
+- See [Enable monitoring for Kubernetes clusters](kubernetes-monitoring-enable.md) to enable Managed Prometheus and Container insights on your cluster.
<!-- LINKS - external --> [aks-release-notes]: https://github.com/Azure/AKS/releases
azure-monitor Collect Custom Metrics Linux Telegraf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/collect-custom-metrics-linux-telegraf.md
Title: Collect custom metrics for Linux VM with the InfluxData Telegraf agent
-description: Instructions on how to deploy the InfluxData Telegraf agent on a Linux VM in Azure and configure the agent to publish metrics to Azure Monitor.
+description: Instructions on how to deploy the InfluxData Telegraf agent on a Linux VM in Azure and configure the agent to publish metrics to Azure Monitor.
-+ Last updated 08/01/2023 # Collect custom metrics for a Linux VM with the InfluxData Telegraf agent
-This article explains how to deploy and configure the [InfluxData](https://www.influxdata.com/) Telegraf agent on a Linux virtual machine to send metrics to Azure Monitor.
+This article explains how to deploy and configure the [InfluxData](https://www.influxdata.com/) Telegraf agent on a Linux virtual machine to send metrics to Azure Monitor.
> [!NOTE] > InfluxData Telegraf is an open source agent and not officially supported by Azure Monitor. For issues with the Telegraf connector, please refer to the Telegraf GitHub page here: [InfluxData](https://github.com/influxdata/telegraf)
-## InfluxData Telegraf agent
+## InfluxData Telegraf agent
-[Telegraf](https://docs.influxdata.com/telegraf/) is a plug-in-driven agent that enables the collection of metrics from over 150 different sources. Depending on what workloads run on your VM, you can configure the agent to use specialized input plug-ins to collect metrics. Examples are MySQL, NGINX, and Apache. By using output plug-ins, the agent can then write to destinations that you choose. The Telegraf agent has integrated directly with the Azure Monitor custom metrics REST API. It supports an Azure Monitor output plug-in. Using this plug-in, the agent can collect workload-specific metrics on your Linux VM and submit them as custom metrics to Azure Monitor.
+[Telegraf](https://docs.influxdata.com/telegraf/) is a plug-in-driven agent that enables the collection of metrics from over 150 different sources. Depending on what workloads run on your VM, you can configure the agent to use specialized input plug-ins to collect metrics. Examples are MySQL, NGINX, and Apache. By using output plug-ins, the agent can then write to destinations that you choose. The Telegraf agent has integrated directly with the Azure Monitor custom metrics REST API. It supports an Azure Monitor output plug-in. Using this plug-in, the agent can collect workload-specific metrics on your Linux VM and submit them as custom metrics to Azure Monitor.
:::image type="content" source="./media/collect-custom-metrics-linux-telegraf/telegraf-agent-overview.png" alt-text="A diagram showing the Telegraph agent overview." lightbox="./media/collect-custom-metrics-linux-telegraf/telegraf-agent-overview.png"::: ## Connect to the VM
-Create an SSH connection to the VM where you want to install Telegraf. Select the **Connect** button on the overview page for your virtual machine.
+Create an SSH connection to the VM where you want to install Telegraf. Select the **Connect** button on the overview page for your virtual machine.
:::image source="./media/collect-custom-metrics-linux-telegraf/connect-to-virtual-machine.png" alt-text="A screenshot of the a Virtual machine overview page with the connect button highlighted." lightbox="./media/collect-custom-metrics-linux-telegraf/connect-to-virtual-machine.png":::
-In the **Connect to virtual machine** page, keep the default options to connect by DNS name over port 22. In **Login using VM local account**, a connection command is shown. Select the button to copy the command. The following example shows what the SSH connection command looks like:
+In the **Connect to virtual machine** page, keep the default options to connect by DNS name over port 22. In **Login using VM local account**, a connection command is shown. Select the button to copy the command. The following example shows what the SSH connection command looks like:
```cmd
-ssh azureuser@XXXX.XX.XXX
+ssh azureuser@XXXX.XX.XXX
```
-Paste the SSH connection command into a shell, such as Azure Cloud Shell or Bash on Ubuntu on Windows, or use an SSH client of your choice to create the connection.
+Paste the SSH connection command into a shell, such as Azure Cloud Shell or Bash on Ubuntu on Windows, or use an SSH client of your choice to create the connection.
-## Install and configure Telegraf
+## Install and configure Telegraf
-To install the Telegraf Debian package onto the VM, run the following commands from your SSH session:
+To install the Telegraf Debian package onto the VM, run the following commands from your SSH session:
# [Ubuntu, Debian](#tab/ubuntu) Add the repository: ```bash
-# download the package to the VM
+# download the package to the VM
curl -s https://repos.influxdata.com/influxdb.key | sudo apt-key add - source /etc/lsb-release sudo echo "deb https://repos.influxdata.com/${DISTRIB_ID,,} ${DISTRIB_CODENAME} stable" | sudo tee /etc/apt/sources.list.d/influxdb.list
Install the package:
sudo apt-get update sudo apt-get install telegraf ```
-# [RHEL, CentOS, Oracle Linux](#tab/redhat)
+# [RHEL, CentOS, Oracle Linux](#tab/redhat)
Add the repository:
Install the package:
sudo yum -y install telegraf ```
-Telegraf's configuration file defines Telegraf's operations. By default, an example configuration file is installed at the path **/etc/telegraf/telegraf.conf**. The example configuration file lists all possible input and output plug-ins. However, we'll create a custom configuration file and have the agent use it by running the following commands:
+Telegraf's configuration file defines Telegraf's operations. By default, an example configuration file is installed at the path **/etc/telegraf/telegraf.conf**. The example configuration file lists all possible input and output plug-ins. However, we'll create a custom configuration file and have the agent use it by running the following commands:
```bash
-# generate the new Telegraf config file in the current directory
-telegraf --input-filter cpu:mem --output-filter azure_monitor config > azm-telegraf.conf
+# generate the new Telegraf config file in the current directory
+telegraf --input-filter cpu:mem --output-filter azure_monitor config > azm-telegraf.conf
-# replace the example config with the new generated config
-sudo cp azm-telegraf.conf /etc/telegraf/telegraf.conf
+# replace the example config with the new generated config
+sudo cp azm-telegraf.conf /etc/telegraf/telegraf.conf
```
-> [!NOTE]
-> The preceding code enables only two input plug-ins: **cpu** and **mem**. You can add more input plug-ins, depending on the workload that runs on your machine. Examples are Docker, MySQL, and NGINX. For a full list of input plug-ins, see the **Additional configuration** section.
+> [!NOTE]
+> The preceding code enables only two input plug-ins: **cpu** and **mem**. You can add more input plug-ins, depending on the workload that runs on your machine. Examples are Docker, MySQL, and NGINX. For a full list of input plug-ins, see the **Additional configuration** section.
-Finally, to have the agent start using the new configuration, we force the agent to stop and start by running the following commands:
+Finally, to have the agent start using the new configuration, we force the agent to stop and start by running the following commands:
```bash
-# stop the telegraf agent on the VM
-sudo systemctl stop telegraf
-# start and enable the telegraf agent on the VM to ensure it picks up the latest configuration
-sudo systemctl enable --now telegraf
+# stop the telegraf agent on the VM
+sudo systemctl stop telegraf
+# start and enable the telegraf agent on the VM to ensure it picks up the latest configuration
+sudo systemctl enable --now telegraf
```
-Now the agent collects metrics from each of the input plug-ins specified and emits them to Azure Monitor.
+Now the agent collects metrics from each of the input plug-ins specified and emits them to Azure Monitor.
-## Plot your Telegraf metrics in the Azure portal
+## Plot your Telegraf metrics in the Azure portal
-1. Open the [Azure portal](https://portal.azure.com).
+1. Open the [Azure portal](https://portal.azure.com).
-1. Navigate to the new **Monitor** tab. Then select **Metrics**.
+1. Navigate to the new **Monitor** tab. Then select **Metrics**.
1. Select your VM in the resource selector.
-1. Select the **Telegraf/CPU** namespace, and select the **usage_system** metric. You can choose to filter by the dimensions on this metric or split on them.
+1. Select the **Telegraf/CPU** namespace, and select the **usage_system** metric. You can choose to filter by the dimensions on this metric or split on them.
:::image type="content" source="./media/collect-custom-metrics-linux-telegraf/metric-chart.png" alt-text="A screenshot showing a metric chart with telegraph metrics selected." lightbox="./media/collect-custom-metrics-linux-telegraf/metric-chart.png":::
-## Additional configuration
+## Additional configuration
-The preceding walkthrough provides information on how to configure the Telegraf agent to collect metrics from a few basic input plug-ins. The Telegraf agent has support for over 150 input plug-ins, with some supporting additional configuration options. InfluxData has published a [list of supported plugins](https://docs.influxdata.com/telegraf/v1.15/plugins/inputs/) and instructions on [how to configure them](https://docs.influxdata.com/telegraf/v1.15/administration/configuration/).
+The preceding walkthrough provides information on how to configure the Telegraf agent to collect metrics from a few basic input plug-ins. The Telegraf agent has support for over 150 input plug-ins, with some supporting additional configuration options. InfluxData has published a [list of supported plugins](https://docs.influxdata.com/telegraf/v1.15/plugins/inputs/) and instructions on [how to configure them](https://docs.influxdata.com/telegraf/v1.15/administration/configuration/).
-Additionally, in this walkthrough, you used the Telegraf agent to emit metrics about the VM the agent is deployed on. The Telegraf agent can also be used as a collector and forwarder of metrics for other resources. To learn how to configure the agent to emit metrics for other Azure resources, see [Azure Monitor Custom Metric Output for Telegraf](https://github.com/influxdat).
+Additionally, in this walkthrough, you used the Telegraf agent to emit metrics about the VM the agent is deployed on. The Telegraf agent can also be used as a collector and forwarder of metrics for other resources. To learn how to configure the agent to emit metrics for other Azure resources, see [Azure Monitor Custom Metric Output for Telegraf](https://github.com/influxdat).
-## Clean up resources
+## Clean up resources
-When they're no longer needed, you can delete the resource group, virtual machine, and all related resources. To do so, select the resource group for the virtual machine and select **Delete**. Then confirm the name of the resource group to delete.
+When they're no longer needed, you can delete the resource group, virtual machine, and all related resources. To do so, select the resource group for the virtual machine and select **Delete**. Then confirm the name of the resource group to delete.
## Next steps - Learn more about [custom metrics](./metrics-custom-overview.md).
azure-netapp-files Azure Government https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-government.md
Title: Azure NetApp Files for Azure Government | Microsoft Docs
-description: Describes how to connect to Azure Government to use Azure NetApp Files and the Azure NetApp Files feature availability in Azure Government.
+description: Learn how to connect to Azure Government to use Azure NetApp Files and the Azure NetApp Files feature availability in Azure Government.
documentationcenter: ''
Last updated 11/02/2023
-# Azure NetApp Files for Azure Government
+# Azure NetApp Files for Azure Government
-[Microsoft Azure Government](../azure-government/documentation-government-welcome.md) delivers a dedicated cloud enabling government agencies and their partners to transform mission-critical workloads to the cloud.
+[Microsoft Azure Government](../azure-government/documentation-government-welcome.md) delivers a dedicated cloud that enables government agencies and their partners to transform mission-critical workloads to the cloud.
-This article describes Azure NetApp Files feature availability in Azure Government. It also shows you how to access the Azure NetApp Files service within Azure Government.
+This article describes Azure NetApp Files feature availability in Azure Government. It also shows you how to access Azure NetApp Files within Azure Government.
## Feature availability
-For Azure Government regions supported by Azure NetApp Files, see the *[Products Available by Region page](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=netapp&regions=usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true)*.
+For Azure Government regions supported by Azure NetApp Files, see [Products available by region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=netapp&regions=usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
-All [Azure NetApp Files features](whats-new.md) available on Azure public cloud are also available on supported Azure Government regions ***except for the features listed in the following table***:
+All [Azure NetApp Files features](whats-new.md) available on Azure public cloud are also available on supported Azure Government regions, *except for the features listed in the following table*:
| Azure NetApp Files features | Azure public cloud availability | Azure Government availability | |: |: |: |
All [Azure NetApp Files features](whats-new.md) available on Azure public cloud
## Portal access
-Azure Government users can access Azure NetApp Files by pointing their browsers to **portal.azure.us**.  The portal site name is **Microsoft Azure Government**. See [Connect to Azure Government using portal](../azure-government/documentation-government-get-started-connect-with-portal.md) for details.
+Azure Government users can access Azure NetApp Files by pointing their browsers to **portal.azure.us**. The portal site name is **Microsoft Azure Government**. For more information, see [Connect to Azure Government using portal](../azure-government/documentation-government-get-started-connect-with-portal.md).
-![Screenshot of the Azure Government portal highlighting portal.azure.us as the URL](../media/azure-netapp-files/azure-government.jpg)
+![Screenshot that shows the Azure Government portal highlighting portal.azure.us as the URL.](../media/azure-netapp-files/azure-government.jpg)
-From the Microsoft Azure Government portal, you can access Azure NetApp Files the same way you would in the Azure portal. For example, you can enter **Azure NetApp Files** in the portal's Search Resources box, and then select **Azure NetApp Files** from the list that appears.
+From the Azure Government portal, you can access Azure NetApp Files the same way you would in the Azure portal. For example, you can enter **Azure NetApp Files** in the portal's **Search resources** box, and then select **Azure NetApp Files** from the list that appears.
You can follow [Azure NetApp Files](./index.yml) documentation for details about using the service. ## Azure CLI access
-You can connect to Azure Government by setting the cloud name to `AzureUSGovernment` and then proceeding to sign in as you normally would with the `az login` command. After you run the sign-in command, a browser will launch where you enter the appropriate Azure Government credentials.
+You can connect to Azure Government by setting the cloud name to `AzureUSGovernment` and then proceeding to sign in as you normally would with the `az login` command. After you run the sign-in command, a browser launches, where you enter the appropriate Azure Government credentials.
```azurecli
az cloud set --name AzureUSGovernment
```
-To confirm the cloud has been set to `AzureUSGovernment`, run:
+To confirm the cloud was set to `AzureUSGovernment`, run:
```azurecli
az cloud list --output table
```
-This command produces a table with Azure cloud locations. The `isActive` column entry for `AzureUSGovernment` should read `true`.
+This command produces a table with Azure cloud locations. The `isActive` column entry for `AzureUSGovernment` should read `true`.
-See [Connect to Azure Government with Azure CLI](../azure-government/documentation-government-get-started-connect-with-cli.md) for details.
+For more information, see [Connect to Azure Government with Azure CLI](../azure-government/documentation-government-get-started-connect-with-cli.md).
## REST API access
-Endpoints for Azure Government are different from commercial Azure endpoints. For a list of different endpoints, see Azure Government's [Guidance for Developers](../azure-government/compare-azure-government-global-azure.md#guidance-for-developers).
+Endpoints for Azure Government are different from commercial Azure endpoints. For a list of different endpoints, see Azure Government's [Guidance for developers](../azure-government/compare-azure-government-global-azure.md#guidance-for-developers).
## PowerShell access
-When connecting to Azure Government through PowerShell, you must specify an environmental parameter to ensure you connect to the correct endpoints. From there, you can proceed to use Azure NetApp Files as you normally would with PowerShell.
+When you connect to Azure Government through PowerShell, you must specify an environmental parameter to ensure that you connect to the correct endpoints. From there, you can proceed to use Azure NetApp Files as you normally would with PowerShell.
| Connection type | Command | | | |
When connecting to Azure Government through PowerShell, you must specify an envi
| [Azure (Classic deployment model)](/powershell/module/servicemanagement/azure/add-azureaccount) commands |`Add-AzureAccount -Environment AzureUSGovernment` | | [Microsoft Entra ID (Classic deployment model)](/previous-versions/azure/jj151815(v=azure.100)) commands |`Connect-MsolService -AzureEnvironment UsGovernment` |
-See [Connect to Azure Government with PowerShell](../azure-government/documentation-government-get-started-connect-with-ps.md) for details.
+For more information, see [Connect to Azure Government with PowerShell](../azure-government/documentation-government-get-started-connect-with-ps.md).
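With the current Az modules, the equivalent connection is a single cmdlet, as in the following sketch; the resource group name is a placeholder, and `Get-AzNetAppFilesAccount` assumes the `Az.NetAppFiles` module is installed.
```powershell
# Connect to the Azure Government cloud with the Az PowerShell module
Connect-AzAccount -Environment AzureUSGovernment

# Azure NetApp Files cmdlets then work as usual, for example:
Get-AzNetAppFilesAccount -ResourceGroupName "myResourceGroup"
```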
## Next steps

* [What is Azure Government?](../azure-government/documentation-government-welcome.md)
* [What's new in Azure NetApp Files](whats-new.md)
* [Compare Azure Government and global Azure](../azure-government/compare-azure-government-global-azure.md)
azure-netapp-files Azure Netapp Files Create Netapp Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-netapp-account.md
Title: Create a NetApp account for Access Azure NetApp Files | Microsoft Docs
-description: Describes how to access Azure NetApp Files and create a NetApp account so that you can set up a capacity pool and create a volume.
+ Title: Create a NetApp account to access Azure NetApp Files | Microsoft Docs
+description: Learn how to access Azure NetApp Files and create a NetApp account so that you can set up a capacity pool and create a volume.
documentationcenter: ''
Last updated 10/04/2021 +
# Create a NetApp account
-Creating a NetApp account enables you to set up a capacity pool and subsequently create a volume. You use the Azure NetApp Files blade to create a new NetApp account.
-## Before you begin
+Creating a NetApp account enables you to set up a capacity pool so that you can create a volume. You use the Azure NetApp Files pane to create a new NetApp account.
-You must have registered your subscription for using the NetApp Resource Provider. See [Register the NetApp Resource Provider](azure-netapp-files-register.md).
+## Before you begin
-## Steps
+You must register your subscription for using the NetApp Resource Provider. For more information, see [Register the NetApp Resource Provider](azure-netapp-files-register.md).
-1. Sign in to the Azure portal.
-2. Access the Azure NetApp Files blade by using one of the following methods:
- * Search for **Azure NetApp Files** in the Azure portal search box.
- * Click **All services** in the navigation, and then filter to Azure NetApp Files.
+## Steps
- You can "favorite" the Azure NetApp Files blade by clicking the star icon next to it.
+1. Sign in to the Azure portal.
+1. Access the Azure NetApp Files pane by using one of the following methods:
+ * Search for **Azure NetApp Files** in the Azure portal search box.
+ * Select **All services** in the navigation, and then filter to Azure NetApp Files.
-3. Click **+ Add** to create a new NetApp account.
- The New NetApp account window appears.
+ To make the Azure NetApp Files pane a favorite, select the star icon next to it.
-4. Provide the following information for your NetApp account:
- * **Account name**
- Specify a unique name for the subscription.
- * **Subscription**
- Select a subscription from your existing subscriptions.
- * **Resource group**
- Use an existing Resource Group or create a new one.
- * **Location**
- Select the region where you want the account and its child resources to be located.
+1. Select **+ Add** to create a new NetApp account.
+ The **New NetApp account** window appears.
- ![New NetApp account](../media/azure-netapp-files/azure-netapp-files-new-netapp-account.png)
+1. Provide the following information for your NetApp account:
+ * **Account name**: Specify a unique name for the subscription.
+ * **Subscription**: Select a subscription from your existing subscriptions.
+ * **Resource group**: Use an existing resource group or create a new one.
+ * **Location**: Select the region where you want the account and its child resources to be located.
+ ![Screenshot that shows New NetApp account.](../media/azure-netapp-files/azure-netapp-files-new-netapp-account.png)
-5. Click **Create**.
- The NetApp account you created now appears in the Azure NetApp Files blade.
+1. Select **Create**.
+ The NetApp account you created now appears in the Azure NetApp Files pane.
-> [!NOTE]
-> If you haven't registered your subscription for using the NetApp Resource Provider, you will receive the following error when you try to create the first NetApp account:
+> [!NOTE]
+> If you didn't register your subscription for using the NetApp Resource Provider, you receive the following error when you try to create the first NetApp account:
> > `{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/arm-debug for usage details.","details":[{"code":"NotFound","message":"{\r\n \"error\": {\r\n \"code\": \"InvalidResourceType\",\r\n \"message\": \"The resource type could not be found in the namespace 'Microsoft.NetApp' for api version '20xx-xx-xx'.\"\r\n }\r\n}"}]}`
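If you prefer to script this step instead of using the portal, the following Azure CLI sketch creates the same NetApp account; the resource group, account name, and region are placeholders.
```azurecli
# Create a NetApp account (assumes the Microsoft.NetApp resource provider is registered)
az netappfiles account create \
    --resource-group myResourceGroup \
    --account-name myNetAppAccount \
    --location eastus
```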
-## Next steps
+## Next steps
[Create a capacity pool](azure-netapp-files-set-up-capacity-pool.md)-
azure-netapp-files Azure Netapp Files Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-introduction.md
# What is Azure NetApp Files?
-Azure NetApp Files is an Azure native, first-party, enterprise-class, high-performance file storage service. It provides _Volumes as a service_ for which you can create NetApp accounts, capacity pools, volumes, select service and performance levels, and manage data protection. It allows you to create and manage high-performance, highly available, and scalable file shares, using the same protocols and tools that you're familiar with and enterprise applications rely on on-premises.
+Azure NetApp Files is an Azure native, first-party, enterprise-class, high-performance file storage service. It provides _Volumes as a service_ for which you can create NetApp accounts, capacity pools, and volumes. You can also select service and performance levels and manage data protection. You can create and manage high-performance, highly available, and scalable file shares by using the same protocols and tools that you're familiar with and that enterprise applications rely on on-premises.
-Azure NetApp Files' key attributes are:
+Key attributes of Azure NetApp Files are:
-- Performance, cost optimization and scale
-- Simplicity and availability
-- Data management and security
+- Performance, cost optimization, and scale.
+- Simplicity and availability.
+- Data management and security.
-Azure NetApp Files supports SMB, NFS and dual protocols volumes and can be used for various use cases such as:
-- file sharing
-- home directories
-- databases
-- high-performance computing and more
+Azure NetApp Files supports SMB, NFS, and dual-protocol volumes and can be used for use cases such as:
-For more information about workload solutions leveraging Azure NetApp Files, see [Solution architectures using Azure NetApp Files](azure-netapp-files-solution-architectures.md).
+- File sharing.
+- Home directories.
+- Databases.
+- High-performance computing.
-## Performance, cost optimization, and scale
+For more information about workload solutions using Azure NetApp Files, see [Solution architectures using Azure NetApp Files](azure-netapp-files-solution-architectures.md).
-Azure NetApp Files is designed to provide high-performance file storage for enterprise workloads and provide functionality to provide cost optimization and scale. Key features that contribute to these include:
+## Performance, cost optimization, and scale
-| Functionality | Description | Benefit |
+Azure NetApp Files is designed to provide high-performance file storage for enterprise workloads, along with functionality for cost optimization and scale. Key features that contribute to these capabilities include:
+
+| Functionality | Description | Benefit |
| - | - | - |
-| In-Azure bare-metal flash performance | Fast and reliable all-flash performance with submillisecond latency | Run performance-intensive workloads in the cloud with on-premises infrastructure-level performance
-| Multi-protocol support | Supports multiple protocols including NFSv3, NFSv4.1, SMB 3.0, SMB 3.1.1 and simultaneous dual-protocol | Seamlessly integrate with existing infrastructure and workflows without compatibility issues or complex configurations. |
-| Three flexible performance tiers (standard, premium, ultra) | Three performance tiers with dynamic service level change capability based on workload needs, including cool access for cold data | Choose the right performance level for workloads and dynamically adjust performance without overspending on resources.
-| Small-to-large volumes | Easily resize file volumes from 100 GiB up to 100 TiB without downtime | Scale storage as business needs grow without over-provisioning, avoiding upfront cost.
-| 1-TiB minimum capacity pool size | 1-TiB capacity pool is a reduced size storage pool compared to the initial 4 TiB minimum | Save money by starting with a smaller storage footprint and lower entry point, without sacrificing performance or availability. Scale storage based on growth without high upfront costs.
-| 1000-TiB maximum capacity pool | 1000-TiB capacity pool is an increased storage pool compared to the initial 500 TiB maximum | Reduce waste by creating larger, pooled capacity and performance budget and share/distribute across volumes.
-| 100-500 TiB large volumes | Store large volumes of data up to 500 TiB in a single volume | Manage large data sets and high-performance workloads with ease.
-| User and group quotas | Set quotas on storage usage for individual users and groups | Control storage usage and optimize resource allocation.
-| Virtual machine (VM) networked storage performance | Higher VM network throughput compared to disk IO limits enable more-demanding workloads on smaller Azure VMs | Improve application performance at a smaller virtual machine footprint, improving overall efficiency and lowering application license cost.
-| Deep workload readiness | Seamless deployment and migration of any-size workload with well-documented deployment guides | Easily migrate any workload of any size to the platform. Enjoy a seamless, cost-effective deployment and migration experience.
-| Datastores for Azure VMware Solution | Use Azure NetApp Files as a storage solution for VMware workloads in Azure, reducing the need for superfluous compute nodes normally included with Azure VMware Solution expansions | Save money by eliminating the need for unnecessary compute nodes when expanding storage, resulting in significant cost savings.
-| Standard storage with cool access | Use the cool access option of Azure NetApp Files Standard service level to move inactive data transparently from Azure NetApp Files Standard service-level storage (the hot tier) to an Azure storage account (the cool tier) | Save money by transitioning data that resides within Azure NetApp Files volumes (the hot tier) by moving blocks to the lower cost storage (the cool tier). |
-
-These features work together to provide a high-performance file storage solution for the demands of enterprise workloads. They help to ensure that your workloads experience optimal (low) storage latency, cost and scale.
+| In-Azure bare-metal flash performance | Fast and reliable all-flash performance with submillisecond latency. | Run performance-intensive workloads in the cloud with on-premises infrastructure-level performance.
+| Multi-protocol support | Supports multiple protocols, including NFSv3, NFSv4.1, SMB 3.0, SMB 3.1.1, and simultaneous dual-protocol. | Seamlessly integrate with existing infrastructure and workflows without compatibility issues or complex configurations. |
+| Three flexible performance tiers (Standard, Premium, Ultra) | Three performance tiers with dynamic service-level change capability based on workload needs, including cool access for cold data. | Choose the right performance level for workloads and dynamically adjust performance without overspending on resources.
+| Small-to-large volumes | Easily resize file volumes from 100 GiB up to 100 TiB without downtime. | Scale storage as business needs grow without over-provisioning, avoiding upfront cost.
+| 1-TiB minimum capacity pool size | 1-TiB capacity pool is a reduced-size storage pool compared to the initial 4-TiB minimum. | Save money by starting with a smaller storage footprint and lower entry point, without sacrificing performance or availability. Scale storage based on growth without high upfront costs.
+| 1,000-TiB maximum capacity pool | A 1,000-TiB capacity pool is an increased storage pool compared to the initial 500-TiB maximum. | Reduce waste by creating a larger pooled capacity and performance budget, and share and distribute it across volumes.
+| 100-500 TiB large volumes | Store large volumes of data up to 500 TiB in a single volume. | Manage large datasets and high-performance workloads with ease.
+| User and group quotas | Set quotas on storage usage for individual users and groups. | Control storage usage and optimize resource allocation.
+| Virtual machine (VM) networked storage performance | Higher VM network throughput compared to disk IO limits enable more demanding workloads on smaller Azure VMs. | Improve application performance at a smaller VM footprint, improving overall efficiency and lowering application license cost.
+| Deep workload readiness | Seamless deployment and migration of any-size workload with well-documented deployment guides. | Easily migrate any workload of any size to the platform. Enjoy a seamless, cost-effective deployment and migration experience.
+| Datastores for Azure VMware Solution | Use Azure NetApp Files as a storage solution for VMware workloads in Azure, reducing the need for superfluous compute nodes normally included with Azure VMware Solution expansions. | Save money by eliminating the need for unnecessary compute nodes when you expand storage, resulting in significant cost savings.
+| Standard storage with cool access | Use the cool access option of Azure NetApp Files Standard service level to move inactive data transparently from Azure NetApp Files Standard service-level storage (the hot tier) to an Azure Storage account (the cool tier). | Save money by transitioning data that resides within Azure NetApp Files volumes (the hot tier) by moving blocks to the lower-cost storage (the cool tier). |
+
+These features work together to provide a high-performance file storage solution for the demands of enterprise workloads. They help to ensure that your workloads experience optimal (low) storage latency, cost, and scale.
## Simplicity and availability
Azure NetApp Files is designed to provide simplicity and high availability for y
| Functionality | Description | Benefit |
| - | - | - |
-| Volumes as a Service | Provision and manage volumes in minutes with a few clicks like any other Azure service | Enables businesses to quickly and easily provision and manage volumes without the need for dedicated hardware or complex configurations.
-| Native Azure Integration | Integration with the Azure portal, REST, CLI, billing, monitoring, and security | Simplifies management and ensures consistency with other Azure services, while providing a familiar interface and integration with existing tools and workflows.
-| High availability | Azure NetApp Files provides a [high availability SLA](https://azure.microsoft.com/support/legal/sla/netapp/) with automatic failover | Ensures that data is always available and accessible, avoiding downtime and disruption to business operations.
-| Application migration | Migrate applications to Azure without refactoring | Enables businesses to move their workloads to Azure quickly and easily without the need for costly and time-consuming application refactoring or redesign.
-| Cross-region and cross-zone replication | Replicate data between regions or zones | Provide disaster recovery capabilities and ensure data availability and redundancy across different Azure regions or availability zones.
-| Application volume groups | Application volume groups enable you to deploy all application volumes according to best practices in a single one-step and optimized workflow | Simplified multi-volume deployment for applications, ensuring volumes and mount points are optimized and adhere to best practices in a single step, saving time and effort.
-| Programmatic deployment | Automate deployment and management with APIs and SDKs | Enables businesses to integrate Azure NetApp Files with their existing automation and management tools, reducing the need for manual intervention and improving efficiency.
-| Fault-tolerant bare metal | Built on a fault-tolerant bare metal fleet powered by ONTAP | Ensures high performance and reliability by leveraging a robust, fault-tolerant storage platform and powerful data management capabilities provided by ONTAP.
-| Azure native billing | Integrates natively with Azure billing, providing a seamless and easy-to-use billing experience, based on hourly usage | Easily and accurately manage and track the cost of using the service, allowing for seamless budgeting and cost control. Easily track usage and expenses directly from the Azure portal, providing a unified experience for billing and management. |
+| Volumes as a service | Provision and manage volumes in minutes with a few clicks like any other Azure service. | Enables businesses to quickly and easily provision and manage volumes without the need for dedicated hardware or complex configurations.
+| Native Azure integration | Integration with the Azure portal, REST, CLI, billing, monitoring, and security. | Simplifies management and ensures consistency with other Azure services while providing a familiar interface and integration with existing tools and workflows.
+| High availability | Azure NetApp Files provides a [high-availability SLA](https://azure.microsoft.com/support/legal/sla/netapp/) with automatic failover. | Ensures that data is always available and accessible, avoiding downtime and disruption to business operations.
+| Application migration | Migrate applications to Azure without refactoring. | Enables businesses to move their workloads to Azure quickly and easily without the need for costly and time-consuming application refactoring or redesign.
+| Cross-region and cross-zone replication | Replicate data between regions or zones. | Provides disaster recovery capabilities and ensures data availability and redundancy across different Azure regions or availability zones.
+| Application volume groups | Application volume groups enable you to deploy all application volumes according to best practices in a single one-step and optimized workflow. | Simplified multi-volume deployment for applications ensures volumes and mount points are optimized and adhere to best practices in a single step, saving time and effort.
+| Programmatic deployment | Automate deployment and management with APIs and SDKs. See the CLI sketch that follows this table. | Enables businesses to integrate Azure NetApp Files with their existing automation and management tools, reducing the need for manual intervention and improving efficiency.
+| Fault-tolerant bare metal | Built on a fault-tolerant bare-metal fleet powered by ONTAP. | Ensures high performance and reliability by using a robust, fault-tolerant storage platform and powerful data management capabilities provided by ONTAP.
+| Azure native billing | Integrates natively with Azure billing, providing a seamless and easy-to-use billing experience, based on hourly usage. | Easily and accurately manage and track the cost of using the service for seamless budgeting and cost control. Easily track usage and expenses directly from the Azure portal for a unified experience for billing and management. |
-These features work together to provide a simple-to-use and highly available file storage solution to ensure that your data is easy to manage and always available, recoverable, and accessible to your applications even in an outage.
+These features work together to provide a simple-to-use and highly available file storage solution. This solution ensures that your data is easy to manage and always available, recoverable, and accessible to your applications, even in an outage.
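As an illustration of the programmatic deployment called out in the preceding table, the following Azure CLI sketch creates a capacity pool and an NFS volume under an existing NetApp account. All resource names, the sizes, and the delegated subnet are placeholders, and parameter names may vary slightly across CLI versions.
```azurecli
# Create a 4-TiB Premium capacity pool in an existing NetApp account
az netappfiles pool create \
    --resource-group myResourceGroup \
    --account-name myNetAppAccount \
    --pool-name myPool \
    --location eastus \
    --size 4 \
    --service-level Premium

# Create a 100-GiB NFSv3 volume in that pool
# (requires a subnet delegated to Microsoft.NetApp/volumes)
az netappfiles volume create \
    --resource-group myResourceGroup \
    --account-name myNetAppAccount \
    --pool-name myPool \
    --name myVolume \
    --location eastus \
    --service-level Premium \
    --usage-threshold 100 \
    --file-path "myfilepath" \
    --vnet myVnet \
    --subnet mySubnet \
    --protocol-types NFSv3
```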
-## Data management and security
+## Data management and security
Azure NetApp Files provides built-in data management and security capabilities to help ensure the secure storage, availability, and manageability of your data. Key features include:

| Functionality | Description | Benefit |
| - | - | - |
-| Efficient snapshots and backup | Advanced data protection and faster recovery of data by leveraging block-efficient, incremental snapshots and vaulting | Quickly and easily backup data and restore to a previous point in time, minimizing downtime and reducing the risk of data loss.
-| Snapshot restore to a new volume | Instantly restore data from a previously taken snapshot quickly and accurately | Reduce downtime and save time and resources that would otherwise be spent on restoring data from backups.
-| Snapshot revert | Revert volume to the state it was in when a previous snapshot was taken | Easily and quickly recover data (in-place) to a known good state, ensuring business continuity and maintaining productivity.
-| Application-aware snapshots and backup | Ensure application-consistent snapshots with guaranteed recoverability | Automate snapshot creation and deletion processes, reducing manual efforts and potential errors while increasing productivity by allowing teams to focus on other critical tasks.
-| Efficient cloning | Create and access clones in seconds | Save time and reduce costs for test, development, system refresh and analytics.
-| Data-in-transit encryption | Secure data transfers with protocol encryption | Ensure the confidentiality and integrity of data being transmitted, with peace of mind that information is safe and secure.
-| Data-at-rest encryption | Data-at-rest encryption with platform- or customer-managed keys | Prevent unrestrained access to stored data, meet compliance requirements and enhance data security.
-| Azure platform integration and compliance certifications | Compliance with regulatory requirements and Azure platform integration | Adhere to Azure standards and regulatory compliance, ensure audit and governance completion.
-| Azure Identity and Access Management (IAM) | Azure role-based access control (RBAC) service allows you to manage permissions for resources at any level | Simplify access management and improve compliance with Azure-native RBAC, empowering you to easily control user access to configuration management.
-| AD/LDAP authentication, export policies & access control lists (ACLs) | Authenticate and authorize access to data using existing AD/LDAP credentials and allow for the creation of export policies and ACLs to govern data access and usage | Prevent data breaches and ensure compliance with data security regulations, with enhanced granular control over access to data volumes, directories and files. |
+| Efficient snapshots and backup | Advanced data protection and faster recovery of data by using block-efficient, incremental snapshots and vaulting. | Quickly and easily back up data and restore to a previous point in time, minimizing downtime and reducing the risk of data loss.
+| Snapshot restore to a new volume | Instantly restore data from a previously taken snapshot quickly and accurately. | Reduces downtime and saves time and resources that would otherwise be spent on restoring data from backups.
+| Snapshot revert | Revert volume to the state it was in when a previous snapshot was taken. | Easily and quickly recover data (in-place) to a known good state, ensuring business continuity and maintaining productivity.
+| Application-aware snapshots and backup | Ensure application-consistent snapshots with guaranteed recoverability. | Automates snapshot creation and deletion processes, reducing manual efforts and potential errors while increasing productivity by allowing teams to focus on other critical tasks.
+| Efficient cloning | Create and access clones in seconds. | Saves time and reduces costs for test, development, system refresh, and analytics.
+| Data-in-transit encryption | Secure data transfers with protocol encryption. | Ensures the confidentiality and integrity of data being transmitted for peace of mind that information is safe and secure.
+| Data-at-rest encryption | Data-at-rest encryption with platform- or customer-managed keys. | Prevents unrestrained access to stored data, meets compliance requirements, and enhances data security.
+| Azure platform integration and compliance certifications | Compliance with regulatory requirements and Azure platform integration. | Adheres to Azure standards and regulatory compliance and ensures audit and governance completion.
+| Azure Identity & Access Management (IAM) | Azure role-based access control (RBAC) allows you to manage permissions for resources at any level. | Simplifies access management and improves compliance with Azure-native RBAC, empowering you to easily control user access to configuration management.
+| AD/LDAP authentication, export policies, and access control lists (ACLs) | Authenticate and authorize access to data by using existing AD/LDAP credentials and allow for the creation of export policies and ACLs to govern data access and usage. | Prevents data breaches and ensures compliance with data security regulations, with enhanced granular control over access to data volumes, directories, and files. |
These features work together to provide a comprehensive data management solution that helps to ensure that your data is always available, recoverable, and secure.
* [Understand the storage hierarchy of Azure NetApp Files](azure-netapp-files-understand-storage-hierarchy.md)
* [Quickstart: Set up Azure NetApp Files and create an NFS volume](azure-netapp-files-quickstart-set-up-account-create-volumes.md)
-* [Understand NAS concepts in Azure NetApp Files](network-attached-storage-concept.md)
-* [Register for NetApp Resource Provider](azure-netapp-files-register.md)
-* [Solution architectures using Azure NetApp Files](azure-netapp-files-solution-architectures.md)
+* [Understand NAS concepts in Azure NetApp Files](network-attached-storage-concept.md)
+* [Register for NetApp Resource Provider](azure-netapp-files-register.md)
+* [Solution architectures using Azure NetApp Files](azure-netapp-files-solution-architectures.md)
* [Azure NetApp Files videos](azure-netapp-files-videos.md)
azure-netapp-files Double Encryption At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/double-encryption-at-rest.md
Azure NetApp Files double encryption at rest is supported for the following regions:
* Australia Southeast
* Brazil South
* Canada Central
-* Canada East
* Central US
* East Asia
* East US
azure-netapp-files Performance Linux Nfs Read Ahead https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-linux-nfs-read-ahead.md
Title: Linux NFS read-ahead best practices for Azure NetApp Files - Session slots and slot table entries | Microsoft Docs
-description: Describes filesystem cache and Linux NFS read-ahead best practices for Azure NetApp Files.
+description: Describes filesystem cache and Linux NFS read-ahead best practices for Azure NetApp Files.
documentationcenter: ''
ms.assetid:
na-+ Last updated 09/29/2022
# Linux NFS read-ahead best practices for Azure NetApp Files
-This article helps you understand filesystem cache best practices for Azure NetApp Files.
+This article helps you understand filesystem cache best practices for Azure NetApp Files.
-NFS read-ahead predictively requests blocks from a file in advance of I/O requests by the application. It is designed to improve client sequential read throughput. Until recently, all modern Linux distributions set the read-ahead value to be equivalent of 15 times the mounted filesystems `rsize`.
+NFS read-ahead predictively requests blocks from a file in advance of I/O requests by the application. It is designed to improve client sequential read throughput. Until recently, all modern Linux distributions set the read-ahead value to the equivalent of 15 times the mounted filesystem's `rsize`.
The following table shows the default read-ahead values for each given `rsize` mount option.
The following table shows the default read-ahead values for each currently avail
| Debian | Up to at least 10 | 15 x `rsize` |
-## How to work with per-NFS filesystem read-ahead
+## How to work with per-NFS filesystem read-ahead
NFS read-ahead is defined at the mount point for an NFS filesystem. The default setting can be viewed and set both dynamically and persistently. For convenience, the following bash script written by Red Hat has been provided for viewing or dynamically setting read-ahead for a mounted NFS filesystem.
-Read-ahead can be defined either dynamically per NFS mount using the following script or persistently using `udev` rules as shown in this section. To display or set read-ahead for a mounted NFS filesystem, you can save the following script as a bash file, modify the file's permissions to make it an executable (`chmod 544 readahead.sh`), and run as shown.
+Read-ahead can be defined either dynamically per NFS mount using the following script or persistently using `udev` rules as shown in this section. To display or set read-ahead for a mounted NFS filesystem, you can save the following script as a bash file, modify the file's permissions to make it an executable (`chmod 544 readahead.sh`), and run as shown.
-## How to show or set read-ahead values
+## How to show or set read-ahead values
-To show the current read-ahead value (the returned value is in KiB), run the following command:
+To show the current read-ahead value (the returned value is in KiB), run the following command:
```bash
./readahead.sh show <mount-point>
```
-To set a new value for read-ahead, run the following command:
+To set a new value for read-ahead, run the following command:
```bash
./readahead.sh set <mount-point> [read-ahead-kb]
```
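For instance, to apply the same 15380-KiB value that the `udev` rule later in this article uses, on a hypothetical mount point:
```bash
# /mnt/anfvolume is a placeholder mount point; the value is in KiB
./readahead.sh set /mnt/anfvolume 15380
```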
-
-### Example
+
+### Example
```bash
#!/bin/bash
fi
## How to persistently set read-ahead for NFS mounts
-To persistently set read-ahead for NFS mounts, `udev` rules can be written as follows:
+To persistently set read-ahead for NFS mounts, `udev` rules can be written as follows:
1. Create and test `/etc/udev/rules.d/99-nfs.rules`:
SUBSYSTEM=="bdi", ACTION=="add", PROGRAM="<absolute_path>/awk -v bdi=$kernel 'BEGIN{ret=1} {if ($4 == bdi) {ret=0}} END{exit ret}' /proc/fs/nfsfs/volumes", ATTR{read_ahead_kb}="15380" ```
-2. Apply the `udev` rule:
+2. Apply the `udev` rule:
```bash
sudo udevadm control --reload
```
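After reloading the rules, remount the volume (or reboot) and confirm that the setting took effect. The mount point below is a placeholder, and the value is reported in KiB.
```bash
# Show the current read-ahead with the script above
./readahead.sh show /mnt/anfvolume

# Or read it directly from the mount's backing device info (BDI) entry in sysfs
cat /sys/class/bdi/$(mountpoint -d /mnt/anfvolume)/read_ahead_kb
```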
-## Next steps
+## Next steps
* [Linux direct I/O best practices for Azure NetApp Files](performance-linux-direct-io.md) * [Linux filesystem cache best practices for Azure NetApp Files](performance-linux-filesystem-cache.md) * [Linux NFS mount options best practices for Azure NetApp Files](performance-linux-mount-options.md) * [Linux concurrency best practices](performance-linux-concurrency-session-slots.md)
-* [Azure virtual machine SKUs best practices](performance-virtual-machine-sku.md)
-* [Performance benchmarks for Linux](performance-benchmarks-linux.md)
+* [Azure virtual machine SKUs best practices](performance-virtual-machine-sku.md)
+* [Performance benchmarks for Linux](performance-benchmarks-linux.md)
azure-netapp-files Troubleshoot Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/troubleshoot-volumes.md
ms.assetid:
na-+ Last updated 02/21/2023
# Troubleshoot volume errors for Azure NetApp Files
-This article describes error messages and resolutions that can help you troubleshoot Azure NetApp Files volumes.
+This article describes error messages and resolutions that can help you troubleshoot Azure NetApp Files volumes.
## Errors for SMB and dual-protocol volumes

| Error conditions | Resolutions |
|--|-|
-| The SMB or dual-protocol volume creation fails with the following error: <br> `{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"InternalServerError", "message":"Error when creating - Could not query DNS server. Verify that the network configuration is correct and that DNS servers are available."}]}` | This error indicates that the DNS is not reachable. <br> Consider the following solutions: <ul><li>Check if AD DS and the volume are being deployed in same region.</li> <li>Check if AD DS and the volume are using the same VNet. If they're using different VNETs, make sure that the VNets are peered with each other. See [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md). </li> <li>The DNS server might have network security groups (NSGs) applied. As such, it does not allow the traffic to flow. In this case, open the NSGs to the DNS or AD to connect to various ports. For port requirements, see [Requirements for Active Directory connections](create-active-directory-connections.md#requirements-for-active-directory-connections). </li></ul> <br>The same solutions apply for Microsoft Entra Domain Services. Microsoft Entra Domain Services should be deployed in the same region. The VNet should be in the same region or peered with the VNet used by the volume. |
-| The SMB or dual-protocol volume creation fails with the following error: <br> `{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"InternalServerError", "message":"Error when creating - Failed to create the Active Directory machine account \"SMBTESTAD-C1C8\". Reason: Kerberos Error: Invalid credentials were given Details: Error: Machine account creation procedure failed\n [ 563] Loaded the preliminary configuration.\n**[ 670] FAILURE: Could not authenticate as 'test@contoso.com':\n** Unknown user (KRB5KDC_ERR_C_PRINCIPAL_UNKNOWN)\n. "}]}` | <ul><li>Make sure that the username entered is correct. </li> <li>Make sure that the user is part of the Administrator group that has the privilege to create machine (computer) accounts. </li> <li> If you use Microsoft Entra Domain Services, make sure that the user is part of the Microsoft Entra group `Azure AD DC Administrators`. </li></ul> |
+| The SMB or dual-protocol volume creation fails with the following error: <br> `{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"InternalServerError", "message":"Error when creating - Could not query DNS server. Verify that the network configuration is correct and that DNS servers are available."}]}` | This error indicates that the DNS is not reachable. <br> Consider the following solutions: <ul><li>Check if AD DS and the volume are being deployed in same region.</li> <li>Check if AD DS and the volume are using the same VNet. If they're using different VNETs, make sure that the VNets are peered with each other. See [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md). </li> <li>The DNS server might have network security groups (NSGs) applied. As such, it does not allow the traffic to flow. In this case, open the NSGs to the DNS or AD to connect to various ports. For port requirements, see [Requirements for Active Directory connections](create-active-directory-connections.md#requirements-for-active-directory-connections). </li></ul> <br>The same solutions apply for Microsoft Entra Domain Services. Microsoft Entra Domain Services should be deployed in the same region. The VNet should be in the same region or peered with the VNet used by the volume. |
+| The SMB or dual-protocol volume creation fails with the following error: <br> `{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"InternalServerError", "message":"Error when creating - Failed to create the Active Directory machine account \"SMBTESTAD-C1C8\". Reason: Kerberos Error: Invalid credentials were given Details: Error: Machine account creation procedure failed\n [ 563] Loaded the preliminary configuration.\n**[ 670] FAILURE: Could not authenticate as 'test@contoso.com':\n** Unknown user (KRB5KDC_ERR_C_PRINCIPAL_UNKNOWN)\n. "}]}` | <ul><li>Make sure that the username entered is correct. </li> <li>Make sure that the user is part of the Administrator group that has the privilege to create machine (computer) accounts. </li> <li> If you use Microsoft Entra Domain Services, make sure that the user is part of the Microsoft Entra group `Azure AD DC Administrators`. </li></ul> |
| The SMB or dual-protocol volume creation fails with the following error: <br> `{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"InternalServerError", "message":"Error when creating - Failed to create the Active Directory machine account \"SMBTESTAD-A452\". Reason: Kerberos Error: Pre-authentication information was invalid Details: Error: Machine account creation procedure failed\n [ 567] Loaded the preliminary configuration.\n [ 671] Successfully connected to ip 10.x.x.x, port 88 using TCP\n**[ 1099] FAILURE: Could not authenticate as\n** 'user@contoso.com': CIFS server account password does\n** not match password stored in Active Directory\n** (KRB5KDC_ERR_PREAUTH_FAILED)\n. "}]}` | Make sure that the password entered for joining the AD connection is correct. | | The SMB or dual-protocol volume creation fails with the following error: <br> `{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"InternalServerError","message":"Error when creating - Failed to create the Active Directory machine account \"SMBTESTAD-D9A2\". Reason: SecD Error: ou not found Details: Error: Machine account creation procedure failed\n [ 561] Loaded the preliminary configuration.\n [ 665] Successfully connected to ip 10.x.x.x, port 88 using TCP\n [ 1039] Successfully connected to ip 10.x.x.x, port 389 using TCP\n**[ 1147] FAILURE: Specifed OU 'OU=AADDC Com' does not exist in\n** contoso.com\n. "}]}` | Make sure that the OU path specified for joining the AD connection is correct. If you use Microsoft Entra Domain Services, make sure that the organizational unit path is `OU=AADDC Computers`. |
-| The SMB or dual-protocol volume creation fails with the following error: <br> `Failed to create the Active Directory machine account \"SMB-ANF-VOL. Reason: LDAP Error: Local error occurred Details: Error: Machine account creation procedure failed. [nnn] Loaded the preliminary configuration. [nnn] Successfully connected to ip 10.x.x.x, port 88 using TCP [nnn] Successfully connected to ip 10.x.x.x, port 389 using [nnn] Entry for host-address: 10.x.x.x not found in the current source: FILES. Ignoring and trying next available source [nnn] Source: DNS unavailable. Entry for host-address:10.x.x.x found in any of the available sources\n*[nnn] FAILURE: Unable to SASL bind to LDAP server using GSSAPI: local error [nnn] Additional info: SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (Cannot determine realm for numeric host address) [nnn] Unable to connect to LDAP (Active Directory) service on contoso.com (Error: Local error) [nnn] Unable to make a connection (LDAP (Active Directory):contosa.com, result: 7643. ` | The pointer (PTR) record of the AD host machine might be missing on the DNS server. You need to create a reverse lookup zone on the DNS server, and then add a PTR record of the AD host machine in that reverse lookup zone. |
+| The SMB or dual-protocol volume creation fails with the following error: <br> `Failed to create the Active Directory machine account \"SMB-ANF-VOL. Reason: LDAP Error: Local error occurred Details: Error: Machine account creation procedure failed. [nnn] Loaded the preliminary configuration. [nnn] Successfully connected to ip 10.x.x.x, port 88 using TCP [nnn] Successfully connected to ip 10.x.x.x, port 389 using [nnn] Entry for host-address: 10.x.x.x not found in the current source: FILES. Ignoring and trying next available source [nnn] Source: DNS unavailable. Entry for host-address:10.x.x.x found in any of the available sources\n*[nnn] FAILURE: Unable to SASL bind to LDAP server using GSSAPI: local error [nnn] Additional info: SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (Cannot determine realm for numeric host address) [nnn] Unable to connect to LDAP (Active Directory) service on contoso.com (Error: Local error) [nnn] Unable to make a connection (LDAP (Active Directory):contosa.com, result: 7643. ` | The pointer (PTR) record of the AD host machine might be missing on the DNS server. You need to create a reverse lookup zone on the DNS server, and then add a PTR record of the AD host machine in that reverse lookup zone. |
| The SMB or dual-protocol volume creation fails with the following error: <br> `Failed to create the Active Directory machine account \"SMB-ANF-VOL\". Reason: Kerberos Error: KDC has no support for encryption type Details: Error: Machine account creation procedure failed [nnn]Loaded the preliminary configuration. [nnn]Successfully connected to ip 10.x.x.x, port 88 using TCP [nnn]FAILURE: Could not authenticate as 'contosa.com': KDC has no support for encryption type (KRB5KDC_ERR_ETYPE_NOSUPP) ` | Make sure that [AES Encryption](./create-active-directory-connections.md#create-an-active-directory-connection) is enabled both in the Active Directory connection and for the service account. | | The SMB or dual-protocol volume creation fails with the following error: <br> `Failed to create the Active Directory machine account \"SMB-NTAP-VOL\". Reason: LDAP Error: Strong authentication is required Details: Error: Machine account creation procedure failed\n [ 338] Loaded the preliminary configuration.\n [ nnn] Successfully connected to ip 10.x.x.x, port 88 using TCP\n [ nnn ] Successfully connected to ip 10.x.x.x, port 389 using TCP\n [ 765] Unable to connect to LDAP (Active Directory) service on\n dc51.area51.com (Error: Strong(er) authentication\n required)\n*[ nnn] FAILURE: Unable to make a connection (LDAP (Active\n* Directory):contoso.com), result: 7609\n. "` | The LDAP Signing option is not selected, but the AD client has LDAP signing. [Enable LDAP Signing](create-active-directory-connections.md#create-an-active-directory-connection) and retry. | | SMB volume creation fails with the following error: <br> `Failed to create the Active Directory machine account. Reason: LDAP Error: Intialization of LDAP library failed Details: Error: Machine account creation procedure failed` | This error occurs because the service or user account used in the Azure NetApp Files Active Directory connections does not have sufficient privilege to create computer objects or make modifications to the newly created computer object. <br> To solve the issue, you should grant the account being used greater privilege. You can apply a default role with sufficient privilege. You can also delegate additional privilege to the user or service account or to a group it's part of. |
|`Error allocating volume - Export policy rules does not match kerberosEnabled flag` | Azure NetApp Files does not support Kerberos for NFSv3 volumes. Kerberos is supported only for the NFSv4.1 protocol. |
|`This NetApp account has no configured Active Directory connections` | Configure Active Directory for the NetApp account with fields **KDC IP** and **AD Server Name**. See [Configure the Azure portal](configure-kerberos-encryption.md#configure-the-azure-portal) for instructions. |
|`Mismatch between KerberosEnabled flag value and ExportPolicyRule's access type parameter values.` | Azure NetApp Files does not support converting a plain NFSv4.1 volume to Kerberos NFSv4.1 volume, and vice-versa. |
|`Error allocating volume - Export policy rules does not match kerberosEnabled flag` | Azure NetApp Files does not support Kerberos for NFSv3 volumes. Kerberos is supported only for the NFSv4.1 protocol. | |`This NetApp account has no configured Active Directory connections` | Configure Active Directory for the NetApp account with fields **KDC IP** and **AD Server Name**. See [Configure the Azure portal](configure-kerberos-encryption.md#configure-the-azure-portal) for instructions. | |`Mismatch between KerberosEnabled flag value and ExportPolicyRule's access type parameter values.` | Azure NetApp Files does not support converting a plain NFSv4.1 volume to Kerberos NFSv4.1 volume, and vice-versa. |
-|`mount.nfs: access denied by server when mounting volume <SMB_SERVER_NAME-XXX.DOMAIN_NAME>/<VOLUME_NAME>` <br> Example: `smb-test-64d9.contoso.com:/nfs41-vol101` | <ol><li> Ensure that the A/PTR records are properly set up and exist in the Active Directory for the server name `smb-test-64d9.contoso.com`. <br> In the NFS client, if `nslookup` of `smb-test-64d9.contoso.com` resolves to IP address IP1 (that is, `10.1.1.68`), then `nslookup` of IP1 must resolve to only one record (that is, `smb-test-64d9.contoso.com`). `nslookup` of IP1 *must* not resolve to multiple names. </li> <li>Set AES-256 for the NFS machine account of type `NFS-<Smb NETBIOS NAME>-<few random characters>` on AD using either PowerShell or the UI. <br> Example commands: <ul><li>`Set-ADComputer <NFS_MACHINE_ACCOUNT_NAME> -KerberosEncryptionType AES256` </li><li>`Set-ADComputer NFS-SMB-TEST-64 -KerberosEncryptionType AES256` </li></ul> </li> <li>Ensure that the time of the NFS client, AD, and Azure NetApp Files storage software is synchronized with each other and is within a five-minute skew range. </li> <li>Get the Kerberos ticket on the NFS client using the command `kinit <administrator>`.</li> <li>Reduce the NFS client hostname to fewer than 15 characters and perform the realm join again. </li><li>Restart the NFS client and the `rpc-gssd` service as follows. The exact service names may vary on some Linux distributions.<br>Most current distributions use the same service names. Perform the following as root or with `sudo`<br> `systemctl enable nfs-client.target && systemctl start nfs-client.target`<br>(Restart the `rpc-gssd` service.) <br> `systemctl restart rpc-gssd.service` </ul>|
+|`mount.nfs: access denied by server when mounting volume <SMB_SERVER_NAME-XXX.DOMAIN_NAME>/<VOLUME_NAME>` <br> Example: `smb-test-64d9.contoso.com:/nfs41-vol101` | <ol><li> Ensure that the A/PTR records are properly set up and exist in the Active Directory for the server name `smb-test-64d9.contoso.com`. <br> In the NFS client, if `nslookup` of `smb-test-64d9.contoso.com` resolves to IP address IP1 (that is, `10.1.1.68`), then `nslookup` of IP1 must resolve to only one record (that is, `smb-test-64d9.contoso.com`). `nslookup` of IP1 *must* not resolve to multiple names. </li> <li>Set AES-256 for the NFS machine account of type `NFS-<Smb NETBIOS NAME>-<few random characters>` on AD using either PowerShell or the UI. <br> Example commands: <ul><li>`Set-ADComputer <NFS_MACHINE_ACCOUNT_NAME> -KerberosEncryptionType AES256` </li><li>`Set-ADComputer NFS-SMB-TEST-64 -KerberosEncryptionType AES256` </li></ul> </li> <li>Ensure that the time of the NFS client, AD, and Azure NetApp Files storage software is synchronized with each other and is within a five-minute skew range. </li> <li>Get the Kerberos ticket on the NFS client using the command `kinit <administrator>`.</li> <li>Reduce the NFS client hostname to fewer than 15 characters and perform the realm join again. </li><li>Restart the NFS client and the `rpc-gssd` service as follows. The exact service names may vary on some Linux distributions.<br>Most current distributions use the same service names. Perform the following as root or with `sudo`<br> `systemctl enable nfs-client.target && systemctl start nfs-client.target`<br>(Restart the `rpc-gssd` service.) <br> `systemctl restart rpc-gssd.service` </ul>|
|`mount.nfs: an incorrect mount option was specified` | The issue might be related to the NFS client. Reboot the NFS client. |
|`Hostname lookup failed` | You need to create a reverse lookup zone on the DNS server, and then add a PTR record of the AD host machine in that reverse lookup zone. <br> For example, assume that the IP address of the AD machine is `10.1.1.4`, the hostname of the AD machine (as found by using the hostname command) is `AD1`, and the domain name is `contoso.com`. The PTR record added to the reverse lookup zone should be `10.1.1.4 -> AD1.contoso.com`. |
|`Volume creation fails due to unreachable DNS server` | Two possible solutions are available: <br> <ul><li> This error indicates that DNS is not reachable. The reason might be an incorrect DNS IP or a networking issue. Check the DNS IP entered in AD connection and make sure that the IP is correct. </li> <li> Make sure that the AD and the volume are in same region and in same VNet. If they are in different VNets, ensure that VNet peering is established between the two VNets. </li></ul> |
| When only primary group IDs are seen and user belongs to auxiliary groups too. | This is caused by a query timeout: <br> -Use [LDAP search scope option](configure-ldap-extended-groups.md). <br> -Use [preferred Active Directory servers for LDAP client](create-active-directory-connections.md#preferred-server-ldap). |
| `Error describing volume - Entry doesn't exist for username: <username>, please try with a valid username` | -Check if the user is present on LDAP server. <br> -Check if the LDAP server is healthy. |
-## Errors for volume allocation
+## Errors for volume allocation
When you create a new volume or resize an existing volume in Azure NetApp Files, Microsoft Azure allocates storage and networking resources to your subscription. You might occasionally experience resource allocation failures because of unprecedented growth in demand for Azure services in specific regions.
This section explains the causes of some of the common allocation failures and s
|Out of storage or networking capacity in a region for regular volumes. <br> Error message: `There are currently insufficient resources available to create [or extend] a volume in this region. Please retry the operation. If the problem persists, contact Support.` | The error indicates that there are insufficient resources available in the region to create or resize volumes. <br> Try one of the following workarounds: <ul><li>Create the volume under a new VNet. Doing so will avoid hitting networking-related resource limits.</li> <li>Retry after some time. Resources may have been freed in the cluster, region, or zone in the interim.</li></ul> |
|Out of storage capacity when creating a volume with network features set to `Standard`. <br> Error message: `No storage available with Standard network features, for the provided VNet.` | The error indicates that there are insufficient resources available in the region to create volumes with `Standard` networking features. <br> Try one of the following workarounds: <ul><li>If `Standard` network features are not required, create the volume with `Basic` network features.</li> <li>Try creating the volume under a new VNet. Doing so will avoid hitting networking-related resource limits</li><li>Retry after some time. Resources may have been freed in the cluster, region, or zone in the interim.</li></ul> |
-## Activity log warnings for volumes
+## Activity log warnings for volumes
| Warnings | Resolutions |
|-|-|
-| The `Microsoft.NetApp/netAppAccounts/capacityPools/volumes/ScaleUp` operation displays a warning: <br> `Percentage Volume Consumed Size reached 90%` | The used size of an Azure NetApp Files volume has reached 90% of the volume quota. You should [resize the volume](azure-netapp-files-resize-capacity-pools-or-volumes.md) soon. |
+| The `Microsoft.NetApp/netAppAccounts/capacityPools/volumes/ScaleUp` operation displays a warning: <br> `Percentage Volume Consumed Size reached 90%` | The used size of an Azure NetApp Files volume has reached 90% of the volume quota. You should [resize the volume](azure-netapp-files-resize-capacity-pools-or-volumes.md) soon. |
-## Next steps
+## Next steps
* [Create an NFS volume for Azure NetApp Files](azure-netapp-files-create-volumes.md)
-* [Create an SMB volume for Azure NetApp Files](azure-netapp-files-create-volumes-smb.md)
-* [Create a dual-protocol volume for Azure NetApp Files](create-volumes-dual-protocol.md)
+* [Create an SMB volume for Azure NetApp Files](azure-netapp-files-create-volumes-smb.md)
+* [Create a dual-protocol volume for Azure NetApp Files](create-volumes-dual-protocol.md)
* [Configure network features for a volume](configure-network-features.md)
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
ms.assetid:
na-+ Last updated 11/27/2023
Azure NetApp Files is updated regularly. This article provides a summary about t
* [Customer-managed keys](configure-customer-managed-keys.md) is now generally available (GA).
- You still must register the feature before using it for the first time.
-
+ You still must register the feature before using it for the first time.
+ ## November 2023 * [Capacity pool enhancement:](azure-netapp-files-set-up-capacity-pool.md) New lower limits
Azure NetApp Files is updated regularly. This article provides a summary about t
* [Metrics enhancement: Throughput limits](azure-netapp-files-metrics.md#volumes)
- Azure NetApp Files now supports a "throughput limit reached" metric for volumes. The metric is a Boolean value that denotes the volume is hitting its QoS limit. With this metric, you know whether or not to adjust volumes so they meet the specific needs of your workloads.
+ Azure NetApp Files now supports a "throughput limit reached" metric for volumes. The metric is a Boolean value that denotes the volume is hitting its QoS limit. With this metric, you know whether or not to adjust volumes so they meet the specific needs of your workloads.
* [Standard network features in US Gov regions](azure-netapp-files-network-topologies.md#supported-regions) is now generally available (GA)
-
- Azure NetApp Files now supports Standard network features for new volumes in US Gov Arizona, US Gov Texas, and US Gov Virginia. Standard network features provide an enhanced virtual networking experience through various features for a seamless and consistent experience with security posture of all their workloads including Azure NetApp Files.
+
+   Azure NetApp Files now supports Standard network features for new volumes in US Gov Arizona, US Gov Texas, and US Gov Virginia. Standard network features provide an enhanced virtual networking experience through various features, for a seamless and consistent experience and security posture across all workloads, including Azure NetApp Files.
* [Volume user and group quotas](default-individual-user-group-quotas-introduction.md) is now generally available (GA).
- User and group quotas enable you to stay in control and define how much storage capacity can be used by individual users or groups can use within a specific Azure NetApp Files volume. You can set default (same for all users) or individual user quotas on all NFS, SMB, and dual protocol-enabled volumes. On all NFS-enabled volumes, you can define a default (that is, same for all users) or individual group quotas.
+   User and group quotas enable you to stay in control and define how much storage capacity individual users or groups can use within a specific Azure NetApp Files volume. You can set default (same for all users) or individual user quotas on all NFS, SMB, and dual protocol-enabled volumes. On all NFS-enabled volumes, you can define a default (that is, same for all users) or individual group quotas.
This feature is Generally Available in Azure commercial regions and US Gov regions where Azure NetApp Files is available.
Azure NetApp Files is updated regularly. This article provides a summary about t
In addition to Citrix App Layering, FSLogix user profiles including FSLogix ODFC containers, and Microsoft SQL Server, Azure NetApp Files now supports [MSIX app attach](../virtual-desktop/create-netapp-files.md) with SMB Continuous Availability shares to enhance resiliency during storage service maintenance operations. Continuous Availability enables SMB transparent failover to eliminate disruptions as a result of service maintenance events and improves reliability and user experience. * [Azure NetApp Files datastores for Azure VMware Solution](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md#supported-regions) in US Gov regions
-
+ Azure NetApp Files now supports [Azure NetApp Files datastores for Azure VMware Solution](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md?tabs=azure-portal) in the US Gov Arizona and US Gov Virginia regions. Azure NetApp Files datastores for Azure VMware Solution provide the ability to scale storage independently of compute and can go beyond the limits of the local instance storage provided by vSAN, reducing total cost of ownership.

## October 2023
Most unstructured data is typically infrequently accessed. It can account for more than 50% of the total storage capacity in many storage environments. Infrequently accessed data associated with productivity software, completed projects, and old datasets is an inefficient use of high-performance storage. You can now use the cool access option in a capacity pool of the Azure NetApp Files standard service level to have inactive data transparently moved from Azure NetApp Files standard service-level storage (the *hot tier*) to an Azure storage account (the *cool tier*). This option lets you free up storage that resides within Azure NetApp Files volumes by moving data blocks to the lower-cost cool tier, resulting in overall cost savings. You can configure this option on a volume by specifying the number of days (the *coolness period*, ranging from 7 to 183 days) for inactive data to be considered "cool". Viewing and accessing the data stay transparent, except for a higher access time to data blocks that were moved to the cool tier.
-* [Troubleshoot Azure NetApp Files using diagnose and solve problems tool](troubleshoot-diagnose-solve-problems.md)
+* [Troubleshoot Azure NetApp Files using diagnose and solve problems tool](troubleshoot-diagnose-solve-problems.md)
The **diagnose and solve problems** tool simplifies the troubleshooting process, making it effortless to identify and resolve any issues affecting your Azure NetApp Files deployment. With the tool's proactive troubleshooting, user-friendly guidance, and seamless integration with Azure Support, you can more easily manage and maintain a reliable and high-performance Azure NetApp Files storage environment. Experience enhanced issue resolution and optimization capabilities today, ensuring a smoother Azure NetApp Files management experience.

* [Snapshot manageability enhancement: Identify parent snapshot](snapshots-restore-new-volume.md)
- You can now see the name of the snapshot used to create a new volume. In the Volume overview page, the **Originated from** field identifies the source snapshot used in volume creation. If the field is empty, no snapshot was used.
+ You can now see the name of the snapshot used to create a new volume. In the Volume overview page, the **Originated from** field identifies the source snapshot used in volume creation. If the field is empty, no snapshot was used.
## September 2023
* [Troubleshooting enhancement: validate user connectivity, group membership and access to LDAP-enabled volumes](troubleshoot-user-access-ldap.md)
- Azure NetApp Files now provides you with the ability to validate user connectivity and access to LDAP-enabled volumes based on group membership. When you provide a user ID, Azure NetApp Files reports a list of primary and auxiliary group IDs that the user belongs to from the LDAP server. Validating user access is helpful for scenarios such as ensuring POSIX attributes set on the LDAP server are accurate or when you encounter permission errors.
+ Azure NetApp Files now provides you with the ability to validate user connectivity and access to LDAP-enabled volumes based on group membership. When you provide a user ID, Azure NetApp Files reports a list of primary and auxiliary group IDs that the user belongs to from the LDAP server. Validating user access is helpful for scenarios such as ensuring POSIX attributes set on the LDAP server are accurate or when you encounter permission errors.
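As a complementary spot check from a Linux NFS client that uses the same LDAP name service (not part of the Azure NetApp Files feature itself), the standard `id` command reports the primary and auxiliary GIDs resolved for a user; the username below is hypothetical.

```python
# Sketch: cross-check LDAP group membership from a Linux NFS client with `id`.
# The username is hypothetical; the client must resolve users via the same LDAP/AD DS.
import subprocess

username = "analyst1"  # hypothetical LDAP user
result = subprocess.run(["id", username], capture_output=True, text=True, check=False)

# Example shape of the output:
# uid=1205(analyst1) gid=5000(research) groups=5000(research),5010(finance)
print(result.stdout.strip() or result.stderr.strip())
```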
## August 2023

* [Cross-region replication enhancement: re-establish deleted volume replication](reestablish-deleted-volume-relationships.md) (Preview)
- Azure NetApp Files now allows you to re-establish a replication relationship between two volumes in case you had previously deleted it. If the destination volume remained operational and no snapshots were deleted, the replication re-establish operation will use the last common snapshot and incrementally synchronize the destination volume based on the last known good snapshot. In that case, no baseline replication is required.
+ Azure NetApp Files now allows you to re-establish a replication relationship between two volumes in case you had previously deleted it. If the destination volume remained operational and no snapshots were deleted, the replication re-establish operation will use the last common snapshot and incrementally synchronize the destination volume based on the last known good snapshot. In that case, no baseline replication is required.
* [Backup vault](backup-vault-manage.md) (Preview)
* [SMB Continuous Availability (CA) shares](azure-netapp-files-create-volumes-smb.md#add-an-smb-volume) is now generally available (GA).
- To enhance resiliency during storage service maintenance operations, SMB volumes used by Citrix App Layering, FSLogix user profile containers and Microsoft SQL Server on Microsoft Windows Server can be enabled with Continuous Availability. Continuous Availability enables SMB Transparent Failover to eliminate disruptions as a result of service maintenance events and improves reliability and user experience.
-
+ To enhance resiliency during storage service maintenance operations, SMB volumes used by Citrix App Layering, FSLogix user profile containers and Microsoft SQL Server on Microsoft Windows Server can be enabled with Continuous Availability. Continuous Availability enables SMB Transparent Failover to eliminate disruptions as a result of service maintenance events and improves reliability and user experience.
+ To learn more about Continuous Availability, see the [application resiliency FAQ](faq-application-resilience.md#do-i-need-to-take-special-precautions-for-smb-based-applications) and follow the instructions to enable it on new and existing SMB volumes.

* [Configure NFSv4.1 ID domain for non-LDAP volumes](azure-netapp-files-configure-nfsv41-domain.md) (Preview)
For details on registering the feature and setting NFSv4.1 ID Domain in Azure NetApp Files, see [Configure NFSv4.1 ID Domain](azure-netapp-files-configure-nfsv41-domain.md).
-* [Moving volumes from *manual* QoS capacity pool to *auto* QoS capacity pool](dynamic-change-volume-service-level.md)
+* [Moving volumes from *manual* QoS capacity pool to *auto* QoS capacity pool](dynamic-change-volume-service-level.md)
- You can now move volumes from a manual QoS capacity pool to an auto QoS capacity pool. When you move a volume to an auto QoS capacity pool, the throughput is changed according to the allocated volume size (quota) of the target pool's service level: `<throughput> = <volume quota> x <Service Level Throughput / TiB>`
+ You can now move volumes from a manual QoS capacity pool to an auto QoS capacity pool. When you move a volume to an auto QoS capacity pool, the throughput is changed according to the allocated volume size (quota) of the target pool's service level: `<throughput> = <volume quota> x <Service Level Throughput / TiB>`
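As a quick worked example of that formula (not from the linked article), the sketch below assumes the commonly documented per-TiB throughput figures for each service level; verify the current numbers for your region before planning a move.

```python
# Sketch: estimate the throughput a volume receives after moving to an auto QoS pool.
# Per-TiB figures are assumptions based on commonly documented service-level values.
SERVICE_LEVEL_THROUGHPUT_MIB_PER_TIB = {
    "Standard": 16,   # MiB/s per TiB (assumed)
    "Premium": 64,    # MiB/s per TiB (assumed)
    "Ultra": 128,     # MiB/s per TiB (assumed)
}

def auto_qos_throughput(volume_quota_tib: float, service_level: str) -> float:
    """<throughput> = <volume quota> x <Service Level Throughput / TiB>"""
    return volume_quota_tib * SERVICE_LEVEL_THROUGHPUT_MIB_PER_TIB[service_level]

# Example: a 4-TiB volume moved into an auto QoS Premium pool gets 256 MiB/s.
print(auto_qos_throughput(4, "Premium"))
```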
## June 2023
* [Azure NetApp Files double encryption at rest](double-encryption-at-rest.md) (Preview)
- We're excited to announce the addition of double encryption at rest for Azure NetApp Files volumes. This new feature provides an extra layer of protection for your critical data, ensuring maximum confidentiality and mitigating potential liabilities. Double encryption at rest is ideal for industries such as finance, military, healthcare, and government, where breaches of confidentiality can have catastrophic consequences. By combining hardware-based encryption with encrypted SSD drives and software-based encryption at the volume level, your data remains secure throughout its lifecycle. You can select **double** as the encryption type during capacity pool creation to easily enable this advanced security layer.
+ We're excited to announce the addition of double encryption at rest for Azure NetApp Files volumes. This new feature provides an extra layer of protection for your critical data, ensuring maximum confidentiality and mitigating potential liabilities. Double encryption at rest is ideal for industries such as finance, military, healthcare, and government, where breaches of confidentiality can have catastrophic consequences. By combining hardware-based encryption with encrypted SSD drives and software-based encryption at the volume level, your data remains secure throughout its lifecycle. You can select **double** as the encryption type during capacity pool creation to easily enable this advanced security layer.
* Availability zone volume placement enhancement - [Populate existing volumes](manage-availability-zone-volume-placement.md#populate-an-existing-volume-with-availability-zone-information) (Preview)

The Azure NetApp Files [availability zone volume placement](manage-availability-zone-volume-placement.md) feature lets you deploy *new volumes* in the availability zone of your choice, in alignment with Azure compute and other services in the same zone. With this "Populate existing volume" enhancement, you can now obtain and, if desired, populate *previously deployed, existing volumes* with the logical availability zone information. This capability automatically maps the physical zone the volume was deployed in to the logical zone for your subscription. This feature doesn't move any volumes between zones.
-* [Customer-managed keys](configure-customer-managed-keys.md) for Azure NetApp Files now supports the option to Disable public access on the key vault that contains your encryption key. Selecting this option enhances network security by denying public configurations and allowing only connections through private endpoints.
+* [Customer-managed keys](configure-customer-managed-keys.md) for Azure NetApp Files now supports the option to Disable public access on the key vault that contains your encryption key. Selecting this option enhances network security by denying public configurations and allowing only connections through private endpoints.
-## May 2023
+## May 2023
-* Azure NetApp Files now supports [customer-managed keys](configure-customer-managed-keys.md) on both source and data replication volumes with [cross-region replication](cross-region-replication-requirements-considerations.md) or [cross-zone replication](cross-zone-replication-requirements-considerations.md) relationships.
+* Azure NetApp Files now supports [customer-managed keys](configure-customer-managed-keys.md) on both source and data replication volumes with [cross-region replication](cross-region-replication-requirements-considerations.md) or [cross-zone replication](cross-zone-replication-requirements-considerations.md) relationships.
* [Standard network features - Edit volumes](configure-network-features.md#edit-network-features-option-for-existing-volumes) (Preview)
- Azure NetApp Files volumes have been supported with Standard network features since [October 2021](#october-2021), but only for newly created volumes. This new *edit volumes* capability lets you change *existing* volumes that were configured with Basic network features to use Standard network features. This capability provides an enhanced, more standard, Microsoft Azure Virtual Network experience through various security and connectivity features that are available on Virtual Networks to Azure services. When you edit existing volumes to use Standard network features, you can start taking advantage of networking capabilities, such as (but not limited to):
+ Azure NetApp Files volumes have been supported with Standard network features since [October 2021](#october-2021), but only for newly created volumes. This new *edit volumes* capability lets you change *existing* volumes that were configured with Basic network features to use Standard network features. This capability provides an enhanced, more standard, Microsoft Azure Virtual Network experience through various security and connectivity features that are available on Virtual Networks to Azure services. When you edit existing volumes to use Standard network features, you can start taking advantage of networking capabilities, such as (but not limited to):
* Increased number of client IPs in a virtual network (including immediately peered Virtual Networks) accessing Azure NetApp Files volumes - the [same as Azure VMs](azure-netapp-files-resource-limits.md#resource-limits)
* Enhanced network security with support for [network security groups](../virtual-network/network-security-groups-overview.md) on Azure NetApp Files delegated subnets
* Enhanced network control with support for [user-defined routes](../virtual-network/virtual-networks-udr-overview.md#user-defined) to and from Azure NetApp Files delegated subnets
* Connectivity over Active/Active VPN gateway setup
- * [ExpressRoute FastPath](../expressroute/about-fastpath.md) connectivity to Azure NetApp Files
+ * [ExpressRoute FastPath](../expressroute/about-fastpath.md) connectivity to Azure NetApp Files
This feature is now in public preview, currently available in [16 Azure regions](azure-netapp-files-network-topologies.md#regions-edit-network-features). It will roll out to other regions. Stay tuned for further information as more regions become available.

* [Azure Application Consistent Snapshot tool (AzAcSnap) 8 (GA)](azacsnap-introduction.md)
- Version 8 of the AzAcSnap tool is now generally available. [Azure Application Consistent Snapshot Tool](azacsnap-introduction.md) (AzAcSnap) is a command-line tool that enables you to simplify data protection for third-party databases in Linux environments. AzAcSnap 8 introduces the following new capabilities and improvements:
+ Version 8 of the AzAcSnap tool is now generally available. [Azure Application Consistent Snapshot Tool](azacsnap-introduction.md) (AzAcSnap) is a command-line tool that enables you to simplify data protection for third-party databases in Linux environments. AzAcSnap 8 introduces the following new capabilities and improvements:
- * Restore change - ability to revert volume for Azure NetApp Files
- * New global settings file (`.azacsnaprc`) to control behavior of `azacsnap`
- * Logging enhancements for failure cases and new "mainlog" for summarized monitoring
- * Backup (`-c backup`) and Details (`-c details`) fixes
+ * Restore change - ability to revert volume for Azure NetApp Files
+ * New global settings file (`.azacsnaprc`) to control behavior of `azacsnap`
+ * Logging enhancements for failure cases and new "mainlog" for summarized monitoring
+ * Backup (`-c backup`) and Details (`-c details`) fixes
- Download the latest release of the installer [here](https://aka.ms/azacsnapinstaller).
+ Download the latest release of the installer [here](https://aka.ms/azacsnapinstaller).
* [Single-file snapshot restore](snapshots-restore-file-single.md) is now generally available (GA)
* [Troubleshooting enhancement: break file locks](troubleshoot-file-locks.md)
- In some cases you may encounter (stale) file locks on NFS, SMB, or dual-protocol volumes that need to be cleared. With this new Azure NetApp Files feature, you can now break these locks. You can break file locks for all files in a volume or break all file locks initiated by a specified client.
+ In some cases you may encounter (stale) file locks on NFS, SMB, or dual-protocol volumes that need to be cleared. With this new Azure NetApp Files feature, you can now break these locks. You can break file locks for all files in a volume or break all file locks initiated by a specified client.
## April 2023
* [Disable `showmount`](disable-showmount.md) (Preview)
- By default, Azure NetApp Files enables [`showmount` functionality](/windows-server/administration/windows-commands/showmount) to show NFS exported paths. The setting allows NFS clients to use the `showmount -e` command to see a list of exports available on the Azure NetApp Files NFS-enabled storage endpoint. This functionality might cause security scanners to flag the Azure NetApp Files NFS service as having a vulnerability because these scanners often use `showmount` to see what is being returned. In those scenarios, you might want to disable `showmount` on Azure NetApp Files. This setting allows you to enable/disable `showmount` for your NFS-enabled storage endpoints.
+ By default, Azure NetApp Files enables [`showmount` functionality](/windows-server/administration/windows-commands/showmount) to show NFS exported paths. The setting allows NFS clients to use the `showmount -e` command to see a list of exports available on the Azure NetApp Files NFS-enabled storage endpoint. This functionality might cause security scanners to flag the Azure NetApp Files NFS service as having a vulnerability because these scanners often use `showmount` to see what is being returned. In those scenarios, you might want to disable `showmount` on Azure NetApp Files. This setting allows you to enable/disable `showmount` for your NFS-enabled storage endpoints.
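As a quick client-side check (assuming a Linux NFS client with the `showmount` utility installed and a hypothetical mount target IP), the sketch below lists the exports the endpoint advertises; once `showmount` is disabled on the service, the same query should no longer return an export list.

```python
# Sketch: query the export list of an Azure NetApp Files NFS endpoint from a client.
# The mount target IP is hypothetical; `showmount` must be installed on the client.
import subprocess

MOUNT_TARGET_IP = "10.0.0.4"  # hypothetical Azure NetApp Files mount target IP

result = subprocess.run(
    ["showmount", "-e", MOUNT_TARGET_IP],
    capture_output=True,
    text=True,
    check=False,
)
# With showmount enabled, stdout lists the exported paths; with it disabled,
# the command returns nothing useful (or an error), which is the expected result.
print(result.stdout or result.stderr)
```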
* [Active Directory support improvement](create-active-directory-connections.md#preferred-server-ldap) (Preview)
- The Preferred server for LDAP client option allows you to submit the IP addresses of up to two Active Directory (AD) servers as a comma-separated list. Rather than sequentially contacting all of the discovered AD services for a domain, the LDAP client will contact the specified servers first.
+ The Preferred server for LDAP client option allows you to submit the IP addresses of up to two Active Directory (AD) servers as a comma-separated list. Rather than sequentially contacting all of the discovered AD services for a domain, the LDAP client will contact the specified servers first.
## February 2023
* [Access-based enumeration](azure-netapp-files-create-volumes-smb.md#access-based-enumeration) (Preview)
- Access-based enumeration (ABE) displays only the files and folders that a user has permissions to access. If a user doesn't have Read (or equivalent) permissions for a folder, the Windows client hides the folder from the user's view. This new capability provides an additional layer of security by only displaying files and folders a user has access to, and as a result hiding file and folder information a user has no access to. You can now enable ABE on Azure NetApp Files [SMB](azure-netapp-files-create-volumes-smb.md#access-based-enumeration) and [dual-protocol](create-volumes-dual-protocol.md#access-based-enumeration) (with NTFS security style) volumes.
+ Access-based enumeration (ABE) displays only the files and folders that a user has permissions to access. If a user doesn't have Read (or equivalent) permissions for a folder, the Windows client hides the folder from the user's view. This new capability provides an additional layer of security by only displaying files and folders a user has access to, and as a result hiding file and folder information a user has no access to. You can now enable ABE on Azure NetApp Files [SMB](azure-netapp-files-create-volumes-smb.md#access-based-enumeration) and [dual-protocol](create-volumes-dual-protocol.md#access-based-enumeration) (with NTFS security style) volumes.
* [Non-browsable shares](azure-netapp-files-create-volumes-smb.md#non-browsable-share) (Preview)
- You can now configure Azure NetApp Files [SMB](azure-netapp-files-create-volumes-smb.md#non-browsable-share) or [dual-protocol](create-volumes-dual-protocol.md#non-browsable-share) volumes as non-browsable. This new feature prevents the Windows client from browsing the share, and the share doesn't show up in the Windows File Explorer. This new capability provides an additional layer of security by not displaying shares that are configured as non-browsable. Users who have access to the share will maintain access.
+ You can now configure Azure NetApp Files [SMB](azure-netapp-files-create-volumes-smb.md#non-browsable-share) or [dual-protocol](create-volumes-dual-protocol.md#non-browsable-share) volumes as non-browsable. This new feature prevents the Windows client from browsing the share, and the share doesn't show up in the Windows File Explorer. This new capability provides an additional layer of security by not displaying shares that are configured as non-browsable. Users who have access to the share will maintain access.
-* Option to **delete base snapshot** when you [restore a snapshot to a new volume using Azure NetApp Files](snapshots-restore-new-volume.md)
+* Option to **delete base snapshot** when you [restore a snapshot to a new volume using Azure NetApp Files](snapshots-restore-new-volume.md)
By default, the new volume includes a reference to the snapshot that was used for the restore operation, referred to as the *base snapshot*. If you don't want the new volume to contain this base snapshot, you can select the **Delete base snapshot** option during volume creation.
You no longer need to register the features before using them.
-* The `Vaults` API is deprecated starting with Azure NetApp Files REST API version 2022-09-01.
+* The `Vaults` API is deprecated starting with Azure NetApp Files REST API version 2022-09-01.
Enabling backup of volumes doesn't require the `Vaults` API. REST API users can use `PUT` and `PATCH` [Volumes API](/rest/api/netapp/volumes) to enable backup for a volume.
-
+ * [Volume user and group quotas](default-individual-user-group-quotas-introduction.md) (Preview)

Azure NetApp Files volumes provide flexible, large, and scalable storage shares for applications and users. Storage capacity and consumption by users is only limited by the size of the volume. In some scenarios, you may want to limit the storage consumption of users and groups within the volume. With Azure NetApp Files volume user and group quotas, you can now do so. User and/or group quotas enable you to restrict the storage space that a user or group can use within a specific Azure NetApp Files volume. You can choose to set default (same for all users) or individual user quotas on all NFS, SMB, and dual protocol-enabled volumes. On all NFS-enabled volumes, you can set default (same for all users) or individual group quotas.
* [Large volumes](large-volumes-requirements-considerations.md) (Preview)

Regular Azure NetApp Files volumes are limited to 100 TiB in size. Azure NetApp Files [large volumes](azure-netapp-files-understand-storage-hierarchy.md#large-volumes) break this barrier by enabling volumes of 100 TiB to 500 TiB in size. The large volumes capability enables various use cases and workloads that require large volumes with a single directory namespace.
-
+ * [Customer-managed keys](configure-customer-managed-keys.md) (Preview)
- Azure NetApp Files volumes now support encryption with customer-managed keys and Azure Key Vault to enable an extra layer of security for data at rest.
-
- Data encryption with customer-managed keys for Azure NetApp Files allows you to bring your own key for data encryption at rest. You can use this feature to implement separation of duties for managing keys and data. Additionally, you can centrally manage and organize keys using Azure Key Vault. With customer-managed encryption, you are in full control of, and responsible for, a key's lifecycle, key usage permissions, and auditing operations on keys.
-
+ Azure NetApp Files volumes now support encryption with customer-managed keys and Azure Key Vault to enable an extra layer of security for data at rest.
+
+ Data encryption with customer-managed keys for Azure NetApp Files allows you to bring your own key for data encryption at rest. You can use this feature to implement separation of duties for managing keys and data. Additionally, you can centrally manage and organize keys using Azure Key Vault. With customer-managed encryption, you are in full control of, and responsible for, a key's lifecycle, key usage permissions, and auditing operations on keys.
+ * [Capacity pool enhancement](azure-netapp-files-set-up-capacity-pool.md) (Preview)

Azure NetApp Files now supports a lower limit of 2 TiB for capacity pool sizing with Standard network features.
## December 2022
-* [Azure Application Consistent Snapshot tool (AzAcSnap) 7](azacsnap-introduction.md)
-
- Azure Application Consistent Snapshot Tool (AzAcSnap) is a command-line tool that enables customers to simplify data protection for third-party databases in Linux environments.
+* [Azure Application Consistent Snapshot tool (AzAcSnap) 7](azacsnap-introduction.md)
+
+ Azure Application Consistent Snapshot Tool (AzAcSnap) is a command-line tool that enables customers to simplify data protection for third-party databases in Linux environments.
- The AzAcSnap 7 release includes the following fixes and improvements:
+ The AzAcSnap 7 release includes the following fixes and improvements:
* Shortening of snapshot names
* Restore (`-c restore`) improvements
- * Test (`-c test`) improvements
- * Validation improvements
- * Timeout improvements
- * Azure Backup integration improvements
+ * Test (`-c test`) improvements
+ * Validation improvements
+ * Timeout improvements
+ * Azure Backup integration improvements
* Features moved to GA (generally available): None
- * The following features are now in preview:
+ * The following features are now in preview:
* Preliminary support for [Azure NetApp Files backup](backup-introduction.md)
* [IBM Db2 database](https://www.ibm.com/products/db2) support adding options to configure, test, and snapshot backup IBM Db2 in an application consistent manner
- Download the latest release of the installer [here](https://aka.ms/azacsnapinstaller).
+ Download the latest release of the installer [here](https://aka.ms/azacsnapinstaller).
* [Cross-zone replication](create-cross-zone-replication.md) (Preview)

With Azure's push towards the use of availability zones (AZs), the need for storage-based data replication is equally increasing. Azure NetApp Files now supports [cross-zone replication](cross-zone-replication-introduction.md). With this new in-region replication capability - by combining it with the new availability zone volume placement feature - you can replicate your Azure NetApp Files volumes asynchronously from one Azure availability zone to another in a fast and cost-effective way.
- Cross-zone replication helps you protect your data from unforeseeable zone failures without the need for host-based data replication. Cross-zone replication minimizes the amount of data required to replicate across the zones, therefore limiting data transfers required and also shortens the replication time, so you can achieve a smaller Restore Point Objective (RPO). Cross-zone replication doesn't involve any network transfer costs, hence it's highly cost-effective.
+ Cross-zone replication helps you protect your data from unforeseeable zone failures without the need for host-based data replication. Cross-zone replication minimizes the amount of data required to replicate across the zones, therefore limiting data transfers required and also shortens the replication time, so you can achieve a smaller Restore Point Objective (RPO). Cross-zone replication doesn't involve any network transfer costs, hence it's highly cost-effective.
The public preview of the feature is currently available in the following regions: Australia East, Brazil South, Canada Central, Central US, East Asia, East US, East US 2, France Central, Germany West Central, Japan East, North Europe, Norway East, Southeast Asia, South Central US, UK South, West Europe, West US 2, and West US 3.
-
+ In the future, cross-zone replication is planned for all [AZ-enabled regions](../availability-zones/az-overview.md#azure-regions-with-availability-zones) with [Azure NetApp Files presence](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=netapp&regions=all&rar=true).

* [Azure Virtual WAN](configure-virtual-wan.md) (Preview)
- [Azure Virtual WAN](../virtual-wan/virtual-wan-about.md) is now supported on Azure NetApp Files with Standard network features. Azure Virtual WAN is a spoke-and-hub architecture, enabling cloud-hosted network hub connectivity between endpoints, creating networking, security, and routing functionalities in one interface. Use cases for Azure Virtual WAN include remote user VPN connectivity (point-to-site), private connectivity (ExpressRoute), intra-cloud connectivity, and VPN ExpressRoute inter-connectivity.
+ [Azure Virtual WAN](../virtual-wan/virtual-wan-about.md) is now supported on Azure NetApp Files with Standard network features. Azure Virtual WAN is a spoke-and-hub architecture, enabling cloud-hosted network hub connectivity between endpoints, creating networking, security, and routing functionalities in one interface. Use cases for Azure Virtual WAN include remote user VPN connectivity (point-to-site), private connectivity (ExpressRoute), intra-cloud connectivity, and VPN ExpressRoute inter-connectivity.
-## November 2022
+## November 2022
-* [Azure NetApp Files datastores for Azure VMware Solution](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md) is now generally available (GA) with expanded regional coverage.
+* [Azure NetApp Files datastores for Azure VMware Solution](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md) is now generally available (GA) with expanded regional coverage.
* [Encrypted SMB connections to Domain Controller](create-active-directory-connections.md#encrypted-smb-dc) (Preview)
## October 2022
-* [Availability zone volume placement](manage-availability-zone-volume-placement.md) (Preview)
+* [Availability zone volume placement](manage-availability-zone-volume-placement.md) (Preview)
- Azure availability zones are highly available, fault tolerant, and more scalable than traditional single or multiple data center infrastructures. Using Azure availability zones lets you design and operate applications and databases that automatically transition between zones without interruption. Azure NetApp Files lets you deploy new volumes in the logical availability zone of your choice to support enterprise, mission-critical HA deployments across multiple AZs. Azure's push towards the use of [availability zones (AZs)](../availability-zones/az-overview.md#availability-zones) has increased, and the use of high availability (HA) deployments with availability zones is now a default and best practice recommendation in Azure's [Well-Architected Framework](/azure/architecture/framework/resiliency/design-best-practices#use-zone-aware-services).
+ Azure availability zones are highly available, fault tolerant, and more scalable than traditional single or multiple data center infrastructures. Using Azure availability zones lets you design and operate applications and databases that automatically transition between zones without interruption. Azure NetApp Files lets you deploy new volumes in the logical availability zone of your choice to support enterprise, mission-critical HA deployments across multiple AZs. Azure's push towards the use of [availability zones (AZs)](../availability-zones/az-overview.md#availability-zones) has increased, and the use of high availability (HA) deployments with availability zones is now a default and best practice recommendation in Azure's [Well-Architected Framework](/azure/architecture/framework/resiliency/design-best-practices#use-zone-aware-services).
* [Application volume group for SAP HANA](application-volume-group-introduction.md) now generally available (GA)
- The application volume group for SAP HANA feature is now generally available. You no longer need to register the feature to use it.
+ The application volume group for SAP HANA feature is now generally available. You no longer need to register the feature to use it.
## August 2022

* [Standard network features](configure-network-features.md) are now generally available [in supported regions](azure-netapp-files-network-topologies.md#supported-regions).
- Standard network features now include Global virtual network peering.
+ Standard network features now include Global virtual network peering.
Regular billing for Standard network features on Azure NetApp Files began November 1, 2022.
-
+ ## July 2022

* [Azure Application Consistent Snapshot Tool (AzAcSnap) 6](azacsnap-release-notes.md)
-
+ [Azure Application Consistent Snapshot Tool](azacsnap-introduction.md) (AzAcSnap) is a command-line tool that enables customers to simplify data protection for third-party databases (SAP HANA) in Linux environments. With AzAcSnap 6, there's a new [release model](azacsnap-release-notes.md). AzAcSnap 6 also introduces the following new capabilities:

Now generally available:
* Backint integration to work with Azure Backup
* [RunBefore and RunAfter](azacsnap-cmd-ref-runbefore-runafter.md) CLI options to execute custom shell scripts and commands before or after taking storage snapshots
- In preview:
+ In preview:
* Azure Key Vault to store Service Principal content
* Azure Managed Disk as an alternate storage back end
[Azure NetApp Files datastores for Azure VMware Solution](https://azure.microsoft.com/blog/power-your-file-storageintensive-workloads-with-azure-vmware-solution) is now in public preview. This new integration between Azure VMware Solution and Azure NetApp Files enables you to [create datastores via the Azure VMware Solution resource provider with Azure NetApp Files NFS volumes](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md) and mount the datastores on your private cloud clusters of choice. Along with the integration of Azure disk pools for Azure VMware Solution, this capability provides more choice to scale storage needs independently of compute resources. For your storage-intensive workloads running on Azure VMware Solution, the integration with Azure NetApp Files helps to easily scale storage capacity beyond the limits of the local instance storage for Azure VMware Solution provided by vSAN and lower your overall total cost of ownership for storage-intensive workloads.
-* [Azure Policy built-in definitions for Azure NetApp Files](azure-policy-definitions.md#built-in-policy-definitions)
+* [Azure Policy built-in definitions for Azure NetApp Files](azure-policy-definitions.md#built-in-policy-definitions)
- Azure Policy helps to enforce organizational standards and assess compliance at scale. Through its compliance dashboard, it provides an aggregated view to evaluate the overall state of the environment, with the ability to drill down to the per-resource, per-policy granularity. It also helps to bring your resources to compliance through bulk remediation for existing resources and automatic remediation for new resources. Azure NetApp Files already supports Azure Policy via custom policy definitions. Azure NetApp Files now also provides built-in policy to enable organization admins to restrict creation of unsecure NFS volumes or audit existing volumes more easily.
+ Azure Policy helps to enforce organizational standards and assess compliance at scale. Through its compliance dashboard, it provides an aggregated view to evaluate the overall state of the environment, with the ability to drill down to the per-resource, per-policy granularity. It also helps to bring your resources to compliance through bulk remediation for existing resources and automatic remediation for new resources. Azure NetApp Files already supports Azure Policy via custom policy definitions. Azure NetApp Files now also provides built-in policy to enable organization admins to restrict creation of unsecure NFS volumes or audit existing volumes more easily.
## May 2022
-* [LDAP signing](create-active-directory-connections.md#ldap-signing) now generally available (GA)
+* [LDAP signing](create-active-directory-connections.md#ldap-signing) now generally available (GA)
The LDAP signing feature is now generally available. You no longer need to register the feature before using it.
## April 2022
-* Features that are now generally available (GA)
+* Features that are now generally available (GA)
- The following features are now GA. You no longer need to register the features before using them.
+ The following features are now GA. You no longer need to register the features before using them.
* [Dynamic change of service level](dynamic-change-volume-service-level.md)
- * [Administrators privilege users](create-active-directory-connections.md#administrators-privilege-users)
+ * [Administrators privilege users](create-active-directory-connections.md#administrators-privilege-users)
## March 2022
-* Features that are now generally available (GA)
+* Features that are now generally available (GA)
- The following features are now GA. You no longer need to register the features before using them.
- * [Backup policy users](create-active-directory-connections.md#backup-policy-users)
+ The following features are now GA. You no longer need to register the features before using them.
+ * [Backup policy users](create-active-directory-connections.md#backup-policy-users)
* [AES encryption for AD authentication](create-active-directory-connections.md#aes-encryption)

## January 2022

* [Azure Application Consistent Snapshot Tool (AzAcSnap) v5.1 Public Preview](azacsnap-release-notes.md)
- [Azure Application Consistent Snapshot Tool](azacsnap-introduction.md) (AzAcSnap) is a command-line tool that enables customers to simplify data protection for third-party databases (SAP HANA) in Linux environments (for example, `SUSE` and `RHEL`).
-
- The public preview of v5.1 brings the following new capabilities to AzAcSnap:
+ [Azure Application Consistent Snapshot Tool](azacsnap-introduction.md) (AzAcSnap) is a command-line tool that enables customers to simplify data protection for third-party databases (SAP HANA) in Linux environments (for example, `SUSE` and `RHEL`).
+
+ The public preview of v5.1 brings the following new capabilities to AzAcSnap:
* Oracle Database support
* Backint Co-existence
* Azure Managed Disk
- * RunBefore and RunAfter capability
+ * RunBefore and RunAfter capability
* [LDAP search scope](configure-ldap-extended-groups.md#ldap-search-scope)
- You might be using the Unix security style with a dual-protocol volume or Lightweight Directory Access Protocol (LDAP) with extended groups features in combination with large LDAP topologies. In this case, you might encounter "access denied" errors on Linux clients when interacting with such Azure NetApp Files volumes. You can now use the **LDAP Search Scope** option to specify the LDAP search scope to avoid "access denied" errors.
+ You might be using the Unix security style with a dual-protocol volume or Lightweight Directory Access Protocol (LDAP) with extended groups features in combination with large LDAP topologies. In this case, you might encounter "access denied" errors on Linux clients when interacting with such Azure NetApp Files volumes. You can now use the **LDAP Search Scope** option to specify the LDAP search scope to avoid "access denied" errors.
* [Active Directory Domain Services (AD DS) LDAP user-mapping with NFS extended groups](configure-ldap-extended-groups.md) now generally available (GA)
## December 2021
-* [NFS protocol version conversion](convert-nfsv3-nfsv41.md) (Preview)
+* [NFS protocol version conversion](convert-nfsv3-nfsv41.md) (Preview)
- In some cases, you might need to transition from one NFS protocol version to another. For example, when you want an existing NFSv3 volume to take advantage of NFSv4.1 features, you might want to convert the protocol version from NFSv3 to NFSv4.1. Likewise, you might want to convert an existing NFSv4.1 volume to NFSv3 for performance or simplicity reasons. Azure NetApp Files now provides an option that enables you to convert an NFS volume between NFSv3 and NFSv4.1. This option doesn't require creating new volumes or performing data copies. The conversion operations preserve the data and update the volume export policies as part of the operation.
+ In some cases, you might need to transition from one NFS protocol version to another. For example, when you want an existing NFSv3 volume to take advantage of NFSv4.1 features, you might want to convert the protocol version from NFSv3 to NFSv4.1. Likewise, you might want to convert an existing NFSv4.1 volume to NFSv3 for performance or simplicity reasons. Azure NetApp Files now provides an option that enables you to convert an NFS volume between NFSv3 and NFSv4.1. This option doesn't require creating new volumes or performing data copies. The conversion operations preserve the data and update the volume export policies as part of the operation.
* [Single-file snapshot restore](snapshots-restore-file-single.md) (Preview)

Azure NetApp Files provides ways to quickly restore data from snapshots (mainly at the volume level). See [How Azure NetApp Files snapshots work](snapshots-introduction.md). Options for user file self-restore are available via client-side data copy from the `~snapshot` (Windows) or `.snapshot` (Linux) folders. These operations require data (files and directories) to traverse the network twice (upon read and write). As such, the operations aren't time and resource efficient, especially with large data sets. If you don't want to restore the entire snapshot to a new volume, revert a volume, or copy large files across the network, you can use the single-file snapshot restore feature to restore individual files directly on the service from a volume snapshot without requiring data copy via an external client. This approach drastically reduces RTO and network resource usage when restoring large files.
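For contrast, here is a minimal sketch of the client-side self-restore path described above, as seen from a Linux NFS client; the mount point, snapshot name, and file path are hypothetical. This is the copy-over-the-network approach that single-file snapshot restore avoids.

```python
# Sketch: client-side self-restore of a single file from the online .snapshot folder.
# Paths and snapshot name are hypothetical; data traverses the network twice (read + write).
import shutil

mount_point = "/mnt/anfvol"            # hypothetical NFS mount of the volume
snapshot = "daily-2021-12-01"          # hypothetical snapshot name
source = f"{mount_point}/.snapshot/{snapshot}/reports/q3.csv"
destination = f"{mount_point}/reports/q3.csv"

shutil.copy2(source, destination)      # copies data through the client
print(f"Restored {source} -> {destination}")
```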
-* Features that are now generally available (GA)
+* Features that are now generally available (GA)
The following features are now GA. You no longer need to register the features before using them.
* [Application volume group for SAP HANA](application-volume-group-introduction.md) (Preview)
- Application volume group (AVG) for SAP HANA enables you to deploy all volumes required to install and operate an SAP HANA database according to best practices, including the use of proximity placement group (PPG) with VMs to achieve automated, low-latency deployments. AVG for SAP HANA has implemented many technical improvements that simplify and standardize the entire process to help you streamline volume deployments for SAP HANA.
-
+ Application volume group (AVG) for SAP HANA enables you to deploy all volumes required to install and operate an SAP HANA database according to best practices, including the use of proximity placement group (PPG) with VMs to achieve automated, low-latency deployments. AVG for SAP HANA has implemented many technical improvements that simplify and standardize the entire process to help you streamline volume deployments for SAP HANA.
+ ## October 2021

* [Azure NetApp Files cross-region replication](cross-region-replication-introduction.md) now generally available (GA)
* [Standard network features](configure-network-features.md) (Preview)

Azure NetApp Files now supports **Standard** network features for volumes, a capability that customers have been asking for since the service's inception. This capability is a result of innovative hardware and software integration. Standard network features provide an enhanced virtual networking experience through various features, offering a seamless and consistent experience and security posture for all your workloads, including Azure NetApp Files.
-
- You can now choose *Standard* or *Basic* network features when creating a new Azure NetApp Files volume. Upon choosing Standard network features, you can take advantage of the following supported features for Azure NetApp Files volumes and delegated subnets:
+
+ You can now choose *Standard* or *Basic* network features when creating a new Azure NetApp Files volume. Upon choosing Standard network features, you can take advantage of the following supported features for Azure NetApp Files volumes and delegated subnets:
* Increased IP limits for the virtual networks with Azure NetApp Files volumes at par with VMs
* Enhanced network security with support for [network security groups](../virtual-network/network-security-groups-overview.md) on the Azure NetApp Files delegated subnet
* Enhanced network control with support for [user-defined routes](../virtual-network/virtual-networks-udr-overview.md#custom-routes) to and from Azure NetApp Files delegated subnets
* Connectivity over Active/Active VPN gateway setup
* [ExpressRoute FastPath](../expressroute/about-fastpath.md) connectivity to Azure NetApp Files
- This public preview is currently available starting with **North Central US** and will roll out to other regions. Stay tuned for further information through [Azure Update](https://azure.microsoft.com/updates/) as more regions and features become available.
-
+ This public preview is currently available starting with **North Central US** and will roll out to other regions. Stay tuned for further information through [Azure Update](https://azure.microsoft.com/updates/) as more regions and features become available.
+ To learn more, see [Configure network features for an Azure NetApp Files volume](configure-network-features.md).

## September 2021

* [Azure NetApp Files backup](backup-introduction.md) (Preview)
- Azure NetApp Files online snapshots now support backup of snapshots. With this new backup capability, you can vault your Azure NetApp Files snapshots to cost-efficient and ZRS-enabled Azure storage in a fast and cost-effective way. This approach further protects your data from accidental deletion.
+ Azure NetApp Files online snapshots now support backup of snapshots. With this new backup capability, you can vault your Azure NetApp Files snapshots to cost-efficient and ZRS-enabled Azure storage in a fast and cost-effective way. This approach further protects your data from accidental deletion.
- Azure NetApp Files backup extends ONTAP's built-in snapshot technology. When snapshots are vaulted to Azure storage, only changed blocks relative to previously vaulted snapshots are copied and stored, in an efficient format. Vaulted snapshots are still represented in full. You can restore them to a new volume individually and directly, eliminating the need for an iterative, full-incremental recovery process. This advanced technology minimizes the amount of data required to store to and retrieve from Azure storage, therefore saving data transfer and storage costs. It also shortens the backup vaulting time, so you can achieve a smaller Restore Point Objective (RPO). You can keep a minimum number of snapshots online on the Azure NetApp Files service for the most immediate, near-instantaneous data-recovery needs. In doing so, you can build up a longer history of snapshots at a lower cost for long-term retention in the Azure NetApp Files backup vault.
+ Azure NetApp Files backup extends ONTAP's built-in snapshot technology. When snapshots are vaulted to Azure storage, only changed blocks relative to previously vaulted snapshots are copied and stored, in an efficient format. Vaulted snapshots are still represented in full. You can restore them to a new volume individually and directly, eliminating the need for an iterative, full-incremental recovery process. This advanced technology minimizes the amount of data required to store to and retrieve from Azure storage, therefore saving data transfer and storage costs. It also shortens the backup vaulting time, so you can achieve a smaller Restore Point Objective (RPO). You can keep a minimum number of snapshots online on the Azure NetApp Files service for the most immediate, near-instantaneous data-recovery needs. In doing so, you can build up a longer history of snapshots at a lower cost for long-term retention in the Azure NetApp Files backup vault.
For more information, see [How Azure NetApp Files snapshots work](snapshots-introduction.md).
You can already enable the SMB Continuous Availability (CA) feature when you [create a new SMB volume](azure-netapp-files-create-volumes-smb.md#continuous-availability). You can now also enable SMB CA on an existing SMB volume. See [Enable Continuous Availability on existing SMB volumes](enable-continuous-availability-existing-SMB.md).
-* [Snapshot policy](snapshots-manage-policy.md) now generally available (GA)
+* [Snapshot policy](snapshots-manage-policy.md) now generally available (GA)
The snapshot policy feature is now generally available. You no longer need to register the feature before using it.
-* [NFS `Chown Mode` export policy and UNIX export permissions](configure-unix-permissions-change-ownership-mode.md) (Preview)
+* [NFS `Chown Mode` export policy and UNIX export permissions](configure-unix-permissions-change-ownership-mode.md) (Preview)
- You can now set the Unix permissions and the change ownership mode (`Chown Mode`) options on Azure NetApp Files NFS volumes or dual-protocol volumes with the Unix security style. You can specify these settings during volume creation or after volume creation.
+ You can now set the Unix permissions and the change ownership mode (`Chown Mode`) options on Azure NetApp Files NFS volumes or dual-protocol volumes with the Unix security style. You can specify these settings during volume creation or after volume creation.
- The change ownership mode (`Chown Mode`) functionality enables you to set the ownership management capabilities of files and directories. You can specify or modify the setting under a volume's export policy. Two options for `Chown Mode` are available:
- * *Restricted* (default), where only the root user can change the ownership of files and directories
- * *Unrestricted*, where non-root users can change the ownership for files and directories that they own
+ The change ownership mode (`Chown Mode`) functionality enables you to set the ownership management capabilities of files and directories. You can specify or modify the setting under a volume's export policy. Two options for `Chown Mode` are available:
+ * *Restricted* (default), where only the root user can change the ownership of files and directories
+ * *Unrestricted*, where non-root users can change the ownership for files and directories that they own
- The Azure NetApp Files Unix Permissions functionality enables you to specify change permissions for the mount path.
+ The Azure NetApp Files Unix Permissions functionality enables you to specify change permissions for the mount path.
- These new features put access control of certain files and directories in the hands of the data user instead of the service operator.
+ These new features put access control of certain files and directories in the hands of the data user instead of the service operator.
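To illustrate what the `Chown Mode` setting means from a Linux NFS client, here is a small sketch; the file path and owner IDs are hypothetical, and the script is assumed to run as a non-root user against a file that user owns.

```python
# Sketch: observe the effect of the volume's Chown Mode export-policy setting.
# Path and IDs are hypothetical; run as a non-root user on a file that user owns.
import os

path = "/mnt/anfvol/data/report.txt"   # hypothetical file on the NFS volume
new_uid, new_gid = 1002, 1002          # hypothetical new owner and group

try:
    os.chown(path, new_uid, new_gid)
    print("chown succeeded: the export policy likely uses Unrestricted Chown Mode")
except PermissionError:
    print("chown denied (EPERM): the export policy likely uses the default Restricted mode")
```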
-* [Dual-protocol (NFSv4.1 and SMB) volume](create-volumes-dual-protocol.md) (Preview)
+* [Dual-protocol (NFSv4.1 and SMB) volume](create-volumes-dual-protocol.md) (Preview)
Azure NetApp Files already supports dual-protocol access to NFSv3 and SMB volumes as of [July 2020](#july-2020). You can now create an Azure NetApp Files volume that allows simultaneous dual-protocol (NFSv4.1 and SMB) access with support for LDAP user mapping. This feature enables use cases where you might have a Linux-based workload using NFSv4.1 for its access, and the workload generates and stores data in an Azure NetApp Files volume. At the same time, your staff might need to use Windows-based clients and software to analyze the newly generated data from the same Azure NetApp Files volume. The simultaneous dual-protocol access removes the need to copy the workload-generated data to a separate volume with a different protocol for post-analysis, saving storage cost and operational time. This feature is free of charge (normal Azure NetApp Files storage cost still applies) and is generally available. Learn more from the [simultaneous dual-protocol NFSv4.1/SMB access](create-volumes-dual-protocol.md) documentation.
-## June 2021
+## June 2021
* [Azure NetApp Files storage service add-ons](storage-service-add-ons.md)
- The new Azure NetApp Files **Storage service add-ons** menu option provides an Azure portal "launching pad" for available third-party, ecosystem add-ons to the Azure NetApp Files storage service. With this new portal menu option, you can enter a landing page by selecting an add-on tile to quickly access the add-on.
+ The new Azure NetApp Files **Storage service add-ons** menu option provides an Azure portal "launching pad" for available third-party, ecosystem add-ons to the Azure NetApp Files storage service. With this new portal menu option, you can enter a landing page by selecting an add-on tile to quickly access the add-on.
- **NetApp add-ons** is the first category of add-ons introduced under **Storage service add-ons**. It provides access to NetApp Cloud Data Sense. Selecting the **Cloud Data Sense** tile opens a new browser and directs you to the add-on installation page.
+ **NetApp add-ons** is the first category of add-ons introduced under **Storage service add-ons**. It provides access to NetApp Cloud Data Sense. Selecting the **Cloud Data Sense** tile opens a new browser and directs you to the add-on installation page.
-* [Manual QoS capacity pool](azure-netapp-files-understand-storage-hierarchy.md#manual-qos-type) now generally available (GA)
+* [Manual QoS capacity pool](azure-netapp-files-understand-storage-hierarchy.md#manual-qos-type) now generally available (GA)
- The Manual QoS capacity pool feature is now generally available. You no longer need to register the feature before using it.
+ The Manual QoS capacity pool feature is now generally available. You no longer need to register the feature before using it.
-* [Shared AD support for multiple accounts to one Active Directory per region per subscription](create-active-directory-connections.md#shared_ad) (Preview)
+* [Shared AD support for multiple accounts to one Active Directory per region per subscription](create-active-directory-connections.md#shared_ad) (Preview)
To date, Azure NetApp Files supports only a single Active Directory (AD) per region, where only a single NetApp account can be configured to access the AD. The new **Shared AD** feature enables all NetApp accounts to share an AD connection created by one of the NetApp accounts that belong to the same subscription and the same region. For example, all NetApp accounts in the same subscription and region can use the common AD configuration to create an SMB volume, an NFSv4.1 Kerberos volume, or a dual-protocol volume. When you use this feature, the AD connection is visible in all NetApp accounts that are under the same subscription and same region.
-## May 2021
+## May 2021
-* Azure NetApp Files Application Consistent Snapshot tool [(AzAcSnap)](azacsnap-introduction.md) is now generally available.
+* Azure NetApp Files Application Consistent Snapshot tool [(AzAcSnap)](azacsnap-introduction.md) is now generally available.
- AzAcSnap is a command-line tool that enables you to simplify data protection for third-party databases (SAP HANA) in Linux environments (for example, `SUSE` and `RHEL`). See [Release Notes for AzAcSnap](azacsnap-release-notes.md) for the latest changes about the tool.
+ AzAcSnap is a command-line tool that enables you to simplify data protection for third-party databases (SAP HANA) in Linux environments (for example, `SUSE` and `RHEL`). See [Release Notes for AzAcSnap](azacsnap-release-notes.md) for the latest changes about the tool.
-* [Support for capacity pool billing tags](manage-billing-tags.md)
+* [Support for capacity pool billing tags](manage-billing-tags.md)
Azure NetApp Files now supports billing tags to help you cross-reference cost with business units or other internal consumers. Billing tags are assigned at the capacity pool level and not volume level, and they appear on the customer invoice.
-* [Enable Active Directory Domain Services (AD DS) LDAP authentication for NFS volumes](configure-ldap-over-tls.md) (Preview)
+* [Enable Active Directory Domain Services (AD DS) LDAP authentication for NFS volumes](configure-ldap-over-tls.md) (Preview)
- By default, LDAP communications between client and server applications aren't encrypted. This setting means that it's possible to use a network-monitoring device or software to view the communications between an LDAP client and server computers. This scenario might be problematic in non-isolated or shared virtual networks when an LDAP simple bind is used, because the credentials (username and password) used to bind the LDAP client to the LDAP server are passed over the network unencrypted. LDAP over TLS (also known as LDAPS) is a protocol that uses TLS to secure communication between LDAP clients and LDAP servers. Azure NetApp Files now supports secure communication with Active Directory Domain Services (AD DS) by using LDAP over TLS. Azure NetApp Files can now use LDAP over TLS to set up authenticated sessions with the Active Directory-integrated LDAP servers. You can enable the LDAP over TLS feature for NFS, SMB, and dual-protocol volumes. By default, LDAP over TLS is disabled on Azure NetApp Files.
+ By default, LDAP communications between client and server applications aren't encrypted. This setting means that it's possible to use a network-monitoring device or software to view the communications between an LDAP client and server computers. This scenario might be problematic in non-isolated or shared virtual networks when an LDAP simple bind is used, because the credentials (username and password) used to bind the LDAP client to the LDAP server are passed over the network unencrypted. LDAP over TLS (also known as LDAPS) is a protocol that uses TLS to secure communication between LDAP clients and LDAP servers. Azure NetApp Files now supports secure communication with Active Directory Domain Services (AD DS) by using LDAP over TLS. Azure NetApp Files can now use LDAP over TLS to set up authenticated sessions with the Active Directory-integrated LDAP servers. You can enable the LDAP over TLS feature for NFS, SMB, and dual-protocol volumes. By default, LDAP over TLS is disabled on Azure NetApp Files.
-* Support for throughput [metrics](azure-netapp-files-metrics.md)
+* Support for throughput [metrics](azure-netapp-files-metrics.md)
- Azure NetApp Files adds support for the following metrics:
+ Azure NetApp Files adds support for the following metrics:
* Capacity pool throughput metrics
  * *Pool Allocated to Volume Throughput*
  * *Pool Consumed Throughput*
Azure NetApp Files is updated regularly. This article provides a summary about t
  * *Volume Consumed Throughput*
  * *Percentage Volume Consumed Throughput*
-* Support for [dynamic change of service level](dynamic-change-volume-service-level.md) of replication volumes
+* Support for [dynamic change of service level](dynamic-change-volume-service-level.md) of replication volumes
 Azure NetApp Files now supports dynamically changing the service level of replication source and destination volumes.

## April 2021
-* [Manual volume and capacity pool management](volume-quota-introduction.md) (hard quota)
+* [Manual volume and capacity pool management](volume-quota-introduction.md) (hard quota)
 The behavior of Azure NetApp Files volume and capacity pool provisioning has changed to a manual and controllable mechanism. The storage capacity of a volume is limited to the set size (quota) of the volume. When volume consumption maxes out, neither the volume nor the underlying capacity pool grows automatically. Instead, the volume will receive an "out of space" condition. However, you can [resize the capacity pool or a volume](azure-netapp-files-resize-capacity-pools-or-volumes.md) as needed; see the CLI sketch after this list. You should actively [monitor the capacity of a volume](monitor-volume-capacity.md) and the underlying capacity pool. This behavior change is a result of the following key requests indicated by many users:
- * Previously, VM clients would see the thinly provisioned (100 TiB) capacity of any given volume when using OS space or capacity monitoring tools. This situation could result in inaccurate capacity visibility on the client or application side. This behavior has been corrected.
- * The previous auto-grow behavior of capacity pools gave application owners no control over the provisioned capacity pool space (and the associated cost). This behavior was especially cumbersome in environments where "run-away processes" could rapidly fill up and grow the provisioned capacity. This behavior has been corrected.
+ * Previously, VM clients would see the thinly provisioned (100 TiB) capacity of any given volume when using OS space or capacity monitoring tools. This situation could result in inaccurate capacity visibility on the client or application side. This behavior has been corrected.
+ * The previous auto-grow behavior of capacity pools gave application owners no control over the provisioned capacity pool space (and the associated cost). This behavior was especially cumbersome in environments where "run-away processes" could rapidly fill up and grow the provisioned capacity. This behavior has been corrected.
* Users want to see and maintain a direct correlation between volume size (quota) and performance. The previous behavior allowed for (implicit) over-subscription of a volume (capacity) and capacity pool auto-grow. As such, users couldn't make a direct correlation until the volume quota had been actively set or reset. This behavior has now been corrected. Users have requested direct control over provisioned capacity. Users want to control and balance storage capacity and utilization. They also want to control cost along with the application-side and client-side visibility of available, used, and provisioned capacity and the performance of their application volumes. With this new behavior, all this capability has now been enabled.
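
 For reference, the resize mentioned above can be scripted. The following is a minimal Azure CLI sketch with illustrative resource names; confirm the exact size and quota units against the current `az netappfiles` reference before using it.

```bash
# Hypothetical resource names; adjust to your environment.
# Grow the capacity pool (pool size is typically expressed in TiB in recent CLI versions).
az netappfiles pool update \
  --resource-group myResourceGroup \
  --account-name myNetAppAccount \
  --name myCapacityPool \
  --size 8

# Raise the volume quota; --usage-threshold sets the volume size (quota).
az netappfiles volume update \
  --resource-group myResourceGroup \
  --account-name myNetAppAccount \
  --pool-name myCapacityPool \
  --name myVolume \
  --usage-threshold 200
```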
-* [SMB Continuous Availability (CA) shares support for FSLogix user profile containers](azure-netapp-files-create-volumes-smb.md#continuous-availability) (Preview)
+* [SMB Continuous Availability (CA) shares support for FSLogix user profile containers](azure-netapp-files-create-volumes-smb.md#continuous-availability) (Preview)
- [FSLogix](/fslogix/overview) is a set of solutions that enhance, enable, and simplify non-persistent Windows computing environments. FSLogix solutions are appropriate for virtual environments in both public and private clouds. You can also use FSLogix solutions to create more portable computing sessions when you use physical devices. FSLogix can provide dynamic access to persistent user profile containers stored on SMB shared networked storage, including Azure NetApp Files. To enhance FSLogix resiliency to events of storage service maintenance, Azure NetApp Files has extended support for SMB Transparent Failover via [SMB Continuous Availability (CA) shares on Azure NetApp Files](azure-netapp-files-create-volumes-smb.md#continuous-availability) for user profile containers. For more information, see Azure NetApp Files [Azure Virtual Desktop solutions](azure-netapp-files-solution-architectures.md#windows-virtual-desktop).
+ [FSLogix](/fslogix/overview) is a set of solutions that enhance, enable, and simplify non-persistent Windows computing environments. FSLogix solutions are appropriate for virtual environments in both public and private clouds. You can also use FSLogix solutions to create more portable computing sessions when you use physical devices. FSLogix can provide dynamic access to persistent user profile containers stored on SMB shared networked storage, including Azure NetApp Files. To enhance FSLogix resiliency to events of storage service maintenance, Azure NetApp Files has extended support for SMB Transparent Failover via [SMB Continuous Availability (CA) shares on Azure NetApp Files](azure-netapp-files-create-volumes-smb.md#continuous-availability) for user profile containers. For more information, see Azure NetApp Files [Azure Virtual Desktop solutions](azure-netapp-files-solution-architectures.md#windows-virtual-desktop).
-* [SMB3 Protocol Encryption](azure-netapp-files-create-volumes-smb.md#smb3-encryption) (Preview)
+* [SMB3 Protocol Encryption](azure-netapp-files-create-volumes-smb.md#smb3-encryption) (Preview)
You can now enable SMB3 Protocol Encryption on Azure NetApp Files SMB and dual-protocol volumes. This feature enables encryption for in-flight SMB3 data, using the [AES-CCM algorithm on SMB 3.0, and the AES-GCM algorithm on SMB 3.1.1](/windows-server/storage/file-server/file-server-smb-overview#features-added-in-smb-311-with-windows-server-2016-and-windows-10-version-1607) connections. SMB clients not using SMB3 encryption can't access this volume. Data at rest is encrypted regardless of this setting. SMB encryption further enhances security. However, it might affect the client (CPU overhead for encrypting and decrypting messages). It might also affect storage resource utilization (reductions in throughput). You should test the encryption performance impact against your applications before deploying workloads into production.
-* [Active Directory Domain Services (AD DS) LDAP user-mapping with NFS extended groups](configure-ldap-extended-groups.md) (Preview)
+* [Active Directory Domain Services (AD DS) LDAP user-mapping with NFS extended groups](configure-ldap-extended-groups.md) (Preview)
- By default, Azure NetApp Files supports up to 16 group IDs when handling NFS user credentials, as defined in [RFC 5531](https://tools.ietf.org/html/rfc5531). With this new capability, you can now increase the maximum up to 1,024 if you have users who are members of more than the default number of groups. To support this capability, NFS volumes can now also be added to AD DS LDAP, which enables Active Directory LDAP users with extended groups entries (with up to 1024 groups) to access the volume.
+ By default, Azure NetApp Files supports up to 16 group IDs when handling NFS user credentials, as defined in [RFC 5531](https://tools.ietf.org/html/rfc5531). With this new capability, you can now increase the maximum up to 1,024 if you have users who are members of more than the default number of groups. To support this capability, NFS volumes can now also be added to AD DS LDAP, which enables Active Directory LDAP users with extended groups entries (with up to 1024 groups) to access the volume.
## March 2021
-
-* [SMB Continuous Availability (CA) shares](azure-netapp-files-create-volumes-smb.md#add-an-smb-volume) (Preview)
+
+* [SMB Continuous Availability (CA) shares](azure-netapp-files-create-volumes-smb.md#add-an-smb-volume) (Preview)
SMB Transparent Failover enables maintenance operations on the Azure NetApp Files service without interrupting connectivity to server applications storing and accessing data on SMB volumes. To support SMB Transparent Failover, Azure NetApp Files now supports the SMB Continuous Availability shares option for use with SQL Server applications over SMB running on Azure VMs. This feature is currently supported on Windows SQL Server. Azure NetApp Files doesn't currently support Linux SQL Server. This feature provides significant performance improvements for SQL Server. It also provides scale and cost benefits for [Single Instance, Always-On Failover Cluster Instance and Always-On Availability Group deployments](azure-netapp-files-solution-architectures.md#sql-server). See [Benefits of using Azure NetApp Files for SQL Server deployment](solutions-benefits-azure-netapp-files-sql-server.md).
Azure NetApp Files is updated regularly. This article provides a summary about t
## December 2020
-* [Azure Application Consistent Snapshot Tool](azacsnap-introduction.md) (Preview)
+* [Azure Application Consistent Snapshot Tool](azacsnap-introduction.md) (Preview)
- Azure Application Consistent Snapshot Tool (AzAcSnap) is a command-line tool that enables you to simplify data protection for third-party databases (SAP HANA) in Linux environments (for example, `SUSE` and `RHEL`).
+ Azure Application Consistent Snapshot Tool (AzAcSnap) is a command-line tool that enables you to simplify data protection for third-party databases (SAP HANA) in Linux environments (for example, `SUSE` and `RHEL`).
AzAcSnap uses the volume snapshot and replication functionalities in Azure NetApp Files and Azure Large Instance. It provides the following benefits:
- * Application-consistent data protection
- * Database catalog management
- * *Ad hoc* volume protection
- * Cloning of storage volumes
- * Support for disaster recovery
+ * Application-consistent data protection
+ * Database catalog management
+ * *Ad hoc* volume protection
+ * Cloning of storage volumes
+ * Support for disaster recovery
## November 2020
Azure NetApp Files is updated regularly. This article provides a summary about t
Azure NetApp Files now supports cross-region replication. With this new disaster recovery capability, you can replicate your Azure NetApp Files volumes from one Azure region to another in a fast and cost-effective way. It helps you protect your data from unforeseeable regional failures. Azure NetApp Files cross-region replication uses NetApp SnapMirror® technology; only changed blocks are sent over the network in a compressed, efficient format. This proprietary technology minimizes the amount of data required to replicate across the regions, therefore saving data transfer costs. It also shortens the replication time, so you can achieve a smaller Restore Point Objective (RPO).
-* [Manual QoS Capacity Pool](azure-netapp-files-understand-storage-hierarchy.md#manual-qos-type) (Preview)
+* [Manual QoS Capacity Pool](azure-netapp-files-understand-storage-hierarchy.md#manual-qos-type) (Preview)
 In a manual QoS capacity pool, you can assign the capacity and throughput for a volume independently. The total throughput of all volumes created with a manual QoS capacity pool is limited by the total throughput of the pool. It's determined by the combination of the pool size and the service-level throughput. Alternatively, a capacity pool's [QoS type](azure-netapp-files-understand-storage-hierarchy.md#qos_types) can be auto (automatic), which is the default. In an auto QoS capacity pool, throughput is assigned automatically to the volumes in the pool, proportional to the size quota assigned to the volumes.
-* [LDAP signing](create-active-directory-connections.md#create-an-active-directory-connection) (Preview)
+* [LDAP signing](create-active-directory-connections.md#create-an-active-directory-connection) (Preview)
 Azure NetApp Files now supports LDAP signing for secure LDAP lookups between the Azure NetApp Files service and the user-specified Active Directory Domain Services domain controllers. This feature is currently in preview.

* [AES encryption for AD authentication](create-active-directory-connections.md#create-an-active-directory-connection) (Preview)
- Azure NetApp Files now supports AES encryption on LDAP connection to DC to enable AES encryption for an SMB volume. This feature is currently in preview.
+ Azure NetApp Files now supports AES encryption on LDAP connection to DC to enable AES encryption for an SMB volume. This feature is currently in preview.
-* New [metrics](azure-netapp-files-metrics.md):
+* New [metrics](azure-netapp-files-metrics.md):
- * New volume metrics:
+ * New volume metrics:
* *Volume allocated size*: The provisioned size of a volume
- * New pool metrics:
- * *Pool Allocated size*: The provisioned size of the pool
+ * New pool metrics:
+ * *Pool Allocated size*: The provisioned size of the pool
  * *Total snapshot size for the pool*: The sum of snapshot size from all volumes in the pool

## July 2020
Azure NetApp Files is updated regularly. This article provides a summary about t
Azure NetApp Files now supports NFS client encryption in Kerberos modes (krb5, krb5i, and krb5p) with AES-256 encryption, providing you with more data security. This feature is free of charge (normal [Azure NetApp Files storage cost](https://azure.microsoft.com/pricing/details/netapp/) still applies) and is generally available. Learn more from the [NFS v4.1 Kerberos encryption documentation](configure-kerberos-encryption.MD).
-* [Dynamic volume service level change](dynamic-change-volume-service-level.MD) (Preview)
+* [Dynamic volume service level change](dynamic-change-volume-service-level.MD) (Preview)
Cloud promises flexibility in IT spending. You can now change the service level of an existing Azure NetApp Files volume by moving the volume to another capacity pool that uses the service level you want for the volume. This in-place service-level change for the volume doesn't require that you migrate data. It also doesn't affect the data plane access to the volume. You can change an existing volume to use a higher service level for better performance, or to use a lower service level for cost optimization. This feature is free of charge (normal [Azure NetApp Files storage cost](https://azure.microsoft.com/pricing/details/netapp/) still applies). It's currently in preview. You can register for the feature preview by following the [dynamic volume service level change documentation](dynamic-change-volume-service-level.md).
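
 As an illustration only, moving a volume to a capacity pool with a different service level can be scripted. This sketch assumes the `az netappfiles volume pool-change` command and illustrative names; verify the command and parameters against the linked documentation.

```bash
# Move myVolume from mySourcePool to myDestinationPool (hypothetical names).
az netappfiles volume pool-change \
  --resource-group myResourceGroup \
  --account-name myNetAppAccount \
  --pool-name mySourcePool \
  --name myVolume \
  --new-pool-resource-id "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.NetApp/netAppAccounts/myNetAppAccount/capacityPools/myDestinationPool"
```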
-* [Volume snapshot policy](snapshots-manage-policy.md) (Preview)
+* [Volume snapshot policy](snapshots-manage-policy.md) (Preview)
 Azure NetApp Files allows you to create point-in-time snapshots of your volumes. You can now create a snapshot policy to have Azure NetApp Files automatically create volume snapshots at a frequency of your choice. You can schedule the snapshots to be taken in hourly, daily, weekly, or monthly cycles. You can also specify the maximum number of snapshots to keep as part of the snapshot policy. This feature is free of charge (normal [Azure NetApp Files storage cost](https://azure.microsoft.com/pricing/details/netapp/) still applies) and is currently in preview. You can register for the feature preview by following the [volume snapshot policy documentation](snapshots-manage-policy.md).

* [NFS root access export policy](azure-netapp-files-configure-export-policy.md)
- Azure NetApp Files now allows you to specify whether the root account can access the volume.
+ Azure NetApp Files now allows you to specify whether the root account can access the volume.
* [Hide snapshot path](snapshots-edit-hide-path.md)
Azure NetApp Files is updated regularly. This article provides a summary about t
## Next steps

* [What is Azure NetApp Files](azure-netapp-files-introduction.md)
-* [Understand the storage hierarchy of Azure NetApp Files](azure-netapp-files-understand-storage-hierarchy.md)
+* [Understand the storage hierarchy of Azure NetApp Files](azure-netapp-files-understand-storage-hierarchy.md)
azure-resource-manager Bicep Config Modules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config-modules.md
Title: Module setting for Bicep config
description: Describes how to customize configuration values for modules in Bicep deployments. Previously updated : 01/18/2023 Last updated : 01/17/2024 # Add module settings in the Bicep config file
The available profiles are:
You can customize these profiles, or add new profiles for your on-premises environments.
-The available credential types are:
+Bicep uses the [Azure.Identity SDK](/dotnet/api/azure.identity) to do authentication. The available credential types are:
-- AzureCLI-- AzurePowerShell-- Environment-- ManagedIdentity-- VisualStudio-- VisualStudioCode
+- [AzureCLI](/dotnet/api/azure.identity.azureclicredential)
+- [AzurePowerShell](/dotnet/api/azure.identity.azurepowershellcredential)
+- [Environment](/dotnet/api/azure.identity.environmentcredential)
+- [ManagedIdentity](/dotnet/api/azure.identity.managedidentitycredential)
+- [VisualStudio](/dotnet/api/azure.identity.visualstudiocredential)
+- [VisualStudioCode](/dotnet/api/azure.identity.visualstudiocodecredential)
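
For example, a minimal `bicepconfig.json` sketch that sets the order in which Bicep tries these credential types when restoring modules from a registry; the precedence shown is illustrative.

```json
{
  "cloud": {
    "credentialPrecedence": [
      "AzureCLI",
      "AzurePowerShell"
    ]
  }
}
```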
[!INCLUDE [vscode authentication](../../../includes/resource-manager-vscode-authentication.md)]
azure-resource-manager Control Plane Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/control-plane-metrics.md
Title: Control plane metrics in Azure Monitor description: Azure Resource Manager metrics in Azure Monitor | Traffic and latency observability for subscription-level control plane requests -+ Last updated 04/26/2023
azure-resource-manager Move Support Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-support-resources.md
Before starting your move operation, review the [checklist](./move-resource-grou
> [!div class="mx-tableFixed"]
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
-> | automationaccounts | **Yes** | **Yes** | **Yes** [PowerShell script](../../automation/automation-disaster-recovery.md) |
+> | automationaccounts | **Yes** | **Yes** | [PowerShell script](../../automation/automation-disaster-recovery.md) |
> | automationaccounts / configurations | **Yes** | **Yes** | No |
> | automationaccounts / runbooks | **Yes** | **Yes** | No |
Before starting your move operation, review the [checklist](./move-resource-grou
> | resources | No | No | No |
> | subscriptions | No | No | No |
> | tags | No | No | No |
-> | templatespecs | No | No | **Yes**<br/><br/>[Move Microsoft.Resources resources to new region](microsoft-resources-move-regions.md) |
+> | templatespecs | No | No | [Move Microsoft.Resources resources to new region](microsoft-resources-move-regions.md) |
> | templatespecs / versions | No | No | No |
> | tenants | No | No | No |
azure-sql-edge Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/configure.md
Last updated 09/14/2023 -+ # Configure Azure SQL Edge
-> [!IMPORTANT]
+> [!IMPORTANT]
> Azure SQL Edge no longer supports the ARM64 platform. Azure SQL Edge supports configuration through one of the following two options:
Azure SQL Edge supports configuration through one of the following two options:
- Environment variables - An mssql.conf file placed in the /var/opt/mssql folder
-> [!NOTE]
+> [!NOTE]
> Setting environment variables overrides the settings specified in the mssql.conf file. ## Configure by using environment variables
The following SQL Server on Linux environment variable isn't supported for Azure
| | | | **MSSQL_ENABLE_HADR** | Enable availability group. For example, `1` is enabled, and `0` is disabled. |
-> [!IMPORTANT]
+> [!IMPORTANT]
> The **MSSQL_PID** environment variable for SQL Edge only accepts **Premium** and **Developer** as the valid values. Azure SQL Edge doesn't support initialization using a product key. ### Specify the environment variables
Add values in **Container Create Options**.
:::image type="content" source="media/configure/set-environment-variables-using-create-options.png" alt-text="Screenshot of set by using container create options.":::
-> [!NOTE]
+> [!NOTE]
> In the disconnected deployment mode, environment variables can be specified using the `-e` or `--env` or the `--env-file` option of the `docker run` command. ## Configure by using an `mssql.conf` file
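
As an illustration, here's a minimal `mssql.conf` sketch placed in `/var/opt/mssql`. The settings shown are common SQL Server on Linux options and are assumed here to apply to Azure SQL Edge: `memorylimitmb` caps the SQL engine memory in MB and `tcpport` sets the listener port. Confirm the supported options before relying on them.

```ini
[memory]
memorylimitmb = 2048

[network]
tcpport = 1433
```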
Earlier CTPs of Azure SQL Edge were configured to run as the root users. The fol
Your Azure SQL Edge configuration changes and database files are persisted in the container even if you restart the container with `docker stop` and `docker start`. However, if you remove the container with `docker rm`, everything in the container is deleted, including Azure SQL Edge and your databases. The following section explains how to use **data volumes** to persist your database files even if the associated containers are deleted.
-> [!IMPORTANT]
+> [!IMPORTANT]
> For Azure SQL Edge, it's critical that you understand data persistence in Docker. In addition to the discussion in this section, see Docker's documentation on [how to manage data in Docker containers](https://docs.docker.com/engine/tutorials/dockervolumes/). ### Mount a host directory as data volume
docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=<YourStrong!Passw0rd>" -p 14
This technique also enables you to share and view the files on the host outside of Docker.
-> [!IMPORTANT]
+> [!IMPORTANT]
> Host volume mapping for **Docker on Windows** doesn't currently support mapping the complete `/var/opt/mssql` directory. However, you can map a subdirectory, such as `/var/opt/mssql/data` to your host machine.
-> [!IMPORTANT]
+> [!IMPORTANT]
> Host volume mapping for **Docker on macOS** with the Azure SQL Edge image isn't supported at this time. Use data volume containers instead. This restriction is specific to the `/var/opt/mssql` directory. Reading from a mounted directory works fine. For example, you can mount a host directory using `-v` on macOS and restore a backup from a `.bak` file that resides on the host. ### Use data volume containers
docker run -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=<YourStrong!Passw0rd>' -p 14
docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=<YourStrong!Passw0rd>" -p 1433:1433 -v sqlvolume:/var/opt/mssql -d mcr.microsoft.com/azure-sql-edge ```
-> [!NOTE]
+> [!NOTE]
> This technique for implicitly creating a data volume in the run command doesn't work with older versions of Docker. In that case, use the explicit steps outlined in the Docker documentation, [Creating and mounting a data volume container](https://docs.docker.com/engine/tutorials/dockervolumes/#creating-and-mounting-a-data-volume-container). Even if you stop and remove this container, the data volume persists. You can view it with the `docker volume ls` command.
If you then create another container with the same volume name, the new containe
To remove a data volume container, use the `docker volume rm` command.
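
For example, assuming the volume name `sqlvolume` used above:

```bash
# List named volumes; sqlvolume persists even after its container is removed.
docker volume ls

# Permanently delete the named volume and all Azure SQL Edge data stored in it.
docker volume rm sqlvolume
```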
-> [!WARNING]
+> [!WARNING]
> If you delete the data volume container, any Azure SQL Edge data in the container is *permanently* deleted. ## Next steps
azure-vmware Attach Azure Netapp Files To Azure Vmware Solution Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md
Before you begin the prerequisites, review the [Performance best practices](#per
1. [Deploy Azure VMware Solution](./deploy-azure-vmware-solution.md) private cloud and a dedicated virtual network connected via ExpressRoute gateway. The virtual network gateway should be configured with the Ultra performance or ErGw3Az SKU and have FastPath enabled. For more information, see [Configure networking for your VMware private cloud](tutorial-configure-networking.md) and [Network planning checklist](tutorial-network-checklist.md). 1. Create an [NFSv3 volume for Azure NetApp Files](../azure-netapp-files/azure-netapp-files-create-volumes.md) in the same virtual network created in the previous step. 1. Verify connectivity from the private cloud to Azure NetApp Files volume by pinging the attached target IP.
- 2. Verify the subscription is registered to the `ANFAvsDataStore` feature in the `Microsoft.NetApp` namespace. If the subscription isn't registered, register it now.
-
- `az feature register --name "ANFAvsDataStore" --namespace "Microsoft.NetApp"`
-
- `az feature show --name "ANFAvsDataStore" --namespace "Microsoft.NetApp" --query properties.state`
1. Based on your performance requirements, select the correct service level needed for the Azure NetApp Files capacity pool. Select option **Azure VMware Solution Datastore** listed under the **Protocol** section. 1. Create a volume with **Standard** [network features](../azure-netapp-files/configure-network-features.md) if available for ExpressRoute FastPath connectivity. 1. Under the **Protocol** section, select **Azure VMware Solution Datastore** to indicate the volume is created to use as a datastore for Azure VMware Solution private cloud. 1. If you're using [export policies](../azure-netapp-files/azure-netapp-files-configure-export-policy.md) to control access to Azure NetApp Files volumes, enable the Azure VMware private cloud IP range, not individual host IPs. Faulty hosts in a private cloud could get replaced. If the IP isn't enabled, connectivity to datastore is impacted.
->[!NOTE]
->Azure NetApp Files datastores for Azure VMware Solution are generally available. To use it, you must register Azure NetApp Files datastores for Azure VMware Solution.
- ## Supported regions Azure NetApp Files datastores for Azure VMware Solution are currently supported in the following regions:
For performance benchmarks that Azure NetApp Files datastores deliver for VMs on
To attach an Azure NetApp Files volume to your private cloud using Portal, follow these steps: 1. Sign in to the Azure portal.
-1. Select **Subscriptions** to see a list of subscriptions.
-1. From the list, select the subscription you want to use.
-1. Under Settings, select **Resource providers**.
-1. Search for **Microsoft.AVS** and select it.
-1. Select **Register**.
-1. Under **Settings**, select **Preview features**.
- 1. Verify you're registered for both the `CloudSanExperience` and `AnfDatstoreExperience` features.
1. Navigate to your Azure VMware Solution. Under **Manage**, select **Storage**. 1. Select **Connect Azure NetApp Files volume**.
Under **Manage**, select **Storage**.
To attach an Azure NetApp Files volume to your private cloud using Azure CLI, follow these steps:
-1. Verify the subscription is registered to `CloudSanExperience` feature in the **Microsoft.AVS** namespace. If it's not, register it.
-
- `az feature show --name "CloudSanExperience" --namespace "Microsoft.AVS"`
-
- `az feature register --name "CloudSanExperience" --namespace "Microsoft.AVS"`
-1. The registration should take approximately 15 minutes to complete. You can also check the status.
-
- `az feature show --name "CloudSanExperience" --namespace "Microsoft.AVS" --query properties.state`
-1. If the registration is stuck in an intermediate state for longer than 15 minutes, unregister, then re-register the flag.
-
- `az feature unregister --name "CloudSanExperience" --namespace "Microsoft.AVS"`
-
- `az feature register --name "CloudSanExperience" --namespace "Microsoft.AVS"`
-1. Verify the subscription is registered to `AnfDatastoreExperience` feature in the **Microsoft.AVS** namespace. If it's not, register it.
-
- `az feature register --name " AnfDatastoreExperience" --namespace "Microsoft.AVS"`
-
- `az feature show --name "AnfDatastoreExperience" --namespace "Microsoft.AVS" --query properties.state`
- 1. Verify the VMware extension is installed. If the extension is already installed, verify you're using the latest version of the Azure CLI extension. If an older version is installed, update the extension. `az extension show --name vmware`
To attach an Azure NetApp Files volume to your private cloud using Azure CLI, fo
 1. Create a datastore using an existing ANF volume in the Azure VMware Solution private cloud cluster. `az vmware datastore netapp-volume create --name MyDatastore1 --resource-group MyResourceGroup --cluster Cluster-1 --private-cloud MyPrivateCloud --volume-id /subscriptions/<Subscription Id>/resourceGroups/<Resourcegroup name>/providers/Microsoft.NetApp/netAppAccounts/<Account name>/capacityPools/<pool name>/volumes/<Volume name>`
-1. If needed, you can display the help on the datastores.
+1. If needed, display the help on the datastores.
`az vmware datastore -h` 1. Show the details of an ANF-based datastore in a private cloud cluster.
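
 One way to show those details is `az vmware datastore show`. The following sketch reuses the illustrative names from the create step and assumes the current `vmware` CLI extension syntax; check `az vmware datastore -h` for the authoritative parameters.

```bash
az vmware datastore show \
  --name MyDatastore1 \
  --resource-group MyResourceGroup \
  --cluster Cluster-1 \
  --private-cloud MyPrivateCloud
```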
backup Backup Azure Restore Files From Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-restore-files-from-vm.md
Title: Recover files and folders from Azure VM backup
description: In this article, learn how to recover files and folders from an Azure virtual machine recovery point. Last updated 06/30/2023-+
To restore files or folders from the recovery point, go to the virtual machine a
## Step 2: Ensure the machine meets the requirements before executing the script
-After the script is successfully downloaded, make sure you have the right machine to execute this script. The VM where you plan to execute the script shouldn't have any of the following unsupported configurations. **If it does, choose an alternate machine that meets the requirements**.
+After the script is successfully downloaded, make sure you have the right machine to execute this script. The VM where you plan to execute the script shouldn't have any of the following unsupported configurations. **If it does, choose an alternate machine that meets the requirements**.
### Dynamic disks
You can't run the downloaded executable on the same backed-up VM if the backed-u
### Virtual machine backups having large disks
-If the backed-up machine has a large number of disks (>16) or large disks (>4 TB each), we don't recommend executing the script on the same machine for restore, because it will have a significant impact on the VM. Instead, we recommend having a separate VM only for file recovery (for example, Azure D2v3 VMs) and then shutting it down when not required.
+If the backed-up machine has a large number of disks (>16) or large disks (>4 TB each), we don't recommend executing the script on the same machine for restore, because it will have a significant impact on the VM. Instead, we recommend having a separate VM only for file recovery (for example, Azure D2v3 VMs) and then shutting it down when not required.
See requirements to restore files from backed-up VMs with large disk:<br> [Windows OS](#for-backed-up-vms-with-large-disks-windows)<br> [Linux OS](#for-backed-up-vms-with-large-disks-linux)
-After you choose the correct machine to run the ILR script, ensure that it meets the [OS requirements](#step-3-os-requirements-to-successfully-run-the-script) and [access requirements](#step-4-access-requirements-to-successfully-run-the-script).
+After you choose the correct machine to run the ILR script, ensure that it meets the [OS requirements](#step-3-os-requirements-to-successfully-run-the-script) and [access requirements](#step-4-access-requirements-to-successfully-run-the-script).
## Step 3: OS requirements to successfully run the script
Also, ensure that you have the [right machine to execute the ILR script](#step-2
> [!NOTE] > > The script is generated in English language only and is not localized. Hence it might require that the system locale is in English for the script to execute properly
->
+>
### For Windows
After you meet all the requirements listed in [Step 2](#step-2-ensure-the-machin
:::image type="content" source="./media/backup-azure-restore-files-from-vm/executable-output.png" alt-text="Screenshot shows the executable output for file restore from VM." lightbox="./media/backup-azure-restore-files-from-vm/executable-output.png":::
-When you run the executable, the operating system mounts the new volumes and assigns drive letters. You can use Windows Explorer or File Explorer to browse those drives. The drive letters assigned to the volumes may not be the same letters as the original virtual machine. However, the volume name is preserved. For example, if the volume on the original virtual machine was "Data Disk (E:`\`)", that volume can be attached on the local computer as "Data Disk ('Any letter':`\`)". Browse through all volumes mentioned in the script output until you find your files or folder.
+When you run the executable, the operating system mounts the new volumes and assigns drive letters. You can use Windows Explorer or File Explorer to browse those drives. The drive letters assigned to the volumes may not be the same letters as the original virtual machine. However, the volume name is preserved. For example, if the volume on the original virtual machine was "Data Disk (E:`\`)", that volume can be attached on the local computer as "Data Disk ('Any letter':`\`)". Browse through all volumes mentioned in the script output until you find your files or folder.
![Recovery volumes attached](./media/backup-azure-restore-files-from-vm/volumes-attached.png) #### For backed-up VMs with large disks (Windows) If the file recovery process hangs after you run the file-restore script (for example, if the disks are never mounted, or they're mounted but the volumes don't appear), perform the following steps:
-
+ 1. Ensure that the OS is WS 2012 or higher. 2. Ensure the registry keys are set as suggested below in the restore server and make sure to reboot the server. The number beside the GUID can range from 0001-0005. In the following example, it's 0004. Navigate through the registry key path until the parameters section.
Make sure that the Volume groups corresponding to script's volumes are active. T
```bash sudo vgdisplay -a
-```
+```
Otherwise, activate the volume group by using the following command.
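
A typical activation command looks like the following sketch; replace the placeholder with the volume group that `vgdisplay` reports as inactive.

```bash
# Activate all logical volumes in the specified volume group.
sudo vgchange -ay <volume-group-name>
```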
backup Backup Azure Troubleshoot Vm Backup Fails Snapshot Timeout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-troubleshoot-vm-backup-fails-snapshot-timeout.md
description: Symptoms, causes, and resolutions of Azure Backup failures related
Last updated 05/05/2022 -+
Azure Backup uses the VM Snapshot Extension to take an application consistent ba
- **Ensure VMSnapshot extension isn't in a failed state**: Follow the steps listed in this [section](backup-azure-troubleshoot-vm-backup-fails-snapshot-timeout.md#usererrorvmprovisioningstatefailedthe-vm-is-in-failed-provisioning-state) to verify and ensure the Azure Backup extension is healthy. - **Check if antivirus is blocking the extension**: Certain antivirus software can prevent extensions from executing.
-
+ At the time of the backup failure, verify if there are log entries in ***Event Viewer Application logs*** with ***faulting application name: IaaSBcdrExtension.exe***. If you see entries, then it could be the antivirus configured in the VM is restricting the execution of the backup extension. Test by excluding the following directories in the antivirus configuration and retry the backup operation. - `C:\Packages\Plugins\Microsoft.Azure.RecoveryServices.VMSnapshot` - `C:\WindowsAzure\Logs\Plugins\Microsoft.Azure.RecoveryServices.VMSnapshot`
The Azure VM agent might be stopped, outdated, in an inconsistent state, or not
**Error code**: GuestAgentSnapshotTaskStatusError<br> **Error message**: Could not communicate with the VM agent for snapshot status <br>
-After you register and schedule a VM for the Azure Backup service, Backup starts the job by communicating with the VM backup extension to take a point-in-time snapshot. Any of the following conditions might prevent the snapshot from being triggered. If the snapshot isn't triggered, a backup failure might occur. Complete the following troubleshooting steps in the order listed, and then retry your operation:
+After you register and schedule a VM for the Azure Backup service, Backup starts the job by communicating with the VM backup extension to take a point-in-time snapshot. Any of the following conditions might prevent the snapshot from being triggered. If the snapshot isn't triggered, a backup failure might occur. Complete the following troubleshooting steps in the order listed, and then retry your operation:
-**Cause 1: [The agent is installed in the VM, but it's unresponsive (for Windows VMs)](#the-agent-installed-in-the-vm-but-unresponsive-for-windows-vms)**
+**Cause 1: [The agent is installed in the VM, but it's unresponsive (for Windows VMs)](#the-agent-installed-in-the-vm-but-unresponsive-for-windows-vms)**
**Cause 2: [The agent installed in the VM is out of date (for Linux VMs)](#the-agent-installed-in-the-vm-is-out-of-date-for-linux-vms)**
For a backup operation to succeed on encrypted VMs, it must have permissions to
After you register and schedule a VM for the Azure Backup service, Backup starts the job by communicating with the VM backup extension to take a point-in-time snapshot. Any of the following conditions might prevent the snapshot from being triggered. If the snapshot isn't triggered, a backup failure might occur. Complete the following troubleshooting step, and then retry your operation:
-**[The snapshot status can't be retrieved, or a snapshot can't be taken](#the-snapshot-status-cannot-be-retrieved-or-a-snapshot-cannot-be-taken)**
+**[The snapshot status can't be retrieved, or a snapshot can't be taken](#the-snapshot-status-cannot-be-retrieved-or-a-snapshot-cannot-be-taken)**
## <a name="ExtensionOperationFailed-vmsnapshot-extension-operation-failed"></a>ExtensionOperationFailedForManagedDisks - VMSnapshot extension operation failed **Error code**: ExtensionOperationFailedForManagedDisks <br> **Error message**: VMSnapshot extension operation failed<br>
-After you register and schedule a VM for the Azure Backup service, Backup starts the job by communicating with the VM backup extension to take a point-in-time snapshot. Any of the following conditions might prevent the snapshot from being triggered. If the snapshot isn't triggered, a backup failure might occur. Complete the following troubleshooting steps in the order listed, and then retry your operation:
-**Cause 1: [The snapshot status can't be retrieved, or a snapshot can't be taken](#the-snapshot-status-cannot-be-retrieved-or-a-snapshot-cannot-be-taken)**
-**Cause 2: [The agent is installed in the VM, but it's unresponsive (for Windows VMs)](#the-agent-installed-in-the-vm-but-unresponsive-for-windows-vms)**
+After you register and schedule a VM for the Azure Backup service, Backup starts the job by communicating with the VM backup extension to take a point-in-time snapshot. Any of the following conditions might prevent the snapshot from being triggered. If the snapshot isn't triggered, a backup failure might occur. Complete the following troubleshooting steps in the order listed, and then retry your operation:
+**Cause 1: [The snapshot status can't be retrieved, or a snapshot can't be taken](#the-snapshot-status-cannot-be-retrieved-or-a-snapshot-cannot-be-taken)**
+**Cause 2: [The agent is installed in the VM, but it's unresponsive (for Windows VMs)](#the-agent-installed-in-the-vm-but-unresponsive-for-windows-vms)**
**Cause 3: [The agent installed in the VM is out of date (for Linux VMs)](#the-agent-installed-in-the-vm-is-out-of-date-for-linux-vms)** ## BackUpOperationFailed / BackUpOperationFailedV2 - Backup fails, with an internal error
After you register and schedule a VM for the Azure Backup service, Backup starts
**Error code**: BackUpOperationFailed / BackUpOperationFailedV2 <br> **Error message**: Backup failed with an internal error - Please retry the operation in a few minutes <br>
-After you register and schedule a VM for the Azure Backup service, Backup initiates the job by communicating with the VM backup extension to take a point-in-time snapshot. Any of the following conditions might prevent the snapshot from being triggered. If the snapshot isn't triggered, a backup failure might occur. Complete the following troubleshooting steps in the order listed, and then retry your operation:
+After you register and schedule a VM for the Azure Backup service, Backup initiates the job by communicating with the VM backup extension to take a point-in-time snapshot. Any of the following conditions might prevent the snapshot from being triggered. If the snapshot isn't triggered, a backup failure might occur. Complete the following troubleshooting steps in the order listed, and then retry your operation:
-- **Cause 1: [The agent installed in the VM, but it's unresponsive (for Windows VMs)](#the-agent-installed-in-the-vm-but-unresponsive-for-windows-vms)** -- **Cause 2: [The agent installed in the VM is out of date (for Linux VMs)](#the-agent-installed-in-the-vm-is-out-of-date-for-linux-vms)** -- **Cause 3: [The snapshot status can't be retrieved, or a snapshot can't be taken](#the-snapshot-status-cannot-be-retrieved-or-a-snapshot-cannot-be-taken)**
+- **Cause 1: [The agent installed in the VM, but it's unresponsive (for Windows VMs)](#the-agent-installed-in-the-vm-but-unresponsive-for-windows-vms)**
+- **Cause 2: [The agent installed in the VM is out of date (for Linux VMs)](#the-agent-installed-in-the-vm-is-out-of-date-for-linux-vms)**
+- **Cause 3: [The snapshot status can't be retrieved, or a snapshot can't be taken](#the-snapshot-status-cannot-be-retrieved-or-a-snapshot-cannot-be-taken)**
- **Cause 4: [Backup service doesn't have permission to delete the old restore points because of a resource group lock](#remove_lock_from_the_recovery_point_resource_group)** - **Cause 5**: There's an extension version/bits mismatch with the Windows version you're running or the following module is corrupt: **C:\Packages\Plugins\Microsoft.Azure.RecoveryServices.VMSnapshot\\<extension version\>\iaasvmprovider.dll** <br> To resolve this issue, check if the module is compatible with x86 (32-bit)/x64 (64-bit) version of _regsvr32.exe_, and then follow these steps:
Most agent-related or extension-related failures for Linux VMs are caused by iss
If the process isn't running, restart it by using the following commands:
- - For Ubuntu/Debian:
+ - For Ubuntu/Debian:
```bash sudo systemctl restart walinuxagent ```
-
- - For other distributions:
+
+ - For other distributions:
```bash sudo systemctl restart waagent ```
backup Backup Azure Vm File Recovery Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vm-file-recovery-troubleshoot.md
Title: Troubleshoot Azure VM file recovery description: Troubleshoot issues when recovering files and folders from an Azure VM backup. -+ Last updated 07/12/2020
This section provides steps to troubleshoot common issues you might experience w
```bash ping download.microsoft.com ```
-
+ ### The script downloads successfully, but fails to run

When you run the Python script for Item Level Recovery (ILR) on SUSE Linux Enterprise Server 12 SP4, it fails with the error "iscsi_tcp module can't be loaded" or "iscsi_tcp_module not found".
If the protected Linux VM uses LVM or RAID Arrays, follow the steps in [Recover
### You can't copy the files from mounted volumes
-The copy might fail with the error "0x80070780: The file cannot be accessed by the system."
+The copy might fail with the error "0x80070780: The file cannot be accessed by the system."
Check if the source server has disk deduplication enabled. If it does, ensure the restore server also has deduplication enabled on the drives. You can leave deduplication unconfigured so that you don't deduplicate the drives on the restore server.
bastion Bastion Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-faq.md
description: Learn about frequently asked questions for Azure Bastion.
Previously updated : 10/13/2023 Last updated : 01/18/2024 # Azure Bastion FAQ
Azure Bastion isn't supported with Azure Private DNS Zones in national clouds.
No, Azure Bastion doesn't currently support private link.
+### Why do I get a "Failed to add subnet" error when using "Deploy Bastion" in the portal?
+
+At this time, for most address spaces, you must add a subnet named **AzureBastionSubnet** to your virtual network before you select **Deploy Bastion**.
+ ### <a name="subnet"></a>Can I have an Azure Bastion subnet of size /27 or smaller (/28, /29, etc.)? For Azure Bastion resources deployed on or after November 2, 2021, the minimum AzureBastionSubnet size is /26 or larger (/25, /24, etc.). All Azure Bastion resources deployed in subnets of size /27 prior to this date are unaffected by this change and will continue to work. However, we highly recommend increasing the size of any existing AzureBastionSubnet to /26 in case you choose to take advantage of [host scaling](./configure-host-scaling.md) in the future.
Make sure the user has **read** access to both the VM, and the peered VNet. Addi
|Microsoft.Network/virtualNetworks/subnets/virtualMachines/read|Gets references to all the virtual machines in a virtual network subnet|Action| |Microsoft.Network/virtualNetworks/virtualMachines/read|Gets references to all the virtual machines in a virtual network|Action|
+### I am connecting to a VM using a JIT policy, do I need additional permissions?
+
+If a user is connecting to a VM by using a JIT policy, no additional permissions are needed. For more information on connecting to a VM by using a JIT policy, see [Enable just-in-time access on VMs](../defender-for-cloud/just-in-time-access-usage.md).
+ ### My privatelink.azure.com can't resolve to management.privatelink.azure.com This may be due to the Private DNS zone for privatelink.azure.com linked to the Bastion virtual network causing management.azure.com CNAMEs to resolve to management.privatelink.azure.com behind the scenes. Create a CNAME record in their privatelink.azure.com zone for management.privatelink.azure.com to arm-frontdoor-prod.trafficmanager.net to enable successful DNS resolution.
bastion Quickstart Host Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/quickstart-host-portal.md
description: Learn how to deploy Azure Bastion with default settings from the Az
Previously updated : 10/12/2023 Last updated : 01/18/2024
When you deploy from VM settings, Bastion is automatically configured with the f
| **Name** | Based on the virtual network name | | **Public IP address name** | Based on the virtual network name |
+## Configure the AzureBastionSubnet
+
+When you deploy Azure Bastion, resources are created in a specific subnet which must be named **AzureBastionSubnet**. The name of the subnet lets the system know where to deploy resources. Use the following steps to add the AzureBastionSubnet to your virtual network:
++
+After adding the AzureBastionSubnet, you can continue to the next section and deploy Bastion.
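
If you prefer the CLI, the following is a minimal sketch for adding the subnet. The virtual network name and address range are illustrative; the prefix must be /26 or larger.

```bash
az network vnet subnet create \
  --resource-group MyResourceGroup \
  --vnet-name MyVNet \
  --name AzureBastionSubnet \
  --address-prefixes 10.0.1.0/26
```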
+ ## <a name="createvmset"></a>Deploy Bastion When you create an Azure Bastion instance in the portal by using **Deploy Bastion**, you deploy Bastion automatically by using default settings and the Basic SKU. You can't modify, or specify additional values for, a default deployment.
batch Automatic Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/automatic-certificate-rotation.md
Title: Enable automatic certificate rotation in a Batch pool description: You can create a Batch pool with a managed identity and a certificate that will automatically be renewed. -+ Last updated 12/05/2023 # Enable automatic certificate rotation in a Batch pool
batch Batch Account Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-account-create-portal.md
Title: Create a Batch account in the Azure portal
description: Learn how to use the Azure portal to create and manage an Azure Batch account for running large-scale parallel workloads in the cloud. Last updated 07/18/2023-+ # Create a Batch account in the Azure portal
To create a Batch account in the default Batch service mode:
- **Subscription**: Select the subscription to use if not already selected. - **Resource group**: Select the resource group for the Batch account, or create a new one. - **Account name**: Enter a name for the Batch account. The name must be unique within the Azure region, can contain only lowercase characters or numbers, and must be 3-24 characters long.
-
+ > [!NOTE] > The Batch account name is part of its ID and can't be changed after creation. - **Location**: Select the Azure region for the Batch account if not already selected.
- - **Storage account**: Optionally, select **Select a storage account** to associate an [Azure Storage account](accounts.md#azure-storage-accounts) with the Batch account.
+ - **Storage account**: Optionally, select **Select a storage account** to associate an [Azure Storage account](accounts.md#azure-storage-accounts) with the Batch account.
:::image type="content" source="media/batch-account-create-portal/batch-account-portal.png" alt-text="Screenshot of the New Batch account screen.":::
When you create the first user subscription mode Batch account in an Azure subsc
:::image type="content" source="media/batch-account-create-portal/register_provider.png" alt-text="Screenshot of the Resource providers page."::: 1. Return to the **Subscription** page and select **Access control (IAM)** from the left navigation.
-1. At the top of the **Access control (IAM)** page, select **Add** > **Add role assignment**.
+1. At the top of the **Access control (IAM)** page, select **Add** > **Add role assignment**.
1. On the **Add role assignment** screen, under **Assignment type**, select **Privileged administrator role**, and then select **Next**. 1. On the **Role** tab, select either the **Contributor** or **Owner** role for the Batch account, and then select **Next**. 1. On the **Members** tab, select **Select members**. On the **Select members** screen, search for and select **Microsoft Azure Batch**, and then select **Select**.
batch Batch Automatic Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-automatic-scaling.md
Title: Autoscale compute nodes in an Azure Batch pool
description: Enable automatic scaling on an Azure Batch cloud pool to dynamically adjust the number of compute nodes in the pool. Last updated 08/23/2023-+ # Create a formula to automatically scale compute nodes in a Batch pool
batch Batch Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-ci-cd.md
Title: Use Azure Pipelines to build and deploy an HPC solution
description: Use Azure Pipelines CI/CD build and release pipelines to deploy Azure Resource Manager templates for an Azure Batch high performance computing (HPC) solution. Last updated 04/12/2023 -+ # Use Azure Pipelines to build and deploy an HPC solution
Save the following code as a file named *deployment.json*. This final template a
"accountName": {"value": "[parameters('applicationStorageAccountName')]"} } }
- },
+ },
{ "apiVersion": "2017-05-10", "name": "batchAccountDeployment",
batch Batch Cli Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-cli-templates.md
Title: Run jobs end-to-end using templates
description: With only CLI commands, you can create a pool, upload input data, create jobs and associated tasks, and download the resulting output data. Last updated 09/19/2023-+ # Use Azure Batch CLI templates and file transfer
batch Batch Docker Container Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-docker-container-workloads.md
Last updated 01/10/2024 ms.devlang: csharp # ms.devlang: csharp, python-+ # Use Azure Batch to run container workloads
batch Batch Js Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-js-get-started.md
description: Learn the basic concepts of Azure Batch and build a simple solution
Last updated 05/16/2023 ms.devlang: javascript-+ # Get started with Batch SDK for JavaScript
Following code snippet first imports the azure-batch JavaScript module and then
import { BatchServiceClient, BatchSharedKeyCredentials } from "@azure/batch";
-// Replace values below with Batch Account details
+// Replace values below with Batch Account details
const batchAccountName = '<batch-account-name>'; const batchAccountKey = '<batch-account-key>'; const batchEndpoint = '<batch-account-url>';
batch Batch Linux Nodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-linux-nodes.md
Last updated 05/18/2023 ms.devlang: csharp # ms.devlang: csharp, python-+ zone_pivot_groups: programming-languages-batch-linux-nodes # Provision Linux compute nodes in Batch pools
-You can use Azure Batch to run parallel compute workloads on both Linux and Windows virtual machines. This article details how to create pools of Linux compute nodes in the Batch service by using both the [Batch Python](https://pypi.python.org/pypi/azure-batch) and [Batch .NET](/dotnet/api/microsoft.azure.batch) client libraries.
+You can use Azure Batch to run parallel compute workloads on both Linux and Windows virtual machines. This article details how to create pools of Linux compute nodes in the Batch service by using both the [Batch Python](https://pypi.python.org/pypi/azure-batch) and [Batch .NET](/dotnet/api/microsoft.azure.batch) client libraries.
## Virtual Machine Configuration
batch Batch Parallel Node Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-parallel-node-tasks.md
Title: Run tasks concurrently to maximize usage of Batch compute nodes description: Learn how to increase efficiency and lower costs by using fewer compute nodes and parallelism in an Azure Batch pool. -+ Last updated 05/24/2023 ms.devlang: csharp
batch Batch Pool Compute Intensive Sizes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-pool-compute-intensive-sizes.md
Title: Use compute-intensive Azure VMs with Batch description: How to take advantage of HPC and GPU virtual machine sizes in Azure Batch pools. Learn about OS dependencies and see several scenario examples. -+ Last updated 05/01/2023 # Use RDMA or GPU instances in Batch pools
batch Batch Pool No Public Ip Address https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-pool-no-public-ip-address.md
Title: Create an Azure Batch pool without public IP addresses (preview)
description: Learn how to create an Azure Batch pool without public IP addresses. Last updated 05/30/2023-+ # Create a Batch pool without public IP addresses (preview)
batch Batch Powershell Cmdlets Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-powershell-cmdlets-get-started.md
Title: Get started with PowerShell
description: A quick introduction to the Azure PowerShell cmdlets you can use to manage Batch resources. Last updated 05/24/2023-+ # Manage Batch resources with PowerShell cmdlets
We recommend that you update your Azure PowerShell modules frequently to take ad
``` - **Register with the Batch provider namespace**. You only need to perform this operation **once per subscription**.
-
+ ```powershell Register-AzResourceProvider -ProviderNamespace Microsoft.Batch ```
batch Batch Sig Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-sig-images.md
Last updated 11/09/2023 ms.devlang: csharp # ms.devlang: csharp, python-+ # Use the Azure Compute Gallery to create a custom image pool
Using a Shared Image configured for your scenario can provide several advantages
> [!NOTE] > Currently, Azure Batch does not support the 'TrustedLaunch' feature. You must use the standard security type to create a custom image instead.
->
-> You need to authenticate using Microsoft Entra ID. If you use shared-key-auth, you will get an authentication error.
+>
+> You need to authenticate using Microsoft Entra ID. If you use shared-key-auth, you will get an authentication error.
- **An Azure Batch account.** To create a Batch account, see the Batch quickstarts using the [Azure portal](quick-create-portal.md) or [Azure CLI](quick-create-cli.md).
The following steps show how to prepare a VM, take a snapshot, and create an ima
### Prepare a VM
-If you are creating a new VM for the image, use a first party Azure Marketplace image supported by Batch as the base image for your managed image. Only first party images can be used as a base image.
+If you are creating a new VM for the image, use a first party Azure Marketplace image supported by Batch as the base image for your managed image. Only first party images can be used as a base image.
To get a full list of current Azure Marketplace image references supported by Azure Batch, use one of the following APIs to return a list of Windows and Linux VM images including the node agent SKU IDs for each image: - PowerShell: [Azure Batch supported images](/powershell/module/az.batch/get-azbatchsupportedimage) - Azure CLI: [Azure Batch pool supported images](/cli/azure/batch/pool/supported-images) - Batch service APIs: [Batch service APIs](batch-apis-tools.md#batch-service-apis) and [Azure Batch service supported images](/rest/api/batchservice/account/listsupportedimages)-- List node agent SKUs: [Node agent SKUs](/java/api/com.microsoft.azure.batch.protocol.accounts.listnodeagentskus)
+- List node agent SKUs: [Node agent SKUs](/java/api/com.microsoft.azure.batch.protocol.accounts.listnodeagentskus)
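As a concrete illustration of the Azure CLI option above, listing the supported images and their node agent SKU IDs might look like the following sketch. The account name and endpoint are placeholders, not values from this article.

```azurecli-interactive
# Sketch: list the Marketplace images and node agent SKU IDs that Batch supports.
# The account name and endpoint below are placeholders.
az batch pool supported-images list \
    --account-name mybatchaccount \
    --account-endpoint https://mybatchaccount.eastus.batch.azure.com \
    --output table
```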
> [!NOTE] > You can't use a third-party image that has additional license and purchase terms as your base image. For information about these Marketplace images, see the guidance for [Linux](../virtual-machines/linux/cli-ps-findimage.md#check-the-purchase-plan-information) or [Windows](../virtual-machines/windows/cli-ps-findimage.md#view-purchase-plan-properties) VMs.
Once you have successfully created your managed image, you need to create an Azu
To create a pool from your Shared Image using the Azure CLI, use the `az batch pool create` command. Specify the Shared Image ID in the `--image` field. Make sure the OS type and SKU match the versions specified by `--node-agent-sku-id`. > [!NOTE]
-> You need to authenticate using Microsoft Entra ID. If you use shared-key-auth, you will get an authentication error.
+> You need to authenticate using Microsoft Entra ID. If you use shared-key-auth, you will get an authentication error.
> [!IMPORTANT] > The node agent SKU id must align with the publisher/offer/SKU in order for the node to start.
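A hedged sketch of that command follows; the gallery image ID, VM size, and node agent SKU are placeholders, and Microsoft Entra authentication is assumed per the note above.

```azurecli-interactive
# Sketch: create a pool from a Compute Gallery (Shared Image) version.
# Requires Microsoft Entra authentication; shared-key auth fails (see the note above).
az batch pool create \
    --id mySharedImagePool \
    --vm-size Standard_D2s_v3 \
    --target-dedicated-nodes 2 \
    --image "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Compute/galleries/<gallery-name>/images/<image-definition>/versions/<version>" \
    --node-agent-sku-id "batch.node.ubuntu 20.04"
```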
private static void CreateBatchPool(BatchClient batchClient, VirtualMachineConfi
## Create a pool from a Shared Image using Python
-You also can create a pool from a Shared Image by using the Python SDK:
+You also can create a pool from a Shared Image by using the Python SDK:
```python # Import the required modules from the
Use the following steps to create a pool from a Shared Image in the Azure portal
1. Once the node is allocated, use **Connect** to generate the user and RDP file for Windows, or use SSH for Linux, to log in to the allocated node and verify it. ![Create a pool from a Shared Image with the portal.](media/batch-sig-images/create-custom-pool.png)
-
+ ## Considerations for large pools
batch Batch Spot Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-spot-vms.md
Title: Run Batch workloads on cost-effective Spot VMs
description: Learn how to provision Spot VMs to reduce the cost of Azure Batch workloads. Last updated 04/11/2023-+ # Use Spot VMs with Batch workloads
batch Batch User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-user-accounts.md
Title: Run tasks under user accounts
description: Learn the types of user accounts and how to configure them. Last updated 05/16/2023-+ ms.devlang: csharp # ms.devlang: csharp, java, python
A named user account exists on all nodes in the pool and is available to all tas
A named user account is useful when you want to run all tasks in a job under the same user account, but isolate them from tasks running in other jobs at the same time. For example, you can create a named user for each job, and run each job's tasks under that named user account. Each job can then share a secret with its own tasks, but not with tasks running in other jobs.
-You can also use a named user account to run a task that sets permissions on external resources such as file shares. With a named user account, you control the user identity and can use that user identity to set permissions.
+You can also use a named user account to run a task that sets permissions on external resources such as file shares. With a named user account, you control the user identity and can use that user identity to set permissions.
Named user accounts enable password-less SSH between Linux nodes. You can use a named user account with Linux nodes that need to run multi-instance tasks. Each node in the pool can run tasks under a user account defined on the whole pool. For more information about multi-instance tasks, see [Use multi\-instance tasks to run MPI applications](batch-mpi.md).
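To make the pool-level nature of named user accounts concrete, here's a minimal sketch of a pool definition that declares one, passed to `az batch pool create --json-file`. All identifiers, the image reference, and the credential are illustrative placeholders, not values from this article.

```azurecli-interactive
# Sketch: pool definition with a named user account (placeholder values only).
cat > pool-with-named-user.json <<'EOF'
{
  "id": "named-user-pool",
  "vmSize": "Standard_D2s_v3",
  "targetDedicatedNodes": 2,
  "virtualMachineConfiguration": {
    "imageReference": {
      "publisher": "canonical",
      "offer": "0001-com-ubuntu-server-focal",
      "sku": "20_04-lts",
      "version": "latest"
    },
    "nodeAgentSKUId": "batch.node.ubuntu 20.04"
  },
  "userAccounts": [
    {
      "name": "jobuser",
      "password": "<password>",
      "elevationLevel": "nonadmin"
    }
  ]
}
EOF
az batch pool create --json-file pool-with-named-user.json
```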
Func<ImageReference, bool> isUbuntu2004 = imageRef =>
imageRef.Sku.Contains("20.04-LTS"); // Obtain the first node agent SKU in the collection that matches
-// Ubuntu Server 20.04.
+// Ubuntu Server 20.04.
NodeAgentSku ubuntuAgentSku = nodeAgentSkus.First(sku => sku.VerifiedImageReferences.Any(isUbuntu2004));
batch Create Pool Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/create-pool-availability-zones.md
description: Learn how to create a Batch pool with zonal policy to help protect
Last updated 05/25/2023 ms.devlang: csharp-+ # Create an Azure Batch pool across Availability Zones
batch Create Pool Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/create-pool-extensions.md
Title: Use extensions with Batch pools description: Extensions are small applications that facilitate post-provisioning configuration and setup on Batch compute nodes. -+ Last updated 12/05/2023
batch Create Pool Public Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/create-pool-public-ip.md
Title: Create a Batch pool with specified public IP addresses description: Learn how to create an Azure Batch pool that uses your own static public IP addresses. -+ Last updated 05/26/2023
batch Managed Identity Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/managed-identity-pools.md
description: Learn how to enable user-assigned managed identities on Batch pools
Last updated 04/03/2023 ms.devlang: csharp-+ # Configure managed identities in Batch pools
batch Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/quick-create-cli.md
Title: 'Quickstart: Use the Azure CLI to create a Batch account and run a job'
description: Follow this quickstart to use the Azure CLI to create a Batch account, a pool of compute nodes, and a job that runs basic tasks on the pool. Last updated 04/12/2023-+ # Quickstart: Use the Azure CLI to create a Batch account and run a job
az batch job create \
## Create job tasks
-Batch provides several ways to deploy apps and scripts to compute nodes. Use the [az batch task create](/cli/azure/batch/task#az-batch-task-create) command to create tasks to run in the job. Each task has a command line that specifies an app or script.
+Batch provides several ways to deploy apps and scripts to compute nodes. Use the [az batch task create](/cli/azure/batch/task#az-batch-task-create) command to create tasks to run in the job. Each task has a command line that specifies an app or script.
The following Bash script creates four identical, parallel tasks called `myTask1` through `myTask4`. The task command line displays the Batch environment variables on the compute node, and then waits 90 seconds.
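The quickstart's script itself isn't reproduced in this excerpt; a sketch of the loop it describes could look like the following, where the exact command line is an assumption.

```azurecli-interactive
# Sketch: create four parallel tasks whose command prints the Batch environment
# variables and then waits 90 seconds (command line is illustrative).
for i in {1..4}
do
    az batch task create \
        --job-id myJob \
        --task-id myTask$i \
        --command-line "/bin/bash -c 'printenv | grep AZ_BATCH; sleep 90s'"
done
```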
az batch task file download \
--destination ./stdout.txt ```
-You can view the contents of the standard output file in a text editor. The following example shows a typical *stdout.txt* file. The standard output from this task shows the Azure Batch environment variables that are set on the node. You can refer to these environment variables in your Batch job task command lines, and in the apps and scripts the command lines run.
+You can view the contents of the standard output file in a text editor. The following example shows a typical *stdout.txt* file. The standard output from this task shows the Azure Batch environment variables that are set on the node. You can refer to these environment variables in your Batch job task command lines, and in the apps and scripts the command lines run.
```text AZ_BATCH_TASK_DIR=/mnt/batch/tasks/workitems/myJob/job-1/myTask1
batch Simplified Node Communication Pool No Public Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/simplified-node-communication-pool-no-public-ip.md
Title: Create a simplified node communication pool without public IP addresses
description: Learn how to create an Azure Batch simplified node communication pool without public IP addresses. Last updated 8/14/2023-+ # Create a simplified node communication pool without public IP addresses
batch Tutorial Batch Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/tutorial-batch-functions.md
description: Learn how to apply OCR to scanned documents as they're added to a s
ms.devlang: csharp Last updated 04/21/2023-+ # Tutorial: Trigger a Batch job using Azure Functions
In this section, you use the Azure portal to create the Batch pool and Batch job
### Create a pool 1. Sign in to the Azure portal using your Azure credentials.
-1. Create a pool by selecting **Pools** on the left side navigation, and then the select the **Add** button above the search form.
+1. Create a pool by selecting **Pools** on the left side navigation, and then select the **Add** button above the search form.
:::image type="content" source="./media/tutorial-batch-functions/add-pool.png" alt-text="Screenshot of the Pools page in a Batch account that highlights the Add button.":::
-
+ 1. Enter a **Pool ID**. This example names the pool `ocr-pool`. 1. Select **canonical** as the **Publisher**. 1. Select **0001-com-ubuntu-server-jammy** as the **Offer**.
In this section, you use the Azure portal to create the Batch pool and Batch job
1. Set the **Mode** in the **Scale** section to **Fixed**, and enter 3 for the **Target dedicated nodes**. 1. Set **Start task** to **Enabled**, and enter the command `/bin/bash -c "sudo update-locale LC_ALL=C.UTF-8 LANG=C.UTF-8; sudo apt-get update; sudo apt-get -y install ocrmypdf"` in **Command line**. Be sure to set the **Elevation level** as **Pool autouser, Admin**, which allows start tasks to include commands with `sudo`. 1. Select **OK**.
-
+ ### Create a job 1. Create a job on the pool by selecting **Jobs** in the left side navigation, and then choose the **Add** button above the search form.
In this section, you create the Azure Function that triggers the OCR Batch job w
## Trigger the function and retrieve results
-Upload any or all of the scanned files from the [`input_files`](https://github.com/Azure-Samples/batch-functions-tutorial/tree/master/input_files) directory on GitHub to your input container.
+Upload any or all of the scanned files from the [`input_files`](https://github.com/Azure-Samples/batch-functions-tutorial/tree/master/input_files) directory on GitHub to your input container.
You can test your function from Azure portal on the **Code + Test** page of your function.
- 1. Select **Test/run** on the **Code + Test** page.
+ 1. Select **Test/run** on the **Code + Test** page.
1. Enter the path for your input container in **Body** on the **Input** tab. 1. Select **Run**.
-
+ After a few seconds, the file with OCR applied is added to the output container. Log information outputs to the bottom window. The file is then visible and retrievable on Storage Explorer. Alternatively, you can find the log information on the **Monitor** page:
batch Tutorial Parallel Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/tutorial-parallel-python.md
description: Learn how to process media files in parallel using ffmpeg in Azure
ms.devlang: python Last updated 05/25/2023-+ # Tutorial: Run a parallel workload with Azure Batch using the Python API
Use Azure Batch to run large-scale parallel and high-performance computing (HPC)
> * Monitor task execution. > * Retrieve output files.
-In this tutorial, you convert MP4 media files to MP3 format, in parallel, by using the [ffmpeg](https://ffmpeg.org/) open-source tool.
+In this tutorial, you convert MP4 media files to MP3 format, in parallel, by using the [ffmpeg](https://ffmpeg.org/) open-source tool.
[!INCLUDE [quickstarts-free-trial-note.md](../../includes/quickstarts-free-trial-note.md)]
In this tutorial, you convert MP4 media files to MP3 format, in parallel, by usi
Sign in to the [Azure portal](https://portal.azure.com). ## Download and run the sample app
To run the script:
python batch_python_tutorial_ffmpeg.py ```
-When you run the sample application, the console output is similar to the following. During execution, you experience a pause at `Monitoring all tasks for 'Completed' state, timeout in 00:30:00...` while the pool's compute nodes are started.
-
+When you run the sample application, the console output is similar to the following. During execution, you experience a pause at `Monitoring all tasks for 'Completed' state, timeout in 00:30:00...` while the pool's compute nodes are started.
+ ``` Sample start: 11/28/2018 3:20:21 PM
When tasks are running, the heat map is similar to the following:
:::image type="content" source="./media/tutorial-parallel-python/pool.png" alt-text="Screenshot of Pool heat map.":::
-Typical execution time is approximately *5 minutes* when you run the application in its default configuration. Pool creation takes the most time.
+Typical execution time is approximately *5 minutes* when you run the application in its default configuration. Pool creation takes the most time.
[!INCLUDE [batch-common-tutorial-download](../../includes/batch-common-tutorial-download.md)]
input_files = [
Next, the sample creates a pool of compute nodes in the Batch account with a call to `create_pool`. This defined function uses the Batch [PoolAddParameter](/python/api/azure-batch/azure.batch.models.pooladdparameter) class to set the number of nodes, VM size, and a pool configuration. Here, a [VirtualMachineConfiguration](/python/api/azure-batch/azure.batch.models.virtualmachineconfiguration) object specifies an [ImageReference](/python/api/azure-batch/azure.batch.models.imagereference) to an Ubuntu Server 20.04 LTS image published in the Azure Marketplace. Batch supports a wide range of VM images in the Azure Marketplace, as well as custom VM images.
-The number of nodes and VM size are set using defined constants. Batch supports dedicated nodes and [Spot nodes](batch-spot-vms.md), and you can use either or both in your pools. Dedicated nodes are reserved for your pool. Spot nodes are offered at a reduced price from surplus VM capacity in Azure. Spot nodes become unavailable if Azure doesn't have enough capacity. The sample by default creates a pool containing only five Spot nodes in size *Standard_A1_v2*.
+The number of nodes and VM size are set using defined constants. Batch supports dedicated nodes and [Spot nodes](batch-spot-vms.md), and you can use either or both in your pools. Dedicated nodes are reserved for your pool. Spot nodes are offered at a reduced price from surplus VM capacity in Azure. Spot nodes become unavailable if Azure doesn't have enough capacity. The sample by default creates a pool containing only five Spot nodes in size *Standard_A1_v2*.
In addition to physical node properties, this pool configuration includes a [StartTask](/python/api/azure-batch/azure.batch.models.starttask) object. The StartTask executes on each node as that node joins the pool, and each time a node is restarted. In this example, the StartTask runs Bash shell commands to install the ffmpeg package and dependencies on the nodes.
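For illustration only, a StartTask command line of this kind might look like the following; the sample's actual command and package list may differ.

```bash
# Illustrative StartTask command line for installing ffmpeg on Ubuntu pool nodes
# (not necessarily the sample's exact command).
/bin/bash -c "apt-get update && apt-get install -y ffmpeg"
```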
The app creates tasks in the job with a call to `add_tasks`. This defined functi
The sample creates an [OutputFile](/python/api/azure-batch/azure.batch.models.outputfile) object for the MP3 file after running the command line. Each task's output files (one, in this case) are uploaded to a container in the linked storage account, using the task's `output_files` property.
-Then, the app adds tasks to the job with the [task.add_collection](/python/api/azure-batch/azure.batch.operations.taskoperations) method, which queues them to run on the compute nodes.
+Then, the app adds tasks to the job with the [task.add_collection](/python/api/azure-batch/azure.batch.operations.taskoperations) method, which queues them to run on the compute nodes.
```python tasks = list()
batch_service_client.task.add_collection(job_id, tasks)
### Monitor tasks
-When tasks are added to a job, Batch automatically queues and schedules them for execution on compute nodes in the associated pool. Based on the settings you specify, Batch handles all task queuing, scheduling, retrying, and other task administration duties.
+When tasks are added to a job, Batch automatically queues and schedules them for execution on compute nodes in the associated pool. Based on the settings you specify, Batch handles all task queuing, scheduling, retrying, and other task administration duties.
There are many approaches to monitoring task execution. The `wait_for_tasks_to_complete` function in this example uses the [TaskState](/python/api/azure-batch/azure.batch.models.taskstate) object to monitor tasks for a certain state, in this case the completed state, within a time limit.
batch Virtual File Mount https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/virtual-file-mount.md
Title: Mount a virtual file system on a pool
description: Learn how to mount different kinds of virtual file systems on Batch pool nodes, and how to troubleshoot mounting issues. ms.devlang: csharp-+ Last updated 08/22/2023
cdn Cdn Custom Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-custom-ssl.md
Last updated 06/06/2022 -+ #Customer intent: As a website owner, I want to enable HTTPS on the custom domain of my CDN endpoint so that my users can use my custom domain to access my content securely.
The following table shows the operation progress that occurs when you disable HT
7. *How do certificate renewals work with Bring Your Own Certificate?* To ensure a newer certificate is deployed to PoP infrastructure, upload your new certificate to Azure Key Vault. In your TLS settings on Azure CDN, choose the newest certificate version and select **Save**. Azure CDN then propagates your updated certificate.
-
+ For **Azure CDN from Edgio** profiles, if you use the same Azure Key Vault certificate on several custom domains (e.g. a wildcard certificate), ensure you update all of your custom domains that use that same certificate to the newer certificate version. ## Next steps
chaos-studio Chaos Studio Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-configure-customer-managed-keys.md
Title: Configure customer-managed keys (preview) for experiment encryption
+ Title: Configure customer-managed keys [preview] for experiment encryption
description: Learn how to configure customer-managed keys (preview) for your Azure Chaos Studio experiment resource using Azure Blob Storage
Last updated 10/06/2023
-# Configure customer-managed keys (preview) for Azure Chaos Studio using Azure Blob Storage
+# Configure customer-managed keys [preview] for Azure Chaos Studio using Azure Blob Storage
Azure Chaos Studio automatically encrypts all data stored in your experiment resource with keys that Microsoft provides (service-managed keys). As an optional feature, you can add a second layer of security by also providing your own (customer-managed) encryption key(s). Customer-managed keys offer greater flexibility for controlling access and key-rotation policies.
chaos-studio Chaos Studio Private Link Agent Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-private-link-agent-service.md
-# How-to: Configure Private Link for Agent-Based experiments
-This guide explains the steps needed to configure Private Link for a Chaos Studio **Agent-based** Experiment. The current user experience is based on the private endpoints support enabled as part of public preview of the private endpoints feature. Expect this experience to evolve with time as the feature is enhanced to GA quality.
+# How-to: Configure Private Link for Agent-Based experiments [Preview]
+This guide explains the steps needed to configure Private Link for a Chaos Studio **Agent-based** Experiment [Preview]. The current user experience is based on the private endpoint support enabled as part of the feature's public preview. Because the feature is still in **preview**, expect this experience to evolve as it's brought up to GA quality.
## Prerequisites
Example of updated agentInstanceConfig.json:
**IF** you blocked outbound access to Microsoft Certificate Revocation List (CRL) verification endpoints, then you need to update agentSettings.JSON to disable CRL verification check in the agent.
+By default this field is set to **true**, so you can either remove this field or set the value to false. See [here](chaos-studio-tutorial-agent-based-cli.md) for more details.
+ ``` "communicationApi": { "checkCertRevocation": false
chaos-studio Chaos Studio Private Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-private-networking.md
Currently, you can only enable certain resource types for Chaos Studio virtual n
To use Chaos Studio with virtual network injection, you must meet the following requirements. 1. The `Microsoft.ContainerInstance` and `Microsoft.Relay` resource providers must be registered with your subscription. 1. The virtual network where Chaos Studio resources will be injected must have two subnets: a container subnet and a relay subnet. A container subnet is used for the Chaos Studio containers that will be injected into your private network. A relay subnet is used to forward communication from Chaos Studio to the containers inside the private network.
- 1. Both subnets need at least `/28` in the address space. An example is an address prefix of `10.0.0.0/28` or `10.0.0.0/24`.
+ 1. Both subnets need at least `/27` in the address space. An example is an address prefix of `10.0.0.0/27` or `10.0.0.0/24`.
1. The container subnet must be delegated to `Microsoft.ContainerInstance/containerGroups`. 1. The subnets can be arbitrarily named, but we recommend `ChaosStudioContainerSubnet` and `ChaosStudioRelaySubnet`. 1. When you enable the desired resource as a target so that you can use it in Chaos Studio experiments, the following properties must be set:
chaos-studio Chaos Studio Tutorial Agent Based Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-agent-based-cli.md
The chaos agent is an application that runs in your VM or virtual machine scale
1. Install the Chaos Studio VM extension. Replace `$VM_RESOURCE_ID` with the resource ID of your VM or replace `$SUBSCRIPTION_ID`, `$RESOURCE_GROUP`, and `$VMSS_NAME` with those properties for your virtual machine scale set. Replace `$AGENT_PROFILE_ID` with the agent Profile ID. Replace `$USER_IDENTITY_CLIENT_ID` with the client ID of your managed identity. Replace `$APP_INSIGHTS_KEY` with your Application Insights instrumentation key. If you aren't using Application Insights, remove that key/value pair.
+ #### Full list of default Agent virtual machine extension configuration
+
+ Here is the **minimum agent VM extension configuration** that you must supply:
+
+ ```azcli-interactive
+ {
+ "profile": "$AGENT_PROFILE_ID",
+ "auth.msi.clientid": "$USER_IDENTITY_CLIENT_ID"
+ }
+ ```
+
+ Here are **all of the values for the agent VM extension configuration**:
+
+ ```azcli-interactive
+ {
+ "profile": "$AGENT_PROFILE_ID",
+ "auth.msi.clientid": "$USER_IDENTITY_CLIENT_ID",
+ "appinsightskey": "$APP_INSIGHTS_KEY",
+ "overrides": {
+ "region": string, default to be null
+ "logLevel": {
+ "default" : string , default to be Information
+ },
+ "checkCertRevocation": boolean, default to be false.
+ }
+ }
+ ```
++ #### Install the agent on a virtual machine Windows ```azurecli-interactive
- az vm extension set --ids $VM_RESOURCE_ID --name ChaosWindowsAgent --publisher Microsoft.Azure.Chaos --version 1.0 --settings '{"profile": "$AGENT_PROFILE_ID", "auth.msi.clientid":"$USER_IDENTITY_CLIENT_ID", "appinsightskey":"$APP_INSIGHTS_KEY"}'
+ az vm extension set --ids $VM_RESOURCE_ID --name ChaosWindowsAgent --publisher Microsoft.Azure.Chaos --version 1.0 --settings '{"profile": "$AGENT_PROFILE_ID", "auth.msi.clientid":"$USER_IDENTITY_CLIENT_ID", "appinsightskey":"$APP_INSIGHTS_KEY", "overrides": {"checkCertRevocation": true}}'
``` Linux ```azurecli-interactive
- az vm extension set --ids $VM_RESOURCE_ID --name ChaosLinuxAgent --publisher Microsoft.Azure.Chaos --version 1.0 --settings '{"profile": "$AGENT_PROFILE_ID", "auth.msi.clientid":"$USER_IDENTITY_CLIENT_ID", "appinsightskey":"$APP_INSIGHTS_KEY"}'
+ az vm extension set --ids $VM_RESOURCE_ID --name ChaosLinuxAgent --publisher Microsoft.Azure.Chaos --version 1.0 --settings '{"profile": "$AGENT_PROFILE_ID", "auth.msi.clientid":"$USER_IDENTITY_CLIENT_ID", "appinsightskey":"$APP_INSIGHTS_KEY", "overrides": {"checkCertRevocation": true}}'
``` #### Install the agent on a virtual machine scale set
The chaos agent is an application that runs in your VM or virtual machine scale
Windows ```azurecli-interactive
- az vmss extension set --subscription $SUBSCRIPTION_ID --resource-group $RESOURCE_GROUP --vmss-name $VMSS_NAME --name ChaosWindowsAgent --publisher Microsoft.Azure.Chaos --version 1.0 --settings '{"profile": "$AGENT_PROFILE_ID", "auth.msi.clientid":"$USER_IDENTITY_CLIENT_ID", "appinsightskey":"$APP_INSIGHTS_KEY"}'
+ az vmss extension set --subscription $SUBSCRIPTION_ID --resource-group $RESOURCE_GROUP --vmss-name $VMSS_NAME --name ChaosWindowsAgent --publisher Microsoft.Azure.Chaos --version 1.0 --settings '{"profile": "$AGENT_PROFILE_ID", "auth.msi.clientid":"$USER_IDENTITY_CLIENT_ID", "appinsightskey":"$APP_INSIGHTS_KEY", "overrides": {"checkCertRevocation": true}}'
``` Linux ```azurecli-interactive
- az vmss extension set --subscription $SUBSCRIPTION_ID --resource-group $RESOURCE_GROUP --vmss-name $VMSS_NAME --name ChaosLinuxAgent --publisher Microsoft.Azure.Chaos --version 1.0 --settings '{"profile": "$AGENT_PROFILE_ID", "auth.msi.clientid":"$USER_IDENTITY_CLIENT_ID", "appinsightskey":"$APP_INSIGHTS_KEY"}'
+ az vmss extension set --subscription $SUBSCRIPTION_ID --resource-group $RESOURCE_GROUP --vmss-name $VMSS_NAME --name ChaosLinuxAgent --publisher Microsoft.Azure.Chaos --version 1.0 --settings '{"profile": "$AGENT_PROFILE_ID", "auth.msi.clientid":"$USER_IDENTITY_CLIENT_ID", "appinsightskey":"$APP_INSIGHTS_KEY", "overrides": {"checkCertRevocation": true}}'
``` 1. If you're setting up a virtual machine scale set, verify that the instances were upgraded to the latest model. If needed, upgrade all instances in the model.
cloud-shell Persisting Shell Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/persisting-shell-storage.md
ms.contributor: jahelmic
Last updated 10/03/2023 tags: azure-resource-manager-+ Title: Persist files in Azure Cloud Shell
confidential-computing Confidential Enclave Nodes Aks Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-enclave-nodes-aks-get-started.md
Title: 'Quickstart: Deploy an AKS cluster with Enclave Confidential Container Intel SGX nodes by using the Azure CLI' description: Learn how to create an Azure Kubernetes Service (AKS) cluster with enclave confidential containers a Hello World app by using the Azure CLI. -+ Last updated 11/06/2023 -+ # Quickstart: Deploy an AKS cluster with confidential computing Intel SGX agent nodes by using the Azure CLI
This section assumes you're already running an AKS cluster that meets the prereq
Run the following command to enable the confidential computing add-on: ```azurecli-interactive
-az aks enable-addons --addons confcom --name MyManagedCluster --resource-group MyResourceGroup
+az aks enable-addons --addons confcom --name MyManagedCluster --resource-group MyResourceGroup
``` ### Add a DCsv3 user node pool to the cluster
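The article's own command for this step isn't shown in this excerpt. A minimal sketch, reusing the cluster and pool names from the cleanup commands later in this quickstart and assuming a `Standard_DC4s_v3` size, would be:

```azurecli-interactive
# Sketch: add a DCsv3-series node pool (names match the cleanup commands below;
# the VM size is an assumption).
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name confcompool1 \
    --node-vm-size Standard_DC4s_v3 \
    --node-count 2
```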
kubectl get pods --all-namespaces
kube-system sgx-device-plugin-xxxx 1/1 Running ```
-If the output matches the preceding code, your AKS cluster is now ready to run confidential applications.
+If the output matches the preceding code, your AKS cluster is now ready to run confidential applications.
## Deploy Hello World from an isolated enclave application <a id="hello-world"></a>
-You're now ready to deploy a test application.
+You're now ready to deploy a test application.
Create a file named *hello-world-enclave.yaml* and paste in the following YAML manifest. You can find this sample application code in the [Open Enclave project](https://github.com/openenclave/openenclave/tree/master/samples/helloworld). This deployment assumes that you've deployed the *confcom* add-on.
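After you save the manifest, deploying it and watching for the workload to start typically looks like this; only the file name from the step above is assumed.

```bash
# Deploy the sample manifest saved above and watch for its pod to start.
kubectl apply -f hello-world-enclave.yaml
kubectl get pods --watch
```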
Enclave called into host to print: Hello World!
## Clean up resources
-To remove the confidential computing node pool that you created in this quickstart, use the following command:
+To remove the confidential computing node pool that you created in this quickstart, use the following command:
```azurecli-interactive az aks nodepool delete --cluster-name myAKSCluster --name confcompool1 --resource-group myResourceGroup ```
-To delete the AKS cluster, use the following command:
+To delete the AKS cluster, use the following command:
```azurecli-interactive az aks delete --resource-group myResourceGroup --cluster-name myAKSCluster
confidential-computing Guest Attestation Example https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/guest-attestation-example.md
Last updated 04/11/2023-+
-
+ # Use sample application for guest attestation The [*guest attestation*](guest-attestation-confidential-vms.md) feature helps you to confirm that a confidential VM runs on a hardware-based trusted execution environment (TEE) with security features enabled for isolation and integrity. Sample applications for use with the guest attestation APIs are [available on GitHub](https://github.com/Azure/confidential-computing-cvm-guest-attestation).
-Depending on your [type of scenario](guest-attestation-confidential-vms.md#scenarios), you can reuse the sample code in your client program or workload code.
+Depending on your [type of scenario](guest-attestation-confidential-vms.md#scenarios), you can reuse the sample code in your client program or workload code.
## Prerequisites
To use a sample application in C++ for use with the guest attestation APIs, foll
1. Install the `build-essential` package. This package installs everything required for compiling the sample application. ```bash
- sudo apt-get install build-essential
+ sudo apt-get install build-essential
``` 1. Install the `libcurl4-openssl-dev` and `libjsoncpp-dev` packages. ```bash
- sudo apt-get install libcurl4-openssl-dev
+ sudo apt-get install libcurl4-openssl-dev
``` ```bash
- sudo apt-get install libjsoncpp-dev
+ sudo apt-get install libjsoncpp-dev
``` 1. Download the attestation package from <https://packages.microsoft.com/repos/azurecore/pool/main/a/azguestattestation1/>.
To use a sample application in C++ for use with the guest attestation APIs, foll
## Next steps -- [Learn how to use Microsoft Defender for Cloud integration with confidential VMs with guest attestation installed](guest-attestation-defender-for-cloud.md)
+- [Learn how to use Microsoft Defender for Cloud integration with confidential VMs with guest attestation installed](guest-attestation-defender-for-cloud.md)
- [Learn more about the guest attestation feature](guest-attestation-confidential-vms.md) - [Learn about Azure confidential VMs](confidential-vm-overview.md)
confidential-computing Quick Create Confidential Vm Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-confidential-vm-arm.md
Last updated 12/01/2023 -+ ms.devlang: azurecli
ms.devlang: azurecli
You can use an Azure Resource Manager template (ARM template) to create an Azure [confidential VM](confidential-vm-overview.md) quickly. Confidential VMs run on both AMD processors backed by AMD SEV-SNP and Intel processors backed by Intel TDX to achieve VM memory encryption and isolation. For more information, see [Confidential VM Overview](confidential-vm-overview.md).
-This tutorial covers deployment of a confidential VM with a custom configuration.
+This tutorial covers deployment of a confidential VM with a custom configuration.
## Prerequisites -- An Azure subscription. Free trial accounts don't have access to the VMs used in this tutorial. One option is to use a [pay as you go subscription](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go/).
+- An Azure subscription. Free trial accounts don't have access to the VMs used in this tutorial. One option is to use a [pay as you go subscription](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go/).
- If you want to deploy from the Azure CLI, [install PowerShell](/powershell/azure/install-azure-powershell) and [install the Azure CLI](/cli/azure/install-azure-cli). ## Deploy confidential VM template with Azure CLI
To create and deploy your confidential VM using an ARM template through the Azur
az group create -n $resourceGroup -l $region ```
-1. Deploy your VM to Azure using an ARM template with a custom parameter file. For TDX deployments here is an example template: https://aka.ms/TDXtemplate.
+1. Deploy your VM to Azure using an ARM template with a custom parameter file. For TDX deployments here is an example template: https://aka.ms/TDXtemplate.
```azurecli-interactive az deployment group create `
When you create a confidential VM through the Azure Command-Line Interface (Azur
1. Depending on the OS image you're using, copy either the [example Windows parameter file](#example-windows-parameter-file) or the [example Linux parameter file](#example-linux-parameter-file) into your parameter file.
-1. Edit the JSON code in the parameter file as needed. For example, update the OS image name (`osImageName`) or the administrator username (`adminUsername`).
+1. Edit the JSON code in the parameter file as needed. For example, update the OS image name (`osImageName`) or the administrator username (`adminUsername`).
1. Configure your security type setting (`securityType`). Choose `VMGuestStateOnly` for no OS disk confidential encryption. Or, choose `DiskWithVMGuestState` for OS disk confidential encryption with a platform-managed key. For Intel TDX SKUs and Linux-based images only, customers may choose the `NonPersistedTPM` security type to deploy with an ephemeral vTPM. For the `NonPersistedTPM` security type, use a minimum of `"apiVersion": "2023-09-01"` under `Microsoft.Compute/virtualMachines` in the template file.
Use this example to create a custom parameter file for a Linux-based confidentia
``` 1. Grant confidential VM Service Principal `Confidential VM Orchestrator` to tenant
-
+ For this step you need to be a Global Admin or you need to have the User Access Administrator RBAC role. ```azurecli-interactive Connect-AzureAD -Tenant "your tenant ID"
- New-AzureADServicePrincipal -AppId bf7b6499-ff71-4aa2-97a4-f372087be7f0 -DisplayName "Confidential VM Orchestrator"
+ New-AzureADServicePrincipal -AppId bf7b6499-ff71-4aa2-97a4-f372087be7f0 -DisplayName "Confidential VM Orchestrator"
``` 1. Set up your Azure key vault. For how to use an Azure Key Vault Managed HSM instead, see the next step.
Use this example to create a custom parameter file for a Linux-based confidentia
```azurecli-interactive $KeyVault = <name of key vault>
- az keyvault create --name $KeyVault --resource-group $resourceGroup --location $region --sku Premium --enable-purge-protection
+ az keyvault create --name $KeyVault --resource-group $resourceGroup --location $region --sku Premium --enable-purge-protection
``` 1. Make sure that you have an **owner** role in this key vault.
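One hedged way to grant that role from the CLI, assuming the key vault uses the Azure RBAC permission model and the `$KeyVault` and `$resourceGroup` variables defined earlier:

```azurecli-interactive
# Sketch: grant yourself the Owner role on the key vault created above
# (assumes the RBAC permission model; access-policy vaults are configured differently).
$vaultId = az keyvault show --name $KeyVault --resource-group $resourceGroup --query id -o tsv
$myObjectId = az ad signed-in-user show --query id -o tsv
az role assignment create --assignee $myObjectId --role "Owner" --scope $vaultId
```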
Use this example to create a custom parameter file for a Linux-based confidentia
1. (Optional) If you don't want to use an Azure key vault, you can create an Azure Key Vault Managed HSM instead.
- 1. Follow the [quickstart to create an Azure Key Vault Managed HSM](../key-vault/managed-hsm/quick-create-cli.md) to provision and activate Azure Key Vault Managed HSM.
+ 1. Follow the [quickstart to create an Azure Key Vault Managed HSM](../key-vault/managed-hsm/quick-create-cli.md) to provision and activate Azure Key Vault Managed HSM.
1. Enable purge protection on the Azure Managed HSM. This step is required to enable key release.
-
+ ```azurecli-interactive az keyvault update-hsm --subscription $subscriptionId -g $resourceGroup --hsm-name $hsm --enable-purge-protection true ```
Use this example to create a custom parameter file for a Linux-based confidentia
```azurecli-interactive $cvmAgent = az ad sp show --id "bf7b6499-ff71-4aa2-97a4-f372087be7f0" | Out-String | ConvertFrom-Json
- az keyvault role assignment create --hsm-name $hsm --assignee $cvmAgent.Id --role "Managed HSM Crypto Service Release User" --scope /keys/$KeyName
+ az keyvault role assignment create --hsm-name $hsm --assignee $cvmAgent.Id --role "Managed HSM Crypto Service Release User" --scope /keys/$KeyName
``` 1. Create a new key using Azure Key Vault. For how to use an Azure Managed HSM instead, see the next step.
Use this example to create a custom parameter file for a Linux-based confidentia
```azurecli-interactive $KeyName = <name of key> $KeySize = 3072
- az keyvault key create --vault-name $KeyVault --name $KeyName --ops wrapKey unwrapkey --kty RSA-HSM --size $KeySize --exportable true --policy "@.\skr-policy.json"
+ az keyvault key create --vault-name $KeyVault --name $KeyName --ops wrapKey unwrapkey --kty RSA-HSM --size $KeySize --exportable true --policy "@.\skr-policy.json"
``` 1. Get information about the key that you created. ```azurecli-interactive $encryptionKeyVaultId = ((az keyvault show -n $KeyVault -g $resourceGroup) | ConvertFrom-Json).id
- $encryptionKeyURL= ((az keyvault key show --vault-name $KeyVault --name $KeyName) | ConvertFrom-Json).key.kid
+ $encryptionKeyURL= ((az keyvault key show --vault-name $KeyVault --name $KeyName) | ConvertFrom-Json).key.kid
``` 1. Deploy a Disk Encryption Set (DES) using a [DES ARM template](https://cvmprivatepreviewsa.blob.core.windows.net/cvmpublicpreviewcontainer/deploymentTemplate/deployDES.json) (`deployDES.json`).
Use this example to create a custom parameter file for a Linux-based confidentia
-p desName=$desName ` -p encryptionKeyURL=$encryptionKeyURL ` -p encryptionKeyVaultId=$encryptionKeyVaultId `
- -p region=$region
+ -p region=$region
``` 1. Assign key access to the DES file. ```azurecli-interactive
- $desIdentity= (az disk-encryption-set show -n $desName -g
+ $desIdentity= (az disk-encryption-set show -n $desName -g
$resourceGroup --query [identity.principalId] -o tsv) az keyvault set-policy -n $KeyVault ` -g $resourceGroup ` --object-id $desIdentity `
- --key-permissions wrapkey unwrapkey get
+ --key-permissions wrapkey unwrapkey get
``` 1. (Optional) Create a new key from an Azure Managed HSM.
Use this example to create a custom parameter file for a Linux-based confidentia
```azurecli-interactive $KeyName = <name of key> $KeySize = 3072
- az keyvault key create --hsm-name $hsm --name $KeyName --ops wrapKey unwrapkey --kty RSA-HSM --size $KeySize --exportable true --policy "@.\skr-policy.json"
+ az keyvault key create --hsm-name $hsm --name $KeyName --ops wrapKey unwrapkey --kty RSA-HSM --size $KeySize --exportable true --policy "@.\skr-policy.json"
``` 1. Get information about the key that you created. ```azurecli-interactive
- $encryptionKeyURL = ((az keyvault key show --hsm-name $hsm --name $KeyName) | ConvertFrom-Json).key.kid
+ $encryptionKeyURL = ((az keyvault key show --hsm-name $hsm --name $KeyName) | ConvertFrom-Json).key.kid
``` 1. Deploy a DES.
Use this example to create a custom parameter file for a Linux-based confidentia
1. Deploy your confidential VM with the customer-managed key. 1. Get the resource ID for the DES.
-
+ ```azurecli-interactive $desID = (az disk-encryption-set show -n $desName -g $resourceGroup --query [id] -o tsv) ```
confidential-computing Quick Create Confidential Vm Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-confidential-vm-azure-cli.md
Last updated 12/01/2023 -+ # Quickstart: Create a confidential VM with the Azure CLI
To create a confidential [disk encryption set](../virtual-machines/linux/disks-e
For this step you need to be a Global Admin or you need to have the User Access Administrator RBAC role. [Install Microsoft Graph SDK](/powershell/microsoftgraph/installation) to execute the commands below. ```Powershell Connect-Graph -Tenant "your tenant ID" Application.ReadWrite.All
- New-MgServicePrincipal -AppId bf7b6499-ff71-4aa2-97a4-f372087be7f0 -DisplayName "Confidential VM Orchestrator"
+ New-MgServicePrincipal -AppId bf7b6499-ff71-4aa2-97a4-f372087be7f0 -DisplayName "Confidential VM Orchestrator"
``` 2. Create an Azure Key Vault using the [az keyvault create](/cli/azure/keyvault) command. For the pricing tier, select Premium (includes support for HSM backed keys). Make sure that you have an owner role in this key vault. ```azurecli-interactive
It takes a few minutes to create the VM and supporting resources. The following
} ``` Make a note of the `publicIpAddress` to use later.
-
+ ## Connect and attest the AMD-based CVM through Microsoft Azure Attestation Sample App To use a sample application in C++ for use with the guest attestation APIs, use the following steps. This example uses a Linux confidential virtual machine. For Windows, see [build instructions for Windows](https://github.com/Azure/confidential-computing-cvm-guest-attestation/tree/main/cvm-attestation-sample-app).
To use a sample application in C++ for use with the guest attestation APIs, use
3. Install the `build-essential` package. This package installs everything required for compiling the sample application. ```bash
-sudo apt-get install build-essential
+sudo apt-get install build-essential
``` 4. Install the packages below. ```bash
-sudo apt-get install libcurl4-openssl-dev
+sudo apt-get install libcurl4-openssl-dev
sudo apt-get install libjsoncpp-dev sudo apt-get install libboost-all-dev sudo apt install nlohmann-json3-dev
confidential-computing Quick Create Confidential Vm Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-confidential-vm-portal.md
Last updated 12/01/2023
- mode-ui
- - devx-track-linux
+ - linux-related-content
- has-azure-ad-ps-ref - ignite-2023
You can use the Azure portal to create a [confidential VM](confidential-vm-overv
- An Azure subscription. Free trial accounts don't have access to the VMs used in this tutorial. One option is to use a [pay as you go subscription](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go/). - If you're using a Linux-based confidential VM, use a BASH shell for SSH or install an SSH client, such as [PuTTY](https://www.chiark.greenend.org.uk/~sgtatham/putty/download.html).-- If Confidential disk encryption with a customer-managed key is required, please run below command to opt in service principal `Confidential VM Orchestrator` to your tenant.
+- If Confidential disk encryption with a customer-managed key is required, run the following command to opt in the service principal `Confidential VM Orchestrator` to your tenant.
```azurecli Connect-AzureAD -Tenant "your tenant ID"
- New-AzureADServicePrincipal -AppId bf7b6499-ff71-4aa2-97a4-f372087be7f0 -DisplayName "Confidential VM Orchestrator"
+ New-AzureADServicePrincipal -AppId bf7b6499-ff71-4aa2-97a4-f372087be7f0 -DisplayName "Confidential VM Orchestrator"
``` ## Create confidential VM
To create a confidential VM in the Azure portal using an Azure Marketplace image
1. On the tab **Basics**, configure the following settings: a. Under **Project details**, for **Subscription**, select an Azure subscription that meets the [prerequisites](#prerequisites).
-
+ b. For **Resource Group**, select **Create new** to create a new resource group. Enter a name, and select **OK**. c. Under **Instance details**, for **Virtual machine name**, enter a name for your new VM.
- d. For **Region**, select the Azure region in which to deploy your VM.
+ d. For **Region**, select the Azure region in which to deploy your VM.
> [!NOTE] > Confidential VMs are not available in all locations. For currently supported locations, see which [VM products are available by Azure region](https://azure.microsoft.com/global-infrastructure/services/?products=virtual-machines).
-
+ e. For **Availability options**, select **No infrastructure redundancy required** for singular VMs or [**Virtual machine scale set**](/azure/virtual-machine-scale-sets/overview) for multiple VMs. f. For **Security Type**, select **Confidential virtual machines**.
To create a confidential VM in the Azure portal using an Azure Marketplace image
1. Under **Disk options**, enable **Confidential OS disk encryption** if you want to encrypt your VM's OS disk during creation.
- 1. For **Key Management**, select the type of key to use.
-
+ 1. For **Key Management**, select the type of key to use.
+ 1. If **Confidential disk encryption with a customer-managed key** is selected, create a **Confidential disk encryption set** before creating your confidential VM. 1. If you want to encrypt your VM's temp disk, please refer to the [following documentation](https://aka.ms/CVM-tdisk-encrypt). 1. (Optional) If necessary, you need to create a **Confidential disk encryption set** as follows. 1. [Create an Azure Key Vault](../key-vault/general/quick-create-portal.md) selecting the **Premium** pricing tier that includes support for HSM-backed keys and enable purge protection. Alternatively, you can create an [Azure Key Vault managed Hardware Security Module (HSM)](../key-vault/managed-hsm/quick-create-cli.md).
-
- 1. In the Azure portal, search for and select **Disk Encryption Sets**.
- 1. Select **Create**.
+ 1. In the Azure portal, search for and select **Disk Encryption Sets**.
+
+ 1. Select **Create**.
- 1. For **Subscription**, select which Azure subscription to use.
+ 1. For **Subscription**, select which Azure subscription to use.
1. For **Resource group**, select or create a new resource group to use.
-
+ 1. For **Disk encryption set name**, enter a name for the set.
- 1. For **Region**, select an available Azure region.
+ 1. For **Region**, select an available Azure region.
1. For **Encryption type**, select **Confidential disk encryption with a customer-managed key**.
- 1. For **Key Vault**, select the key vault you already created.
+ 1. For **Key Vault**, select the key vault you already created.
1. Under **Key Vault**, select **Create new** to create a new key.
To create a confidential VM in the Azure portal using an Azure Marketplace image
1. For the key type, select **RSA-HSM** 1. Select your key size
-
+ n. Under Confidential Key Options select **Exportable** and set the Confidential operation policy as **CVM confidential operation policy**. o. Select **Create** to finish creating the key. p. Select **Review + create** to create new disk encryption set. Wait for the resource creation to complete successfully.
-
+ q. Go to the disk encryption set resource in the Azure portal. r. Select the pink banner to grant permissions to Azure Key Vault.
-
+ > [!IMPORTANT] > You must perform this step to successfully create the confidential VM.
confidential-ledger Create Client Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/create-client-certificate.md
description: Creating a Client Certificate with Microsoft Azure confidential led
-+ Last updated 04/11/2023
confidential-ledger Verify Node Quotes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/verify-node-quotes.md
Last updated 08/18/2023 -+
connectors Connectors Create Api Azureblobstorage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-azureblobstorage.md
ms.suite: integration Previously updated : 01/10/2024 Last updated : 01/18/2024 tags: connectors
The Azure Blob Storage connector has different versions, based on [logic app typ
1. Follow the trigger with the Azure Blob Storage managed connector action named [**Get blob content**](/connectors/azureblobconnector/#get-blob-content), which reads the complete file and implicitly uses chunking.
+- Azure Blob Storage trigger limits
+
+ - The *managed* connector trigger is limited to 30,000 blobs in the polling virtual folder.
+ - The *built-in* connector trigger is limited to 10,000 blobs in the entire polling container.
+
+ If the limit is exceeded, a new blob might not be able to trigger the workflow, so the trigger is skipped.
+ ## Prerequisites - An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
container-apps Azure Arc Enable Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-arc-enable-cluster.md
description: 'Tutorial: learn how to set up Azure Container Apps in your Azure A
-+ Last updated 3/24/2023
This tutorial will show you how to enable Azure Container Apps on your Arc-enabl
> [!NOTE] > During the preview, Azure Container Apps on Arc are not supported in production configurations. This article provides an example configuration for evaluation purposes only. >
-> This tutorial uses [Azure Kubernetes Service (AKS)](../aks/index.yml) to provide concrete instructions for setting up an environment from scratch. However, for a production workload, you may not want to enable Azure Arc on an AKS cluster as it is already managed in Azure.
+> This tutorial uses [Azure Kubernetes Service (AKS)](../aks/index.yml) to provide concrete instructions for setting up an environment from scratch. However, for a production workload, you may not want to enable Azure Arc on an AKS cluster as it is already managed in Azure.
Set environment variables based on your Kubernetes cluster deployment.
```bash GROUP_NAME="my-arc-cluster-group" AKS_CLUSTER_GROUP_NAME="my-aks-cluster-group"
-AKS_NAME="my-aks-cluster"
-LOCATION="eastus"
+AKS_NAME="my-aks-cluster"
+LOCATION="eastus"
``` # [PowerShell](#tab/azure-powershell)
LOCATION="eastus"
```azurepowershell-interactive $GROUP_NAME="my-arc-cluster-group" $AKS_CLUSTER_GROUP_NAME="my-aks-cluster-group"
-$AKS_NAME="my-aks-cluster"
-$LOCATION="eastus"
+$AKS_NAME="my-aks-cluster"
+$LOCATION="eastus"
```
The following steps help you get started understanding the service, but for prod
``` # [PowerShell](#tab/azure-powershell)
-
+ ```azurepowershell-interactive az group create --name $AKS_CLUSTER_GROUP_NAME --location $LOCATION az aks create `
The following steps help you get started understanding the service, but for prod
```azurecli-interactive az aks get-credentials --resource-group $AKS_CLUSTER_GROUP_NAME --name $AKS_NAME --admin
-
+ kubectl get ns ```
-1. Create a resource group to contain your Azure Arc resources.
+1. Create a resource group to contain your Azure Arc resources.
# [Azure CLI](#tab/azure-cli)
The following steps help you get started understanding the service, but for prod
```azurecli-interactive CLUSTER_NAME="${GROUP_NAME}-cluster" # Name of the connected cluster resource
-
+ az connectedk8s connect --resource-group $GROUP_NAME --name $CLUSTER_NAME ```
The following steps help you get started understanding the service, but for prod
```azurepowershell-interactive $CLUSTER_NAME="${GROUP_NAME}-cluster" # Name of the connected cluster resource
-
+ az connectedk8s connect --resource-group $GROUP_NAME --name $CLUSTER_NAME ```
A [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) pro
```azurecli-interactive WORKSPACE_NAME="$GROUP_NAME-workspace" # Name of the Log Analytics workspace
-
+ az monitor log-analytics workspace create \ --resource-group $GROUP_NAME \ --workspace-name $WORKSPACE_NAME
A [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) pro
# [Azure CLI](#tab/azure-cli) ```bash
- EXTENSION_NAME="appenv-ext"
+ EXTENSION_NAME="appenv-ext"
NAMESPACE="appplat-ns" CONNECTED_ENVIRONMENT_NAME="<connected-environment-name>" ```
A [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) pro
```azurepowershell-interactive $EXTENSION_NAME="appenv-ext"
- $NAMESPACE="appplat-ns"
- $CONNECTED_ENVIRONMENT_NAME="<connected-environment-name>"
+ $NAMESPACE="appplat-ns"
+ $CONNECTED_ENVIRONMENT_NAME="<connected-environment-name>"
```
container-apps Background Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/background-processing.md
Last updated 01/10/2024 -+ # Tutorial: Deploy a background processing application with Azure Container Apps
az deployment group create --resource-group "$RESOURCE_GROUP" \
$Params = @{ environment_name = $ContainerAppsEnvironment location = $Location
- queueconnection = $QueueConnectionString
+ queueconnection = $QueueConnectionString
} $DeploymentArgs = @{
container-apps Dapr Functions Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-functions-extension.md
Title: Deploy the Dapr extension for Azure Functions in Azure Container Apps (preview)
-description: Learn how to use and deploy the Azure Functions with Dapr extension in your Dapr-enabled container apps.
+description: Learn how to use and deploy the Azure Functions with Dapr extension in your Dapr-enabled container apps.
-+ Last updated 10/30/2023-+ # Customer Intent: I'm a developer who wants to use the Dapr extension for Azure Functions in my Dapr-enabled container app
The [Dapr extension for Azure Functions](../azure-functions/functions-bindings-d
- Create an Azure Redis Cache for use as a Dapr statestore - Deploy an Azure Container Apps environment to host container apps - Deploy a Dapr-enabled function on Azure Container Apps:
- - One function that invokes the other service
+ - One function that invokes the other service
- One function that creates an Order and saves it to storage via Dapr statestore-- Verify the interaction between the two apps
+- Verify the interaction between the two apps
## Prerequisites
The [Dapr extension for Azure Functions](../azure-functions/functions-bindings-d
## Set up the environment
-1. In the terminal, log into your Azure subscription.
+1. In the terminal, log into your Azure subscription.
```azurecli az login
The [Dapr extension for Azure Functions](../azure-functions/functions-bindings-d
## Create resource group > [!NOTE]
-> Azure Container Apps support for Functions is currently in preview and available in the following regions.
+> Azure Container Apps support for Functions is currently in preview and available in the following regions.
> - Australia East > - Central US > - East US
Specifying one of the available regions, create a resource group for your contai
1. When prompted by the CLI, enter a resource name prefix. The name you choose must be a combination of numbers and lowercase letters, 3 and 24 characters in length. ```
- Please provide string value for 'resourceNamePrefix' (? for help): {your-resource-name-prefix}
+ Please provide string value for 'resourceNamePrefix' (? for help): {your-resource-name-prefix}
``` The template deploys the following resources and might take a while:
Specifying one of the available regions, create a resource group for your contai
- Application Insights - Log Analytics WorkSpace - Dapr Component (Azure Redis Cache) for State Management
- - The following .NET Dapr-enabled Functions:
+ - The following .NET Dapr-enabled Functions:
- `OrderService` - `CreateNewOrder` - `RetrieveOrder`
container-apps Deploy Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/deploy-visual-studio-code.md
Last updated 10/29/2023-+ # Quickstart: Deploy to Azure Container Apps using Visual Studio Code
In this tutorial, you'll deploy a containerized application to Azure Container A
- The following Visual Studio Code extensions installed: - The [Azure Account extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.azure-account) - The [Azure Container Apps extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurecontainerapps)
- - The [Docker extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-docker)
+ - The [Docker extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-docker)
## Clone the project
The Azure Container Apps extension for Visual Studio Code enables you to choose
In the browser's location bar, append the `/albums` path at the end of the app URL to view data from a sample API request.
-Congratulations! You successfully created and deployed your first container app using Visual Studio code.
+Congratulations! You successfully created and deployed your first container app using Visual Studio Code.
+ ## Clean up resources
Follow these steps in the Azure portal to remove the resources you created:
1. Select the **my-container-app** resource group from the *Overview* section. 1. Select the **Delete resource group** button at the top of the resource group *Overview*. 1. Enter the resource group name **my-container-app** in the *Are you sure you want to delete "my-container-apps"* confirmation dialog.
-1. Select **Delete**.
+1. Select **Delete**.
The process to delete the resource group might take a few minutes to complete. > [!TIP]
container-apps Get Started Existing Container Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/get-started-existing-container-image.md
description: Deploy an existing container image to Azure Container Apps with the
-+ Last updated 08/31/2022
If you have enabled ingress on your container app, you can add `--query properti
# [Azure PowerShell](#tab/azure-powershell) ```azurepowershell-interactive
-$TemplateObj = New-AzContainerAppTemplateObject -Name my-container-app -Image "<REGISTRY_CONTAINER_NAME>"
+$TemplateObj = New-AzContainerAppTemplateObject -Name my-container-app -Image "<REGISTRY_CONTAINER_NAME>"
``` (Replace the \<REGISTRY_CONTAINER_NAME\> with your value.)
container-apps Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/get-started.md
The following message is displayed when the container app is deployed:
:::image type="content" source="media/get-started/azure-container-apps-quickstart.png" alt-text="Screenshot of container app web page."::: + ## Clean up resources If you're not going to continue to use this application, run the following command to delete the resource group along with all the resources created in this quickstart.
container-apps Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/github-actions.md
- devx-track-azurecli
- - devx-track-linux
+ - linux-related-content
- ignite-2023 Last updated 11/09/2022
steps:
uses: azure/login@v1 with: creds: ${{ secrets.AZURE_CREDENTIALS }}
-
+ - name: Build and deploy Container App uses: azure/container-apps-deploy-action@v1 with:
You take the following steps to configure a GitHub Actions workflow to deploy to
### Create a GitHub repository and clone source code
-Before creating a workflow, the source code for your app must be in a GitHub repository.
+Before creating a workflow, the source code for your app must be in a GitHub repository.
-1. Log in to Azure with the Azure CLI.
+1. Log in to Azure with the Azure CLI.
```azurecli-interactive az login
Before creating a workflow, the source code for your app must be in a GitHub rep
Create your container app using the `az containerapp up` command in the following steps. This command will create Azure resources, build the container image, store the image in a registry, and deploy to a container app.
-After you create your app, you can add a managed identity to the app and assign the identity the `AcrPull` role to allow the identity to pull images from the registry.
+After you create your app, you can add a managed identity to the app and assign the identity the `AcrPull` role to allow the identity to pull images from the registry.
[!INCLUDE [container-apps-github-devops-setup.md](../../includes/container-apps-github-devops-setup.md)]
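A rough sketch of what those steps amount to, with hypothetical app and resource group names and a placeholder registry resource ID (not the article's exact values):

```azurecli-interactive
# Create the container app from local source: builds the image, stores it in a
# registry, and deploys it, creating the required Azure resources along the way.
az containerapp up \
  --name my-container-app \
  --resource-group my-container-apps-rg \
  --source .

# Add a system-assigned managed identity and capture its principal ID.
PRINCIPAL_ID=$(az containerapp identity assign \
  --name my-container-app \
  --resource-group my-container-apps-rg \
  --system-assigned \
  --query principalId --output tsv)

# Grant the identity AcrPull on the registry so the app can pull images.
az role assignment create \
  --assignee-object-id $PRINCIPAL_ID \
  --assignee-principal-type ServicePrincipal \
  --role AcrPull \
  --scope <acr-resource-id>
```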
The GitHub workflow requires a secret named `AZURE_CREDENTIALS` to authenticate
push: branches: - main
-
+ jobs: build: runs-on: ubuntu-latest
container-apps Jobs Get Started Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/jobs-get-started-cli.md
Job executions output logs to the logging provider that you configured for the C
] ``` + ## Clean up resources If you're not going to continue to use this application, run the following command to delete the resource group along with all the resources created in this quickstart.
container-apps Jobs Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/jobs-get-started-portal.md
Next, create an environment for your container app.
1. In *Job details*, select **Scheduled** for the *Trigger type*. In the *Cron expression* field, enter `*/1 * * * *`.
-
+ This expression starts the job every minute. 1. Select the **Next: Container** button at the bottom of the page.
Next, create an environment for your container app.
1. Select **Go to resource** to view your new Container Apps job.
-2. Select the **Execution history** tab.
+1. Select the **Execution history** tab.
The *Execution history* tab displays the status of each job execution. Select the **Refresh** button to update the list. Wait up to a minute for the scheduled job execution to start. Its status changes from *Pending* to *Running* to *Succeeded*.
Next, create an environment for your container app.
The logs show the output of the job execution. It may take a few minutes for the logs to appear. + ## Clean up resources If you're not going to continue to use this application, you can delete the Azure Container Apps instance and all the associated services by removing the resource group.
container-apps Manage Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/manage-secrets.md
Last updated 03/23/2023 -+ # Manage secrets in Azure Container Apps
container-apps Microservices Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr.md
Last updated 09/29/2022 -+ ms.devlang: azurecli
You learn how to:
With Azure Container Apps, you get a [fully managed version of the Dapr APIs](./dapr-overview.md) when building microservices. When you use Dapr in Azure Container Apps, you can enable sidecars to run next to your microservices that provide a rich set of capabilities. Available Dapr APIs include [Service to Service calls](https://docs.dapr.io/developing-applications/building-blocks/service-invocation/), [Pub/Sub](https://docs.dapr.io/developing-applications/building-blocks/pubsub/), [Event Bindings](https://docs.dapr.io/developing-applications/building-blocks/bindings/), [State Stores](https://docs.dapr.io/developing-applications/building-blocks/state-management/), and [Actors](https://docs.dapr.io/developing-applications/building-blocks/actors/).
-In this tutorial, you deploy the same applications from the Dapr [Hello World](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-kubernetes) quickstart.
+In this tutorial, you deploy the same applications from the Dapr [Hello World](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-kubernetes) quickstart.
The application consists of:
container-apps Quickstart Code To Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quickstart-code-to-cloud.md
Copy the FQDN to a web browser. From your web browser, go to the `/albums` endp
:::image type="content" source="media/quickstart-code-to-cloud/azure-container-apps-album-api.png" alt-text="Screenshot of response from albums API endpoint."::: + ## Clean up resources If you're not going to continue on to the [Deploy a frontend](communicate-between-microservices.md) tutorial, you can remove the Azure resources created during this quickstart with the following command.
container-apps Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quickstart-portal.md
Select the link next to *Application URL* to view your application. The followin
:::image type="content" source="media/get-started/azure-container-apps-quickstart.png" alt-text="Your first Azure Container Apps deployment."::: + ## Clean up resources If you're not going to continue to use this application, you can delete the Azure Container Apps instance and all the associated services by removing the resource group.
container-apps Scale App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/scale-app.md
description: Learn how applications scale in and out in Azure Container Apps.
-+ Last updated 12/08/2022
Scaling is defined by the combination of limits, rules, and behavior.
- **Behavior** is how the rules and limits are combined together to determine scale decisions over time. [Scale behavior](#scale-behavior) explains how scale decisions are calculated.
-
+ As you define your scaling rules, keep in mind the following items: - You aren't billed usage charges if your container app scales to zero.
If you define more than one scale rule, the container app begins to scale once t
## HTTP
-With an HTTP scaling rule, you have control over the threshold of concurrent HTTP requests that determines how your container app revision scales. [Container Apps jobs](jobs.md) don't support HTTP scaling rules.
+With an HTTP scaling rule, you have control over the threshold of concurrent HTTP requests that determines how your container app revision scales. [Container Apps jobs](jobs.md) don't support HTTP scaling rules.
In the following example, the revision scales out up to five replicas and can scale in to zero. The scaling property is set to 100 concurrent requests per second.
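As one hedged illustration of such a rule with the Azure CLI (the app and resource group names are placeholders; the flag names come from the `az containerapp` command group):

```azurecli-interactive
az containerapp update \
  --name my-container-app \
  --resource-group my-container-apps-rg \
  --min-replicas 0 \
  --max-replicas 5 \
  --scale-rule-name http-rule \
  --scale-rule-type http \
  --scale-rule-http-concurrency 100
```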
A KEDA scaler may support using secrets in a [TriggerAuthentication](https://ked
1. In your container app, create the [secrets](./manage-secrets.md) that match the `secretTargetRef` properties.
-1. In the CLI command, set parameters for each `secretTargetRef` entry.
+1. In the CLI command, set parameters for each `secretTargetRef` entry.
1. Create a secret entry with the `--secrets` parameter. If there are multiple secrets, separate them with a space.
If the app was scaled to the maximum replica count of 20, scaling goes through t
- No usage charges are incurred when an application scales to zero. For more pricing information, see [Billing in Azure Container Apps](billing.md).
+- You need to enable data protection for all .NET apps on Azure Container Apps. See [Deploying and scaling an ASP.NET Core app on Azure Container Apps](/aspnet/core/host-and-deploy/scaling-aspnet-apps/scaling-aspnet-apps) for details.
+ ### Known limitations - Vertical scaling isn't supported.
container-apps Tutorial Java Quarkus Connect Managed Identity Postgresql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/tutorial-java-quarkus-connect-managed-identity-postgresql-database.md
Last updated 09/26/2022-+ # Tutorial: Connect to PostgreSQL Database from a Java Quarkus Container App without secrets using a managed identity
container-apps Vnet Custom Internal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom-internal.md
description: Learn how to integrate a VNET to an internal Azure Container Apps e
-+ Last updated 08/29/2023
$VnetArgs = @{
Location = $Location ResourceGroupName = $ResourceGroupName AddressPrefix = '10.0.0.0/16'
- Subnet = $subnet
+ Subnet = $subnet
} $vnet = New-AzVirtualNetwork @VnetArgs ```
$DnsRecordArgs = @{
ZoneName = $EnvironmentDefaultDomain Name = '*' RecordType = 'A'
- Ttl = 3600
+ Ttl = 3600
PrivateDnsRecords = $DnsRecords } New-AzPrivateDnsRecordSet @DnsRecordArgs
container-apps Vnet Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom.md
description: Learn how to integrate a VNET with an external Azure Container Apps
-+ Last updated 08/31/2022
$VnetArgs = @{
Location = $Location ResourceGroupName = $ResourceGroupName AddressPrefix = '10.0.0.0/16'
- Subnet = $subnet
+ Subnet = $subnet
} $vnet = New-AzVirtualNetwork @VnetArgs ```
$DnsRecordArgs = @{
ZoneName = $EnvironmentDefaultDomain Name = '*' RecordType = 'A'
- Ttl = 3600
+ Ttl = 3600
PrivateDnsRecords = $DnsRecords } New-AzPrivateDnsRecordSet @DnsRecordArgs
container-instances Container Instances Custom Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-custom-dns.md
description: Configure a public or private DNS configuration for a container gro
-+ Last updated 05/25/2022
Last updated 05/25/2022
# Deploy a container group with custom DNS settings
-In [Azure Virtual Network](../virtual-network/virtual-networks-overview.md), you can deploy container groups using the `az container create` command in the Azure CLI. You can also provide advanced configuration settings to the `az container create` command using a YAML configuration file.
+In [Azure Virtual Network](../virtual-network/virtual-networks-overview.md), you can deploy container groups using the `az container create` command in the Azure CLI. You can also provide advanced configuration settings to the `az container create` command using a YAML configuration file.
-This article demonstrates how to deploy a container group with custom DNS settings using a YAML configuration file.
+This article demonstrates how to deploy a container group with custom DNS settings using a YAML configuration file.
For more information on deploying container groups to a virtual network, see the [Deploy in a virtual network article](container-instances-vnet.md).
If you have an existing virtual network that meets these criteria, you can skip
1. Link the DNS zone to your virtual network using the [az network private-dns link vnet create][az-network-private-dns-link-vnet-create] command. The DNS server is only required to test name resolution. The `-e` flag enables automatic hostname registration, which is unneeded, so we set it to `false`. ```azurecli-interactive
- az network private-dns link vnet create \
+ az network private-dns link vnet create \
-g ACIResourceGroup \
- -n aciDNSLink \
+ -n aciDNSLink \
-z private.contoso.com \ -v aci-vnet \ -e false
container-instances Container Instances Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-github-action.md
Last updated 12/09/2022-+ # Configure a GitHub Action to create a container instance
This article shows how to set up a workflow in a GitHub repo that performs the f
This article shows two ways to set up the workflow:
-* [Configure GitHub workflow](#configure-github-workflow) - Create a workflow in a GitHub repo using the Deploy to Azure Container Instances action and other actions.
+* [Configure GitHub workflow](#configure-github-workflow) - Create a workflow in a GitHub repo using the Deploy to Azure Container Instances action and other actions.
* [Use CLI extension](#use-deploy-to-azure-extension) - Use the `az container app up` command in the [Deploy to Azure](https://github.com/Azure/deploy-to-azure-cli-extension) extension in the Azure CLI. This command streamlines creation of the GitHub workflow and deployment steps. > [!IMPORTANT]
Save the JSON output because it is used in a later step. Also, take note of the
# [OpenID Connect](#tab/openid)
-OpenID Connect is an authentication method that uses short-lived tokens. Setting up [OpenID Connect with GitHub Actions](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect) is more complex process that offers hardened security.
+OpenID Connect is an authentication method that uses short-lived tokens. Setting up [OpenID Connect with GitHub Actions](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect) is a more complex process that offers hardened security.
-1. If you do not have an existing application, register a [new Active Directory application and service principal that can access resources](../active-directory/develop/howto-create-service-principal-portal.md). Create the Active Directory application.
+1. If you do not have an existing application, register a [new Active Directory application and service principal that can access resources](../active-directory/develop/howto-create-service-principal-portal.md). Create the Active Directory application.
```azurecli-interactive az ad app create --display-name myApp ```
- This command will output JSON with an `appId` that is your `client-id`. Save the value to use as the `AZURE_CLIENT_ID` GitHub secret later.
+ This command will output JSON with an `appId` that is your `client-id`. Save the value to use as the `AZURE_CLIENT_ID` GitHub secret later.
You'll use the `objectId` value when creating federated credentials with Graph API and reference it as the `APPLICATION-OBJECT-ID`.
-1. Create a service principal. Replace the `$appID` with the appId from your JSON output.
+1. Create a service principal. Replace `$appId` with the `appId` from your JSON output.
- This command generates JSON output with a different `objectId` and will be used in the next step. The new `objectId` is the `assignee-object-id`.
-
- Copy the `appOwnerTenantId` to use as a GitHub secret for `AZURE_TENANT_ID` later.
+ This command generates JSON output with a different `objectId` and will be used in the next step. The new `objectId` is the `assignee-object-id`.
+
+ Copy the `appOwnerTenantId` to use as a GitHub secret for `AZURE_TENANT_ID` later.
```azurecli-interactive az ad sp create --id $appId ```
-1. Create a new role assignment by subscription and object. By default, the role assignment will be tied to your default subscription. Replace `$subscriptionId` with your subscription ID, `$resourceGroupName` with your resource group name, and `$assigneeObjectId` with the generated `assignee-object-id`. Learn [how to manage Azure subscriptions with the Azure CLI](/cli/azure/manage-azure-subscriptions-azure-cli).
+1. Create a new role assignment by subscription and object. By default, the role assignment will be tied to your default subscription. Replace `$subscriptionId` with your subscription ID, `$resourceGroupName` with your resource group name, and `$assigneeObjectId` with the generated `assignee-object-id`. Learn [how to manage Azure subscriptions with the Azure CLI](/cli/azure/manage-azure-subscriptions-azure-cli).
```azurecli-interactive az role assignment create --role contributor --subscription $subscriptionId --assignee-object-id $assigneeObjectId --scope /subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.Web/sites/ --assignee-principal-type ServicePrincipal
OpenID Connect is an authentication method that uses short-lived tokens. Setting
* Jobs in your GitHub Actions environment: `repo:< Organization/Repository >:environment:< Name >` * For Jobs not tied to an environment, include the ref path for branch/tag based on the ref path used for triggering the workflow: `repo:< Organization/Repository >:ref:< ref path>`. For example, `repo:n-username/ node_express:ref:refs/heads/my-branch` or `repo:n-username/ node_express:ref:refs/tags/my-tag`. * For workflows triggered by a pull request event: `repo:< Organization/Repository >:pull_request`.
-
+ ```azurecli-interactive az ad app federated-credential create --id <APPLICATION-OBJECT-ID> --parameters credential.json ("credential.json" contains the following content)
OpenID Connect is an authentication method that uses short-lived tokens. Setting
"audiences": [ "api://AzureADTokenExchange" ]
- }
+ }
```
-
+ To learn how to create an Active Directory application, service principal, and federated credentials in the Azure portal, see [Connect GitHub and Azure](/azure/developer/github/connect-from-azure#use-the-azure-login-action-with-openid-connect).
To learn how to create a Create an active directory application, service princip
# [Service principal](#tab/userlevel)
-Update the Azure service principal credentials to allow push and pull access to your container registry. This step enables the GitHub workflow to use the service principal to [authenticate with your container registry](../container-registry/container-registry-auth-service-principal.md) and to push and pull a Docker image.
+Update the Azure service principal credentials to allow push and pull access to your container registry. This step enables the GitHub workflow to use the service principal to [authenticate with your container registry](../container-registry/container-registry-auth-service-principal.md) and to push and pull a Docker image.
Get the resource ID of your container registry. Substitute the name of your registry in the following [az acr show][az-acr-show] command:
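A minimal sketch of that lookup, assuming a registry named `<registry-name>`:

```azurecli-interactive
# Store the registry's resource ID for the role assignment that follows.
registryId=$(az acr show --name <registry-name> --query id --output tsv)
```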
az role assignment create \
# [OpenID Connect](#tab/openid)
-You need to give your application permission to access the Azure Container Registry and to create an Azure Container Instance.
+You need to give your application permission to access the Azure Container Registry and to create an Azure Container Instance.
-1. In Azure portal, go to [App registrations](https://portal.azure.com/#view/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/~/RegisteredApps).
-1. Search for your OpenID Connect app registration and copy the **Application (client) ID**.
-1. Grant permissions for your app to your resource group. You'll need to set permissions at the resource group level so that you can create Azure Container instances.
+1. In Azure portal, go to [App registrations](https://portal.azure.com/#view/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/~/RegisteredApps).
+1. Search for your OpenID Connect app registration and copy the **Application (client) ID**.
+1. Grant permissions for your app to your resource group. You'll need to set permissions at the resource group level so that you can create Azure Container instances.
```azurecli-interactive az role assignment create \
jobs:
# checkout the repo - name: 'Checkout GitHub Action' uses: actions/checkout@main
-
+ - name: 'Login via Azure CLI' uses: azure/login@v1 with: creds: ${{ secrets.AZURE_CREDENTIALS }}
-
+ - name: 'Build and push image' uses: azure/docker-login@v1 with:
jobs:
steps: - name: 'Checkout GitHub Action' uses: actions/checkout@main
-
+ - name: 'Login via Azure CLI' uses: azure/login@v1 with:
jobs:
### Validate workflow
-After you commit the workflow file, the workflow is triggered. To review workflow progress, navigate to **Actions** > **Workflows**.
+After you commit the workflow file, the workflow is triggered. To review workflow progress, navigate to **Actions** > **Workflows**.
![View workflow progress](./media/container-instances-github-action/github-action-progress.png) See [Viewing workflow run history](https://docs.github.com/en/actions/managing-workflow-runs/viewing-workflow-run-history) for information about viewing the status and results of each step in your workflow. If the workflow doesn't complete, see [Viewing logs to diagnose failures](https://docs.github.com/en/actions/managing-workflow-runs/using-workflow-run-logs#viewing-logs-to-diagnose-failures).
-When the workflow completes successfully, get information about the container instance named *aci-sampleapp* by running the [az container show][az-container-show] command. Substitute the name of your resource group:
+When the workflow completes successfully, get information about the container instance named *aci-sampleapp* by running the [az container show][az-container-show] command. Substitute the name of your resource group:
```azurecli-interactive az container show \
After the instance is provisioned, navigate to the container's FQDN in your brow
## Use Deploy to Azure extension
-Alternatively, use the [Deploy to Azure extension](https://github.com/Azure/deploy-to-azure-cli-extension) in the Azure CLI to configure the workflow. The `az container app up` command in the extension takes input parameters from you to set up a workflow to deploy to Azure Container Instances.
+Alternatively, use the [Deploy to Azure extension](https://github.com/Azure/deploy-to-azure-cli-extension) in the Azure CLI to configure the workflow. The `az container app up` command in the extension takes input parameters from you to set up a workflow to deploy to Azure Container Instances.
The workflow created by the Azure CLI is similar to the workflow you can [create manually using GitHub](#configure-github-workflow).
az container app up \
* Service principal credentials for the Azure CLI * Credentials to access the Azure container registry
-* After the command commits the workflow file to your repo, the workflow is triggered.
+* After the command commits the workflow file to your repo, the workflow is triggered.
Output is similar to:
To view the workflow status and results of each step in the GitHub UI, see [View
### Validate workflow
-The workflow deploys an Azure container instance with the base name of your GitHub repo, in this case, *acr-build-helloworld-node*. When the workflow completes successfully, get information about the container instance named *acr-build-helloworld-node* by running the [az container show][az-container-show] command. Substitute the name of your resource group:
+The workflow deploys an Azure container instance with the base name of your GitHub repo, in this case, *acr-build-helloworld-node*. When the workflow completes successfully, get information about the container instance named *acr-build-helloworld-node* by running the [az container show][az-container-show] command. Substitute the name of your resource group:
```azurecli-interactive az container show \
container-instances Container Instances Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-gpu.md
Title: Deploy GPU-enabled container instance
+ Title: Deploy GPU-enabled container instance
description: Learn how to deploy Azure container instances to run compute-intensive container applications using GPU resources. -+ Last updated 06/17/2022
This article shows how to add GPU resources when you deploy a container group by
> [!NOTE] > Due to some current limitations, not all limit increase requests are guaranteed to be approved.
-* If you would like to use this sku for your production container deployments, create an [Azure Support request](https://azure.microsoft.com/support) to increase the limit.
+* If you would like to use this sku for your production container deployments, create an [Azure Support request](https://azure.microsoft.com/support) to increase the limit.
## Preview limitations
-In preview, the following limitations apply when using GPU resources in container groups.
+In preview, the following limitations apply when using GPU resources in container groups.
[!INCLUDE [container-instances-gpu-regions](../../includes/container-instances-gpu-regions.md)]
To use GPUs in a container instance, specify a *GPU resource* with the following
[!INCLUDE [container-instances-gpu-limits](../../includes/container-instances-gpu-limits.md)]
-When deploying GPU resources, set CPU and memory resources appropriate for the workload, up to the maximum values shown in the preceding table. These values are currently larger than the CPU and memory resources available in container groups without GPU resources.
+When deploying GPU resources, set CPU and memory resources appropriate for the workload, up to the maximum values shown in the preceding table. These values are currently larger than the CPU and memory resources available in container groups without GPU resources.
> [!IMPORTANT] > Default [subscription limits](container-instances-quotas.md) (quotas) for GPU resources differ by SKU. The default CPU limits for V100 SKUs are initially set to 0. To request an increase in an available region, please submit an [Azure support request][azure-support]. ### Things to know
-* **Deployment time** - Creation of a container group containing GPU resources takes up to **8-10 minutes**. This is due to the additional time to provision and configure a GPU VM in Azure.
+* **Deployment time** - Creation of a container group containing GPU resources takes up to **8-10 minutes**. This is due to the additional time to provision and configure a GPU VM in Azure.
* **Pricing** - Similar to container groups without GPU resources, Azure bills for resources consumed over the *duration* of a container group with GPU resources. The duration is calculated from the time to pull your first container's image until the container group terminates. It does not include the time to deploy the container group.
When deploying GPU resources, set CPU and memory resources appropriate for the w
> [!NOTE] > To improve reliability when using a public container image from Docker Hub, import and manage the image in a private Azure container registry, and update your Dockerfile to use your privately managed base image. [Learn more about working with public images](../container-registry/buffer-gate-public-content.md).
-
+ ## YAML example One way to add GPU resources is to deploy a container group by using a [YAML file](container-instances-multi-container-yaml.md). Copy the following YAML into a new file named *gpu-deploy-aci.yaml*, then save the file. This YAML creates a container group named *gpucontainergroup* specifying a container instance with a V100 GPU. The instance runs a sample CUDA vector addition application. The resource requests are sufficient to run the workload.
properties:
restartPolicy: OnFailure ```
-Deploy the container group with the [az container create][az-container-create] command, specifying the YAML file name for the `--file` parameter. You need to supply the name of a resource group and a location for the container group such as *eastus* that supports GPU resources.
+Deploy the container group with the [az container create][az-container-create] command, specifying the YAML file name for the `--file` parameter. You need to supply the name of a resource group and a location for the container group such as *eastus* that supports GPU resources.
```azurecli-interactive az container create --resource-group myResourceGroup --file gpu-deploy-aci.yaml --location eastus
container-instances Container Instances Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-managed-identity.md
-+ Last updated 06/17/2022
Use a managed identity in a running container to authenticate to any [service th
### Enable a managed identity
- When you create a container group, enable one or more managed identities by setting a [ContainerGroupIdentity](/rest/api/container-instances/2022-09-01/container-groups/create-or-update#containergroupidentity) property. You can also enable or update managed identities after a container group is running - either action causes the container group to restart. To set the identities on a new or existing container group, use the Azure CLI, a Resource Manager template, a YAML file, or another Azure tool.
+ When you create a container group, enable one or more managed identities by setting a [ContainerGroupIdentity](/rest/api/container-instances/2022-09-01/container-groups/create-or-update#containergroupidentity) property. You can also enable or update managed identities after a container group is running - either action causes the container group to restart. To set the identities on a new or existing container group, use the Azure CLI, a Resource Manager template, a YAML file, or another Azure tool.
Azure Container Instances supports both types of managed Azure identities: user-assigned and system-assigned. On a container group, you can enable a system-assigned identity, one or more user-assigned identities, or both types of identities. If you're unfamiliar with managed identities for Azure resources, see the [overview](../active-directory/managed-identities-azure-resources/overview.md).
To use a managed identity, the identity must be granted access to one or more Az
## Create an Azure key vault
-The examples in this article use a managed identity in Azure Container Instances to access an Azure key vault secret.
+The examples in this article use a managed identity in Azure Container Instances to access an Azure key vault secret.
First, create a resource group named *myResourceGroup* in the *eastus* location with the following [az group create](/cli/azure/group#az-group-create) command:
First, create a resource group named *myResourceGroup* in the *eastus* location
az group create --name myResourceGroup --location eastus ```
-Use the [az keyvault create](/cli/azure/keyvault#az-keyvault-create) command to create a key vault. Be sure to specify a unique key vault name.
+Use the [az keyvault create](/cli/azure/keyvault#az-keyvault-create) command to create a key vault. Be sure to specify a unique key vault name.
```azurecli-interactive az keyvault create \ --name mykeyvault \
- --resource-group myResourceGroup \
+ --resource-group myResourceGroup \
--location eastus ```
Run the following [az keyvault set-policy](/cli/azure/keyvault) command to set a
### Enable user-assigned identity on a container group
-Run the following [az container create](/cli/azure/container#az-container-create) command to create a container instance based on Microsoft's `azure-cli` image. This example provides a single-container group that you can use interactively to run the Azure CLI to access other Azure services. In this section, only the base operating system is used. For an example to use the Azure CLI in the container, see [Enable system-assigned identity on a container group](#enable-system-assigned-identity-on-a-container-group).
+Run the following [az container create](/cli/azure/container#az-container-create) command to create a container instance based on Microsoft's `azure-cli` image. This example provides a single-container group that you can use interactively to run the Azure CLI to access other Azure services. In this section, only the base operating system is used. For an example to use the Azure CLI in the container, see [Enable system-assigned identity on a container group](#enable-system-assigned-identity-on-a-container-group).
The `--assign-identity` parameter passes your user-assigned managed identity to the group. The long-running command keeps the container running. This example uses the same resource group used to create the key vault, but you could specify a different one.
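A minimal sketch of that command, assuming the user-assigned identity's resource ID is stored in `$identityId` (names are placeholders):

```azurecli-interactive
az container create \
  --resource-group myResourceGroup \
  --name mycontainer \
  --image mcr.microsoft.com/azure-cli \
  --assign-identity $identityId \
  --command-line "tail -f /dev/null"
```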
The response looks similar to the following, showing the secret. In your code, y
### Enable system-assigned identity on a container group
-Run the following [az container create](/cli/azure/container#az-container-create) command to create a container instance based on Microsoft's `azure-cli` image. This example provides a single-container group that you can use interactively to run the Azure CLI to access other Azure services.
+Run the following [az container create](/cli/azure/container#az-container-create) command to create a container instance based on Microsoft's `azure-cli` image. This example provides a single-container group that you can use interactively to run the Azure CLI to access other Azure services.
The `--assign-identity` parameter with no additional value enables a system-assigned managed identity on the group. The identity is scoped to the resource group of the container group. The long-running command keeps the container running. This example uses the same resource group used to create the key vault, which is in the scope of the identity.
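The system-assigned variant differs only in passing `--assign-identity` with no value; a sketch under the same placeholder names:

```azurecli-interactive
az container create \
  --resource-group myResourceGroup \
  --name mycontainer \
  --image mcr.microsoft.com/azure-cli \
  --assign-identity \
  --command-line "tail -f /dev/null"
```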
A user-assigned identity is a resource ID of the form:
``` "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/{identityName}"
-```
+```
You can enable one or more user-assigned identities.
Specify a minimum `apiVersion` of `2018-10-01`.
### User-assigned identity
-A user-assigned identity is a resource ID of the form
+A user-assigned identity is a resource ID of the form
``` '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/{identityName}'
container-instances Container Instances Readiness Probe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-readiness-probe.md
-+ Last updated 06/17/2022
type: Microsoft.ContainerInstance/containerGroups
### Start command
-The deployment includes a `command` property defining a starting command that runs when the container first starts running. This property accepts an array of strings. This command simulates a time when the web app runs but the container isn't ready.
+The deployment includes a `command` property defining a starting command that runs when the container first starts running. This property accepts an array of strings. This command simulates a time when the web app runs but the container isn't ready.
First, it starts a shell session and runs a `node` command to start the web app. It also starts a command to sleep for 240 seconds, after which it creates a file called `ready` within the `/tmp` directory:
node /usr/src/app/index.js & (sleep 240; touch /tmp/ready); wait
This YAML file defines a `readinessProbe` which supports an `exec` readiness command that acts as the readiness check. This example readiness command tests for the existence of the `ready` file in the `/tmp` directory.
-When the `ready` file doesn't exist, the readiness command exits with a non-zero value; the container continues running but can't be accessed. When the command exits successfully with exit code 0, the container is ready to be accessed.
+When the `ready` file doesn't exist, the readiness command exits with a non-zero value; the container continues running but can't be accessed. When the command exits successfully with exit code 0, the container is ready to be accessed.
The `periodSeconds` property designates the readiness command should execute every 5 seconds. The readiness probe runs for the lifetime of the container group.
az container create --resource-group myResourceGroup --file readiness-probe.yaml
In this example, during the first 240 seconds, the readiness command fails when it checks for the `ready` file's existence. The status code returned signals that the container isn't ready.
-These events can be viewed from the Azure portal or Azure CLI. For example, the portal shows events of type `Unhealthy` are triggered upon the readiness command failing.
+These events can be viewed from the Azure portal or Azure CLI. For example, the portal shows that events of type `Unhealthy` are triggered when the readiness command fails.
![Portal unhealthy event][portal-unhealthy]
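From the Azure CLI, the same events appear in the container's instance view; a sketch, where the container group name and the JMESPath query are assumptions rather than the article's exact values:

```azurecli-interactive
# "Unhealthy" entries indicate failed readiness checks.
az container show \
  --resource-group myResourceGroup \
  --name <container-group-name> \
  --query "containers[0].instanceView.events" \
  --output table
```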
wget 192.0.2.1
```output --2019-10-15 16:46:02-- http://192.0.2.1/ Connecting to 192.0.2.1... connected.
-HTTP request sent, awaiting response...
+HTTP request sent, awaiting response...
``` After 240 seconds, the readiness command succeeds, signaling the container is ready. Now, when you run the `wget` command, it succeeds:
HTTP request sent, awaiting response...200 OK
Length: 1663 (1.6K) [text/html] Saving to: 'index.html.1'
-https://docsupdatetracker.net/index.html.1 100%[===============================================================>] 1.62K --.-KB/s in 0s
+index.html.1 100%[===============================================================>] 1.62K --.-KB/s in 0s
-2019-10-15 16:49:38 (113 MB/s) - ΓÇÿhttps://docsupdatetracker.net/index.html.1ΓÇÖ saved [1663/1663]
+2019-10-15 16:49:38 (113 MB/s) - 'index.html.1' saved [1663/1663]
``` When the container is ready, you can also access the web app by browsing to the IP address using a web browser. > [!NOTE]
-> The readiness probe continues to run for the lifetime of the container group. If the readiness command fails at a later time, the container again becomes inaccessible.
->
+> The readiness probe continues to run for the lifetime of the container group. If the readiness command fails at a later time, the container again becomes inaccessible.
+>
## Next steps
container-instances Container Instances Restart Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-restart-policy.md
Title: Restart policy for run-once tasks
+ Title: Restart policy for run-once tasks
description: Learn how to use Azure Container Instances to execute tasks that run to completion, such as in build, test, or image rendering jobs. -+ Last updated 06/17/2022
container-instances Container Instances Start Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-start-command.md
-+ Last updated 06/17/2022
Like setting [environment variables](container-instances-environment-variables.m
* Depending on the container configuration, you might need to set a full path to the command line executable or arguments.
-* Set an appropriate [restart policy](container-instances-restart-policy.md) for the container instance, depending on whether the command-line specifies a long-running task or a run-once task. For example, a restart policy of `Never` or `OnFailure` is recommended for a run-once task.
+* Set an appropriate [restart policy](container-instances-restart-policy.md) for the container instance, depending on whether the command-line specifies a long-running task or a run-once task. For example, a restart policy of `Never` or `OnFailure` is recommended for a run-once task.
* If you need information about the default entrypoint set in a container image, use the [docker image inspect](https://docs.docker.com/engine/reference/commandline/image_inspect/) command.
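For example, a minimal inspection of an image's default entrypoint and command (the image name is a placeholder):

```bash
docker image inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' <image-name>
```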
The command line syntax varies depending on the Azure API or tool used to create
* [New-AzureRmContainerGroup][new-azurermcontainergroup] Azure PowerShell cmdlet: Pass a string with the `-Command` parameter. Example: `-Command "echo hello"`.
-* Azure portal: In the **Command override** property of the container configuration, provide a comma-separated list of strings, without quotes. Example: `python, myscript.py, arg1, arg2`).
+* Azure portal: In the **Command override** property of the container configuration, provide a comma-separated list of strings, without quotes. Example: `python, myscript.py, arg1, arg2`.
-* Resource Manager template or YAML file, or one of the Azure SDKs: Specify the command line property as an array of strings. Example: the JSON array `["python", "myscript.py", "arg1", "arg2"]` in a Resource Manager template.
+* Resource Manager template or YAML file, or one of the Azure SDKs: Specify the command line property as an array of strings. Example: the JSON array `["python", "myscript.py", "arg1", "arg2"]` in a Resource Manager template.
If you're familiar with [Dockerfile](https://docs.docker.com/engine/reference/builder/) syntax, this format is similar to the *exec* form of the CMD instruction. ### Examples
-| | Azure CLI | Portal | Template |
+| | Azure CLI | Portal | Template |
| - | - | - | - |
| **Single command** | `--command-line "python myscript.py arg1 arg2"` | **Command override**: `python, myscript.py, arg1, arg2` | `"command": ["python", "myscript.py", "arg1", "arg2"]` |
| **Multiple commands** | `--command-line "/bin/bash -c 'mkdir test; touch test/myfile; tail -f '"` | **Command override**: `/bin/bash, -c, mkdir test; touch test/myfile; tail -f ` | `"command": ["/bin/bash", "-c", "mkdir test; touch test/myfile; tail -f "]` |
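Putting the Azure CLI column together into a full command, a hedged sketch with placeholder image, group, and container names:

```azurecli-interactive
az container create \
  --resource-group myResourceGroup \
  --name mycontainer \
  --image <image-name> \
  --restart-policy OnFailure \
  --command-line "python myscript.py arg1 arg2"
```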
container-instances Container Instances Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-troubleshooting.md
Last updated 06/17/2022-+ # Troubleshoot common issues in Azure Container Instances
When running container groups without long-running processes you may see repeate
az container create -g MyResourceGroup --name myapp --image ubuntu --command-line "tail -f " ```
-```azurecli-interactive
+```azurecli-interactive
## Deploying a Windows container az container create -g myResourceGroup --name mywindowsapp --os-type Windows --image mcr.microsoft.com/windows/servercore:ltsc2019 --command-line "ping -t localhost"
If you want to confirm that Azure Container Instances can listen on the port you
--ip-address Public --ports 9000 \ --environment-variables 'PORT'='9000' ```
-1. Find the IP address of the container group in the command output of `az container create`. Look for the value of **ip**.
-1. After the container is provisioned successfully, browse to the IP address and port of the container application in your browser, for example: `192.0.2.0:9000`.
+1. Find the IP address of the container group in the command output of `az container create`. Look for the value of **ip**.
+1. After the container is provisioned successfully, browse to the IP address and port of the container application in your browser, for example: `192.0.2.0:9000`.
You should see the "Welcome to Azure Container Instances!" message displayed by the web app. 1. When you're done with the container, remove it using the `az container delete` command:
If you want to confirm that Azure Container Instances can listen on the port you
az container delete --resource-group myResourceGroup --name mycontainer ```
-## Issues during confidential container group deployments
+## Issues during confidential container group deployments
-### Policy errors while using custom CCE policy
+### Policy errors while using custom CCE policy
-Custom CCE policies must be generated the [Azure CLI confcom extension](https://github.com/Azure/azure-cli-extensions/blob/main/src/confcom/azext_confcom/README.md). Before generating the policy, ensure that all properties specified in your ARM template are valid and match what you expect to be represented in a confidential computing policy. Some properties to validate include the container image, environment variables, volume mounts, and container commands.
+Custom CCE policies must be generated with the [Azure CLI confcom extension](https://github.com/Azure/azure-cli-extensions/blob/main/src/confcom/azext_confcom/README.md). Before generating the policy, ensure that all properties specified in your ARM template are valid and match what you expect to be represented in a confidential computing policy. Some properties to validate include the container image, environment variables, volume mounts, and container commands.
-### Missing hash from policy
+### Missing hash from policy
-The Azure CLI confcom extension will use cached images on your local machine which may not match those that are available remotely which can result in layer mismatch when the policy is validated. Please ensure that you remove any old images and pull the latest container images to your local environment. Once you are sure that you have the latest SHA, you should regenerate the CCE policy.
+The Azure CLI confcom extension uses cached images on your local machine, which may not match the images available remotely and can result in a layer mismatch when the policy is validated. Remove any old images and pull the latest container images to your local environment. Once you're sure that you have the latest SHA, regenerate the CCE policy.
### Process/container terminated with exit code: 139
-This exit code occurs due to limitations with the Ubuntu Version 22.04 base image. The recommendation is to use a different base image to resolve this issue.
+This exit code occurs due to limitations with the Ubuntu Version 22.04 base image. The recommendation is to use a different base image to resolve this issue.
## Next steps
container-instances Container Instances Tutorial Azure Function Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-tutorial-azure-function-trigger.md
Last updated 06/17/2022-+ # Tutorial: Use an HTTP-triggered Azure function to create a container group [Azure Functions](../azure-functions/functions-overview.md) is a serverless compute service that can run scripts or code in response to a variety of events, such as an HTTP request, a timer, or a message in an Azure Storage queue.
-In this tutorial, you create an Azure function that takes an HTTP request and triggers deployment of a [container group](container-instances-container-groups.md). This example shows the basics of using Azure Functions to automatically create resources in Azure Container Instances. Modify or extend the example for more complex scenarios or other event triggers.
+In this tutorial, you create an Azure function that takes an HTTP request and triggers deployment of a [container group](container-instances-container-groups.md). This example shows the basics of using Azure Functions to automatically create resources in Azure Container Instances. Modify or extend the example for more complex scenarios or other event triggers.
You learn how to:
This article assumes you publish the project using the name *myfunctionapp*, in
## Enable an Azure-managed identity in the function app
-The following commands enable a system-assigned [managed identity](../app-service/overview-managed-identity.md?toc=/azure/azure-functions/toc.json#add-a-system-assigned-identity) in your function app. The PowerShell host running the app can automatically authenticate to Azure using this identity, enabling functions to take actions on Azure services to which the identity is granted access. In this tutorial, you grant the managed identity permissions to create resources in the function app's resource group.
+The following commands enable a system-assigned [managed identity](../app-service/overview-managed-identity.md?toc=/azure/azure-functions/toc.json#add-a-system-assigned-identity) in your function app. The PowerShell host running the app can automatically authenticate to Azure using this identity, enabling functions to take actions on Azure services to which the identity is granted access. In this tutorial, you grant the managed identity permissions to create resources in the function app's resource group.
[Add an identity](../app-service/overview-managed-identity.md?tabs=ps%2Cdotnet) to the function app:
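A minimal Azure CLI sketch of enabling the identity and granting it rights in the resource group (the article may use Azure PowerShell here; the resource group name and the Contributor role are assumptions):

```azurecli-interactive
# Enable the system-assigned identity and capture its principal ID.
principalId=$(az functionapp identity assign \
  --name myfunctionapp \
  --resource-group <resource-group> \
  --query principalId --output tsv)

# Allow the identity to create resources (container groups) in the resource group.
az role assignment create \
  --assignee-object-id $principalId \
  --assignee-principal-type ServicePrincipal \
  --role Contributor \
  --scope $(az group show --name <resource-group> --query id --output tsv)
```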
if ($name) {
``` This example creates a container group consisting of a single container instance running the `alpine` image. The container runs a single `echo` command and then terminates. In a real-world example, you might trigger creation of one or more container groups for running a batch job.
-
+ ## Test function app locally Ensure that the function runs locally before republishing the function app project to Azure. When run locally, the function doesn't create Azure resources. However, you can test the function flow with and without passing a name value in a query string. To debug the function, see [Debug PowerShell Azure Functions locally](../azure-functions/functions-debug-powershell-local.md).
https://myfunctionapp.azurewebsites.net/api/HttpTrigger
### Run function without passing a name
-As a first test, run the `curl` command and pass the function URL without appending a `name` query string.
+As a first test, run the `curl` command and pass the function URL without appending a `name` query string.
```bash curl --verbose "https://myfunctionapp.azurewebsites.net/api/HttpTrigger"
The function returns status code 200 and the text `This HTTP triggered function
> Host: myfunctionapp.azurewebsites.net > User-Agent: curl/7.64.1 > Accept: */*
->
+>
* Connection state changed (MAX_CONCURRENT_STREAMS updated)! < HTTP/1.1 200 OK < Content-Length: 135 < Content-Type: text/plain; charset=utf-8 < Request-Context: appId=cid-v1:d0bd0123-f713-4579-8990-bb368a229c38 < Date: Wed, 10 Jun 2020 17:50:27 GMT
-<
+<
* Connection #0 to host myfunctionapp.azurewebsites.net left intact This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.* Closing connection 0 ```
The function returns status code 200 and triggers the creation of the container
> Host: myfunctionapp.azurewebsites.net > User-Agent: curl/7.64.1 > Accept: */*
->
+>
< HTTP/1.1 200 OK < Content-Length: 92 < Content-Type: text/plain; charset=utf-8 < Request-Context: appId=cid-v1:d0bd0123-f713-4579-8990-bb368a229c38 < Date: Wed, 10 Jun 2020 17:54:31 GMT
-<
+<
* Connection #0 to host myfunctionapp.azurewebsites.net left intact This HTTP triggered function executed successfully. Started container group mycontainergroup* Closing connection 0 ```
Verify that the container ran with the [Get-AzContainerInstanceLog][get-azcontai
```azurecli-interactive Get-AzContainerInstanceLog -ResourceGroupName myfunctionapp `
- -ContainerGroupName mycontainergroup
+ -ContainerGroupName mycontainergroup
``` Sample output:
In this tutorial, you created an Azure function that takes an HTTP request and t
For a detailed example to launch and monitor a containerized job, see the blog post [Event-Driven Serverless Containers with PowerShell Azure Functions and Azure Container Instances](https://dev.to/azure/event-driven-serverless-containers-with-powershell-azure-functions-and-azure-container-instances-e9b) and accompanying [code sample](https://github.com/anthonychu/functions-powershell-run-aci).
-See the [Azure Functions documentation](../azure-functions/index.yml) for detailed guidance on creating Azure functions and publishing a functions project.
+See the [Azure Functions documentation](../azure-functions/index.yml) for detailed guidance on creating Azure functions and publishing a functions project.
<!-- IMAGES -->
container-instances Container Instances Tutorial Deploy Confidential Containers Cce Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-tutorial-deploy-confidential-containers-cce-arm.md
Last updated 05/23/2023-+ # Tutorial: Create an ARM template for a confidential container deployment with custom confidential computing enforcement policy
-Confidential containers on ACI is a SKU on the serverless platform that enables customers to run container applications in a hardware-based and attested trusted execution environment (TEE), which can protect data in use and provides in-memory encryption via Secure Nested Paging.
+Confidential containers on ACI is a SKU on the serverless platform that enables customers to run container applications in a hardware-based and attested trusted execution environment (TEE), which can protect data in use and provides in-memory encryption via Secure Nested Paging.
In this article, you'll:

> [!div class="checklist"]
> * Create an ARM template for a confidential container group
> * Generate a confidential computing enforcement (CCE) policy
-> * Deploy the confidential container group to Azure
+> * Deploy the confidential container group to Azure
## Before you begin
In this article, you'll:
In this tutorial, you deploy a hello world application that generates a hardware attestation report. You start by creating an ARM template with a container group resource to define the properties of this application. You'll use this ARM template with the Azure CLI confcom tooling to generate a confidential computing enforcement (CCE) policy for attestation. In this tutorial, we use this [ARM template](https://raw.githubusercontent.com/Azure-Samples/aci-confidential-hello-world/main/template.json?token=GHSAT0AAAAAAB5B6SJ7VUYU3G6MMQUL7KKKY7QBZBA). To view the source code for this application, visit [ACI Confidential Hello World](https://aka.ms/ccacihelloworld).
-> [!NOTE]
-> The ccePolicy parameter of the template is blank and needs to be updated based on the next step of this tutorial.
+> [!NOTE]
+> The ccePolicy parameter of the template is blank and needs to be updated based on the next step of this tutorial.
-There are two properties added to the Azure Container Instance resource definition to make the container group confidential:
+There are two properties added to the Azure Container Instance resource definition to make the container group confidential:
-1. **sku**: The SKU property enables you to select between confidential and standard container group deployments. If this property isn't added, the container group will be deployed as standard SKU.
+1. **sku**: The SKU property enables you to select between confidential and standard container group deployments. If this property isn't added, the container group will be deployed as standard SKU.
2. **confidentialComputeProperties**: The confidentialComputeProperties object enables you to pass in a custom confidential computing enforcement policy for attestation of your container group. If this object isn't added to the resource, there's no validation of the software components running within the container group.

Use your preferred text editor to save this ARM template on your local machine as **template.json**.
You can see under **confidentialComputeProperties**, we have left a blank **cceP
} ```
-## Create a custom CCE Policy
+## Create a custom CCE Policy
With the ARM template that you've crafted and the Azure CLI confcom extension, you can generate a custom CCE policy. The CCE policy is used for attestation. The tool takes the ARM template as an input to generate the policy. The policy enforces the specific container images, environment variables, mounts, and commands, which can then be validated when the container group starts up. For more information on the Azure CLI confcom extension, see [Azure CLI confcom extension](https://github.com/Azure/azure-cli-extensions/blob/main/src/confcom/azext_confcom/README.md).
-1. To generate the CCE policy, you'll run the following command using the ARM template as input:
+1. To generate the CCE policy, you'll run the following command using the ARM template as input:
```azurecli-interactive
az confcom acipolicygen -a .\template.json --print-policy
- ```
+ ```
- When this command completes, you should see a Base 64 string generated as output in the format seen below. This string is the CCE policy that you will copy and paste into your ARM template under the ccePolicy property.
+ When this command completes, you should see a Base 64 string generated as output in the format seen below. This string is the CCE policy that you will copy and paste into your ARM template under the ccePolicy property.
```output cGFja2FnZSBwb2xpY3kKCmFwaV9zdm4gOj0gIjAuOS4wIgoKaW1wb3J0IGZ1dHVyZS5rZXl3b3Jkcy5ldmVyeQppbXBvcnQgZnV0dXJlLmtleXdvcmRzLmluCgpmcmFnbWVudHMgOj0gWwpdCgpjb250YWluZXJzIDo9IFsKICAgIHsKICAgICAgICAiY29tbWFuZCI6IFsiL3BhdXNlIl0sCiAgICAgICAgImVudl9ydWxlcyI6IFt7InBhdHRlcm4iOiAiUEFUSD0vdXNyL2xvY2FsL3NiaW46L3Vzci9sb2NhbC9iaW46L3Vzci9zYmluOi91c3IvYmluOi9zYmluOi9iaW4iLCAic3RyYXRlZ3kiOiAic3RyaW5nIiwgInJlcXVpcmVkIjogdHJ1ZX0seyJwYXR0ZXJuIjogIlRFUk09eHRlcm0iLCAic3RyYXRlZ3kiOiAic3RyaW5nIiwgInJlcXVpcmVkIjogZmFsc2V9XSwKICAgICAgICAibGF5ZXJzIjogWyIxNmI1MTQwNTdhMDZhZDY2NWY5MmMwMjg2M2FjYTA3NGZkNTk3NmM3NTVkMjZiZmYxNjM2NTI5OTE2OWU4NDE1Il0sCiAgICAgICAgIm1vdW50cyI6IFtdLAogICAgICAgICJleGVjX3Byb2Nlc3NlcyI6IFtdLAogICAgICAgICJzaWduYWxzIjogW10sCiAgICAgICAgImFsbG93X2VsZXZhdGVkIjogZmFsc2UsCiAgICAgICAgIndvcmtpbmdfZGlyIjogIi8iCiAgICB9LApdCmFsbG93X3Byb3BlcnRpZXNfYWNjZXNzIDo9IHRydWUKYWxsb3dfZHVtcF9zdGFja3MgOj0gdHJ1ZQphbGxvd19ydW50aW1lX2xvZ2dpbmcgOj0gdHJ1ZQphbGxvd19lbnZpcm9ubWVudF92YXJpYWJsZV9kcm9wcGluZyA6PSB0cnVlCmFsbG93X3VuZW5jcnlwdGVkX3NjcmF0Y2ggOj0gdHJ1ZQoKCm1vdW50X2RldmljZSA6PSB7ICJhbGxvd2VkIiA6IHRydWUgfQp1bm1vdW50X2RldmljZSA6PSB7ICJhbGxvd2VkIiA6IHRydWUgfQptb3VudF9vdmVybGF5IDo9IHsgImFsbG93ZWQiIDogdHJ1ZSB9CnVubW91bnRfb3ZlcmxheSA6PSB7ICJhbGxvd2VkIiA6IHRydWUgfQpjcmVhdGVfY29udGFpbmVyIDo9IHsgImFsbG93ZWQiIDogdHJ1ZSB9CmV4ZWNfaW5fY29udGFpbmVyIDo9IHsgImFsbG93ZWQiIDogdHJ1ZSB9CmV4ZWNfZXh0ZXJuYWwgOj0geyAiYWxsb3dlZCIgOiB0cnVlIH0Kc2h1dGRvd25fY29udGFpbmVyIDo9IHsgImFsbG93ZWQiIDogdHJ1ZSB9CnNpZ25hbF9jb250YWluZXJfcHJvY2VzcyA6PSB7ICJhbGxvd2VkIiA6IHRydWUgfQpwbGFuOV9tb3VudCA6PSB7ICJhbGxvd2VkIiA6IHRydWUgfQpwbGFuOV91bm1vdW50IDo9IHsgImFsbG93ZWQiIDogdHJ1ZSB9CmdldF9wcm9wZXJ0aWVzIDo9IHsgImFsbG93ZWQiIDogdHJ1ZSB9CmR1bXBfc3RhY2tzIDo9IHsgImFsbG93ZWQiIDogdHJ1ZSB9CnJ1bnRpbWVfbG9nZ2luZyA6PSB7ICJhbGxvd2VkIiA6IHRydWUgfQpsb2FkX2ZyYWdtZW50IDo9IHsgImFsbG93ZWQiIDogdHJ1ZSB9CnNjcmF0Y2hfbW91bnQgOj0geyAiYWxsb3dlZCIgOiB0cnVlIH0Kc2NyYXRjaF91bm1vdW50IDo9IHsgImFsbG93ZWQiIDogdHJ1ZSB9CnJlYXNvbiA6PSB7ImVycm9ycyI6IGRhdGEuZnJhbWV3b3JrLmVycm9yc30K
With the ARM template that you've crafted and the Azure CLI confcom extension, y
![Screenshot of Build your own template in the editor button on deployment screen, PNG.](./media/container-instances-confidential-containers-tutorials/confidential-containers-cce-build-template.png)
-1. Select **Load file** and upload **template.json**, which you've modified by adding the CCE policy you generated in the previous steps.
+1. Select **Load file** and upload **template.json**, which you've modified by adding the CCE policy you generated in the previous steps.
![Screenshot of Load file button on template screen, PNG.](./media/container-instances-confidential-containers-tutorials/confidential-containers-cce-load-file.png)
-1. Click **Save**.
+1. Click **Save**.
1. Select or enter the following values.
Use the Azure portal or a tool such as the [Azure CLI](container-instances-quick
![Screenshot of overview page for container group instance, PNG.](media/container-instances-confidential-containers-tutorials/confidential-containers-cce-portal.png)
-3. Once its status is *Running*, navigate to the IP address in your browser.
+3. Once its status is *Running*, navigate to the IP address in your browser.
![Screenshot of browser view of app deployed using Azure Container Instances, PNG.](media/container-instances-confidential-containers-tutorials/confidential-containers-aci-hello-world.png)

The presence of the attestation report below the Azure Container Instances logo confirms that the container is running on hardware that supports a TEE. If you deploy to hardware that does not support a TEE, for example by choosing a region where the ACI Confidential SKU is not available, no attestation report will be shown.
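If you'd rather grab the IP address from the command line than from the portal overview page, a minimal sketch follows; the resource group and container group names are assumptions, so substitute your own:

```azurecli-interactive
# Query the public IP address of the deployed container group.
az container show \
  --resource-group myResourceGroup \
  --name myConfidentialContainerGroup \
  --query ipAddress.ip \
  --output tsv
```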
-## Next Steps
+## Next Steps
-Now that you have deployed a confidential container group on ACI, you can learn more about how policies are enforced.
+Now that you have deployed a confidential container group on ACI, you can learn more about how policies are enforced.
* [Confidential computing enforcement policies overview](./container-instances-confidential-overview.md)
* [Azure CLI confcom extension examples](https://github.com/Azure/azure-cli-extensions/blob/main/src/confcom/azext_confcom/README.md)
container-instances Container Instances Tutorial Prepare Acr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-tutorial-prepare-acr.md
Last updated 06/17/2022-+ # Tutorial: Create an Azure container registry and push a container image
container-instances Container Instances Tutorial Prepare App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-tutorial-prepare-app.md
Last updated 06/17/2022-+ # Tutorial: Create a container image for deployment to Azure Container Instances
container-instances Container Instances Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-vnet.md
Last updated 06/17/2022-+ # Deploy container instances into an Azure virtual network

[Azure Virtual Network](../virtual-network/virtual-networks-overview.md) provides secure, private networking for your Azure and on-premises resources. By deploying container groups into an Azure virtual network, your containers can communicate securely with other resources in the virtual network.
-This article shows how to use the [az container create][az-container-create] command in the Azure CLI to deploy container groups to either a new virtual network or an existing virtual network.
+This article shows how to use the [az container create][az-container-create] command in the Azure CLI to deploy container groups to either a new virtual network or an existing virtual network.
> [!IMPORTANT]
> Before deploying container groups in virtual networks, we suggest checking the limitations first. For networking scenarios and limitations, see [Virtual network scenarios and resources for Azure Container Instances](container-instances-virtual-network-concepts.md).

> [!IMPORTANT]
-> Container group deployment to a virtual network is generally available for Linux and Windows containers, in most regions where Azure Container Instances is available. For details, see [available-regions][available-regions].
+> Container group deployment to a virtual network is generally available for Linux and Windows containers, in most regions where Azure Container Instances is available. For details, see [available-regions][available-regions].
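As a rough orientation before the detailed steps, here's a minimal sketch of deploying to a new virtual network; the resource names and address prefixes are assumptions, not values from this article:

```azurecli-interactive
# Create a container group in a new virtual network and delegated subnet.
az container create \
  --resource-group myResourceGroup \
  --name appcontainer \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --vnet aci-vnet \
  --vnet-address-prefix 10.0.0.0/16 \
  --subnet aci-subnet \
  --subnet-address-prefix 10.0.0.0/24
```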
[!INCLUDE [network profile callout](./includes/network-profile/network-profile-callout.md)]
The log output should show that `wget` was able to connect and download the inde
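If you want to re-check that output later, here's a short sketch for pulling the logs with the CLI; the resource group and container group names are assumptions:

```azurecli-interactive
# Fetch the container's log output to confirm the wget test succeeded.
az container logs --resource-group myResourceGroup --name appcontainer
```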
### Example - YAML
-You can also deploy a container group to an existing virtual network by using a YAML file, a [Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.containerinstance/aci-vnet), or another programmatic method such as with the Python SDK.
+You can also deploy a container group to an existing virtual network by using a YAML file, a [Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.containerinstance/aci-vnet), or another programmatic method such as with the Python SDK.
For example, when using a YAML file, you can deploy to a virtual network with a subnet delegated to Azure Container Instances. Specify the following properties:
container-registry Container Registry Artifact Streaming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-artifact-streaming.md
+
+ Title: "Artifact streaming in Azure Container Registry (Preview)"
+description: "Artifact streaming is a feature in Azure Container Registry to enhance and supercharge managing, scaling, and deploying artifacts through containerized platforms."
++++ Last updated : 12/14/2023+
+#customer intent: As a developer, I want artifact streaming capabilities so that I can efficiently deliver and serve containerized applications to end-users in real-time.
++
+# Artifact streaming in Azure Container Registry (Preview)
+
+Artifact streaming is a feature in Azure Container Registry that allows you to store container images within a single registry, manage, and stream the container images to Azure Kubernetes Service (AKS) clusters in multiple regions. This feature is designed to accelerate containerized workloads for Azure customers using AKS. With artifact streaming, you can easily scale workloads without having to wait for slow pull times for your node.
+
+## Use cases
+
+Here are a few scenarios for using artifact streaming:
+
+**Deploying containerized applications to multiple regions**: With artifact streaming, you can store container images within a single registry and manage and stream them to AKS clusters in multiple regions. Artifact streaming deploys container applications to multiple regions without having to create multiple registries or enable geo-replication.
+
+**Reducing image pull latency**: Artifact streaming can reduce time to pod readiness by over 15%, depending on the size of the image, and it works best for images < 30 GB. The reduced image pull latency and faster container startup benefit software developers and system architects.
+
+**Effective scaling of containerized applications**: Artifact streaming provides the opportunity to design, build, and deploy containerized applications at a high scale.
+
+## Artifact streaming aspects
+
+Here are some key aspects of artifact streaming:
+
+* Customers with new and existing registries can start artifact streaming for specific repositories or tags.
+
+* Once artifact streaming is started, the original and the streaming artifact will be stored in the customer's ACR.
+
+* If the user decides to turn off artifact streaming for repositories or artifacts, the streaming and the original artifact will still be present.
+
+* If a customer deletes a repository or artifact with artifact streaming and Soft Delete enabled, then both the original and artifact streaming versions will be deleted. However, only the original version will be available on the soft delete blade.
+
+## Availability and pricing information
+
+Artifact streaming is only available in the **Premium** SKU [service tiers](container-registry-skus.md). Artifact streaming may increase the overall registry storage consumption, and customers may be subject to additional storage charges, as outlined in our [pricing](https://azure.microsoft.com/pricing/details/container-registry/), if the consumption exceeds the included 500 GiB Premium SKU threshold.
+
+## Preview limitations
+
+Artifact streaming is currently in preview. The following limitations apply:
+
+* Only images with Linux AMD64 architecture are supported in the preview release.
+* The preview release doesn't support Windows-based container images, and ARM64 images.
+* The preview release partially supports multi-architecture images; only the AMD64 architecture is supported.
+* For creating Ubuntu-based node pools in AKS, choose Ubuntu version 20.04 or higher.
+* For Kubernetes, use Kubernetes version 1.26 or higher.
+* Only premium SKU registries support generating streaming artifacts in the preview release; non-premium SKU registries don't offer this functionality during the preview.
+* Registries encrypted with customer-managed keys (CMK) aren't supported in the preview release.
+* Kubernetes regcred is currently not supported.
+
+## Prerequisites
+
+* You can use the [Azure Cloud Shell][Azure Cloud Shell] or a local installation of the Azure CLI to run the command examples in this article. If you'd like to use it locally, version 2.54.0 or later is required. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][Install Azure CLI].
+
+* Sign in to the [Azure portal](https://ms.portal.azure.com/).
+
+## Start artifact streaming
+
+Start artifact streaming with a series of Azure CLI commands or Azure portal steps for pushing, importing, and generating streaming artifacts for container images in an Azure Container Registry (ACR). These instructions outline the process for creating a *Premium* [SKU](container-registry-skus.md) ACR, importing an image, generating a streaming artifact, and managing the artifact streaming operation. Make sure to replace the placeholders with your actual values where necessary.
+
+### Push/Import the image and generate the streaming artifact - Azure CLI
+
+Artifact streaming is available in the **Premium** container registry service tier. To start Artifact streaming, update a registry using the Azure CLI (version 2.54.0 or above). To install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+
+Start artifact streaming by following these general steps:
+
+>[!NOTE]
+> If you already have a premium container registry, you can skip this step. If your registry is on the Basic or Standard SKU, the following commands will fail.
+> The code is written in Azure CLI and can be executed in an interactive mode.
+> Please note that the placeholders should be replaced with actual values before executing the command.
+
+1. Create a new Azure Container Registry (ACR) using the premium SKU:
+
+ For example, run the [az group create][az-group-create] command to create an Azure Resource Group with name `my-streaming-test` in the West US region and then run the [az acr create][az-acr-create] command to create a premium Azure Container Registry with name `mystreamingtest` in that resource group.
+
+ ```azurecli-interactive
+ az group create -n my-streaming-test -l westus
+ az acr create -n mystreamingtest -g my-streaming-test -l westus --sku premium
+ ```
+
+2. Push or import an image to the registry:
+
+    For example, run the `az configure` command to configure the default ACR and the [az acr import][az-acr-import] command to import a Jupyter Notebook image from Docker Hub into the `mystreamingtest` ACR.
+
+ ```azurecli-interactive
+ az configure --defaults acr="mystreamingtest"
+    az acr import --source docker.io/jupyter/all-spark-notebook:latest -t jupyter/all-spark-notebook:latest
+ ```
+
+3. Create a streaming artifact from the image
+
+ Initiates the creation of a streaming artifact from the specified image.
+
+ For example, run the [az acr artifact-streaming create][az-acr-artifact-streaming-create] commands to create a streaming artifact from the `jupyter/all-spark-notebook:latest` image in the `mystreamingtest` ACR.
+
+ ```azurecli-interactive
+ az acr artifact-streaming create --image jupyter/all-spark-notebook:latest
+ ```
+
+>[!NOTE]
+> An operation ID is generated during the process for future reference to verify the status of the operation.
+
+4. Verify the generated streaming artifact in the Azure CLI.
+
+ For example, run the [az acr manifest list-referrers][az-acr-manifest-list-referrers] command to list the streaming artifacts for the `jupyter/all-spark-notebook:latest` image in the `mystreamingtest` ACR.
+
+ ```azurecli-interactive
+ az acr manifest list-referrers -n jupyter/all-spark-notebook:latest
+ ```
+
+5. Cancel the artifact streaming creation (if needed)
+
+    If the conversion isn't finished yet, you can cancel the streaming artifact creation. Canceling stops the operation.
+
+ For example, run the [az acr artifact-streaming operation cancel][az-acr-artifact-streaming-operation-cancel] command to cancel the conversion operation for the `jupyter/all-spark-notebook:latest` image in the `mystreamingtest` ACR.
+
+ ```azurecli-interactive
+ az acr artifact-streaming operation cancel --repository jupyter/all-spark-notebook --id c015067a-7463-4a5a-9168-3b17dbe42ca3
+ ```
+
+6. Start auto-conversion on the repository
+
+ Start auto-conversion in the repository for newly pushed or imported images. When started, new images pushed into that repository will trigger the generation of streaming artifacts.
+
+ >[!NOTE]
+ > Auto-conversion does not apply to existing images. Existing images can be manually converted.
+
+ For example, run the [az acr artifact-streaming update][az-acr-artifact-streaming-update] command to start auto-conversion for the `jupyter/all-spark-notebook` repository in the `mystreamingtest` ACR.
+
+ ```azurecli-interactive
+ az acr artifact-streaming update --repository jupyter/all-spark-notebook --enable-streaming true
+ ```
+
+7. Verify the streaming conversion progress after pushing a new image `jupyter/all-spark-notebook:newtag` to the above repository.
+
+ For example, run the [az acr artifact-streaming operation show][az-acr-artifact-streaming-operation-show] command to check the status of the conversion operation for the `jupyter/all-spark-notebook:newtag` image in the `mystreamingtest` ACR.
+
+ ```azurecli-interactive
+ az acr artifact-streaming operation show --image jupyter/all-spark-notebook:newtag
+ ```
+
+>[!NOTE]
+> Artifact streaming can work across regions, regardless of whether geo-replication is enabled or not.
+> Artifact streaming can work through a private endpoint and attach to it.
+
+### Push/Import the image and generate the streaming artifact - Azure portal
+
+Artifact streaming is available in the *premium* [SKU](container-registry-skus.md) Azure Container Registry. To start artifact streaming, update a registry using the Azure portal.
+
+Follow the steps to create artifact streaming in the [Azure portal](https://portal.azure.com).
+
+1. Navigate to your Azure Container Registry.
+
+2. In the side menu, under **Services**, select **Repositories**.
+
+3. Select the latest imported image.
+
+4. Convert the image and create artifact streaming in Azure portal.
+
+ > [!div class="mx-imgBorder"]
+ > [![A screenshot of Azure portal with the create streaming artifact button highlighted.](./media/container-registry-artifact-streaming/01-create-artifact-streaming-inline.png)](./media/container-registry-artifact-streaming/01-create-artifact-streaming-expanded.png#lightbox)
++
+5. Check the streaming artifact generated from the image in the **Referrers** tab.
+
+ > [!div class="mx-imgBorder"]
+ > [![A screenshot of Azure portal with the streaming artifact highlighted.](./media/container-registry-artifact-streaming/02-artifact-streaming-generated-inline.png)](./media/container-registry-artifact-streaming/02-artifact-streaming-generated-expanded.png#lightbox)
+
+6. You can also delete the artifact streaming from the repository blade.
+
+ > [!div class="mx-imgBorder"]
+ > [![A screenshot of Azure portal with the delete artifact streaming button highlighted.](./media/container-registry-artifact-streaming/04-delete-artifact-streaming-inline.png)](./media/container-registry-artifact-streaming/04-delete-artifact-streaming-expanded.png#lightbox)
+
+7. You can also enable auto-conversion on the repository blade. Active means auto-conversion is enabled on the repository. Inactive means auto-conversion is disabled on the repository.
+
+ > [!div class="mx-imgBorder"]
+ > [![A screenshot of Azure portal with the start artifact streaming button highlighted.](./media/container-registry-artifact-streaming/03-start-artifact-streaming-inline.png)](./media/container-registry-artifact-streaming/03-start-artifact-streaming-expanded.png#lightbox)
+
+> [!NOTE]
+> The state of artifact streaming in a repository (inactive or active) determines whether newly pushed compatible images will be automatically converted. By default, all repositories are in an inactive state for artifact streaming. This means that when new compatible images are pushed to the repository, artifact streaming will not be triggered, and the images will not be automatically converted. If you want to start automatic conversion of newly pushed images, you need to set the repository's artifact streaming to the active state. Once the repository is in the active state, any new compatible container images that are pushed to the repository will trigger artifact streaming. This will start the automatic conversion of those images.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Troubleshoot artifact streaming](troubleshoot-artifact-streaming.md)
+
+<!-- LINKS - External -->
+[Install Azure CLI]: /cli/azure/install-azure-cli
+[Azure Cloud Shell]: /azure/cloud-shell/quickstart
+[az-group-create]: /cli/azure/group#az-group-create
+[az-acr-import]: /cli/azure/acr#az-acr-import
+[az-acr-artifact-streaming-create]: /cli/azure/acr/artifact-streaming#az-acr-artifact-streaming-create
+[az-acr-manifest-list-referrers]: /cli/azure/acr/manifest#az-acr-manifest-list-referrers
+[az-acr-create]: /cli/azure/acr#az-acr-create
+[az-acr-artifact-streaming-operation-cancel]: /cli/azure/acr/artifact-streaming/operation#az-acr-artifact-streaming-operation-cancel
+[az-acr-artifact-streaming-operation-show]: /cli/azure/acr/artifact-streaming/operation#az-acr-artifact-streaming-operation-show
+[az-acr-artifact-streaming-update]: /cli/azure/acr/artifact-streaming#az-acr-artifact-streaming-update
+
container-registry Container Registry Authentication Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-authentication-managed-identity.md
Title: Authenticate with managed identity description: Provide access to images in your private container registry by using a user-assigned or system-assigned managed Azure identity. -+ Last updated 10/31/2023
-# Use an Azure managed identity to authenticate to an Azure container registry
+# Use an Azure managed identity to authenticate to an Azure container registry
Use a [managed identity for Azure resources](../active-directory/managed-identities-azure-resources/overview.md) to authenticate to an Azure container registry from another Azure resource, without needing to provide or manage registry credentials. For example, set up a user-assigned or system-assigned managed identity on a Linux VM to access container images from your container registry, as easily as you use a public registry. Or, set up an Azure Kubernetes Service cluster to use its [managed identity](../aks/cluster-container-registry-integration.md) to pull container images from Azure Container Registry for pod deployments.
For this article, you learn more about managed identities and how to:
> [!div class="checklist"]
> * Enable a user-assigned or system-assigned identity on an Azure VM
> * Grant the identity access to an Azure container registry
-> * Use the managed identity to access the registry and pull a container image
+> * Use the managed identity to access the registry and pull a container image
### [Azure CLI](#tab/azure-cli)
If you're not familiar with the managed identities for Azure resources feature,
After you set up selected Azure resources with a managed identity, give the identity the access you want to another resource, just like any security principal. For example, assign a managed identity a role with pull, push and pull, or other permissions to a private registry in Azure. (For a complete list of registry roles, see [Azure Container Registry roles and permissions](container-registry-roles.md).) You can give an identity access to one or more resources.
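As a rough sketch of what that role assignment can look like with the Azure CLI (the registry name and the identity's principal ID are placeholders you'd substitute):

```azurecli-interactive
# Look up the registry's resource ID, then grant the identity pull access at registry scope.
registryId=$(az acr show --name mycontainerregistry --query id --output tsv)
az role assignment create --assignee <principalId> --role AcrPull --scope $registryId
```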
-Then, use the identity to authenticate to any [service that supports Microsoft Entra authentication](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication), without any credentials in your code. Choose how to authenticate using the managed identity, depending on your scenario. To use the identity to access an Azure container registry from a virtual machine, you authenticate with Azure Resource Manager.
+Then, use the identity to authenticate to any [service that supports Microsoft Entra authentication](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication), without any credentials in your code. Choose how to authenticate using the managed identity, depending on your scenario. To use the identity to access an Azure container registry from a virtual machine, you authenticate with Azure Resource Manager.
## Create a container registry
$vmParams = @{
    ResourceGroupName = 'MyResourceGroup'
    Name = 'myDockerVM'
    Image = 'UbuntuLTS'
- PublicIpAddressName = 'myPublicIP'
+ PublicIpAddressName = 'myPublicIP'
    GenerateSshKey = $true
    SshKeyName = 'mySSHKey'
}
New-AzRoleAssignment -ObjectId $spID -Scope $resourceID -RoleDefinitionName AcrP
SSH into the Docker virtual machine that's configured with the identity. Run the following Azure CLI commands, using the Azure CLI installed on the VM.
-First, authenticate to the Azure CLI with [az login][az-login], using the identity you configured on the VM. For `<userID>`, substitute the ID of the identity you retrieved in a previous step.
+First, authenticate to the Azure CLI with [az login][az-login], using the identity you configured on the VM. For `<userID>`, substitute the ID of the identity you retrieved in a previous step.
```azurecli-interactive
az login --identity --username <userID>
docker pull mycontainerregistry.azurecr.io/aci-helloworld:v1
SSH into the Docker virtual machine that's configured with the identity. Run the following Azure PowerShell commands, using the Azure PowerShell installed on the VM.
-First, authenticate to the Azure PowerShell with [Connect-AzAccount][connect-azaccount], using the identity you configured on the VM. For `-AccountId` specify a client ID of the identity.
+First, authenticate to the Azure PowerShell with [Connect-AzAccount][connect-azaccount], using the identity you configured on the VM. For `-AccountId` specify a client ID of the identity.
```azurepowershell-interactive
$clientId = (Get-AzUserAssignedIdentity -ResourceGroupName myResourceGroup -Name myACRId).ClientId
docker pull mycontainerregistry.azurecr.io/aci-helloworld:v1
The following [az vm identity assign][az-vm-identity-assign] command configures your Docker VM with a system-assigned identity: ```azurecli-interactive
-az vm identity assign --resource-group myResourceGroup --name myDockerVM
+az vm identity assign --resource-group myResourceGroup --name myDockerVM
```

Use the [az vm show][az-vm-show] command to set a variable to the value of `principalId` (the service principal ID) of the VM's identity, to use in later steps.
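A minimal sketch of that query, assuming the resource group and VM names used earlier in this article:

```azurecli-interactive
# Capture the system-assigned identity's service principal ID for later role assignment.
spID=$(az vm show \
  --resource-group myResourceGroup \
  --name myDockerVM \
  --query identity.principalId \
  --output tsv)
```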
The following [Update-AzVM][update-azvm] command configures your Docker VM with
```azurepowershell-interactive
$vm = Get-AzVM -ResourceGroupName myResourceGroup -Name myDockerVM
-Update-AzVM -ResourceGroupName myResourceGroup -VM $vm -IdentityType SystemAssigned
+Update-AzVM -ResourceGroupName myResourceGroup -VM $vm -IdentityType SystemAssigned
```

Use the [Get-AzVM][get-azvm] command to set a variable to the value of `principalId` (the service principal ID) of the VM's identity, to use in later steps.
container-registry Troubleshoot Artifact Streaming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/troubleshoot-artifact-streaming.md
+
+ Title: "Troubleshoot artifact streaming"
+description: "Troubleshoot artifact streaming in Azure Container Registry to diagnose and resolve with managing, scaling, and deploying artifacts through containerized platforms."
+++ Last updated : 10/31/2023+++
+# Troubleshoot artifact streaming
+
+The troubleshooting steps in this article can help you resolve common issues that you might encounter when using artifact streaming in Azure Container Registry (ACR). These steps and recommendations can help diagnose and resolve issues related to artifact streaming as well as provide insights into the underlying processes and logs for debugging purposes.
+
+## Symptoms
+
+* Conversion operation failed due to an unknown error.
+* Troubleshooting Failed AKS Pod Deployments.
+* Pod conditions indicate "UpgradeIfStreamableDisabled."
+* Using Digest Instead of Tag for Streaming Artifact
+
+## Causes
+
+* Issues with authentication, network latency, image retrieval, streaming operations, or other issues.
+* Issues with image pull or streaming, streaming artifacts configurations, image sources, and resource constraints.
+* Issues with ACR configurations or permissions.
+
+## Conversion operation failed
+
+| Error Code | Error Message | Troubleshooting Info |
+| | - | |
+| UNKNOWN_ERROR | Conversion operation failed due to an unknown error. | Caused by an internal error. A retry helps here. If retry is unsuccessful, contact support. |
+| RESOURCE_NOT_FOUND | Conversion operation failed because the target resource isn't found. | The target image isn't found in the registry. Check for typos in the image digest, and verify that the image wasn't deleted or isn't missing in the target region (for example, replication consistency isn't immediate). |
+| UNSUPPORTED_PLATFORM | Conversion is not currently supported for image platform. | Only linux/amd64 images are initially supported. |
+| NO_SUPPORTED_PLATFORM_FOUND | Conversion is not currently supported for any of the image platforms in the index. | Only linux/amd64 images are initially supported. No image with this platform is found in the target index. |
+| UNSUPPORTED_MEDIATYPE | Conversion is not supported for the image MediaType. | Conversion can only target images with media type: application/vnd.oci.image.manifest.v1+json, application/vnd.oci.image.index.v1+json, application/vnd.docker.distribution.manifest.v2+json or application/vnd.docker.distribution.manifest.list.v2+json |
+| UNSUPPORTED_ARTIFACT_TYPE | Conversion isn't supported for the image ArtifactType. | Streaming Artifacts (Artifact type: application/vnd.azure.artifact.streaming.v1) can't be converted again. |
+| IMAGE_NOT_RUNNABLE | Conversion isn't supported for nonrunnable images. | Only linux/amd64 runnable images are initially supported. |
+
+## Troubleshooting Failed AKS Pod Deployments
+
+If AKS pod deployment fails with an error related to image pulling, like the following example:
+
+```bash
+Failed to pull image "mystreamingtest.azurecr.io/jupyter/all-spark-notebook:latest":
+rpc error: code = Unknown desc = failed to pull and unpack image
+"mystreamingtest.azurecr.io/latestobd/jupyter/all-spark-notebook:latest":
+failed to resolve reference "mystreamingtest.azurecr.io/jupyter/all-spark-notebook:latest":
+unexpected status from HEAD request to http://localhost:8578/v2/jupyter/all-spark-notebook/manifests/latest?ns=mystreamingtest.azurecr.io:503 Service Unavailable
+```
+
+To troubleshoot this issue, you should check the following:
+
+1. Verify that AKS has permissions to access the container registry `mystreamingtest.azurecr.io`.
+1. Ensure that the container registry `mystreamingtest.azurecr.io` is accessible and properly attached to AKS, as shown in the sketch after this list.
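+A hedged sketch of those checks with the Azure CLI; the cluster and resource group names are assumptions:
+
+```azurecli-interactive
+# Validate that the AKS cluster can authenticate to and pull from the registry.
+az aks check-acr \
+  --resource-group myResourceGroup \
+  --name myAKSCluster \
+  --acr mystreamingtest.azurecr.io
+
+# If needed, (re)attach the registry to the cluster.
+az aks update \
+  --resource-group myResourceGroup \
+  --name myAKSCluster \
+  --attach-acr mystreamingtest
+```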
+
+## Checking for "UpgradeIfStreamableDisabled" pod condition
+
+If the AKS pod condition shows "UpgradeIfStreamableDisabled," check if the image is from an Azure Container Registry.
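+One way to inspect the pod's conditions and events and confirm which registry the image comes from; the pod name and namespace are placeholders:
+
+```bash
+# Show the pod's image reference, conditions, and recent events.
+kubectl describe pod <pod-name> --namespace <namespace>
+```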
+
+## Using digest instead of tag for streaming artifact
+
+If you deploy the streaming artifact using a digest instead of a tag (for example, mystreamingtest.azurecr.io/jupyter/all-spark-notebook@sha256:4ef83ea6b0f7763c230e696709d8d8c398e21f65542db36e82961908bcf58d18), the AKS pod event and condition messages won't include streaming-related information. However, you still see fast container startup, because the underlying container engine streams the image to AKS when it detects that the image content is available for streaming.
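+If you need the digest that corresponds to a tag, one possible way to look it up (registry and repository names follow the example above):
+
+```azurecli-interactive
+# Resolve the tag to its manifest digest.
+az acr repository show \
+  --name mystreamingtest \
+  --image jupyter/all-spark-notebook:latest \
+  --query digest \
+  --output tsv
+```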
+
+## Related content
+
+> [!div class="nextstepaction"]
+> [Artifact streaming](./container-registry-artifact-streaming.md)
container-registry Tutorial Artifact Streaming Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-artifact-streaming-cli.md
- Title: "Enable Artifact Streaming- Azure CLI"
-description: "Enable Artifact Streaming in Azure Container Registry using Azure CLI commands to enhance and supercharge managing, scaling, and deploying artifacts through containerized platforms."
---- Previously updated : 10/31/2023-
-# Artifact Streaming - Azure CLI
-
-Start Artifact Streaming with a series of Azure CLI commands for pushing, importing, and generating streaming artifacts for container images in an Azure Container Registry (ACR). These commands outline the process for creating a *Premium* [SKU](container-registry-skus.md) ACR, importing an image, generating a streaming artifact, and managing the artifact streaming operation. Make sure to replace the placeholders with your actual values where necessary.
-
-This article is part two in a four-part tutorial series. In this tutorial, you learn how to:
-
-> [!div class="checklist"]
-> * Push/Import the image and generate the streaming artifact - Azure CLI.
-
-## Prerequisites
-
-* You can use the [Azure Cloud Shell][Azure Cloud Shell] or a local installation of the Azure CLI to run the command examples in this article. If you'd like to use it locally, version 2.54.0 or later is required. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][Install Azure CLI].
-
-## Push/Import the image and generate the streaming artifact - Azure CLI
-
-Artifact Streaming is available in the **Premium** container registry service tier. To enable Artifact Streaming, update a registry using the Azure CLI (version 2.54.0 or above). To install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
-
-Enable Artifact Streaming by following these general steps:
-
->[!NOTE]
-> If you already have a premium container registry, you can skip this step. If the user is on Basic of Standard SKUs, the following commands will fail.
-> The code is written in Azure CLI and can be executed in an interactive mode.
-> Please note that the placeholders should be replaced with actual values before executing the command.
-
-Use the following command to create an Azure Resource Group with name `my-streaming-test` in the West US region and a premium Azure Container Registry with name `mystreamingtest` in that resource group.
-
-```azurecli-interactive
-az group create -n my-streaming-test -l westus
-az acr create -n mystreamingtest -g my-streaming-test -l westus --sku premium
-```
-
-To push or import an image to the registry, run the `az configure` command to configure the default ACR and `az acr import` command to import a Jupyter Notebook image from Docker Hub into the `mystreamingtest` ACR.
-
-```azurecli-interactive
-az configure --defaults acr="mystreamingtest"
-az acr import -source docker.io/jupyter/all-spark-notebook:latest -t jupyter/all-spark-notebook:latest
-```
-
-Use the following command to create a streaming artifact from the specified image. This example creates a streaming artifact from the `jupyter/all-spark-notebook:latest` image in the `mystreamingtest` ACR.
-
-```azurecli-interactive
-az acr artifact-streaming create --image jupyter/all-spark-notebook:latest
-```
-
-To verify the generated Artifact Streaming in the Azure CLI, run the `az acr manifest list-referrers` command. This command lists the streaming artifacts for the `jupyter/all-spark-notebook:latest` image in the `mystreamingtest` ACR.
-
-```azurecli-interactive
-az acr manifest list-referrers -n jupyter/all-spark-notebook:latest
-```
-
-If you need to cancel the streaming artifact creation, run the `az acr artifact-streaming operation cancel` command. This command stops the operation. For example, this command cancels the conversion operation for the `jupyter/all-spark-notebook:latest` image in the `mystreamingtest` ACR.
-
-```azurecli-interactive
-az acr artifact-streaming operation cancel --repository jupyter/all-spark-notebook --id c015067a-7463-4a5a-9168-3b17dbe42ca3
-```
-
-Enable auto-conversion in the repository for newly pushed or imported images. When enabled, new images pushed into that repository trigger the generation of streaming artifacts.
-
->[!NOTE]
-Auto-conversion does not apply to existing images. Existing images can be manually converted.
-
-For example, run the `az acr artifact-streaming update` command to enable auto-conversion for the `jupyter/all-spark-notebook` repository in the `mystreamingtest` ACR.
-
-```azurecli-interactive
-az acr artifact-streaming update --repository jupyter/all-spark-notebook --enable-streaming true
-```
-
-Use the `az acr artifact-streaming operation show` command to verify the streaming conversion progress. For example, this command checks the status of the conversion operation for the `jupyter/all-spark-notebook:newtag` image in the `mystreamingtest` ACR.
-
-```azurecli-interactive
-az acr artifact-streaming operation show --image jupyter/all-spark-notebook:newtag
-```
-
->[!NOTE]
-> Artifact Streaming can work across regions, regardless of whether geo-replication is enabled or not.
-> Artifact Streaming can work through a private endpoint and attach to it.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Enable Artifact Streaming- Portal](tutorial-artifact-streaming-portal.md)
-
-<!-- LINKS - External -->
-[Install Azure CLI]: /cli/azure/install-azure-cli
-[Azure Cloud Shell]: /azure/cloud-shell/quickstart
container-registry Tutorial Artifact Streaming Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-artifact-streaming-portal.md
- Title: "Enable Artifact Streaming- Portal"
-description: "Enable Artifact Streaming is a feature in Azure Container Registry in Azure portal to enhance and supercharge managing, scaling, and deploying artifacts through containerized platforms."
--- Previously updated : 10/31/2023--
-# Enable Artifact Streaming - Azure portal
-
-Start artifact streaming with a series of Azure portal steps for pushing, importing, and generating streaming artifacts for container images in an Azure Container Registry (ACR). These steps outline the process for creating a *premium* [SKU](container-registry-skus.md) ACR, importing an image, generating a streaming artifact, and managing the artifact streaming operation. Make sure to replace the placeholders with your actual values where necessary.
-
-This article is part three in a four-part tutorial series. In this tutorial, you learn how to:
-
-> [!div class="checklist"]
-> * Push/Import the image and generate the streaming artifact - Azure portal.
-
-## Prerequisites
-
-* Sign in to the [Azure portal](https://ms.portal.azure.com/).
-
-## Push/Import the image and generate the streaming artifact - Azure portal
-
-Complete the following steps to create artifact streaming in the [Azure portal](https://portal.azure.com).
-
-1. Navigate to your Azure Container Registry.
-
-1. In the side **Menu**, under the **Services**, select **Repositories**.
-
-1. Select the latest imported image.
-
-1. Convert the image and create artifact streaming in Azure portal.
-
- [ ![A screenshot of Azure portal with the create streaming artifact button highlighted](./media/container-registry-artifact-streaming/01-create-artifact-streaming-inline.png)](./media/container-registry-artifact-streaming/01-create-artifact-streaming-expanded.png#lightbox)
-
-1. Check the streaming artifact generated from the image in Referrers tab.
-
- [ ![A screenshot of Azure portal with the streaming artifact highlighted.](./media/container-registry-artifact-streaming/02-artifact-streaming-generated-inline.png) ](./media/container-registry-artifact-streaming/02-artifact-streaming-generated-expanded.png#lightbox)
-
-1. You can also delete the Artifact streaming from the repository blade.
-
- [ ![A screenshot of Azure portal with the delete artifact streaming button higlighted](./media/container-registry-artifact-streaming/04-delete-artifact-streaming-inline.png) ](./media/container-registry-artifact-streaming/04-delete-artifact-streaming-expanded.png#lightbox)
-
-1. You can also enable auto-conversion on the repository blade. Active means auto-conversion is enabled on the repository. Inactive means auto-conversion is disabled on the repository.
-
- [ ![A screenshot of Azure portal with the start artifact streaming button highlighted](./media/container-registry-artifact-streaming/03-start-artifact-streaming-inline.png) ](./media/container-registry-artifact-streaming/03-start-artifact-streaming-expanded.png#lightbox)
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Troubleshoot Artifact Streaming](tutorial-artifact-streaming-troubleshoot.md)
container-registry Tutorial Artifact Streaming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-artifact-streaming.md
- Title: "Tutorial: Artifact Streaming in Azure Container Registry (Preview)"
-description: "Artifact Streaming is a feature in Azure Container Registry to enhance and supercharge managing, scaling, and deploying artifacts through containerized platforms."
--- Previously updated : 10/31/2023-
-#customer intent: As a developer, I want artifact streaming capabilities so that I can efficiently deliver and serve containerized applications to end-users in real-time.
--
-# Tutorial: Artifact Streaming in Azure Container Registry (Preview)
-
-Azure Container Registry (ACR) artifact streaming is designed to accelerate containerized workloads for Azure customers using Azure Kubernetes Service (AKS). Artifact streaming empowers customers to easily scale workloads without having to wait for slow pull times for their node.
-
-For example, consider the scenario where you have a containerized application that you want to deploy to multiple regions. Traditionally, you have to create multiple container registries and enable geo-replication to ensure that your container images are available in all regions. This can be time-consuming and can degrade performance of the application.
-
-Leverage artifact streaming to store container images within a single registry and manage and stream container images to Azure Kubernetes Service (AKS) clusters in multiple regions. Artifact streaming deploys container applications to multiple regions without having to create multiple registries or enable geo-replication.
-
-Artifact streaming is only available in the **Premium** SKU [service tiers](container-registry-skus.md)
-
-This article is part one in a four-part tutorial series. In this tutorial, you learn how to:
-
-* [Artifact Streaming (Preview)](tutorial-artifact-streaming.md)
-* [Artifact Streaming - Azure CLI](tutorial-artifact-streaming-cli.md)
-* [Artifact Streaming - Azure portal](tutorial-artifact-streaming-portal.md)
-* [Troubleshoot Artifact Streaming](tutorial-artifact-streaming-troubleshoot.md)
-
-## Preview limitations
-
-Artifact streaming is currently in preview. The following limitations apply:
-
-* Only images with Linux AMD64 architecture are supported in the preview release.
-* The preview release doesn't support Windows-based container images and ARM64 images.
-* The preview release partially supports multi-architecture images (only AMD64 architecture is enabled).
-* For creating Ubuntu based node pools in AKS, choose Ubuntu version 20.04 or higher.
-* For Kubernetes, use Kubernetes version 1.26 or higher or k8s version > 1.25.
-* Only premium SKU registries support generating streaming artifacts in the preview release. Non-premium SKU registries do not offer this functionality during the preview release.
-* Customer-Managed Keys (CMK) registries are not supported in the preview release.
-* Kubernetes regcred is currently not supported.
-
-## Benefits of using artifact streaming
-
-Benefits of enabling and using artifact streaming at a registry level include:
-
-* Reduce image pull latency and fast container startup.
-* Seamless and agile experience for software developers and system architects.
-* Time and performance effective scaling mechanism to design, build, and deploy container applications and cloud solutions at high scale.
-* Simplify the process of deploying containerized applications to multiple regions using a single container registry and streaming container images to multiple regions.
-* Supercharge the process of deploying containerized platforms by simplifying the process of deploying and managing container images.
-
-## Considerations before using artifact streaming
-
-Here is a brief overview on how to use artifact streaming with Azure Container Registry (ACR).
-
-* Customers with new and existing registries can enable artifact streaming for specific repositories or tags.
-* Once you enable artifact streaming, two versions of the artifact are stored in the container registry: the original artifact and the artifact streaming artifact.
-* If you disable or turn off artifact streaming for repositories or artifacts, the artifact streaming copy and original artifact still exist.
-* If you delete a repository or artifact with artifact streaming and soft delete enabled, then both the original and artifact streaming versions are deleted. However, only the original version is available on the soft delete blade.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Enable Artifact Streaming- Azure CLI](tutorial-artifact-streaming-cli.md)
copilot Build Infrastructure Deploy Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/build-infrastructure-deploy-workloads.md
Title: Build infrastructure and deploy workloads using Microsoft Copilot for Azure (preview) description: Learn how Microsoft Copilot for Azure (preview) can help you build custom infrastructure for your workloads and provide templates and scripts to help you deploy. Previously updated : 11/15/2023 Last updated : 01/18/2024
Microsoft Copilot for Azure (preview) can help you quickly build custom infrastr
Throughout a conversation, Microsoft Copilot for Azure (preview) asks you questions to better understand your requirements and applications. Based on the provided information, it then provides several architecture options suitable for deploying that infrastructure. After you select an option, Microsoft Copilot for Azure (preview) provides detailed descriptions of the infrastructure, including how it can be configured. Finally, Microsoft Copilot for Azure provides templates and scripts using the language of your choice to deploy your infrastructure.
-To get help building infrastructure and deploying workloads, start on the **Virtual machines** page in the Azure portal. Select the arrow next to **Create**, then select **More VMs and related solutions**.
-
+To get help building infrastructure and deploying workloads, start on the [More virtual machines and related solutions](https://portal.azure.com/?feature.customportal=false#view/Microsoft_Azure_SolutionCenter/SolutionGroup.ReactView/groupid/defaultLandingVmBrowse) page in the Azure portal.
Once you're there, start the conversation by letting Microsoft Copilot for Azure (preview) know what you want to build and deploy.
data-factory Tutorial Managed Virtual Network On Premise Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-managed-virtual-network-on-premise-sql-server.md
the page.
|Region |Select **East US**.| |Availability Options |Select **Availability zones**.| |Availability zone |Select **1**.|
- |Image |Select **Ubuntu Server 18.04LTS - Gen1**.|
+ |Image |Select **Ubuntu Server 22.04 LTS**.|
|Azure Spot instance |Select **No**.| |Size |Choose VM size or take default setting.| |**Administrator account**||
data-share Concepts Roles Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-share/concepts-roles-permissions.md
The following shows an example of how the required actions will be listed in JSO
"Microsoft.Storage/storageAccounts/blobServices/containers/read",
-"Microsoft.Storage/storageAccounts/blobServices/generateUserDelegationKey/action",
+"Microsoft.Storage/storageAccounts/blobServices/generateUserDelegationKey/action",
+
+"Microsoft.Storage/storageAccounts/listkeys/action",
"Microsoft.DataShare/accounts/read",
defender-for-cloud Concept Cloud Security Posture Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-cloud-security-posture-management.md
The following table summarizes each plan and their cloud availability.
> [!NOTE]
-> Starting March 1, 2024, Defender CSPM must be enabled to have premium DevOps security capabilities that include code-to-cloud contextualization powering security explorer and attack paths and pull request annotations for Infrastructure-as-Code security findings. See DevOps security [support and prerequisites](devops-support.md) to learn more.
+> Starting March 7, 2024, Defender CSPM must be enabled to have premium DevOps security capabilities that include code-to-cloud contextualization powering security explorer and attack paths and pull request annotations for Infrastructure-as-Code security findings. See DevOps security [support and prerequisites](devops-support.md) to learn more.
## Integrations (preview)
You can choose which ticketing system to integrate. For preview, only ServiceNow
- Defender CSPM for GCP is free until January 31, 2024. -- From March 1, 2023, advanced DevOps security posture capabilities will only be available through the paid Defender CSPM plan. Free foundational security posture management in Defender for Cloud will continue providing a number of Azure DevOps recommendations. Learn more about [DevOps security features](devops-support.md#azure-devops).
+- From March 7, 2024, advanced DevOps security posture capabilities will only be available through the paid Defender CSPM plan. Free foundational security posture management in Defender for Cloud will continue providing a number of Azure DevOps recommendations. Learn more about [DevOps security features](devops-support.md#azure-devops).
- For subscriptions that use both Defender CSPM and Defender for Containers plans, free vulnerability assessment is calculated based on free image scans provided via the Defender for Containers plan, as summarized [in the Microsoft Defender for Cloud pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
defender-for-cloud Defender For Storage Malware Scan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-malware-scan.md
Follow [these steps](tutorial-enable-storage-plan.md#set-up-and-configure-micros
If you have a file that you suspect might be malware or is being incorrectly detected, you can submit it to us for analysis through the [sample submission portal](/microsoft-365/security/intelligence/submission-guide). Select "Microsoft Defender for Storage" as the source.
-Malware Scanning doesn't block access or change permissions to the uploaded blob, even if it's malicious.
+Defender for Cloud allows you to [suppress false positive alerts](alerts-suppression-rules.md). Make sure to limit the suppression rule by using the malware name or file hash.
+
+Malware Scanning doesn't automatically block access or change permissions to the uploaded blob, even if it's malicious.
## Limitations
defender-for-cloud Devops Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/devops-support.md
DevOps security requires the following permissions:
The following tables summarize the availability and prerequisites for each feature within the supported DevOps platforms: > [!NOTE]
-> Starting March 1, 2024, [Defender CSPM](concept-cloud-security-posture-management.md) must be enabled to have premium DevOps security capabilities which include code-to-cloud contextualization powering security explorer and attack paths and pull request annotations for Infrastructure-as-Code security findings. See details below to learn more.
+> Starting March 7, 2024, [Defender CSPM](concept-cloud-security-posture-management.md) must be enabled to have premium DevOps security capabilities which include code-to-cloud contextualization powering security explorer and attack paths and pull request annotations for Infrastructure-as-Code security findings. See details below to learn more.
### Azure DevOps
defender-for-cloud Episode Forty Three https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-forty-three.md
+
+ Title: Unified insights from Microsoft Entra permissions management | Defender for Cloud in the field
+description: Learn about unified insights from Microsoft Entra permissions management
+ Last updated : 01/18/2024++
+# Unified insights from Microsoft Entra permissions management
+
+**Episode description**: In this episode of Defender for Cloud in the Field, Sean Lee joins Yuri Diogenes to talk about the new unified insights from Microsoft Entra permissions management (CIEM) into Microsoft Defender for Cloud to enable comprehensive risk mitigation. Sean explains how this integration enables teams to drive least privilege access controls for cloud resources, and receive actionable recommendations for resolving permission risks across Azure, AWS, and GCP. Sean also presents the recommendations included with this integration and demonstrates how to remediate them.
+
+> [!VIDEO https://aka.ms/docs/player?id=28414ce1-1acb-486a-a327-802a654edc38]
+
+- [01:48](/shows/mdc-in-the-field/unified-insights#time=01m48s) - Overview of Entra permission management
+- [02:55](/shows/mdc-in-the-field/unified-insights#time=02m55s) - Details about the integration with Defender for Cloud
+- [06:50](/shows/mdc-in-the-field/unified-insights#time=06m50s) - Demonstration
+
+## Recommended resources
+
+- Learn more about [enabling permissions management in Defender for Cloud](enable-permissions-management.md)
+- Learn more about [Microsoft Security](https://msft.it/6002T9HQY).
+- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS).
+
+- Follow us on social media:
+
+ - [LinkedIn](https://www.linkedin.com/showcase/microsoft-security/)
+ - [Twitter](https://twitter.com/msftsecurity)
+
+- Join our [Tech Community](https://aka.ms/SecurityTechCommunity).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
defender-for-cloud Episode Forty Two https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-forty-two.md
Title: Agentless secrets scanning for virtual machines | Defender for Cloud in the field description: Learn about agentless secrets scanning for virtual machines Previously updated : 01/08/2024 Last updated : 01/18/2024 # Agentless secrets scanning for virtual machines
Last updated 01/08/2024
- Learn more about [Microsoft Security](https://msft.it/6002T9HQY). - Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS). -- - Follow us on social media: - [LinkedIn](https://www.linkedin.com/showcase/microsoft-security/)
Last updated 01/08/2024
## Next steps > [!div class="nextstepaction"]
-> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
+> [Unified insights from Microsoft Entra permissions management](episode-forty-three.md)
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
Title: Connect your AWS account description: Defend your AWS resources by using Microsoft Defender for Cloud. -+ Last updated 01/03/2024
AWS Systems Manager (SSM) manages autoprovisioning by using the SSM Agent. Some
Ensure that your SSM Agent has the managed policy [AmazonSSMManagedInstanceCore](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonSSMManagedInstanceCore.html). It enables core functionality for the AWS Systems Manager service. Enable these other extensions on the Azure Arc-connected machines:
-
+ - Microsoft Defender for Endpoint - A vulnerability assessment solution (TVM or Qualys) - The Log Analytics agent on Azure Arc-connected machines or the Azure Monitor agent
Ensure that your SSM Agent has the managed policy [AmazonSSMManagedInstanceCore]
If you want to manually install Azure Arc on your existing and future EC2 instances, use the [EC2 instances should be connected to Azure Arc](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/231dee23-84db-44d2-bd9d-c32fbcfb42a3) recommendation to identify instances that don't have Azure Arc installed. Enable these other extensions on the Azure Arc-connected machines:
-
+ - Microsoft Defender for Endpoint - A vulnerability assessment solution (TVM or Qualys) - The Log Analytics agent on Azure Arc-connected machines or the Azure Monitor agent
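As an illustration only (not part of this article), an extension such as the Azure Monitor agent can be added to an Azure Arc-connected machine with the Azure CLI; the resource group, machine name, and location below are placeholders, and the `connectedmachine` CLI extension is assumed to be installed.

```azurecli
# Sketch: install the Azure Monitor agent extension on an Arc-connected Linux machine.
az connectedmachine extension create \
  --resource-group myResourceGroup \
  --machine-name myArcConnectedEc2 \
  --name AzureMonitorLinuxAgent \
  --publisher Microsoft.Azure.Monitor \
  --type AzureMonitorLinuxAgent \
  --location eastus
```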
Deploy the CloudFormation template by using Stack (or StackSet if you have a man
- **Upload a template file**: AWS automatically creates an S3 bucket that the CloudFormation template is saved to. The automation for the S3 bucket has a security misconfiguration that causes the `S3 buckets should require requests to use Secure Socket Layer` recommendation to appear. You can remediate this recommendation by applying the following policy: ```bash
{
  "Id": "ExamplePolicy",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowSSLRequestsOnly",
      "Action": "s3:*",
      "Effect": "Deny",
      "Resource": [
        "<S3_Bucket ARN>",
        "<S3_Bucket ARN>/*"
      ],
      "Condition": {
        "Bool": {
          "aws:SecureTransport": "false"
        }
      },
      "Principal": "*"
    }
  ]
}
``` > [!NOTE]
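If you prefer to apply a policy like the example above from the command line, one option (an assumption, not part of this article) is the AWS CLI; save the JSON to a file and replace the bucket name and ARN placeholders with your own values.

```bash
# Sketch: apply the bucket policy saved as policy.json to the CloudFormation-created bucket.
aws s3api put-bucket-policy --bucket my-cloudformation-bucket --policy file://policy.json
```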
defender-for-cloud Recommendations Reference Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference-aws.md
impact on your secure score.
### Data plane recommendations
-All the data plane recommendations listed [here](kubernetes-workload-protections.md#view-and-configure-the-bundle-of-recommendations) are supported under AWS after [enabling Azure Policy for Kubernetes](kubernetes-workload-protections.md#enable-kubernetes-data-plane-hardening).
+All the data plane recommendations listed [here](kubernetes-workload-protections.md#view-and-configure-the-bundle-of-recommendations) are supported under AWS after [enabling Azure Policy for Kubernetes](kubernetes-workload-protections.md#enable-kubernetes-data-plane-hardening).
## <a name='recs-aws-data'></a> AWS Data recommendations
defender-for-cloud Sql Information Protection Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/sql-information-protection-policy.md
SQL information protection's [data discovery and classification mechanism](/azur
The classification mechanism is based on the following two elements: -- **Labels** – The main classification attributes, used to define the *sensitivity level of the data* stored in the column.
+- **Labels** – The main classification attributes, used to define the *sensitivity level of the data* stored in the column.
- **Information Types** – Provides additional granularity into the *type of data* stored in the column.
-The information protection policy options within Defender for Cloud provide a predefined set of labels and information types which serve as the defaults for the classification engine. You can customize the policy, according to your organization's needs, as described below.
+The information protection policy options within Defender for Cloud provide a predefined set of labels and information types that serve as the defaults for the classification engine. You can customize the policy, according to your organization's needs, as described below.
:::image type="content" source="./media/sql-information-protection-policy/sql-information-protection-policy-page.png" alt-text="The page showing your SQL information protection policy.":::
-
-- ## How do I access the SQL information protection policy? There are three ways to access the information protection policy: - **(Recommended)** From the **Environment settings** page of Defender for Cloud-- From the security recommendation "Sensitive data in your SQL databases should be classified"
+- From the security recommendation *Sensitive data in your SQL databases should be classified*
- From the Azure SQL DB data discovery page Each of these is shown in the relevant tab below. -- ### [**From Defender for Cloud's settings**](#tab/sqlip-tenant) <a name="sqlip-tenant"></a>
From Defender for Cloud's **Environment settings** page, select **SQL informatio
:::image type="content" source="./media/sql-information-protection-policy/environment-settings-link-to-information-protection.png" alt-text="Accessing the SQL Information Protection policy from the environment settings page of Microsoft Defender for Cloud."::: -- ### [**From Defender for Cloud's recommendation**](#tab/sqlip-db) <a name="sqlip-db"></a> ### Access the policy from the Defender for Cloud recommendation
-Use Defender for Cloud's recommendation, "Sensitive data in your SQL databases should be classified", to view the data discovery and classification page for your database. There, you'll also see the columns discovered to contain information that we recommend you classify.
+Use Defender for Cloud's recommendation, *Sensitive data in your SQL databases should be classified*, to view the data discovery and classification page for your database. There, you'll also see the columns discovered to contain information that we recommend you classify.
1. From Defender for Cloud's **Recommendations** page, search for the recommendation **Sensitive data in your SQL databases should be classified**.
Use Defender for Cloud's recommendation, "Sensitive data in your SQL databases s
:::image type="content" source="./media/sql-information-protection-policy/access-policy-from-security-center-recommendation.png" alt-text="Opening the SQL information protection policy from the relevant recommendation in Microsoft Defender for Cloud's"::: -- ### [**From Azure SQL**](#tab/sqlip-azuresql) <a name="sqlip-azuresql"></a>
Use Defender for Cloud's recommendation, "Sensitive data in your SQL databases s
:::image type="content" source="./media/sql-information-protection-policy/access-policy-from-azure-sql.png" alt-text="Opening the SQL information protection policy from Azure SQL.":::
-
+ ## Customize your information types
To manage and customize information types:
:::image type="content" source="./media/sql-information-protection-policy/manage-types.png" alt-text="Manage information types for your information protection policy."::: 1. To add a new type, select **Create information type**. You can configure a name, description, and search pattern strings for the information type. Search pattern strings can optionally use keywords with wildcard characters (using the character '%'), which the automated discovery engine uses to identify sensitive data in your databases, based on the columns' metadata.
-
+ :::image type="content" source="./media/sql-information-protection-policy/configure-new-type.png" alt-text="Configure a new information type for your information protection policy.":::
-1. You can also modify the built-in types by adding additional search pattern strings, disabling some of the existing strings, or by changing the description.
+1. You can also modify the built-in types by adding additional search pattern strings, disabling some of the existing strings, or by changing the description.
> [!TIP]
- > You can't delete built-in types or change their names.
+ > You can't delete built-in types or change their names.
-1. **Information types** are listed in order of ascending discovery ranking, meaning that the types higher in the list will attempt to match first. To change the ranking between information types, drag the types to the right spot in the table, or use the **Move up** and **Move down** buttons to change the order.
+1. **Information types** are listed in order of ascending discovery ranking, meaning that the types higher in the list attempt to match first. To change the ranking between information types, drag the types to the right spot in the table, or use the **Move up** and **Move down** buttons to change the order.
-1. Select **OK** when you are done.
+1. Select **OK** when you're done.
-1. After you completed managing your information types, be sure to associate the relevant types with the relevant labels, by clicking **Configure** for a particular label, and adding or deleting information types as appropriate.
+1. After you've finished managing your information types, be sure to associate the relevant types with the relevant labels: select **Configure** for a particular label, and add or delete information types as appropriate.
1. To apply your changes, select **Save** in the main **Labels** page.
-
## Exporting and importing a policy
-You can download a JSON file with your defined labels and information types, edit the file in the editor of your choice, and then import the updated file.
+You can download a JSON file with your defined labels and information types, edit the file in the editor of your choice, and then import the updated file.
:::image type="content" source="./media/sql-information-protection-policy/export-import.png" alt-text="Exporting and importing your information protection policy."::: > [!NOTE]
-> You'll need tenant level permissions to import a policy file.
-
+> You'll need tenant-level permissions to import a policy file.
## Permissions
-To customize the information protection policy for your Azure tenant, you'll need the following actions on the tenant's root management group:
- - Microsoft.Security/informationProtectionPolicies/read
- - Microsoft.Security/informationProtectionPolicies/write
+To customize the information protection policy for your Azure tenant, you need the following actions on the tenant's root management group:
+
+- Microsoft.Security/informationProtectionPolicies/read
+- Microsoft.Security/informationProtectionPolicies/write
Learn more in [Grant and request tenant-wide visibility](tenant-wide-permissions-management.md).
Learn more in [Grant and request tenant-wide visibility](tenant-wide-permissions
- [Get-AzSqlInformationProtectionPolicy](/powershell/module/az.security/get-azsqlinformationprotectionpolicy): Retrieves the effective tenant SQL information protection policy. - [Set-AzSqlInformationProtectionPolicy](/powershell/module/az.security/set-azsqlinformationprotectionpolicy): Sets the effective tenant SQL information protection policy.
-
## Next steps
-
+ In this article, you learned about defining an information protection policy in Microsoft Defender for Cloud. To learn more about using SQL Information Protection to classify and protect sensitive data in your SQL databases, see [Azure SQL Database Data Discovery and Classification](/azure/azure-sql/database/data-discovery-and-classification-overview). For more information on security policies and data security in Defender for Cloud, see the following articles:
-
+ - [Setting security policies in Microsoft Defender for Cloud](tutorial-security-policy.md): Learn how to configure security policies for your Azure subscriptions and resource groups - [Microsoft Defender for Cloud data security](data-security.md): Learn how Defender for Cloud manages and safeguards data
defender-for-iot Update Ot Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/update-ot-software.md
This procedure describes how to send a software version update to one or more OT
:::image type="content" source="media/update-ot-software/remote-update-step-1.png" alt-text="Screenshot of the Send package option." lightbox="media/update-ot-software/remote-update-step-1.png":::
-1. In the **Send package** pane that appears, check to make sure that you're sending the software to the sensor you want to update. To jump to the release notes for the new version, select **Learn more** at the top of the pane.
+1. In the **Send package** pane that appears, under **Available versions**, select the software version from the list. If the version you need doesn't appear, select **Show more** to list all available versions.
+
+ To jump to the release notes for the new version, select **Learn more** at the top of the pane.
+
+ :::image type="content" source="media/update-ot-software/send-package-multiple-versions-400.png" alt-text="Screenshot of sensor update pane with option to choose sensor update version." lightbox="media/update-ot-software/send-package-multiple-versions.png" border="false":::
1. When you're ready, select **Send package** to start the software transfer to your sensor machine. You can see the transfer progress in the **Sensor version** column; the percentage complete updates automatically in the progress bar, so you can confirm that the process has started and track it until the transfer is complete. For example: :::image type="content" source="media/update-ot-software/sensor-version-update-bar.png" alt-text="Screenshot of the update bar in the Sensor version column." lightbox="media/update-ot-software/sensor-version-update-bar.png":::
- When the transfer is complete, the **Sensor version** column changes to :::image type="icon" source="media/update-ot-software/ready-to-update.png" border="false" ::: **Ready to update**.
+ When the transfer is complete, the **Sensor version** column changes to :::image type="icon" source="media/update-ot-software/ready-to-update.png" border="true" ::: **Ready to update**.
Hover over the **Sensor version** value to see the source and target version for your update.
This procedure describes how to send a software version update to one or more OT
Run the sensor update only when you see the :::image type="icon" source="media/update-ot-software/ready-to-update.png" border="false"::: **Ready to update** icon in the **Sensor version** column.
-1. Select one or more sensors to update, and then select **Sensor update** > **Remote update** > **Step 2: Update sensor** from the toolbar.
+1. Select one or more sensors to update, and then select **Sensor update** > **Remote update** > **Step 2: Update sensor** from the toolbar. The **Update sensor** pane opens on the right side of the screen.
For an individual sensor, the **Step 2: Update sensor** option is also available from the **...** options menu. For example: :::image type="content" source="media/update-ot-software/remote-update-step-2.png" alt-text="Screenshot of the Update sensor option." lightbox="media/update-ot-software/remote-update-step-2.png":::
-1. In the **Update sensor** pane that appears, verify your update details.
+1. In the **Update sensor** pane that appears, verify your update details.
When you're ready, select **Update now** > **Confirm update**. In the grid, the **Sensor version** value changes to :::image type="icon" source="media/update-ot-software/installing.png" border="false"::: **Installing**, and an update progress bar appears showing you the percentage complete. The bar automatically updates, so that you can track the progress until the installation is complete.
defender-for-iot Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md
Features released earlier than nine months ago are described in the [What's new
> Noted features listed below are in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. >
+## January 2024
+
+|Service area |Updates |
+|||
+| **OT networks** | - [Sensor update in Azure portal now supports selecting a specific version](#sensor-update-in-azure-portal-now-supports-selecting-a-specific-version) <br> |
+
+### Sensor update in Azure portal now supports selecting a specific version
+
+When you update the sensor in the Azure portal, you can now choose to update to any supported previous version (a version other than the latest one). Previously, sensors onboarded to Microsoft Defender for IoT on the Azure portal were automatically updated to the latest version.
+
+You might want to update your sensor to a specific version for various reasons, such as for testing purposes, or to align all sensors to the same version.
++
+For more information, see [Update Defender for IoT OT monitoring software](update-ot-software.md#send-the-software-update-to-your-ot-sensor).
+ ## December 2023 |Service area |Updates |
digital-twins Concepts Ontologies Adopt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-ontologies-adopt.md
description: Learn about existing industry ontologies that can be adopted for Azure Digital Twins Previously updated : 03/29/2023 Last updated : 01/18/2024
Microsoft has partnered with domain experts to create DTDL model sets based on i
| Smart buildings | [Digital Twins Definition Language-based RealEstateCore ontology for smart buildings](https://github.com/Azure/opendigitaltwins-building) | Microsoft has partnered with [RealEstateCore](https://www.realestatecore.io/) to deliver this open-source DTDL ontology for the real estate industry. [RealEstateCore](https://www.realestatecore.io/) is a consortium of real estate owners, software vendors, and research institutions.<br><br>This smart buildings ontology provides common ground for modeling smart buildings, using industry standards (like [BRICK Schema](https://brickschema.org/ontology/) or [W3C Building Topology Ontology](https://w3c-lbd-cg.github.io/bot/https://docsupdatetracker.net/index.html)) to avoid reinvention. The ontology also comes with best practices for how to consume and properly extend it. | You can read more about the partnership with RealEstateCore and goals for this initiative in the following blog post and embedded video: [RealEstateCore, a smart building ontology for digital twins, is now available](https://techcommunity.microsoft.com/t5/internet-of-things/realestatecore-a-smart-building-ontology-for-digital-twins-is/ba-p/1914794). | | Smart cities | [Digital Twins Definition Language (DTDL) ontology for Smart Cities](https://github.com/Azure/opendigitaltwins-smartcities) | Microsoft has collaborated with [Open Agile Smart Cities (OASC)](https://oascities.org/) and [Sirus](https://sirus.be/) to provide a DTDL-based ontology for smart cities, starting with [ETSI CIM NGSI-LD](https://www.etsi.org/committee/cim). | You can also read more about the partnerships and approach for smart cities in the following blog post and embedded video: [Smart Cities Ontology for Digital Twins](https://techcommunity.microsoft.com/t5/internet-of-things/smart-cities-ontology-for-digital-twins/ba-p/2166585). | | Energy grids | [Digital Twins Definition Language (DTDL) ontology for Energy Grid](https://github.com/Azure/opendigitaltwins-energygrid/) | This ontology was created to help solution providers accelerate development of digital twin solutions for energy use cases like monitoring grid assets, outage and impact analysis, simulation, and predictive maintenance. Additionally, the ontology can be used to enable the digital transformation and modernization of the energy grid. It's adapted from the [Common Information Model (CIM)](https://cimug.ucaiug.org/), a global standard for energy grid assets management, power system operations modeling, and physical energy commodity market. | You can also read more about the partnerships and approach for energy grids in the following blog post: [Energy Grid Ontology for Digital Twins](https://techcommunity.microsoft.com/t5/internet-of-things/energy-grid-ontology-for-digital-twins-is-now-available/ba-p/2325134). |
-| Manufacturing | [Manufacturing Ontologies](https://github.com/digitaltwinconsortium/ManufacturingOntologies) | These ontologies were created to help solution providers accelerate development of digital twin solutions for manufacturing use cases like asset condition monitoring, simulation, OEE calculation, and predictive maintenance. Additionally, the ontologies can be used to enable the digital transformation and modernization of factories and plants. They are adapted from [OPC UA](https://opcfoundation.org), [ISA95](https://en.wikipedia.org/wiki/ANSI/ISA-95) and the [Asset Administration Shell](https://www.plattform-i40.de/IP/Redaktion/EN/Standardartikel/specification-administrationshell.html), three global standards widely used in the manufacturing space. | Visit the repository to read more about this ontology and explore a sample solution for ingesting OPC UA data into Azure Digital Twins. |
+| Manufacturing | [Manufacturing Ontologies](https://github.com/digitaltwinconsortium/ManufacturingOntologies) | These ontologies were created to help solution providers accelerate development of digital twin solutions for manufacturing use cases like asset condition monitoring, simulation, OEE calculation, and predictive maintenance. Additionally, the ontologies can be used to enable the digital transformation and modernization of factories and plants. They are adapted from [OPC UA](https://opcfoundation.org), [ISA95](https://www.isa.org/standards-and-publications/isa-standards/isa-standards-committees/isa95) and the [Asset Administration Shell](https://reference.opcfoundation.org/I4AAS/v100/docs/4.1), three global standards widely used in the manufacturing space. | Visit the repository to read more about this ontology and explore a sample solution for ingesting OPC UA data into Azure Digital Twins. |
Each ontology is focused on an initial set of models. You can contribute to the ontologies by suggesting extensions or other improvements through the GitHub contribution process in each ontology repository.
dms Tutorial Sql Server Azure Sql Database Offline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-azure-sql-database-offline.md
In this tutorial, you learn how to:
> - Run an assessment of your source SQL Server databases > - Collect performance data from your source SQL Server instance > - Get a recommendation of the Azure SQL Database SKU that will work best for your workload
-> - Deploy your on-premises database schema to Azure SQL Database
> - Create an instance of Azure Database Migration Service > - Start your migration and monitor progress to completion
Before you begin the tutorial:
- Make sure that the SQL Server login that connects to the source SQL Server instance is a member of the db_datareader role and that the login for the target SQL Server instance is a member of the db_owner role. -- Migrate the database schema from source to target by using the [SQL Server dacpac extension](/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension) in Azure Data Studio.-
+- To migrate the database schema from source to target Azure SQL Database by using Database Migration Service, you need [self-hosted integration runtime (SHIR)](https://www.microsoft.com/download/details.aspx?id=39717) version 5.37 or later.
+
- If you're using Database Migration Service for the first time, make sure that the Microsoft.DataMigration [resource provider is registered in your subscription](quickstart-create-data-migration-service-portal.md#register-the-resource-provider). > [!NOTE]
-> Make sure to migrate the database schema from source to target by using the [SQL Server dacpac extension](/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension) in Azure Data Studio before selecting the list of tables to migrate.
+> You can now migrate both the database schema and data by using Database Migration Service. Alternatively, you can use tools like the [SQL Server dacpac extension](/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension) in Azure Data Studio to migrate the schema before selecting the list of tables to migrate.
>
-> If no tables exist on the Azure SQL Database target, or no tables are selected before starting the migration, the **Next** button isn't available to select to initiate the migration task.
+> If no tables exist on the Azure SQL Database target, or no tables are selected before starting the migration, the **Next** button isn't available to initiate the migration task. If no tables exist on the target, you must select the schema migration option to move forward.
### Open the Migrate to Azure SQL wizard in Azure Data Studio
To open the Migrate to Azure SQL wizard:
> [!NOTE] > If no tables are selected or if a username and password aren't entered, the **Next** button isn't available to select. >
-> Make sure to migrate the database schema from source to target by using the [SQL Server dacpac extension](/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension) in Azure Data Studio before selecting the list of tables to migrate.
+> You can now migrate both the database schema and data by using Database Migration Service. Alternatively, you can use tools like the [SQL Server dacpac extension](/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension) in Azure Data Studio to migrate the schema before selecting the list of tables to migrate.
### Create a Database Migration Service instance
Before you begin the tutorial:
- Make sure that the SQL Server login that connects to the source SQL Server instance is a member of the **db_datareader** role, and that the login for the target SQL Server instance is a member of the **db_owner** role. -- Migrate the database schema from source to target by using the [SQL Server dacpac extension](/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension) in Azure Data Studio.
+- To migrate the database schema from source to target Azure SQL Database by using Database Migration Service, you need [self-hosted integration runtime (SHIR)](https://www.microsoft.com/download/details.aspx?id=39717) version 5.37 or later.
- If you're using Database Migration Service for the first time, make sure that the `Microsoft.DataMigration` [resource provider is registered in your subscription](quickstart-create-data-migration-service-portal.md#register-the-resource-provider). > [!NOTE]
-> Make sure to migrate the database schema from source to target by using the [SQL Server dacpac extension](/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension) in Azure Data Studio before selecting the list of tables to migrate.
+> You can now migrate both the database schema and data by using Database Migration Service. Alternatively, you can use tools like the [SQL Server dacpac extension](/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension) in Azure Data Studio to migrate the schema before selecting the list of tables to migrate.
>
-> If no tables exists on the Azure SQL Database target, or no tables are selected before starting the migration. The **Next** button isn't available to select to initiate the migration task.
+> If no tables exist on the Azure SQL Database target, or no tables are selected before starting the migration, the **Next** button isn't available to initiate the migration task. If no tables exist on the target, you must select the schema migration option to move forward.
[!INCLUDE [create-database-migration-service-instance](includes/create-database-migration-service-instance.md)]
Before you begin the tutorial:
> [!NOTE] > In an offline migration, application downtime starts when the migration starts. >
- > Make sure to migrate the database schema from source to target by using the [SQL Server dacpac extension](/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension) in Azure Data Studio before selecting the list of tables to migrate.
 + > You can now migrate both the database schema and data by using Database Migration Service. Alternatively, you can use tools like the [SQL Server dacpac extension](/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension) in Azure Data Studio to migrate the schema before selecting the list of tables to migrate.
### Monitor the database migration
dns Dns Protect Private Zones Recordsets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-protect-private-zones-recordsets.md
Azure PowerShell
$lvl = "<lock level>" $lnm = "<lock name>" $rnm = "<zone name>/<record set name>"
-$rty = "Microsoft.Network/privateDnsZones"
+$rty = "Microsoft.Network/privateDnsZones/<record type>"
$rsg = "<resource group name>" New-AzResourceLock -LockLevel $lvl -LockName $lnm -ResourceName $rnm -ResourceType $rty -ResourceGroupName $rsg
dns Dns Reverse Dns Hosting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-reverse-dns-hosting.md
Last updated 04/27/2023--++ ms.devlang: azurecli
The following example explains the process of creating a PTR record for a revers
:::image type="content" source="./media/dns-reverse-dns-hosting/create-record-set-ipv4.png" alt-text="Screenshot of create IPv4 pointer record set.":::
-1. The name of the record set for a PTR record is the rest of the IPv4 address in reverse order.
+1. The name of the record set for a PTR record is the rest of the IPv4 address in reverse order.
- In this example, the first three octets are already populated as part of the zone name `.2.0.192`. That's why only the last octet is needed in the **Name** box. For example, give your record set the name of **15** for a resource whose IP address is `192.0.2.15`.
+ In this example, the first three octets are already populated as part of the zone name `.2.0.192`. That's why only the last octet is needed in the **Name** box. For example, give your record set the name of **15** for a resource whose IP address is `192.0.2.15`.
:::image type="content" source="./media/dns-reverse-dns-hosting/create-ipv4-ptr.png" alt-text="Screenshot of create IPv4 pointer record.":::
New-AzDnsRecordSet -Name 15 -RecordType PTR -ZoneName 2.0.192.in-addr.arpa -Reso
#### Azure classic CLI ```azurecli
-azure network dns record-set add-record mydnsresourcegroup 2.0.192.in-addr.arpa 15 PTR --ptrdname dc1.contoso.com
+azure network dns record-set add-record mydnsresourcegroup 2.0.192.in-addr.arpa 15 PTR --ptrdname dc1.contoso.com
``` #### Azure CLI
The following example explains the process of creating new PTR record for IPv6.
:::image type="content" source="./media/dns-reverse-dns-hosting/create-record-set-ipv6.png" alt-text="Screenshot of create IPv6 pointer record set.":::
-1. The name of the record set for a PTR record is the rest of the IPv6 address in reverse order. It must not include any zero compression.
+1. The name of the record set for a PTR record is the rest of the IPv6 address in reverse order. It must not include any zero compression.
In this example, the first 64 bits of the IPv6 address get populated as part of the zone name (0.0.0.0.c.d.b.a.8.b.d.0.1.0.0.2.ip6.arpa). That's why only the last 64 bits are supplied in the **Name** box. The last 64 bits of the IP address are entered in reverse order, with a period as the delimiter between each hexadecimal number. Name your record set **e.5.0.4.9.f.a.1.c.b.0.1.4.2.5.f** if you have a resource whose IP address is 2001:0db8:abdc:0000:f524:10bc:1af9:405e. :::image type="content" source="./media/dns-reverse-dns-hosting/create-ipv6-ptr.png" alt-text="Screenshot of create IPv6 pointer record.":::
-1. For *Type*, select **PTR**.
+1. For *Type*, select **PTR**.
1. For *DOMAIN NAME*, enter the FQDN of the resource that uses the IP.
New-AzDnsRecordSet -Name "e.5.0.4.9.f.a.1.c.b.0.1.4.2.5.f" -RecordType PTR -Zone
#### Azure classic CLI ```azurecli
-azure network dns record-set add-record mydnsresourcegroup 0.0.0.0.c.d.b.a.8.b.d.0.1.0.0.2.ip6.arpa e.5.0.4.9.f.a.1.c.b.0.1.4.2.5.f PTR --ptrdname dc2.contoso.com
+azure network dns record-set add-record mydnsresourcegroup 0.0.0.0.c.d.b.a.8.b.d.0.1.0.0.2.ip6.arpa e.5.0.4.9.f.a.1.c.b.0.1.4.2.5.f PTR --ptrdname dc2.contoso.com
```
-
+ #### Azure CLI ```azurecli-interactive
az network dns record-set ptr add-record -g mydnsresourcegroup -z 0.0.0.0.c.d.b.
## View records
-To view the records that you created, browse to your DNS zone in the Azure portal. In the lower part of the **DNS zone** pane, you can see the records for the DNS zone. You should see the default NS and SOA records, plus any new records that you've created. The NS and SOA records are created in every zone.
+To view the records that you created, browse to your DNS zone in the Azure portal. In the lower part of the **DNS zone** pane, you can see the records for the DNS zone. You should see the default NS and SOA records, plus any new records that you've created. The NS and SOA records are created in every zone.
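You can also list the records from the command line. The following Azure CLI sketch uses the resource group and reverse zone names from the earlier examples; adjust them to your environment.

```azurecli
# List all record sets in the IPv4 reverse lookup zone, including the NS, SOA, and PTR records.
az network dns record-set list -g mydnsresourcegroup -z 2.0.192.in-addr.arpa -o table
```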
### IPv4
dns Dns Reverse Dns Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-reverse-dns-overview.md
na-+ Last updated 04/27/2023
energy-data-services How To Set Up Private Links https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-set-up-private-links.md
Use the following steps to create a private endpoint for an existing Azure Data
* Configure network and private IP settings. [Learn more](../private-link/create-private-endpoint-portal.md#create-a-private-endpoint).
- * Configure a private endpoint with an application security group. [Learn more](../private-link/configure-asg-private-endpoint.md#create-private-endpoint-with-an-asg).
+ * Configure a private endpoint with an application security group. [Learn more](../private-link/configure-asg-private-endpoint.md#create-a-private-endpoint-with-an-asg).
[![Screenshot of virtual network information for a private endpoint.](media/how-to-manage-private-links/private-links-4-virtual-network.png)](media/how-to-manage-private-links/private-links-4-virtual-network.png#lightbox)
event-grid Cloudevents Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/cloudevents-schema.md
Title: Use Azure Event Grid with events in CloudEvents schema description: Describes how to use the CloudEvents schema for events in Azure Event Grid. The service supports events in the JSON implementation of CloudEvents. Previously updated : 12/02/2022 Last updated : 01/18/2024 ms.devlang: csharp # ms.devlang: csharp, javascript
You set the input schema for a custom topic when you create the custom topic.
For the Azure CLI, use: ```azurecli-interactive
-az eventgrid topic create \
- --name <topic_name> \
- -l westcentralus \
- -g gridResourceGroup \
- --input-schema cloudeventschemav1_0
+az eventgrid topic create --name demotopic -l westcentralus -g gridResourceGroup --input-schema cloudeventschemav1_0
``` For PowerShell, use: ```azurepowershell-interactive
-New-AzEventGridTopic `
- -ResourceGroupName gridResourceGroup `
- -Location westcentralus `
- -Name <topic_name> `
- -InputSchema CloudEventSchemaV1_0
+New-AzEventGridTopic -ResourceGroupName gridResourceGroup -Location westcentralus -Name demotopic -InputSchema CloudEventSchemaV1_0
``` ### Output schema
You set the output schema when you create the event subscription.
For the Azure CLI, use: ```azurecli-interactive
-topicID=$(az eventgrid topic show --name <topic-name> -g gridResourceGroup --query id --output tsv)
+topicID=$(az eventgrid topic show --name demotopic -g gridResourceGroup --query id --output tsv)
-az eventgrid event-subscription create \
- --name <event_subscription_name> \
- --source-resource-id $topicID \
- --endpoint <endpoint_URL> \
- --event-delivery-schema cloudeventschemav1_0
+az eventgrid event-subscription create --name demotopicsub --source-resource-id $topicID --endpoint <endpoint_URL> --event-delivery-schema cloudeventschemav1_0
``` For PowerShell, use: ```azurepowershell-interactive $topicid = (Get-AzEventGridTopic -ResourceGroupName gridResourceGroup -Name <topic-name>).Id
-New-AzEventGridSubscription `
- -ResourceId $topicid `
- -EventSubscriptionName <event_subscription_name> `
- -Endpoint <endpoint_URL> `
- -DeliverySchema CloudEventSchemaV1_0
+New-AzEventGridSubscription -ResourceId $topicid -EventSubscriptionName <event_subscription_name> -Endpoint <endpoint_URL> -DeliverySchema CloudEventSchemaV1_0
``` ## Endpoint validation with CloudEvents v1.0
If you're already familiar with Event Grid, you might be aware of the endpoint v
### Visual Studio or Visual Studio Code
-If you're using Visual Studio or Visual Studio Code, and C# programming language to develop functions, make sure that you're using the latest [Microsoft.Azure.WebJobs.Extensions.EventGrid](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.EventGrid/) NuGet package (version **3.2.1** or above).
+If you're using Visual Studio or Visual Studio Code, and C# programming language to develop functions, make sure that you're using the latest [Microsoft.Azure.WebJobs.Extensions.EventGrid](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.EventGrid/) NuGet package (version **3.3.1** or above).
In Visual Studio, use the **Tools** -> **NuGet Package Manager** -> **Package Manager Console**, and run the `Install-Package` command (`Install-Package Microsoft.Azure.WebJobs.Extensions.EventGrid -Version 3.3.1`). Alternatively, right-click the project in the Solution Explorer window, and select the **Manage NuGet Packages** menu to browse for the NuGet package, and install or update it to the latest version.
namespace Company.Function
public static class CloudEventTriggerFunction { [FunctionName("CloudEventTriggerFunction")]
- public static void Run(
- ILogger logger,
- [EventGridTrigger] CloudEvent e)
+ public static void Run(ILogger logger, [EventGridTrigger] CloudEvent e)
{ logger.LogInformation("Event received {type} {subject}", e.Type, e.Subject); }
event-grid Handler Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/handler-service-bus.md
Title: Service Bus queues and topics as event handlers for Azure Event Grid events description: Describes how you can use Service Bus queues and topics as event handlers for Azure Event Grid events. Previously updated : 11/17/2022 Last updated : 01/17/2024 # Service Bus queues and topics as event handlers for Azure Event Grid events
You can also use the [`New-AzEventGridSystemTopicEventSubscription`](/powershell
When you send an event to a Service Bus queue or topic as a brokered message, the `messageid` of the brokered message is an internal system ID.
-The internal system ID for the message will be maintained across redelivery of the event so that you can avoid duplicate deliveries by turning on **duplicate detection** on the service bus entity. We recommend that you enable duration of the duplicate detection on the Service Bus entity to be either the time-to-live (TTL) of the event or max retry duration, whichever is longer.
+The internal system ID for the message is maintained across redelivery of the event, so you can avoid duplicate deliveries by turning on **duplicate detection** on the Service Bus entity. We recommend that you set the duplicate detection history window on the Service Bus entity to either the time-to-live (TTL) of the event or the maximum retry duration, whichever is longer.
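For example, duplicate detection can be turned on when the queue is created. The following Azure CLI sketch isn't from this article; the resource names are placeholders, the one-day detection window (ISO 8601 duration) is just an illustration, and you should check `az servicebus queue create --help` for the exact parameter names in your CLI version.

```azurecli
# Sketch: create a Service Bus queue with duplicate detection and a one-day history window.
az servicebus queue create \
  --resource-group myResourceGroup \
  --namespace-name myServiceBusNamespace \
  --name myqueue \
  --enable-duplicate-detection true \
  --duplicate-detection-history-time-window P1D
```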
## Delivery properties
-Event subscriptions allow you to set up HTTP headers that are included in delivered events. This capability allows you to set custom headers that are required by a destination. You can set custom headers on the events that are delivered to Azure Service Bus queues and topics.
+Event subscriptions allow you to set up HTTP headers that are included in delivered events. This capability allows you to set custom headers that the destination requires. You can set custom headers on the events that are delivered to Azure Service Bus queues and topics.
Azure Service Bus supports the use of following message properties when sending single messages.
event-grid Manage Event Delivery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/manage-event-delivery.md
Title: Dead letter and retry policies - Azure Event Grid description: Describes how to customize event delivery options for Event Grid. Set a dead-letter destination, and specify how long to retry delivery. Previously updated : 11/07/2022 Last updated : 01/17/2024 ms.devlang: azurecli
event-grid Post To Custom Topic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/post-to-custom-topic.md
Title: Post event to custom Azure Event Grid topic description: This article describes how to post an event to a custom topic. It shows the format of the post and event data. Previously updated : 11/17/2022 Last updated : 01/18/2024 # Publish events to Azure Event Grid custom topics using access keys
event-grid Subscribe To Partner Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscribe-to-partner-events.md
Title: Azure Event Grid - Subscribe to partner events description: This article explains how to subscribe to events from a partner using Azure Event Grid. Previously updated : 10/31/2022 Last updated : 01/18/2024 # Subscribe to events published by a partner with Azure Event Grid
-This article describes steps to subscribe to events that originate in a system owned or managed by a partner (SaaS, ERP, etc.).
+This article describes steps to subscribe to events that originate in a system owned or managed by a partner (SaaS, Enterprise Resource Planning (ERP), etc.).
> [!IMPORTANT] >If you aren't familiar with the **Partner Events** feature, see [Partner Events overview](partner-events-overview.md) to understand the rationale of the steps in this article.
Here's the list of partners and a link to submit a request to enable events flow
## Next steps
-See the following articles for more details about the Partner Events feature:
+For more information, see the following articles about the Partner Events feature:
- [Partner Events overview for customers](partner-events-overview.md) - [Partner Events overview for partners](partner-events-overview-for-partners.md)
firewall Premium Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-features.md
IDPS signature rules have the following properties:
|Signature ID |Internal ID for each signature. This ID is also presented in Azure Firewall Network Rules logs.| |Mode |Indicates if the signature is active or not, and whether the firewall drops or alerts upon matched traffic. The signature mode can override the IDPS mode:<br>- **Disabled**: The signature isn't enabled on your firewall.<br>- **Alert**: You receive alerts when suspicious traffic is detected.<br>- **Alert and Deny**: You receive alerts and suspicious traffic is blocked. A few signature categories are defined as "Alert Only"; therefore, by default, traffic matching their signatures isn't blocked even though IDPS mode is set to "Alert and Deny". Customers may override this by customizing these specific signatures to "Alert and Deny" mode. <br><br>IDPS signature mode is determined by one of the following reasons:<br><br> 1. Defined by Policy Mode - Signature mode is derived from the IDPS mode of the existing policy.<br>2. Defined by Parent Policy - Signature mode is derived from the IDPS mode of the parent policy.<br>3. Overridden - You can override and customize the signature mode.<br>4. Defined by System - Signature mode is set to *Alert Only* by the system due to its [category](idps-signature-categories.md). You may override this signature mode.<br><br>Note: IDPS alerts are available in the portal via network rule log query.| |Severity |Each signature has an associated severity level and assigned priority that indicates the probability that the signature is an actual attack.<br>- **Low (priority 3)**: An abnormal event is one that doesn't normally occur on a network, or informational events are logged. The probability of attack is low.<br>- **Medium (priority 2)**: The signature indicates an attack of a suspicious nature. The administrator should investigate further.<br>- **High (priority 1)**: The attack signatures indicate that an attack of a severe nature is being launched. There's little probability that the packets have a legitimate purpose.|
-|Direction |The traffic direction for which the signature is applied.<br><br>- **Inbound**: Signature is applied only on traffic arriving from the Internet and destined to your [configured private IP address range](#idps-private-ip-ranges).<br>- **Outbound**: Signature is applied only on traffic sent from your [configured private IP address range](#idps-private-ip-ranges) to the Internet.<br>- **Internal**: Signature is applied only on traffic sent from and destined to your [configured private IP address range](#idps-private-ip-ranges).<br>- **Any**: Signature is always applied on any traffic direction.|
+|Direction |The traffic direction for which the signature is applied.<br><br>- **Inbound**: Signature is applied only on traffic arriving from the Internet and destined to your [configured private IP address range](#idps-private-ip-ranges).<br>- **Outbound**: Signature is applied only on traffic sent from your [configured private IP address range](#idps-private-ip-ranges) to the Internet.<br>- **Internal**: Signature is applied only on traffic sent from and destined to your [configured private IP address range](#idps-private-ip-ranges).<br>- **Internal/Inbound**: Signature is applied on traffic arriving from your [configured private IP address range](#idps-private-ip-ranges) or from the Internet and destined to your [configured private IP address range](#idps-private-ip-ranges).<br>- **Internal/Outbound**: Signature is applied on traffic sent from your [configured private IP address range](#idps-private-ip-ranges) and destined to your [configured private IP address range](#idps-private-ip-ranges) or to the Internet.<br>- **Any**: Signature is always applied on any traffic direction.|
|Group |The group name that the signature belongs to.| |Description |Structured from the following three parts:<br>- **Category name**: The category name that the signature belongs to as described in [Azure Firewall IDPS signature rule categories](idps-signature-categories.md).<br>- High level description of the signature<br>- **CVE-ID** (optional) in the case where the signature is associated with a specific CVE.| |Protocol |The protocol associated with this signature.|
firewall Protect Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/protect-azure-kubernetes-service.md
Title: Use Azure Firewall to protect Azure Kubernetes Service (AKS) clusters
description: Learn how to use Azure Firewall to protect Azure Kubernetes Service (AKS) clusters -+ Last updated 10/19/2023
Azure Kubernetes Service (AKS) offers a managed Kubernetes cluster on Azure. For
Despite AKS being a fully managed solution, it doesn't offer a built-in solution to secure ingress and egress traffic between the cluster and external networks. Azure Firewall offers a solution to this. AKS clusters are deployed on a virtual network. This network can be managed (created by AKS) or custom (preconfigured by the user beforehand). In either case, the cluster has outbound dependencies on services outside of that virtual network (the service has no inbound dependencies). For management and operational purposes, nodes in an AKS cluster need to access [certain ports and fully qualified domain names (FQDNs)](../aks/outbound-rules-control-egress.md) describing these outbound dependencies. This is required for various functions including, but not limited to, the nodes that communicate with the Kubernetes API server. They download and install core Kubernetes cluster components and node security updates, or pull base system container images from Microsoft Container Registry (MCR), and so on. These outbound dependencies are almost entirely defined with FQDNs, which don't have static addresses behind them. The lack of static addresses means that Network Security Groups can't be used to lock down outbound traffic from an AKS cluster. For this reason, by default, AKS clusters have unrestricted outbound (egress) Internet access. This level of network access allows nodes and services you run to access external resources as needed.
-
+ However, in a production environment, communications with a Kubernetes cluster should be protected to guard against data exfiltration and other vulnerabilities. All incoming and outgoing network traffic must be monitored and controlled based on a set of security rules. To do this, you have to restrict egress traffic, but a limited number of ports and addresses must remain accessible to maintain healthy cluster maintenance tasks and satisfy the outbound dependencies previously mentioned.
-
+ The simplest solution uses a firewall device that can control outbound traffic based on domain names. A firewall typically establishes a barrier between a trusted network and an untrusted network, such as the Internet. Azure Firewall, for example, can restrict outbound HTTP and HTTPS traffic based on the FQDN of the destination, giving you fine-grained egress traffic control, but at the same time allows you to provide access to the FQDNs encompassing an AKS cluster's outbound dependencies (something that NSGs can't do). Likewise, you can control ingress traffic and improve security by enabling threat intelligence-based filtering on an Azure Firewall deployed to a shared perimeter network. This filtering can provide alerts, and deny traffic to and from known malicious IP addresses and domains. See the following video by Abhinav Sriram for a quick overview on how this works in practice on a sample environment: > [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE529Qc]
-You can download a zip file from the [Microsoft Download Center](https://download.microsoft.com/download/0/1/3/0131e87a-c862-45f8-8ee6-31fa103a03ff/aks-azfw-protection-setup.zip) that contains a bash script file and a yaml file to automatically configure the sample environment used in the video. It configures Azure Firewall to protect both ingress and egress traffic. The following guides walk through each step of the script in more detail so you can set up a custom configuration.
+You can download a zip file from the [Microsoft Download Center](https://download.microsoft.com/download/0/1/3/0131e87a-c862-45f8-8ee6-31fa103a03ff/aks-azfw-protection-setup.zip) that contains a bash script file and a yaml file to automatically configure the sample environment used in the video. It configures Azure Firewall to protect both ingress and egress traffic. The following guides walk through each step of the script in more detail so you can set up a custom configuration.
The following diagram shows the sample environment from the video that the script and guide configure:
See [virtual network route table documentation](../virtual-network/virtual-netwo
> For applications outside of the kube-system or gatekeeper-system namespaces that need to talk to the API server, you need an additional network rule that allows TCP communication to port 443 for the API server IP, in addition to an application rule for the fqdn-tag AzureKubernetesService.
- You can use the following three network rules to configure your firewall. You might need to adapt these rules based on your deployment. The first rule allows access to port 9000 via TCP. The second rule allows access to port 1194 and 123 via UDP. Both these rules only allow traffic destined to the Azure Region CIDR that we're using, in this case East US.
+ You can use the following three network rules to configure your firewall. You might need to adapt these rules based on your deployment. The first rule allows access to port 9000 via TCP. The second rule allows access to port 1194 and 123 via UDP. Both these rules only allow traffic destined to the Azure Region CIDR that we're using, in this case East US.
Finally, we add a third network rule opening port 123 to an Internet time server FQDN (for example:`ntp.ubuntu.com`) via UDP. Adding an FQDN as a network rule is one of the specific features of Azure Firewall, and you need to adapt it when using your own options.
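As an illustration only (the downloadable script mentioned earlier contains the authoritative commands), network rules along these lines can be created with the Azure CLI; the resource group, firewall name, region, rule collection name, and priority are assumptions, and the `azure-firewall` CLI extension is assumed to be installed.

```azurecli
RG="myResourceGroup"; FWNAME="myFirewall"; LOC="eastus"

# Rule 1: TCP 9000 to the regional AzureCloud service tag.
az network firewall network-rule create -g $RG -f $FWNAME --collection-name aksfwnr \
  -n aks-tcp --protocols TCP --source-addresses '*' \
  --destination-addresses "AzureCloud.$LOC" --destination-ports 9000 --action allow --priority 100

# Rule 2: UDP 1194 and 123 to the regional AzureCloud service tag.
az network firewall network-rule create -g $RG -f $FWNAME --collection-name aksfwnr \
  -n aks-udp --protocols UDP --source-addresses '*' \
  --destination-addresses "AzureCloud.$LOC" --destination-ports 1194 123

# Rule 3: UDP 123 to an Internet time server FQDN.
az network firewall network-rule create -g $RG -f $FWNAME --collection-name aksfwnr \
  -n aks-ntp --protocols UDP --source-addresses '*' \
  --destination-fqdns ntp.ubuntu.com --destination-ports 123
```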
apiVersion: v1
kind: Service
metadata:
  name: voting-storage
- labels:
+ labels:
    app: voting-storage
spec:
  ports:
apiVersion: v1
kind: Service
metadata:
  name: voting-app
- labels:
+ labels:
    app: voting-app
spec:
  type: LoadBalancer
apiVersion: v1
kind: Service
metadata:
  name: voting-analytics
- labels:
+ labels:
    app: voting-analytics
spec:
  ports:
frontdoor Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/scripts/custom-domain.md
Title: "Azure CLI example: Deploy custom domain in Azure Front Door"
-description: Use this Azure CLI example script to deploy a Custom Domain name and TLS certificate on an Azure Front Door front-end.
+ Title: "Azure CLI example: Deploy custom domain in Azure Front Door"
+description: Use this Azure CLI example script to deploy a Custom Domain name and TLS certificate on an Azure Front Door front-end.
-+ ms.devlang: azurecli Previously updated : 04/27/2022 Last updated : 04/27/2022 # Azure Front Door: Deploy custom domain
healthcare-apis Configure Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/configure-azure-rbac.md
Last updated 06/06/2022 -
+
# Configure Azure RBAC role for Azure Health Data Services
healthcare-apis Change Feed Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/change-feed-overview.md
+
+ Title: Change feed overview for the DICOM service in Azure Health Data Services
+description: Learn how to use the change feed in the DICOM service to access the logs of all the changes that occur in your organization's medical imaging data. The change feed allows you to query, process, and act upon the change events in a scalable and efficient way.
++++ Last updated : 1/18/2024+++
+# Change feed overview
+
+The change feed provides logs of all the changes that occur in the DICOM&reg; service. These logs are ordered, guaranteed, immutable, and read-only. The change feed offers the ability to go through the history of the DICOM service and act upon the creates, updates, and deletes in the service.
+
+Client applications can read these logs at any time in batches of any size. The change feed enables you to build efficient and scalable solutions that process change events that occur in your DICOM service.
+
+You can process these change events asynchronously, incrementally, or in full. Any number of client applications can independently read the change feed, in parallel, and at their own pace.
+
+As of v2 of the API, the change feed can be queried for a particular time window.
+
+Make sure to specify the version as part of the URL when making requests. For more information, see [API versioning for the DICOM service](api-versioning-dicom-service.md).
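+
+For example, a versioned request to the change feed might look like the following minimal sketch, where the workspace name, DICOM service name, and the `$TOKEN` access token are placeholders:
+
+```bash
+# Hypothetical names; replace <workspace> and <dicom-service> with your own values.
+curl -s -H "Authorization: Bearer $TOKEN" \
+  "https://<workspace>-<dicom-service>.dicom.azurehealthcareapis.com/v2/changefeed/latest"
+```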
+
+## API Design
+
+The API exposes two `GET` endpoints for interacting with the change feed. A typical flow for consuming the change feed is provided in the [Usage](#usage) section.
+
+Verb | Route | Returns | Description
+:-- | :-- | :-- | :--
+GET | /changefeed | JSON Array | [Read the change feed](#change-feed)
+GET | /changefeed/latest | JSON Object | [Read the latest entry in the change feed](#latest-change-feed)
+
+### Object model
+
+Field | Type | Description
+:-- | :-- | :--
+Sequence | long | The unique ID per change event
+StudyInstanceUid | string | The study instance UID
+SeriesInstanceUid | string | The series instance UID
+SopInstanceUid | string | The SOP instance UID
+Action | string | The action that was performed - either `create`, `update`, or `delete`
+Timestamp | datetime | The date and time the action was performed in UTC
+State | string | [The current state of the metadata](#states)
+Metadata | object | Optionally, the current DICOM metadata if the instance exists
+
+#### States
+
+State | Description
+:-- | :--
+current | This instance is the current version.
+replaced | This instance is replaced with a new version.
+deleted | This instance is deleted and is no longer available in the service.
+
+## Change feed
+
+The change feed resource is a collection of events that occurred within the DICOM server.
+
+### Version 2
+
+#### Request
+```http
+GET /changefeed?startTime={datetime}&endTime={datetime}&offset={int}&limit={int}&includemetadata={bool} HTTP/1.1
+Accept: application/json
+Content-Type: application/json
+```
+
+#### Response
+```json
+[
+ {
+ "Sequence": 1,
+ "StudyInstanceUid": "{uid}",
+ "SeriesInstanceUid": "{uid}",
+ "SopInstanceUid": "{uid}",
+ "Action": "create|delete",
+ "Timestamp": "2020-03-04T01:03:08.4834Z",
+ "State": "current|replaced|deleted",
+ "Metadata": {
+ // DICOM JSON
+ }
+ },
+ {
+ "Sequence": 2,
+ "StudyInstanceUid": "{uid}",
+ "SeriesInstanceUid": "{uid}",
+ "SopInstanceUid": "{uid}",
+ "Action": "create|delete",
+ "Timestamp": "2020-03-05T07:13:16.4834Z",
+ "State": "current|replaced|deleted",
+ "Metadata": {
+ // DICOM JSON
+ }
+ },
+ //...
+]
+```
+#### Parameters
+
+Name | Type | Description | Default | Min | Max |
+:-- | :-- | :-- | :-- | :-- | :-- |
+offset | long | The number of events to skip from the beginning of the result set | `0` | `0` | |
+limit | int | The maximum number of events to return | `100` | `1` | `200` |
+startTime | DateTime | The inclusive start time for change events | `"0001-01-01T00:00:00Z"` | `"0001-01-01T00:00:00Z"` | `"9999-12-31T23:59:59.9999998Z"`|
+endTime | DateTime | The exclusive end time for change events | `"9999-12-31T23:59:59.9999999Z"` | `"0001-01-01T00:00:00.0000001"` | `"9999-12-31T23:59:59.9999999Z"` |
+includeMetadata | bool | Indicates whether or not to include the DICOM metadata | `true` | | |
+
+### Version 1
+
+#### Request
+```http
+GET /changefeed?offset={int}&limit={int}&includemetadata={bool} HTTP/1.1
+Accept: application/json
+Content-Type: application/json
+```
+
+#### Response
+```json
+[
+ {
+ "Sequence": 1,
+ "StudyInstanceUid": "{uid}",
+ "SeriesInstanceUid": "{uid}",
+ "SopInstanceUid": "{uid}",
+ "Action": "create|delete",
+ "Timestamp": "2020-03-04T01:03:08.4834Z",
+ "State": "current|replaced|deleted",
+ "Metadata": {
+ // DICOM JSON
+ }
+ },
+ {
+ "Sequence": 2,
+ "StudyInstanceUid": "{uid}",
+ "SeriesInstanceUid": "{uid}",
+ "SopInstanceUid": "{uid}",
+ "Action": "create|delete",
+ "Timestamp": "2020-03-05T07:13:16.4834Z",
+ "State": "current|replaced|deleted",
+ "Metadata": {
+ // DICOM JSON
+ }
+ },
+ // ...
+]
+```
+
+#### Parameters
+Name | Type | Description | Default | Min | Max |
+:-- | :-- | :-- | :-- | :-- | :-- |
+offset | long | The exclusive starting sequence number for events | `0` | `0` | |
+limit | int | The maximum value of the sequence number relative to the offset. For example, if the offset is 10 and the limit is 5, then the maximum sequence number returned is 15. | `10` | `1` | `100` |
+includeMetadata | bool | Indicates whether or not to include the DICOM metadata | `true` | | |
+
+## Latest change feed
+The latest change feed resource represents the latest event that occurred within the DICOM server.
+
+### Request
+```http
+GET /changefeed/latest?includemetadata={bool} HTTP/1.1
+Accept: application/json
+Content-Type: application/json
+```
+
+### Response
+```json
+{
+ "Sequence": 2,
+ "StudyInstanceUid": "{uid}",
+ "SeriesInstanceUid": "{uid}",
+ "SopInstanceUid": "{uid}",
+ "Action": "create|update|delete",
+ "Timestamp": "2020-03-05T07:13:16.4834Z",
+ "State": "current|replaced|deleted",
+ "Metadata": {
+ //DICOM JSON
+ }
+}
+```
+
+### Parameters
+
+Name | Type | Description | Default |
+:-- | :-- | :-- | :-- |
+includeMetadata | bool | Indicates whether or not to include the metadata | `true` |
+
+## Usage
+
+### User application
+
+#### Version 2
+
+1. An application regularly queries the change feed on some time interval
+ * For example, if querying every hour, a query for the change feed might look like `/changefeed?startTime=2023-05-10T16:00:00Z&endTime=2023-05-10T17:00:00Z`
+ * If starting from the beginning, the change feed query might omit the `startTime` to read all of the changes up to, but excluding, the `endTime`
+ * For example: `/changefeed?endTime=2023-05-10T17:00:00Z`
+2. Based on the `limit` (if provided), an application continues to query for more pages of change events by updating the `offset` on each subsequent query, as long as the number of returned events equals the `limit` (or the default)
+    * For example, if the `limit` is `100` and 100 events are returned, the subsequent query would include `offset=100` to fetch the next "page" of results. The following queries demonstrate the pattern:
+ * `/changefeed?offset=0&limit=100&startTime=2023-05-10T16:00:00Z&endTime=2023-05-10T17:00:00Z`
+ * `/changefeed?offset=100&limit=100&startTime=2023-05-10T16:00:00Z&endTime=2023-05-10T17:00:00Z`
+ * `/changefeed?offset=200&limit=100&startTime=2023-05-10T16:00:00Z&endTime=2023-05-10T17:00:00Z`
+    * If fewer events than the `limit` are returned, the application can assume that there are no more results within the time range (a minimal polling sketch follows this list)
+
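+The following is a minimal sketch of this pattern using `curl` and `jq`. The service URL, time window, and the `$TOKEN` variable holding an access token are placeholder assumptions, not values defined by the service.
+
+```bash
+# Placeholder values; replace with your own service URL, token, and time window.
+BASE="https://<workspace>-<dicom-service>.dicom.azurehealthcareapis.com/v2"
+START="2023-05-10T16:00:00Z"
+END="2023-05-10T17:00:00Z"
+LIMIT=100
+OFFSET=0
+
+while true; do
+  PAGE=$(curl -s -H "Authorization: Bearer $TOKEN" \
+    "$BASE/changefeed?offset=$OFFSET&limit=$LIMIT&startTime=$START&endTime=$END&includeMetadata=false")
+
+  COUNT=$(echo "$PAGE" | jq 'length')
+  echo "$PAGE" | jq -c '.[]'   # Process each change event here.
+
+  # Fewer events than the limit means there are no more results in the time range.
+  if [ "$COUNT" -lt "$LIMIT" ]; then
+    break
+  fi
+  OFFSET=$((OFFSET + LIMIT))
+done
+```
+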
+#### Version 1
+
+1. An application determines from which sequence number it wishes to start reading change events:
+ * To start from the first event, the application should use `offset=0`
+ * To start from the latest event, the application should specify the `offset` parameter with the value of `Sequence` from the latest change event using the `/changefeed/latest` resource
+2. On some regular polling interval, the application performs the following actions (a minimal sketch follows this list):
+ * Fetches the latest sequence number from the `/changefeed/latest` endpoint
+ * Fetches the next set of changes for processing by querying the change feed with the current offset
+ * For example, if the application processed up to sequence number 15 and it only wants to process at most five events at once, then it should use the URL `/changefeed?offset=15&limit=5`
+   * Processes any entries returned by the `/changefeed` resource
+ * Updates its current sequence number to either:
+ 1. The maximum sequence number returned by the `/changefeed` resource
+ 2. The `offset` + `limit` if no change events were returned from the `/changefeed` resource, but the latest sequence number returned by `/changefeed/latest` is greater than the current sequence number used for `offset`
+
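+A comparable minimal sketch of the v1 sequence-based pattern, under the same placeholder assumptions:
+
+```bash
+# Placeholder values; v1 is addressed by sequence number rather than a time window.
+BASE="https://<workspace>-<dicom-service>.dicom.azurehealthcareapis.com/v1"
+LIMIT=5
+CURRENT=0   # Start from the first event; seed from /changefeed/latest to start at the tail instead.
+
+LATEST=$(curl -s -H "Authorization: Bearer $TOKEN" \
+  "$BASE/changefeed/latest?includeMetadata=false" | jq '.Sequence')
+
+while [ "$CURRENT" -lt "$LATEST" ]; do
+  PAGE=$(curl -s -H "Authorization: Bearer $TOKEN" \
+    "$BASE/changefeed?offset=$CURRENT&limit=$LIMIT&includeMetadata=false")
+
+  COUNT=$(echo "$PAGE" | jq 'length')
+  echo "$PAGE" | jq -c '.[]'   # Process each change event here.
+
+  if [ "$COUNT" -gt 0 ]; then
+    # Advance to the maximum sequence number that was returned.
+    CURRENT=$(echo "$PAGE" | jq '[.[].Sequence] | max')
+  else
+    # No events returned, but newer sequence numbers exist; skip past the gap.
+    CURRENT=$((CURRENT + LIMIT))
+  fi
+done
+```
+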
+### Other potential usage patterns
+
+Change feed support is well-suited for scenarios that process data based on objects that have changed. For example, you can use it to:
+
+* Build connected application pipelines, such as machine learning pipelines, that react to change events or schedule executions based on created or deleted instances.
+* Extract business analytics insights and metrics, based on changes that occur to your objects.
+* Poll the change feed to create an event source for push notifications.
+
+## Next steps
+
+[Pull changes from the change feed](pull-dicom-changes-from-change-feed.md)
+
healthcare-apis Dicom Services Conformance Statement V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-services-conformance-statement-v2.md
Title: DICOM Conformance Statement version 2 for Azure Health Data Services
-description: This document provides details about the DICOM Conformance Statement v2 for Azure Health Data Services.
+description: Read about the features and specifications of the DICOM service v2 API, which supports a subset of the DICOMweb Standard for medical imaging data. A DICOM Conformance Statement is a technical document that describes how a device or software implements the DICOM standard.
Previously updated : 10/13/2023 Last updated : 1/18/2024
The Medical Imaging Server for DICOM&reg; supports a subset of the DICOMweb Standard. Support includes:

* [Studies Service](#studies-service)
- * [Store (STOW-RS)](#store-stow-rs)
- * [Retrieve (WADO-RS)](#retrieve-wado-rs)
- * [Search (QIDO-RS)](#search-qido-rs)
- * [Delete](#delete)
+ * [Store (STOW-RS)](#store-stow-rs)
+ * [Retrieve (WADO-RS)](#retrieve-wado-rs)
+ * [Search (QIDO-RS)](#search-qido-rs)
+ * [Delete](#delete)
* [Worklist Service (UPS Push and Pull SOPs)](#worklist-service-ups-rs)
- * [Create Workitem](#create-workitem)
- * [Retrieve Workitem](#retrieve-workitem)
- * [Update Workitem](#update-workitem)
- * [Change Workitem State](#change-workitem-state)
- * [Request Cancellation](#request-cancellation)
- * [Search Workitems](#search-workitems)
+ * [Create Workitem](#create-workitem)
+ * [Retrieve Workitem](#retrieve-workitem)
+ * [Update Workitem](#update-workitem)
+ * [Change Workitem State](#change-workitem-state)
+ * [Request Cancellation](#request-cancellation)
+ * [Search Workitems](#search-workitems)
-Additionally, the following nonstandard API(s) are supported:
+Additionally, these nonstandard API(s) are supported:
-* [Change Feed](dicom-change-feed-overview.md)
+* [Change Feed](change-feed-overview.md)
* [Extended Query Tags](dicom-extended-query-tags-overview.md)
+* [Bulk Update](update-files.md)
+* [Bulk Import](import-files.md)
+* [Export](export-dicom-files.md)
The service uses REST API versioning. The version of the REST API must be explicitly specified as part of the base URL, as in the following example:
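
For illustration, a request against a versioned base URL takes roughly the following form; the workspace name, DICOM service name, and `$TOKEN` access token are placeholders rather than values from this article:

```bash
# Hypothetical names; the version segment (v2 here) is specified explicitly in the base URL.
curl -s -H "Authorization: Bearer $TOKEN" \
  "https://<workspace>-<dicom-service>.dicom.azurehealthcareapis.com/v2/studies"
```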
The service ignores the 128-byte File Preamble, and replaces its contents with n
## Studies Service
-The [Studies Service](https://dicom.nema.org/medical/dicom/current/output/html/part18.html#chapter_10) allows users to store, retrieve, and search for DICOM Studies, Series, and Instances. We've added the nonstandard Delete transaction to enable a full resource lifecycle.
+The [Studies Service](https://dicom.nema.org/medical/dicom/current/output/html/part18.html#chapter_10) allows users to store, retrieve, and search for DICOM Studies, Series, and Instances. We added the nonstandard Delete transaction to enable a full resource lifecycle.
### Store (STOW-RS)
This transaction uses the POST